I scrape with bash lord help me.
you scrape WITH BASH?
someone’s never used a good api. like mastodon
It’s all fun and games until you have to support all this shit and it breaks weekly!
That being said, I do miss the simplicity of maintaining selenium projects for work
I use scrapy. It has a steeper learning curve than other libraries, but it’s totally worth it.
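For the curious, a minimal spider looks roughly like this. It's a sketch against Scrapy's own demo site, quotes.toscrape.com; the CSS selectors are specific to that page, so swap in your own target:

```python
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Pull each quote block off the page with CSS selectors.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow pagination, if present, and parse the next page the same way.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```

Run it with `scrapy runspider quotes_spider.py -o quotes.json` and you get structured JSON out; Scrapy handles the request scheduling, throttling, and retries you'd otherwise hand-roll.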
Let me introduce you to WooB (formerly WEBooB).
Why on earth would they have changed that? WEBooB is a way better name.
But it’s got boob in it.
Ok then make a spotify scraper
Let’s see what WEI (if implemented) will do to scrapers. The future doesn’t look promising.
What’s that?
A Google/Chrome proposal for browser verification, i.e. killing addons and custom browsers.
Nice name, beat me to it
My undergrad project was a scraper - there just wasn’t a name for it yet.
Scrapers have been a thing for as long as the web has existed.
One of the first search engines is even called WebCrawler
That’s why I use geddit
I really hope Libreddit switches to scraping; the “Error: Too many request” thing is so annoying that I have to click the redirect button in Libredirect like 20 times before I can actually see a post.
Still a better experience than Reddit’s official site tho.
Sorry, I’m ignorant in this matter. Why exactly would you want to scrape websites aside from collecting data for ML? What kind of irreplaceable API are you using? Someone please educate me here.
A few reasons:

- The API might cost a lot of money for the number of requests you want to send.
- The API may not include some fields of the data you want.
- The API is rate limited; scraping might not be.
- The API requires agreement to usage terms; scraping does not (though the recent LinkedIn scraping case might weaken that argument).
This kinda reminds me of pirating vs paying. Using an API = you know it will always have the same structure and you’ll get the data you asked for; if something changes, you’ll be notified, or they’ll version their API. There’s usually good documentation, and you can always ask for help.
Scraping = you need to scout the whole website yourself. You need to keep up to date with the website’s structure and make sure they haven’t added ways to block bots (scraping). Error handling is a lot more intense on your end: missing content, hidden content, querying for data. The website may not follow the same standards/structure throughout, so you need checks for when to use x to get y. The data may need multiple requests because, for example, they don’t show all the user settings on one page (where a single API call would), or it’s an AJAX page and you need to run JavaScript and click buttons that may change their id, class, or text, and that only load data when you do x, so you end up emulating the whole webpage.
So my guess is that scraping is used most often when you only need to fetch simple data structures and you’re fine with cleaning up the data afterwards. Like grabbing all the text/images on a page, checking if a page has been updated, or just saving the whole page like the Wayback Machine.
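For what it’s worth, the happy path really is tiny. Here’s a minimal sketch with requests + BeautifulSoup; the URL and tags are placeholders, and real sites add retries, JS rendering, and anti-bot headaches on top of this:

```python
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/", timeout=10)
resp.raise_for_status()  # fail loudly instead of parsing an error page

soup = BeautifulSoup(resp.text, "html.parser")

# The "simple data structures" case: page title plus all heading text.
print(soup.title.string)
for heading in soup.find_all(["h1", "h2"]):
    print(heading.get_text(strip=True))
```

Everything above (AJAX, button clicks, changing ids) is exactly what this approach can’t handle, which is when people reach for browser automation like Selenium.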
As someone who used to scrape government websites for a living (with permission from them, cause they’d rather have us break their single 100-year-old server than give us a CSV), I can confirm that maintaining scraping scripts is a huge pain in the ass.
Everyone loves the idea of scraping, no one likes maintaining scrapers that break once a week because the CSS or HTML changed.
The sad part is that scraping is often easier than using the API.
Much less beholden to arbitrary rules, too. Way too many times, companies will just up and pull their API access or push through new restrictions. No ty, I’ll just access it myself then.
API starter kit
- Outdated and unsupported, hasn’t been replaced yet, but is still the standard way to use the service.
- Lots of authorization tokens.
- The example in the docs doesn’t work (if there is one).
- You have no idea where the online tutorial got its information, because it doesn’t link to resources and the docs have barely anything even though they’re giant.
- Uses asynchronous programming to make it faster, but it’s still much, much slower than scraping without asynchronous programming.
I have absolutely no idea what these are about…
Websites and services create APIs for programmers to use. So Spotify has code that lets you build a program that can use its features. But you need a token, which they give you after you sign up. The token can be revoked and is used to monitor how much of their service you’re using. That way they can restrict you if it’s too much.
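To make it concrete, here’s roughly what a token-gated call looks like, using Spotify’s documented search endpoint as the example. SPOTIFY_TOKEN is a placeholder; you’d get a real one through their OAuth flow after registering an app:

```python
import os
import requests

token = os.environ["SPOTIFY_TOKEN"]  # revocable, tied to your account

resp = requests.get(
    "https://api.spotify.com/v1/search",
    params={"q": "daft punk", "type": "track", "limit": 5},
    headers={"Authorization": f"Bearer {token}"},  # no token, no data
    timeout=10,
)
resp.raise_for_status()  # a revoked or expired token surfaces here as a 401

for track in resp.json()["tracks"]["items"]:
    artists = ", ".join(a["name"] for a in track["artists"])
    print(f"{track['name']} by {artists}")
```

That revoke-and-monitor part is the whole point from their side: every request carries the token, so they know exactly who’s asking and how often.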
Scraping is raw dogging the web slut you met at the cougar ranch who went home with you because you reminded her of her dog
So uh…as someone who’s currently trying to scrape the web for email addresses to add to my potential client list … where do I start researching this?
Start looking into Selenium, probably in Python. It’s one of the easier-to-understand forms of scraping. It’s mainly used for web testing, though you can definitely use it for less… nice purposes.
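A first experiment might look something like this sketch (Selenium 4 syntax; the URL is a placeholder, the regex is deliberately naive, and you’ll want to check a site’s terms and local anti-spam law before harvesting addresses):

```python
import re

from selenium import webdriver

driver = webdriver.Chrome()  # Selenium 4.6+ fetches a matching driver itself
try:
    driver.get("https://example.com/contact")
    # page_source is the DOM *after* JavaScript has run, unlike a plain GET.
    html = driver.page_source
    emails = set(re.findall(r"[\w.+-]+@[\w-]+\.[\w.-]+", html))
    print(emails)
finally:
    driver.quit()  # always close the browser, even if the scrape blows up
```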