You had me cracking up at
parses HTML with regex
I started thinking about performance gains.
Are there benefits to websites thinking your agent is a phone? I assumed phones just came with additional restrictions, like viewport meta tags and mobile-specific rules in the stylesheet, not like stylesheets matter at all to a scraper lol
Just remember that the CAPTCHA flood is because AI companies do rogue scraping. Be nice, especially to little private sites.
They should just charge a tiny fee, or return HTTP 402 Payment Required.
Better yet, return bogus pages created by ai to poison the data.
Yeah that’s exactly what Cloudflare proposed a while back if I’m not mistaken. Not sure if they ever implemented that feature.
Instead of CAPTCHAs?
Local data hoarder who looks down on calls outside the network as obscenities. (Entire collection scraped more aggressively than tech bros training an AI model)
parses HTML with regex
shudders
You can’t parse [X]HTML with regex. Because HTML can’t be parsed by regex. Regex is not a tool that can be used to correctly parse HTML. As I have answered in HTML-and-regex questions here so many times before, the use of regex will not allow you to consume HTML. Regular expressions are a tool that is insufficiently sophisticated to understand the constructs employed by HTML. HTML is not a regular language and hence cannot be parsed by regular expressions. Regex queries are not equipped to break down HTML into its meaningful parts. so many times but it is not getting to me. Even enhanced irregular regular expressions as used by Perl are not up to the task of parsing HTML. You will never make me crack. HTML is a language of sufficient complexity that it cannot be parsed by regular expressions. Even Jon Skeet cannot parse HTML using regular expressions. Every time you attempt to parse HTML with regular expressions, the unholy child weeps the blood of virgins, and Russian hackers pwn your webapp. Parsing HTML with regex summons tainted souls into the realm of the living. HTML and regex go together like love, marriage, and ritual infanticide. The <center> cannot hold it is too late. The force of regex and HTML together in the same conceptual space will destroy your mind like so much watery putty. If you parse HTML with regex you are giving in to Them and their blasphemous ways which doom us all to inhuman toil for the One whose Name cannot be expressed in the Basic Multilingual Plane, he comes. HTML-plus-regexp will liquify the nerves of the sentient whilst you observe, your psyche withering in the onslaught of horror. Rege̿̔̉x-based HTML parsers are the cancer that is killing StackOverflow it is too late it is too late we cannot be saved the transgression of a chi͡ld ensures regex will consume all living tissue (except for HTML which it cannot, as previously prophesied) dear lord help us how can anyone survive this scourge using regex to parse HTML has doomed humanity to an eternity of dread torture and security holes using regex as a tool to process HTML establishes a breach between this world and the dread realm of c͒ͪo͛ͫrrupt entities (like SGML entities, but more corrupt) a mere glimpse of the world of regex parsers for HTML will instantly transport a programmer’s consciousness into a world of ceaseless screaming, he comes~~, the pestilent sl
ithy regex-infection will devour your HTML parser, application and existence for all time like Visual Basic only worse he comes he comes do not fight he com̡e̶s, ̕h̵is un̨ho͞ly radiańcé destro҉ying all enli̍̈́̂̈́ghtenment, HTML tags lea͠ki̧n͘g fr̶ǫm ̡yo͟ur eye͢s̸ ̛l̕ik͏e liquid pain, the song of re̸gular expression parsingwill extinguish the voices of mortal man from the sphere I can see it can you see ̲͚̖͔̙î̩́t̲͎̩̱͔́̋̀ it is beautiful the fes he co~~inal snuffing of the lies of Man ALL IS LOŚ͖̩͇̗̪̏̈́T A*LL IS LOST the pon̷y he come*s he c̶̮ommes the ichor permeates all MY FACE MY FACE ᵒh god no NO NOO̼*OO NΘ stop the an*̶͑̾̾̅ͫ͏̙̤g͇̫͛͆̾ͫ̑͆l͖͉̗̩̳̟̍ͫͥͨ*e̠̅s͎a̧͈͖r̽̾̈́͒͑enot rè̑ͧ̌aͨl̘̝̙̃ͤ͂̾̆ ZA̡͊͠͝LGΌ ISͮ̂҉̯͈͕̹̘̱ TO͇̹̺ͅƝ̴ȳ̳ TH̘Ë͖́̉ ͠P̯͍̭O̚N̐Y̡ H̸̡̪̯ͨ͊̽̅̾̎Ȩ̬̩̾͛ͪ̈́̀́͘ ̶̧̨̱̹̭̯ͧ̾ͬC̷̙̲̝͖ͭ̏ͥͮ͟Oͮ͏̮̪̝͍M̲̖͊̒ͪͩͬ̚̚͜Ȇ̴̟̟͙̞ͩ͌͝S̨̥̫͎̭ͯ̿̔̀ͅHave you tried using an XML parser instead?
xmllint --root + regex = chef's kiss
I got a bot on Lemmy that scrapes ESPN for sports/football updates using regex to retrieve the JSON that is embedded in the HTML file. It works perfectly so far 🤷♂️
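For anyone curious, here's a minimal sketch of that embedded-JSON trick, assuming the data sits in a <script> tag assigned to a JS variable; the URL, the window.__DATA__ pattern and the keys are made up for illustration, not ESPN's actual markup.

```python
import json
import re

import requests

# Hypothetical page and variable name; real sites embed their JSON differently,
# so the pattern and keys below are assumptions for illustration.
html = requests.get("https://example.com/scores", timeout=10).text

# Grab the first JSON object assigned to a script variable. The non-greedy
# match stops at the first "});", which is fine for simple payloads.
match = re.search(r"window\.__DATA__\s*=\s*(\{.*?\});", html, re.DOTALL)
if match:
    data = json.loads(match.group(1))
    for game in data.get("games", []):
        print(game.get("home"), game.get("away"), game.get("score"))
```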
Delete all line breaks, add one before each `<`, get the data you want per line and you're good. Of course, only for targeted data extraction. …which should be the default in any parser, really.
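A rough Python version of that trick, in case anyone wants to try it; grabbing href values from <a> tags is just an arbitrary example target.

```python
import requests

# Arbitrary page; swap in whatever you're actually after.
html = requests.get("https://example.com", timeout=10).text

# The trick from above: drop the original line breaks, then start a new line
# before every "<" so each tag (plus its trailing text) sits on its own line.
flat = html.replace("\r", " ").replace("\n", " ")
lines = flat.replace("<", "\n<").splitlines()

# Targeted extraction: keep only the lines you care about.
for line in lines:
    if line.startswith("<a ") and 'href="' in line:
        print(line.split('href="', 1)[1].split('"', 1)[0])
```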
🤢
we’re in web 3.0 now, apis and data access are a thing of the past. so scraping it is!
What exactly are y’all scraping?
Zombocom
I scrape my own bank and financial aggregator to have a self hosted financial tool. I scrape my health insurance to pull in data to track for my HSA. I scrape Strava to build my own health reports.
How so? Shouldn’t that information be behind quite a few layers of security?
I developed my own scraping system using browser automation frameworks. I also developed a secure storage mechanism to keep my data protected.
Yeah, there is some security, but ultimately if they expose it to me via a username and password, I can use that same information to scrape it. It's helpful that I know my own credentials, have access to all 2FA mechanisms, and am not brute-forcing lots of logins, so it looks normal.
Some providers protect their websites with bot detection systems which are hard to bypass, but I've closed accounts with places that made it too difficult to do the analysis I need to do.
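If it helps anyone, this is roughly what the browser-automation route looks like with Selenium; the URL, field names and selectors here are placeholders, and a real provider will add 2FA prompts and bot checks on top of this.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Placeholder URL and selectors -- every provider's login form is different.
driver = webdriver.Firefox()
try:
    driver.get("https://example-bank.test/login")
    driver.find_element(By.NAME, "username").send_keys("me@example.com")
    driver.find_element(By.NAME, "password").send_keys("correct horse battery staple")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

    # After login, pull whatever the site renders for you.
    driver.get("https://example-bank.test/transactions")
    for row in driver.find_elements(By.CSS_SELECTOR, "table.transactions tr"):
        print(row.text)
finally:
    driver.quit()
```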
I only did one scraping script, which took the top 25 hotels from a booking.com web page along with their prices. They used to do that manually.
postmarketOS tables, because I was looking for a device that was unofficially supported but somehow not in their damn table
Hey, you guys got any cool tips for website scraping?
I recommend Zombocom
They’re gonna tell you not to parse HTML with regular expressions. Heed this warning, and do it anyways.
Thanks for your reply. What are your arguments in favour of parsing HTML with regex instead of using another method?
it’s quick, it’s easy and it’s free
You have basically two options: treat HTML as a string or parse it then process it with higher level DOM features.
The problem with the second approach is that HTML may look like an XML dialect, but it is actually immensely quirky and tolerant. Moreover, the modern web page is crazy bloated, so mass-processing pages can be surprisingly demanding. And in the end you still need to write custom code to grab the data you're after.
On the other hand string searching is as lightweight as it gets and you typically don’t really need to care about document structure as a scraper anyways.
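To make the comparison concrete, here are both flavours side by side on a made-up price snippet; the class name is an assumption, and the regex is only as robust as the markup it happens to match.

```python
import re

from bs4 import BeautifulSoup

html = '<div><span class="price">19.99</span> <span class="price">4.50</span></div>'

# Option 1: treat HTML as a string -- cheap and structure-agnostic.
prices_regex = re.findall(r'class="price">([\d.]+)<', html)

# Option 2: parse it, then use DOM-level queries -- heavier, but tolerant of
# attribute order, whitespace and quoting quirks.
soup = BeautifulSoup(html, "html.parser")
prices_dom = [span.get_text() for span in soup.select("span.price")]

print(prices_regex, prices_dom)  # both: ['19.99', '4.50']
```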
Are you an LLM?
Selenium is your fren
Consider a free API first, if possible.
Beautiful Soup (python library, bs4) is also fren
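The usual requests + bs4 starting point, for reference; the URL and the tags being pulled are arbitrary.

```python
import requests
from bs4 import BeautifulSoup

# Arbitrary target page -- swap in whatever you're actually scraping.
resp = requests.get("https://example.com", timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")

# Pull every heading and every link, just as a demo of the API.
for h in soup.find_all(["h1", "h2"]):
    print("heading:", h.get_text(strip=True))
for a in soup.find_all("a", href=True):
    print("link:", a["href"])
```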
What do you want to scrape?
Ha, this reminds me of implementing “API” access in the shipping world for companies that only ship a 90s-style web portal.