Erisian wrote

brb gonna go outsource this job to someone else ;)

MichaelPemulis wrote (edited )

Goddamn, this is making me realize how absolutely shit I am at wget/HTTrack/basic IT skills. How does one get wget to recursively scan through each "https://raddle.me/f/memer?next[id]=" URL? Or "https://raddle.me/f/memer/*"? I can superficially grab images from the first page of "memer", but I'm struggling to wildcard the URL so it grabs every page. Sigh...

current attempt is "wget -nd -nc -r -l3 -e robots=off -A jpg,jpeg,png,gif,bmp --wait 1 -H https://raddle.me/f/memer?next[id]=*"
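For what it's worth, wget never expands `*` in a URL; it passes it literally, which is why the wildcard attempt stalls out. A sketch of an alternative, assuming raddle's listing pages link each other via `?next[id]=<number>`: let wget's own recursion follow those "next" links, and use `--accept-regex` to keep the crawl limited to /f/memer listing pages plus image files. The regex is an assumption about raddle's URL layout, and this only catches images linked directly from the listing pages.

```shell
# Regex restricting the crawl: /f/memer listing pages (with or without a
# next[id]= pagination parameter) and common image extensions.
# Assumption: pagination links appear unencoded as ?next[id]=NNN.
ACCEPT='f/memer(\?next\[id\]=[0-9]+)?$|\.(jpe?g|png|gif|bmp)$'

# -r recursion follows the "next page" links itself; listing pages that
# match the regex get fetched and parsed, matching images get saved.
wget -nd -nc -r -l 10 -e robots=off --wait 1 \
     --accept-regex "$ACCEPT" https://raddle.me/f/memer
```

If the images live on another host you'd also need `-H` back, ideally fenced in with `-D` so the crawl doesn't wander off across the whole web.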