MichaelPemulis wrote (edited)

Goddamn, this is making me realize how absolutely shit I am at wget/HTTrack/basic IT skills. How do I get wget to recursively walk every "[id]=" URL, or use a "*" wildcard? I can superficially grab images from the first page of "memer", but I'm struggling to wildcard the URL so it grabs every page. Sigh...

Current attempt is "wget -nd -nc -r -l3 -e robots=off -A jpg,jpeg,png,gif,bmp --wait 1 -H[id]=*"
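
(Reply) Part of the problem is that wget doesn't expand "*" in HTTP URLs at all; glob patterns only work for FTP. The usual workaround is to generate the paginated URLs yourself and feed them to wget with -i. A minimal sketch, assuming the pages are numbered sequentially as "?id=1" through "?id=5" (the base URL and range here are placeholders, not the real site):

```shell
# Generate one URL per page; wget won't wildcard HTTP URLs for you.
# 'base' and the 1..5 range are hypothetical -- adjust to the real site.
base='https://example.com/index.php?id='
for i in $(seq 1 5); do
  echo "${base}${i}"
done > urls.txt

# Then pull images from every listed page, one recursion level deep:
# wget -nd -nc -r -l1 -e robots=off -A jpg,jpeg,png,gif,bmp --wait 1 -i urls.txt
```

Also note that -H (span hosts) in the attempt above takes no argument, so "-H[id]=*" is being mangled; keep -H only if the images actually live on a different host (e.g. a CDN) than the pages.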