Full Code of Riczap/The-Gatherer for AI

Repository: Riczap/The-Gatherer
Branch: The-Gatherer-2.0
Commit: 60eb4ac778d3
Files: 11
Total size: 54.4 MB

Directory structure:
gitextract_f8eyn87l/

├── README.md
├── bot_thread.py
├── gadgets.py
├── main.py
├── models/
│   ├── iron.onnx
│   ├── iron.txt
│   ├── rough_stone.onnx
│   └── rough_stone.txt
├── onnx_detextion.py
├── requirements.txt
└── window_capture.py

================================================
FILE CONTENTS
================================================

================================================
FILE: README.md
================================================

# The-Gatherer-2.0
You can still access the previous version by changing the branch on Github.

This was made using YOLOv5 and OpenCV. The model is now compatible with both CPU and GPU, and it automatically detects the best option for your computer. You can load your custom YOLO models (exported to **ONNX**) by using your own .onnx file.

For training a custom model on your own data set, for now I'd recommend following this tutorial: [https://www.youtube.com/watch?v=GRtgLlwxpc4](https://www.youtube.com/watch?v=GRtgLlwxpc4). I'll also be uploading a simple guide very soon on exporting your .pt model to .onnx so it can run in this version of the bot.

If you want to train and export your own custom ONNX model, you can follow the steps set up in the following Google Colab: https://colab.research.google.com/drive/19kVzBERhRwB1jywcKeJ3dALARNd5-dR7?usp=sharing

You can check out the rewritten C++ version (with no UI), which also uses ONNX to run inference, here: https://github.com/Riczap/The-Gatherer-Cpp

- **Known Issue:** Moving the window of The Gatherer 2 while the Bot Vision is activated will crash the program. (You can move the command prompt at any time without issues.)

- **Note:** The Bot and the Vision are independent: you can have the bot running without the Computer Vision function activated. The model runs in the background whenever you activate either of them.

- **Note:** All of the parameters have default values, so you can leave them blank and it'll work fine.

- Training Data (Demo): https://drive.google.com/drive/u/2/folders/17X_f17WpzoxHMURSj5QIZ4lMUWPImf5V

- Showcase: Note that the following video shows the previous 1.0 version with an outdated GUI. [https://www.youtube.com/watch?v=y669rc18ia4](https://www.youtube.com/watch?v=y669rc18ia4)
![Showcase](https://user-images.githubusercontent.com/77018982/230541525-271eea09-be75-47e8-be8f-6c8bb133668a.PNG)

## New Features
I've implemented some quality-of-life updates so it's easier to use for general purposes.
- You can manually choose the resolution that your game is currently using

![Resolution](https://user-images.githubusercontent.com/77018982/230542772-f769a8ff-7da7-4b67-9fbb-f76bdfd8fa6f.PNG)
- You can add and select your custom models with a drop-down menu

![Models](https://user-images.githubusercontent.com/77018982/230542819-248199e9-3c06-4323-b472-fce2487b5446.PNG)
- You can now input your desired waiting time between the bot's actions.

Just remember to click the **Save changes** button after selecting your custom parameters.
![Save](https://user-images.githubusercontent.com/77018982/230543242-8bdbd567-e4e6-493d-bb11-cf7b62abba1e.PNG)


## Installation
To use the new version of The Gatherer, you can install the dependencies in your main Python environment or in an Anaconda environment, or run the standalone executable.

- Download Tutorial: https://www.youtube.com/watch?v=dljCXzuKTKo
### Python
 1. Clone the repository from GitHub (download the files).
 2. Open a console terminal and run the following command to install all of the dependencies: `pip install -r requirements.txt`
### Conda
 1. Clone the repository from GitHub (download the files).
 2. Install Anaconda: [https://www.anaconda.com/products/distribution](https://www.anaconda.com/products/distribution)
 3. Create an Environment using the following command on the anaconda prompt: `conda create -n myenv` (you can choose any name you want for the env)
 4. Activate the environment using `conda activate myenv` and open the directory where you downloaded the source code for the bot. Run the following line to install all of the dependencies: `pip install -r requirements.txt`
 5. Now you can run the **main.py** file through the conda environment using `python main.py`
### Executable
 1. Download and extract the zip file: https://drive.google.com/file/d/1HImNmd06msfE_RuhBxIzT-rLlXL6LCa5/view?usp=share_link
 2. Right-click the **The Gatherer 2.exe** file to create a shortcut and move it to your desired location.
 3. Remember that you'll need to access the **models** directory to add new custom models.

## How to Add a Custom Model
### Exporting to Onnx
 I'm also finishing up a video tutorial explaining how to export your custom models. In the meantime, here is a step-by-step guide on how to do it.
- Open the Google Colab link and follow the steps: https://colab.research.google.com/drive/1uJMeZP4QbSVuA5TNkfeXQDIpFDHVTeAB?usp=share_link
### Adding the model to The Gatherer 2.0
 2. Once you have your custom model as a YOLOv5 .onnx file, you can proceed to create a text file with a name matching your model's name, containing the names of your custom classes.
 
 ![Text File](https://user-images.githubusercontent.com/77018982/230546123-b4ef79b7-b65a-42ce-be44-0ad4ee847e22.PNG)
 
 3. Move both files into the **models** directory.
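The naming convention above (each `model.onnx` sitting next to a same-named `model.txt` with one class name per line) can be sketched as a small startup check. This is a minimal sketch, not code from the repo: `discover_models` is a hypothetical helper that mirrors what `filter_models` and `get_classes` in `onnx_detextion.py` expect.

```python
import os

def discover_models(folder="models"):
    """Map each .onnx file in `folder` to its list of class names."""
    pairs = {}
    for filename in os.listdir(folder):
        stem, _, ext = filename.rpartition(".")
        if ext != "onnx":
            continue  # skip .txt files and anything else
        label_path = os.path.join(folder, f"{stem}.txt")
        if not os.path.isfile(label_path):
            raise FileNotFoundError(f"{filename} has no matching {stem}.txt")
        with open(label_path) as f:
            pairs[filename] = [line.strip() for line in f if line.strip()]
    return pairs
```

Running a check like this once at startup surfaces a missing class file immediately, instead of failing later when the model is selected in the drop-down.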

Feel free to use the code for your own projects!

If you have any issues and need assistance send me a message or post something on:
Discord: WanderingEye#0330
Forum: https://www.unknowncheats.me/forum/usercp.php



================================================
FILE: bot_thread.py
================================================
import threading
from time import sleep
import pyautogui
import random
import math
import cv2
import numpy

def get_center(rectangles):
    """Return the [x, y] center of each [x, y, w, h] rectangle."""
    centers = []
    for i in rectangles:
        # Center = midpoint between the top-left corner and corner + size
        x = int((i[0]+(i[0]+i[2]))/2)
        y = int((i[1]+(i[1]+i[3]))/2)
        centers.append([x, y])
    return centers
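# Worked example with a hypothetical box: a 100x50 box whose top-left
# corner is at (10, 20) has its center at (10 + 100/2, 20 + 50/2):
#   get_center([[10, 20, 100, 50]])  ->  [[60, 45]]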


def get_rectangles(results):
    # Extract [x1, y1, x2, y2] boxes from a YOLOv5 (torch) results object;
    # kept from the 1.0 version, not used by the ONNX pipeline.
    rectangles = []
    for i in results.xyxy[0].tolist():
        rectangles.append(i[:-2])
    return rectangles


class Move:
	def __init__(self):
		#Lock the thread
		self.lock = threading.Lock()


		
	def nearest_object(self, screen_center):
		dictionary = {}
		for i in range(len(self.centers)):
			object_location = self.centers[i]
			#Calculate distance between an object and the character
			distance = math.dist(object_location, screen_center)
			#Add result to dictionary
			dictionary[i] = distance 

		#print(dictionary)
		sort_dictionary = sorted(dictionary, key=dictionary.get, reverse=False)
		closest_object = self.centers[sort_dictionary[0]]
		return closest_object
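# Example with hypothetical values: for centers [[0, 0], [100, 100]] and
# screen_center [90, 90], math.dist gives ~127.3 and ~14.1 respectively,
# so the closest object returned is [100, 100].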

	#Move player to mine rock
	def go_to(self, waiting_time, screen_center):
		b = 0
		rand_pos = [[660, 500], [424, 226]]

		if len(self.centers)>0:
			print(f"Waiting {waiting_time} seconds")
			#Find the closest object to the player
			closest_object = self.nearest_object(screen_center)
			print(closest_object)
			
			#Display moving position
			print(f"Moving to: {closest_object[0]}, {closest_object[1]}")

			#Action 1 (Move mouse)
			pyautogui.moveTo(closest_object[0],closest_object[1],duration=0.5)
			print("click\n")
			#Action 2 (Movement click)
			pyautogui.click(button="left")
			sleep(waiting_time)

		else:
			print(f"Waiting 2.5 seconds")
			print("No results")
			#Choose "random" position to move
			b = random.randint(0,1)
			
			#Display moving position
			print(f"Stuck, moving to: {rand_pos[b][0]}, {rand_pos[b][1]}")
			
			#Action 1 (Move mouse)
			pyautogui.moveTo(rand_pos[b][0],rand_pos[b][1],duration=0.5)
			print("click\n")

			
			#Action 2 (Movement click)
			pyautogui.click(button="left")
			sleep(2.5)
			
			#Reset var
			b = 0

    #Thread Functions
	def start(self):
		self.stopped = False
		self.state = 0
		self.t = threading.Thread(target=self.run)
		self.t.start()
    
	def update(self, centers, bot_status, waiting_time, screen_center):
		self.screen_center = screen_center
		self.waiting_time = waiting_time
		#print(f"Bot Status Thread: {bot_status}")
		if bot_status==True:
			self.state = 1
		elif bot_status==False:
			self.state = 0
		self.centers = centers


	def stop(self):
		self.stopped = True
		print("Terminating...")


	def run(self):
		while not self.stopped:
			if self.state == 0:
				sleep(3)

			elif self.state == 1:
				self.lock.acquire()
				self.go_to(self.waiting_time, self.screen_center)
				self.lock.release()


================================================
FILE: gadgets.py
================================================
import customtkinter as ctk
import tkinter

def if_empty(value, default):
    # Fall back to the default (as a float) when the entry was left blank;
    # non-empty input is passed through unchanged (still a string).
    if value == "":
        value = float(default)
    return value


class SwitchesFrame(ctk.CTkFrame):
    def __init__(self, *args, header_name="TestFrame", name, text1, text2, command_name1, command_name2 , **kwargs):
        super().__init__(*args, **kwargs)


        # Add a label to the new entry frame
        self.label1 = ctk.CTkLabel(self, text=name, anchor="center")
        self.label1.grid(row=0, column=0, columnspan=1, pady=(10, 0), padx=10, sticky="n")

        # Add two entries to the new entry frame
        self.switch_var1 = ctk.StringVar(value="off")
        self.switch1 = ctk.CTkSwitch(self, text=text1, variable=self.switch_var1, onvalue="on", offvalue="off", command=command_name1)
        self.switch1.grid(row=1, column=0, pady=12, padx=10)

        self.switch_var2 = ctk.StringVar(value="off")
        self.switch2 = ctk.CTkSwitch(self, text=text2, variable=self.switch_var2, onvalue="on", offvalue="off", command=command_name2)
        self.switch2.grid(row=2, column=0, pady=12, padx=10)


    def get_state1(self):
        """ returns on or off"""
        return self.switch_var1.get()

    def get_state2(self):
        """ returns on or off"""
        return self.switch_var2.get()

    def reset_values(self):
        self.switch1.deselect()
        self.switch2.deselect()



class DoubleEntryFrame(ctk.CTkFrame):
    def __init__(self, *args, header_name="TestFrame", name, text1, text2, default1, default2, **kwargs):
        super().__init__(*args, **kwargs)

        self.default1 = default1
        self.default2 = default2

        # Add a label to the new entry frame
        self.label1 = ctk.CTkLabel(self, text=name, anchor="center")
        self.label1.grid(row=0, column=0, columnspan=2, pady=(10, 0), padx=10, sticky="n")

        # Define the validation function to allow only integers
        validate_int = (self.register(self._validate_int), '%P')

        # Add two entries to the new entry frame
        self.entry1 = ctk.CTkEntry(self, width=50, placeholder_text=text1, validate="key", validatecommand=validate_int)
        self.entry1.grid(row=1, column=0, pady=12, padx=1)

        self.entry2 = ctk.CTkEntry(self, width=50, placeholder_text=text2, validate="key", validatecommand=validate_int)
        self.entry2.grid(row=1, column=1, pady=12, padx=10)

    def _validate_int(self, value):
        """Validate that the input value is an integer."""
        if not value:
            return True

        try:
            int(value)
            return True
        except ValueError:
            return False

    def get_value1(self):
        """ returns selected value as a string, returns an empty string if nothing selected """
        self.value1 = if_empty(self.entry1.get(), self.default1)
        return self.value1

    def get_value2(self):
        """ returns selected value as a string, returns an empty string if nothing selected """

        self.value2 = if_empty(self.entry2.get(), self.default2)
        return self.value2


class SingleEntryFrame(ctk.CTkFrame):
    def __init__(self, *args, header_name="TestFrame", name, text, default, **kwargs):
        super().__init__(*args, **kwargs)
        self.default = default
        # Add a label to the new entry frame
        self.label1 = ctk.CTkLabel(self, text=name, anchor="center")
        self.label1.grid(row=0, column=0, columnspan=2, pady=(10, 0), padx=10, sticky="n")

        # Define the validation function to allow only integers
        validate_int = (self.register(self._validate_int), '%P')

        # Add a single entry to the new entry frame
        self.entry = ctk.CTkEntry(self, width=50, placeholder_text=text, validate="key", validatecommand=validate_int)
        self.entry.grid(row=1, column=0, pady=12, padx=1)

    def _validate_int(self, value):
        """Validate that the input value is an integer."""
        if not value:
            return True

        try:
            float(value)
            return True
        except ValueError:
            return False

    def get_value(self):
        """ returns selected value as a string, returns an empty string if nothing selected """
        self.value = if_empty(self.entry.get(), self.default)
        return self.value


class DropdownFrame(ctk.CTkFrame):
    def __init__(self, *args, header_name="TestFrame", name, text, default, options, **kwargs):
        super().__init__(*args, **kwargs)


        # Add a label to the new entry frame
        self.label1 = ctk.CTkLabel(self, text=name, anchor="center")
        self.label1.grid(row=0, column=0, columnspan=1, pady=(10, 0), padx=10, sticky="n")

        self.combobox_var = ctk.StringVar(value=text)  # set initial value
        self.option = ""
        self.default = default

        def combobox_callback(choice):
            #print(f"{text}:", choice)
            self.option = choice


        #["800 x 600", "1024 x 768", "1280 x 720", "1280 x 1024", "1366 x 768", "1600 x 900", "1680 x 1050", "1920 x 1080"]

        self.combobox = ctk.CTkComboBox(self, values=options, command=combobox_callback, variable=self.combobox_var)
        self.combobox.grid(row=1, column=0, pady=12, padx=10)


    def get_option(self):
        if self.option == "":
            self.option = self.default
        return self.option


================================================
FILE: main.py
================================================
import customtkinter as ctk
from gadgets import *
from onnx_detextion import *
from window_capture import WindowCapture
from bot_thread import *
import cv2
import sys

#Move Albion Client to the corner of the screen

#Bot thread
go = Move()
go.start()

# Set custom theme
ctk.set_appearance_mode("dark")
ctk.set_default_color_theme("dark-blue")

class App(ctk.CTk):
    def __init__(self):
        super().__init__()

        self.models = filter_models(get_files_in_folder())
        self.model = "rough_stone.onnx"
        self.is_cuda = len(sys.argv) > 1 and sys.argv[1] == "cuda"
        self.net = build_model(self.is_cuda, f"models/{self.model}")

        self.resolution = "1024x720"
        self.waiting_time = 3.5
        self.width, self.height = self.resolution.split('x')
        self.screen_center = [int(self.width)/2, int(self.height)/2]
        self.lock = threading.Lock()

        self.vision_status = "off"
        self.bot_status = "off"
        self.wincap = WindowCapture(None)

        self.class_ids, self.confidences, self.boxes, self.class_list, self.centers = [], [], [], [], []
        

        self.geometry("370x320")
        self.title("The Gatherer 2.0 - Wandering Eye")
        self.iconbitmap("wanderingeye.ico")

        self.protocol("WM_DELETE_WINDOW", self.on_close)


        def update_vision_status():
            self.vision_status = self.actions_frame.get_state1()
            print("Vision status updated to:", self.vision_status)


        def update_bot_status():
            self.bot_status = self.actions_frame.get_state2()
            print("Bot status updated to:", self.bot_status)
                

        def update_info():
            cv2.destroyAllWindows()
            self.actions_frame.reset_values()
            self.bot_status="off"
            self.vision_status="off"
            
            print(self.onnx_model_box.get_option())
            self.model = self.onnx_model_box.get_option()
            self.net = build_model(self.is_cuda, f"models/{self.model}")          
            print(f"Using: {self.model}")
            
            self.width, self.height = self.game_size_box.get_option().split('x')
            self.wincap = WindowCapture(None, width=int(self.width), height=int(self.height))
            print(f"Game resolution: {self.width}x{self.height}")
            
            self.waiting_time = float(self.waiting_time_frame.get_value())
            print(f"Waiting time: {self.waiting_time}\n")

        

        #Creating Objects
        self.actions_frame = SwitchesFrame(self, name="Actions", text1="Display bot's vision", text2="Gather resources", command_name1 = update_vision_status, command_name2 = update_bot_status)
        self.game_size_box = DropdownFrame(self, name="Select Window Size", text="Game resolution", default="1024x720" , options=["1024x720","1280x720", "1280x1024", "1366x768", "1600x900", "1680x1050", "1920x1080"])
        self.onnx_model_box = DropdownFrame(self, name="Select detection model", text="Onnx model", default="rough_stone.onnx", options=self.models)
        self.update_info_button = ctk.CTkButton(self, text="Save changes", command=update_info)
        self.waiting_time_frame = SingleEntryFrame(self, header_name="EntryFrame1", name="Waiting Time", text="3.5", default=3.5)
        

        #Drawing Objects
        self.actions_frame.grid(row=0, column=0, pady=12, padx=10)
        self.waiting_time_frame.grid(row=1, column=0, pady=12, padx=10)
        self.game_size_box.grid(row=0, column=1, pady=12, padx=10)
        self.onnx_model_box.grid(row=1, column=1, pady=12, padx=10)
        self.update_info_button.grid(row=2, column=0, padx=20, pady=10)



    def bot_gathering(self):
        if self.bot_status=="on":
            go.update(self.centers, True, self.waiting_time, self.screen_center)
        elif self.bot_status=="off":
            go.update([], False, self.waiting_time, self.screen_center)

    def update_screenshot(self):
        #Avoid running inference if there are no actions activated
        if(self.vision_status == "on" or self.bot_status =="on"):
            self.screenshot = self.wincap.get_screenshot()
            self.class_ids, self.confidences, self.boxes, self.class_list = results_objects(self.screenshot, self.net, self.model)
            self.centers = get_center(self.boxes)
            self.frame = results_frame(self.screenshot, self.class_ids, self.confidences, self.boxes, self.class_list)
        
        if (self.vision_status == "on"):
            cv2.imshow("Computer Vision", self.frame)
            cv2.waitKey(1)
            self.after(100,self.update_screenshot)
            self.after(100,self.bot_gathering)
            
        elif self.vision_status == "off":
            cv2.destroyAllWindows()
            self.after(100,self.update_screenshot)
            self.after(100,self.bot_gathering)

            
            
    def on_close(self):
        print("Closing")
        go.stop()
        self.destroy()    



if __name__ == "__main__":

    app = App()
    app.after(100, app.update_screenshot)
    app.mainloop()


================================================
FILE: models/iron.onnx
================================================
[File too large to display: 27.2 MB]

================================================
FILE: models/iron.txt
================================================
iron

================================================
FILE: models/rough_stone.onnx
================================================
[File too large to display: 27.2 MB]

================================================
FILE: models/rough_stone.txt
================================================
rock

================================================
FILE: onnx_detextion.py
================================================
import cv2
import time
import sys
import numpy as np
from window_capture import WindowCapture
import os

def build_model(is_cuda, path="models/custom_yolov5.onnx"):
    net = cv2.dnn.readNet(path)
    if is_cuda:
        print("Attempty to use CUDA")
        net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)
        net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)
    else:
        print("Running on CPU")
        net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
        net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
    return net

INPUT_WIDTH = 640
INPUT_HEIGHT = 640
SCORE_THRESHOLD = 0.2
NMS_THRESHOLD = 0.4
CONFIDENCE_THRESHOLD = 0.4

def detect(image, net):
    blob = cv2.dnn.blobFromImage(image, 1/255.0, (INPUT_WIDTH, INPUT_HEIGHT), swapRB=True, crop=False)
    net.setInput(blob)
    preds = net.forward()
    return preds


def load_classes(classes_name):
    class_list = []
    with open(f"models/{classes_name}", "r") as f:
        class_list = [cname.strip() for cname in f.readlines()]
    return class_list


def wrap_detection(input_image, output_data):
    class_ids = []
    confidences = []
    boxes = []

    rows = output_data.shape[0]

    image_height, image_width, _ = input_image.shape

    x_factor = image_width / INPUT_WIDTH
    y_factor = image_height / INPUT_HEIGHT

    for r in range(rows):
        row = output_data[r]
        confidence = row[4]
        if confidence >= 0.4:

            classes_scores = row[5:]
            _, _, _, max_indx = cv2.minMaxLoc(classes_scores)
            class_id = max_indx[1]
            if (classes_scores[class_id] > .25):

                confidences.append(confidence)

                class_ids.append(class_id)

                x, y, w, h = row[0].item(), row[1].item(), row[2].item(), row[3].item() 
                left = int((x - 0.5 * w) * x_factor)
                top = int((y - 0.5 * h) * y_factor)
                width = int(w * x_factor)
                height = int(h * y_factor)
                box = np.array([left, top, width, height])
                boxes.append(box)

    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.25, 0.45) 

    result_class_ids = []
    result_confidences = []
    result_boxes = []

    for i in indexes:
        result_confidences.append(confidences[i])
        result_class_ids.append(class_ids[i])
        result_boxes.append(boxes[i])

    return result_class_ids, result_confidences, result_boxes

def format_yolov5(frame):

    row, col, _ = frame.shape
    _max = max(col, row)
    result = np.zeros((_max, _max, 3), np.uint8)
    result[0:row, 0:col] = frame
    return result
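# Example: a 480x640 (rows x cols) frame becomes a 640x640 canvas with the
# frame copied into the top-left corner and black padding filling the rest,
# so the 640x640 detector input keeps the original aspect ratio.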




def results_objects(frame, net, model):
    classes = get_classes(model)
    class_list = load_classes(classes)
    inputImage = format_yolov5(frame)
    outs = detect(inputImage, net)

    class_ids, confidences, boxes = wrap_detection(inputImage, outs[0])
    return class_ids, confidences, boxes, class_list

def results_frame(frame, class_ids, confidences, boxes, class_list):
    colors = [(255, 255, 0), (0, 255, 0), (0, 255, 255), (255, 0, 0)]
    for (classid, confidence, box) in zip(class_ids, confidences, boxes):
         color = colors[int(classid) % len(colors)]
         cv2.rectangle(frame, box, color, 2)
         cv2.rectangle(frame, (box[0], box[1] - 20), (box[0] + box[2], box[1]), color, -1)
         cv2.putText(frame, class_list[classid], (box[0], box[1] - 10), cv2.FONT_HERSHEY_SIMPLEX, .5, (0,0,0))
    return frame


def get_files_in_folder():
    folder_path = "models"
    file_names = []
    for filename in os.listdir(folder_path):
        if os.path.isfile(os.path.join(folder_path, filename)):
            file_names.append(filename)
    return file_names

def get_classes(model):
    class_name = model.split(".")[0]
    classes = f"{class_name}.txt"
    return classes

def filter_models(file_names):
    onnx_models = []
    for file in file_names:
        if file.split(".")[-1] == "onnx":
            onnx_models.append(file)    
    return onnx_models
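# Example:
#   filter_models(["iron.onnx", "iron.txt", "rough_stone.onnx"])
#   -> ["iron.onnx", "rough_stone.onnx"]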

================================================
FILE: requirements.txt
================================================
# Usage: pip install -r requirements.txt
# IMPORTANT: Use conda to install pywin32 if you run into errors with the package
# Note: If you have issues with the win32 packages you can install them individually with: pip install win32gui win32ui win32con
# Base ----------------------------------------
opencv-python
customtkinter
pyautogui
pywin32


================================================
FILE: window_capture.py
================================================
import numpy as np
import win32gui, win32ui, win32con

class WindowCapture:
    
    #Properties
    w = 0
    h = 0
    hwnd = None
    offset_x = 0
    offset_y = 0


    def __init__(self, window_name=None, width=1024, height=768):
        
        if window_name is None:
            self.hwnd = win32gui.GetDesktopWindow()
        else:
        #Call specific window to capture
            self.hwnd = win32gui.FindWindow(None, window_name)
            if not self.hwnd:
                raise Exception("Window not found: {}".format(window_name))
        
        #Define monitor dimensions
        self.w = width #1366
        self.h = height #768
    
    def get_screenshot(self):
        #bmpfilenamename = "out.bmp" #set this

        wDC = win32gui.GetWindowDC(self.hwnd)
        dcObj = win32ui.CreateDCFromHandle(wDC)
        cDC = dcObj.CreateCompatibleDC()
        dataBitMap = win32ui.CreateBitmap()
        dataBitMap.CreateCompatibleBitmap(dcObj, self.w, self.h)
        cDC.SelectObject(dataBitMap)
        cDC.BitBlt((0,0),(self.w, self.h) , dcObj, (0,0), win32con.SRCCOPY)

        signedIntsArray = dataBitMap.GetBitmapBits(True)
        # np.fromstring is deprecated; frombuffer reads the raw BGRA bytes
        img = np.frombuffer(signedIntsArray, dtype="uint8")
        img.shape = (self.h, self.w, 4)
        #save screenshot
        #dataBitMap.SaveBitmapFile(cDC, bmpfilenamename)

        # Free Resources
        dcObj.DeleteDC()
        cDC.DeleteDC()
        win32gui.ReleaseDC(self.hwnd, wDC)
        win32gui.DeleteObject(dataBitMap.GetHandle())
        
        img = img[...,:3]
        img = np.ascontiguousarray(img)
        
        return img
    
    @staticmethod
    def list_window_names():
        def winEnumHandler(hwnd, ctx):
            if win32gui.IsWindowVisible(hwnd):
                print(hex(hwnd), win32gui.GetWindowText(hwnd))
        win32gui.EnumWindows(winEnumHandler, None)

    def get_screen_position(self, pos):
        return (pos[0] + self.offset_x, pos[1] + self.offset_y)
SYMBOL INDEX (48 symbols across 5 files)

FILE: bot_thread.py
  function get_center (line 9) | def get_center(rectangles):
  function get_rectangles (line 19) | def get_rectangles(results):
  class Move (line 27) | class Move:
    method __init__ (line 28) | def __init__(self):
    method nearest_object (line 34) | def nearest_object(self, screen_center):
    method go_to (line 49) | def go_to(self, waiting_time, screen_center):
    method start (line 91) | def start(self):
    method update (line 97) | def update(self, centers, bot_status, waiting_time, screen_center):
    method stop (line 108) | def stop(self):
    method run (line 113) | def run(self):

FILE: gadgets.py
  function if_empty (line 4) | def if_empty(value, default):
  class SwitchesFrame (line 12) | class SwitchesFrame(ctk.CTkFrame):
    method __init__ (line 13) | def __init__(self, *args, header_name="TestFrame", name, text1, text2,...
    method get_state1 (line 31) | def get_state1(self):
    method get_state2 (line 35) | def get_state2(self):
    method reset_values (line 39) | def reset_values(self):
  class DoubleEntryFrame (line 45) | class DoubleEntryFrame(ctk.CTkFrame):
    method __init__ (line 46) | def __init__(self, *args, header_name="TestFrame", name, text1, text2,...
    method _validate_int (line 66) | def _validate_int(self, value):
    method get_value1 (line 77) | def get_value1(self):
    method get_value2 (line 82) | def get_value2(self):
  class SingleEntryFrame (line 89) | class SingleEntryFrame(ctk.CTkFrame):
    method __init__ (line 90) | def __init__(self, *args, header_name="TestFrame", name, text, default...
    method _validate_int (line 104) | def _validate_int(self, value):
    method get_value (line 115) | def get_value(self):
  class DropdownFrame (line 121) | class DropdownFrame(ctk.CTkFrame):
    method __init__ (line 122) | def __init__(self, *args, header_name="TestFrame", name, text, default...
    method get_option (line 145) | def get_option(self):

FILE: main.py
  class App (line 19) | class App(ctk.CTk):
    method __init__ (line 20) | def __init__(self):
    method bot_gathering (line 95) | def bot_gathering(self):
    method update_screenshot (line 101) | def update_screenshot(self):
    method on_close (line 122) | def on_close(self):

FILE: onnx_detextion.py
  function build_model (line 8) | def build_model(is_cuda, path="models/custom_yolov5.onnx"):
  function detect (line 26) | def detect(image, net):
  function load_classes (line 33) | def load_classes(classes_name):
  function wrap_detection (line 40) | def wrap_detection(input_image, output_data):
  function format_yolov5 (line 87) | def format_yolov5(frame):
  function results_objects (line 98) | def results_objects(frame, net, model):
  function results_frame (line 107) | def results_frame(frame, class_ids, confidences, boxes, class_list):
  function get_files_in_folder (line 117) | def get_files_in_folder():
  function get_classes (line 125) | def get_classes(model):
  function filter_models (line 130) | def filter_models(file_names):

FILE: window_capture.py
  class WindowCapture (line 4) | class WindowCapture:
    method __init__ (line 14) | def __init__(self, window_name=None, width=1024, height=768):
    method get_screenshot (line 28) | def get_screenshot(self):
    method list_window_names (line 57) | def list_window_names():
    method get_screen_position (line 63) | def get_screen_position(self, pos):


Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.