[
  {
    "path": "README.md",
    "content": "\n# The-Gatherer-2.0\nYou can still access the previous version by changing the branch on Github.\n\nThis was made using YOLOv5 and OpenCV. The model is now comaptible with CPU and GPU and it detects automaticly the best option for your computer. You can load your custom YOLO models (exported to **ONNX**) by using your own .onnx file.\n\nOn how to use your own data set to train a custom model, for now I'ld recomend following this tutorial for custom training. [https://www.youtube.com/watch?v=GRtgLlwxpc4](https://www.youtube.com/watch?v=GRtgLlwxpc4). I'm also uploading very soon a simple guide to export your .pt model to .onnx so it can run in this version of the bot.\n\nIf you want to train and export your own custom onnx model you can follow the steps that are set up in the following Google Colab: https://colab.research.google.com/drive/19kVzBERhRwB1jywcKeJ3dALARNd5-dR7?usp=sharing\n\nYou can check out the rewritten version on C++ (with no UI) that also uses Onnx to run inference here: https://github.com/Riczap/The-Gatherer-Cpp\n\n- **Known Issue:** Trying to move the window of The Gatherer 2 while having the Bot Vision activated, will crash the program. (You can move the command prompt at any time without issues)\n\n - **Note:** The Bot and the Vision are independent, you can have the bot running without the Computer Vision function activated. The model is running on the background whenever you activate either of them.\n \n - **Note:** All of the parameters have default values, so you can leve them blank and it'll work fine.\n\n- Training Data (Demo): https://drive.google.com/drive/u/2/folders/17X_f17WpzoxHMURSj5QIZ4lMUWPImf5V\n\n- Showcase: Note that the following video is of the previous 1.0 version with an outdated GUI. 
[https://www.youtube.com/watch?v=y669rc18ia4](https://www.youtube.com/watch?v=y669rc18ia4)\n![Showcase](https://user-images.githubusercontent.com/77018982/230541525-271eea09-be75-47e8-be8f-6c8bb133668a.PNG)\n\n## New Features\nI've implemented some quality of life updates so it's easier to use for general purposes.\n- You can manually select the resolution that your game is currently using\n\n![Resolution](https://user-images.githubusercontent.com/77018982/230542772-f769a8ff-7da7-4b67-9fbb-f76bdfd8fa6f.PNG)\n- You can add and select your custom models with a drop-down menu\n\n![Models](https://user-images.githubusercontent.com/77018982/230542819-248199e9-3c06-4323-b472-fce2487b5446.PNG)\n- You can now input your desired waiting time between the actions of the bot.\n\nJust remember to click the **Save changes** button after you've selected your custom parameters.\n![Save](https://user-images.githubusercontent.com/77018982/230543242-8bdbd567-e4e6-493d-bb11-cf7b62abba1e.PNG)\n\n\n## Installation\nTo use the new version of The Gatherer, you can install the dependencies either in your main Python environment or using Anaconda, or run it as an executable file.\n\n- Download Tutorial: https://www.youtube.com/watch?v=dljCXzuKTKo\n### Python\n 1. Clone the repository on GitHub (download the files).\n 2. Open a console terminal and run the following command to install all of the dependencies: `pip install -r requirements.txt`\n### Conda\n 1. Clone the repository on GitHub (download the files).\n 2. Install Anaconda: [https://www.anaconda.com/products/distribution](https://www.anaconda.com/products/distribution)\n 3. Create an environment using the following command in the Anaconda prompt: `conda create -n myenv` (you can choose any name you want for the env)\n 4. Activate the environment using `conda activate myenv` and navigate to the directory where you downloaded the source code for the bot. 
Run the following line to install all of the dependencies: `pip install -r requirements.txt`\n 5. Now you can run the **main.py** file through the conda environment using `python main.py`\n### Executable\n 1. Download and extract the zip file: https://drive.google.com/file/d/1HImNmd06msfE_RuhBxIzT-rLlXL6LCa5/view?usp=share_link\n 2. Right-click and create a shortcut of the **The Gatherer 2.exe** file and move it to your desired location\n 3. Remember that you'll need to access the **models** directory to add new custom models.\n\n## How to Add a Custom Model\n### Exporting to ONNX\n I'm also finishing up a video tutorial explaining how to export your custom models. In the meantime, here is a step-by-step guide:\n- Open the Google Colab link and follow the steps: https://colab.research.google.com/drive/1uJMeZP4QbSVuA5TNkfeXQDIpFDHVTeAB?usp=share_link\n ### Adding the model to The Gatherer 2.0\n 1. Once you have your custom model as a .onnx file, create a text file with a name matching your model's name that contains the names of your custom classes (one per line).\n \n ![Text File](https://user-images.githubusercontent.com/77018982/230546123-b4ef79b7-b65a-42ce-be44-0ad4ee847e22.PNG)\n \n 2. Move both files into the **models** directory.\n\nFeel free to use the code for your own projects!\n\nIf you have any issues and need assistance, send me a message or post on:\nDiscord: WanderingEye#0330\nForum: https://www.unknowncheats.me/forum/usercp.php\n\n"
  },
  {
    "path": "bot_thread.py",
    "content": "import threading\nfrom time import sleep\nimport pyautogui\nimport random\nimport math\nimport cv2\nimport numpy\n\ndef get_center(rectangles):\n    centers = []\n    for i in rectangles:\n        x = int((i[0]+(i[0]+i[2]))/2)\n        y = int((i[1]+(i[1]+i[3]))/2)\n        centers.append([x, y])\n        #print(centers)\n    return centers\n\n\ndef get_rectangles(results):\n    rectangles = []\n    x = results.xyxy[0].tolist()\n    for i in x:\n        rectangles.append(i[:-2])\n    return rectangles\n\n\nclass Move:\n\tdef __init__(self):\n\t\t#Lock the thread\n\t\tself.lock = threading.Lock()\n\n\n\t\t\n\tdef nearest_object(self, screen_center):\n\t\tdictionary = {}\n\t\tfor i in range(len(self.centers)):\n\t\t\tobject_location = self.centers[i]\n\t\t\t#Calculate distance between an object and the character\n\t\t\tdistance = math.dist(object_location, screen_center)\n\t\t\t#Add result to dictionary\n\t\t\tdictionary[i] = distance \n\n\t\t#print(dictionary)\n\t\tsort_dictionary = sorted(dictionary, key=dictionary.get, reverse=False)\n\t\tclosest_object = self.centers[sort_dictionary[0]]\n\t\treturn closest_object\n\n\t#Move player to mine rock\n\tdef go_to(self, waiting_time, screen_center):\n\t\tb = 0\n\t\trand_pos = [[660, 500], [424, 226]]\n\n\t\tif len(self.centers)>0:\n\t\t\tprint(f\"Waiting {waiting_time} seconds\")\n\t\t\t#Find the closest object to the player\n\t\t\tclosest_object = self.nearest_object(screen_center)\n\t\t\tprint(closest_object)\n\t\t\t\n\t\t\t#Display moving position\n\t\t\tprint(f\"Moving to: {closest_object[0]}, {closest_object[1]}\")\n\n\t\t\t#Action 1 (Move mouse)\n\t\t\tpyautogui.moveTo(closest_object[0],closest_object[1],duration=0.5)\n\t\t\tprint(\"click\\n\")\n\t\t\t#Action 2 (Movement click)\n\t\t\tpyautogui.click(button=\"left\")\n\t\t\tsleep(waiting_time)\n\n\t\telse:\n\t\t\tprint(f\"Waiting 2.5 seconds\")\n\t\t\tprint(\"No results\")\n\t\t\t#Choose \"random\" position to move\n\t\t\tb = 
random.randint(0,1)\n\t\t\t\n\t\t\t#Display moving position\n\t\t\tprint(f\"Stuck, moving to: {rand_pos[b][0]}, {rand_pos[b][1]}\")\n\t\t\t\n\t\t\t#Action 1 (Move mouse)\n\t\t\tpyautogui.moveTo(rand_pos[b][0],rand_pos[b][1],duration=0.5)\n\t\t\tprint(\"click\\n\")\n\n\t\t\t\n\t\t\t#Action 2 (Movement click)\n\t\t\tpyautogui.click(button=\"left\")\n\t\t\tsleep(2.5)\n\t\t\t\n\t\t\t#Reset var\n\t\t\tb = 0\n\n    #Thread Functions\n\tdef start(self):\n\t\tself.stopped = False\n\t\tself.state = 0\n\t\tself.t = threading.Thread(target=self.run)\n\t\tself.t.start()\n    \n\tdef update(self, centers, bot_status, waiting_time, screen_center):\n\t\tself.screen_center = screen_center\n\t\tself.waiting_time = waiting_time\n\t\t#print(f\"Bot Status Thread: {bot_status}\")\n\t\tif bot_status==True:\n\t\t\tself.state = 1\n\t\telif bot_status==False:\n\t\t\tself.state = 0\n\t\tself.centers = centers\n\n\n\tdef stop(self):\n\t\tself.stopped = True\n\t\tprint(\"Terminating...\")\n\n\n\tdef run(self):\n\t\twhile not self.stopped:\n\t\t\tif self.state == 0:\n\t\t\t\tsleep(3)\n\n\t\t\telif self.state == 1:\n\t\t\t\tself.lock.acquire()\n\t\t\t\tself.go_to(self.waiting_time, self.screen_center)\n\t\t\t\tself.lock.release()\n"
  },
  {
    "path": "gadgets.py",
    "content": "import customtkinter as ctk\nimport tkinter\n\ndef if_empty(value, default):\n    if(value == \"\"):\n        value = float(default)\n    else:\n        pass\n    return value\n\n\nclass SwitchesFrame(ctk.CTkFrame):\n    def __init__(self, *args, header_name=\"TestFrame\", name, text1, text2, command_name1, command_name2 , **kwargs):\n        super().__init__(*args, **kwargs)\n\n\n        # Add a label to the new entry frame\n        self.label1 = ctk.CTkLabel(self, text=name, anchor=\"center\")\n        self.label1.grid(row=0, column=0, columnspan=1, pady=(10, 0), padx=10, sticky=\"n\")\n\n        # Add two entries to the new entry frame\n        self.switch_var1 = ctk.StringVar(value=\"off\")\n        self.switch1 = ctk.CTkSwitch(self, text=text1, variable=self.switch_var1, onvalue=\"on\", offvalue=\"off\", command=command_name1)\n        self.switch1.grid(row=1, column=0, pady=12, padx=10)\n\n        self.switch_var2 = ctk.StringVar(value=\"off\")\n        self.switch2 = ctk.CTkSwitch(self, text=text2, variable=self.switch_var2, onvalue=\"on\", offvalue=\"off\", command=command_name2)\n        self.switch2.grid(row=2, column=0, pady=12, padx=10)\n\n\n    def get_state1(self):\n        \"\"\" returns on or off\"\"\"\n        return self.switch_var1.get()\n\n    def get_state2(self):\n        \"\"\" returns on or off\"\"\"\n        return self.switch_var2.get()\n\n    def reset_values(self):\n        self.switch1.deselect()\n        self.switch2.deselect()\n\n\n\nclass DoubleEntryFrame(ctk.CTkFrame):\n    def __init__(self, *args, header_name=\"TestFrame\", name, text1, text2, default1, default2, **kwargs):\n        super().__init__(*args, **kwargs)\n\n        self.default1 = default1\n        self.default2 = default2\n\n        # Add a label to the new entry frame\n        self.label1 = ctk.CTkLabel(self, text=name, anchor=\"center\")\n        self.label1.grid(row=0, column=0, columnspan=2, pady=(10, 0), padx=10, sticky=\"n\")\n\n        # Define 
the validation function to allow only integers\n        validate_int = (self.register(self._validate_int), '%P')\n\n        # Add two entries to the new entry frame\n        self.entry1 = ctk.CTkEntry(self, width=50, placeholder_text=text1, validate=\"key\", validatecommand=validate_int)\n        self.entry1.grid(row=1, column=0, pady=12, padx=1)\n\n        self.entry2 = ctk.CTkEntry(self, width=50, placeholder_text=text2, validate=\"key\", validatecommand=validate_int)\n        self.entry2.grid(row=1, column=1, pady=12, padx=10)\n\n    def _validate_int(self, value):\n        \"\"\"Validate that the input value is an integer.\"\"\"\n        if not value:\n            return True\n\n        try:\n            int(value)\n            return True\n        except ValueError:\n            return False\n\n    def get_value1(self):\n        \"\"\" Returns the entry value, or the default if the entry is empty \"\"\"\n        self.value1 = if_empty(self.entry1.get(), self.default1)\n        return self.value1\n\n    def get_value2(self):\n        \"\"\" Returns the entry value, or the default if the entry is empty \"\"\"\n        self.value2 = if_empty(self.entry2.get(), self.default2)\n        return self.value2\n\n\nclass SingleEntryFrame(ctk.CTkFrame):\n    def __init__(self, *args, header_name=\"TestFrame\", name, text, default, **kwargs):\n        super().__init__(*args, **kwargs)\n        self.default = default\n        # Add a label to the new entry frame\n        self.label1 = ctk.CTkLabel(self, text=name, anchor=\"center\")\n        self.label1.grid(row=0, column=0, columnspan=2, pady=(10, 0), padx=10, sticky=\"n\")\n\n        # Define the validation function to allow only numbers\n        validate_int = (self.register(self._validate_int), '%P')\n\n        # Add a single entry to the new entry frame\n        self.entry = ctk.CTkEntry(self, width=50, placeholder_text=text, validate=\"key\", validatecommand=validate_int)\n        self.entry.grid(row=1, column=0, pady=12, padx=1)\n\n    def _validate_int(self, value):\n        \"\"\"Validate that the input value is a number.\"\"\"\n        if not value:\n            return True\n\n        try:\n            float(value)\n            return True\n        except ValueError:\n            return False\n\n    def get_value(self):\n        \"\"\" Returns the entry value, or the default if the entry is empty \"\"\"\n        self.value = if_empty(self.entry.get(), self.default)\n        return self.value\n\n\nclass DropdownFrame(ctk.CTkFrame):\n    def __init__(self, *args, header_name=\"TestFrame\", name, text, default, options, **kwargs):\n        super().__init__(*args, **kwargs)\n\n        # Add a label to the new entry frame\n        self.label1 = ctk.CTkLabel(self, text=name, anchor=\"center\")\n        self.label1.grid(row=0, column=0, columnspan=1, pady=(10, 0), padx=10, sticky=\"n\")\n\n        self.combobox_var = ctk.StringVar(value=text)  # set initial value\n        self.option = \"\"\n        self.default = default\n\n        def combobox_callback(choice):\n            self.option = choice\n\n        self.combobox = ctk.CTkComboBox(self, values=options, command=combobox_callback, variable=self.combobox_var)\n        self.combobox.grid(row=1, column=0, pady=12, padx=10)\n\n    def get_option(self):\n        if self.option == \"\":\n            self.option = self.default\n        return self.option\n"
  },
  {
    "path": "main.py",
    "content": "import customtkinter as ctk\nfrom gadgets import *\nfrom onnx_detextion import *\nfrom window_capture import WindowCapture\nfrom bot_thread import *\nimport cv2\nimport sys\n\n#Move Albion Client to the corner of the screen\n\n#Bot thread\ngo = Move()\ngo.start()\n\n# Set custom theme\nctk.set_appearance_mode(\"dark\")\nctk.set_default_color_theme(\"dark-blue\")\n\nclass App(ctk.CTk):\n    def __init__(self):\n        super().__init__()\n\n        self.models = filter_models(get_files_in_folder())\n        self.model = \"rough_stone.onnx\"\n        self.is_cuda = len(sys.argv) > 1 and sys.argv[1] == \"cuda\"\n        self.net = build_model(self.is_cuda, f\"models/{self.model}\")\n\n        self.resolution = \"1024x720\"\n        self.waiting_time = 3.5\n        self.width, self.height = self.resolution.split('x')\n        self.screen_center = [int(self.width)/2, int(self.height)/2]\n        self.lock = threading.Lock()\n\n        self.vision_status = \"off\"\n        self.bot_status = \"off\"\n        self.wincap = WindowCapture(None)\n\n        self.class_ids, self.confidences, self.boxes, self.class_list, self.centers = [], [], [], [], []\n        \n\n        self.geometry(\"370x320\")\n        self.title(\"The Gatherer 2.0 - Wandering Eye\")\n        self.iconbitmap(\"wanderingeye.ico\")\n\n        self.protocol(\"WM_DELETE_WINDOW\", self.on_close)\n\n\n        def update_vision_status():\n            self.vision_status = self.actions_frame.get_state1()\n            print(\"Vision status updated to:\", self.vision_status)\n\n\n        def update_bot_status():\n            self.bot_status = self.actions_frame.get_state2()\n            print(\"Bot status updated to:\", self.bot_status)\n                \n\n        def update_info():\n            cv2.destroyAllWindows()\n            self.actions_frame.reset_values()\n            self.bot_status=\"off\"\n            self.vision_status=\"off\"\n            \n            
print(self.onnx_model_box.get_option())\n            self.model = self.onnx_model_box.get_option()\n            self.net = build_model(self.is_cuda, f\"models/{self.model}\")\n            print(f\"Using: {self.model}\")\n            \n            self.width, self.height = self.game_size_box.get_option().split('x')\n            #Recompute the screen center so the bot targets the new resolution\n            self.screen_center = [int(self.width)/2, int(self.height)/2]\n            self.wincap = WindowCapture(None, width=int(self.width), height=int(self.height))\n            print(f\"Game resolution: {self.width}x{self.height}\")\n            \n            self.waiting_time = float(self.waiting_time_frame.get_value())\n            print(f\"Waiting time: {self.waiting_time}\\n\")\n\n        \n\n        #Creating Objects\n        self.actions_frame = SwitchesFrame(self, name=\"Actions\", text1=\"Display bot's vision\", text2=\"Gather resources\", command_name1 = update_vision_status, command_name2 = update_bot_status)\n        self.game_size_box = DropdownFrame(self, name=\"Select Window Size\", text=\"Game resolution\", default=\"1024x720\" , options=[\"1024x720\",\"1280x720\", \"1280x1024\", \"1366x768\", \"1600x900\", \"1680x1050\", \"1920x1080\"])\n        self.onnx_model_box = DropdownFrame(self, name=\"Select detection model\", text=\"Onnx model\", default=\"rough_stone.onnx\", options=self.models)\n        self.update_info_button = ctk.CTkButton(self, text=\"Save changes\", command=update_info)\n        self.waiting_time_frame = SingleEntryFrame(self, header_name=\"EntryFrame1\", name=\"Waiting Time\", text=\"3.5\", default=3.5)\n        \n\n        #Drawing Objects\n        self.actions_frame.grid(row=0, column=0, pady=12, padx=10)\n        self.waiting_time_frame.grid(row=1, column=0, pady=12, padx=10)\n        self.game_size_box.grid(row=0, column=1, pady=12, padx=10)\n        self.onnx_model_box.grid(row=1, column=1, pady=12, padx=10)\n        self.update_info_button.grid(row=2, column=0, padx=20, pady=10)\n\n\n\n    def bot_gathering(self):\n        if self.bot_status==\"on\":\n            go.update(self.centers, True, self.waiting_time, self.screen_center)\n        elif self.bot_status==\"off\":\n            go.update([], False, self.waiting_time, self.screen_center)\n\n    def update_screenshot(self):\n        #Avoid running inference if there are no actions activated\n        if(self.vision_status == \"on\" or self.bot_status ==\"on\"):\n            self.screenshot = self.wincap.get_screenshot()\n            self.class_ids, self.confidences, self.boxes, self.class_list = results_objects(self.screenshot, self.net, self.model)\n            self.centers = get_center(self.boxes)\n            self.frame = results_frame(self.screenshot, self.class_ids, self.confidences, self.boxes, self.class_list)\n        \n        if (self.vision_status == \"on\"):\n            cv2.imshow(\"Computer Vision\", self.frame)\n            cv2.waitKey(1)\n            self.after(100,self.update_screenshot)\n            self.after(100,self.bot_gathering)\n            \n        elif self.vision_status == \"off\":\n            cv2.destroyAllWindows()\n            self.after(100,self.update_screenshot)\n            self.after(100,self.bot_gathering)\n\n\n    def on_close(self):\n        print(\"Closing\")\n        go.stop()\n        self.destroy()\n\n\n\nif __name__ == \"__main__\":\n\n    app = App()\n    app.after(100, app.update_screenshot)\n    app.mainloop()\n"
  },
  {
    "path": "models/iron.txt",
    "content": "iron"
  },
  {
    "path": "models/rough_stone.txt",
    "content": "rock"
  },
  {
    "path": "onnx_detextion.py",
    "content": "import cv2\nimport time\nimport sys\nimport numpy as np\nfrom window_capture import WindowCapture\nimport os\n\ndef build_model(is_cuda, path=\"models/custom_yolov5.onnx\"):\n    net = cv2.dnn.readNet(path)\n    if is_cuda:\n        print(\"Attempty to use CUDA\")\n        net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)\n        net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)\n    else:\n        print(\"Running on CPU\")\n        net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)\n        net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)\n    return net\n\nINPUT_WIDTH = 640\nINPUT_HEIGHT = 640\nSCORE_THRESHOLD = 0.2\nNMS_THRESHOLD = 0.4\nCONFIDENCE_THRESHOLD = 0.4\n\ndef detect(image, net):\n    blob = cv2.dnn.blobFromImage(image, 1/255.0, (INPUT_WIDTH, INPUT_HEIGHT), swapRB=True, crop=False)\n    net.setInput(blob)\n    preds = net.forward()\n    return preds\n\n\ndef load_classes(classes_name):\n    class_list = []\n    with open(f\"models/{classes_name}\", \"r\") as f:\n        class_list = [cname.strip() for cname in f.readlines()]\n    return class_list\n\n\ndef wrap_detection(input_image, output_data):\n    class_ids = []\n    confidences = []\n    boxes = []\n\n    rows = output_data.shape[0]\n\n    image_width, image_height, _ = input_image.shape\n\n    x_factor = image_width / INPUT_WIDTH\n    y_factor =  image_height / INPUT_HEIGHT\n\n    for r in range(rows):\n        row = output_data[r]\n        confidence = row[4]\n        if confidence >= 0.4:\n\n            classes_scores = row[5:]\n            _, _, _, max_indx = cv2.minMaxLoc(classes_scores)\n            class_id = max_indx[1]\n            if (classes_scores[class_id] > .25):\n\n                confidences.append(confidence)\n\n                class_ids.append(class_id)\n\n                x, y, w, h = row[0].item(), row[1].item(), row[2].item(), row[3].item() \n                left = int((x - 0.5 * w) * x_factor)\n                top = int((y - 0.5 * h) * y_factor)\n       
         width = int(w * x_factor)\n                height = int(h * y_factor)\n                box = np.array([left, top, width, height])\n                boxes.append(box)\n\n    indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.25, 0.45) \n\n    result_class_ids = []\n    result_confidences = []\n    result_boxes = []\n\n    for i in indexes:\n        result_confidences.append(confidences[i])\n        result_class_ids.append(class_ids[i])\n        result_boxes.append(boxes[i])\n\n    return result_class_ids, result_confidences, result_boxes\n\ndef format_yolov5(frame):\n\n    row, col, _ = frame.shape\n    _max = max(col, row)\n    result = np.zeros((_max, _max, 3), np.uint8)\n    result[0:row, 0:col] = frame\n    return result\n\n\n\n\ndef results_objects(frame, net, model):\n    classes = get_classes(model)\n    class_list = load_classes(classes)\n    inputImage = format_yolov5(frame)\n    outs = detect(inputImage, net)\n\n    class_ids, confidences, boxes = wrap_detection(inputImage, outs[0])\n    return class_ids, confidences, boxes, class_list\n\ndef results_frame(frame, class_ids, confidences, boxes, class_list):\n    colors = [(255, 255, 0), (0, 255, 0), (0, 255, 255), (255, 0, 0)]\n    for (classid, confidence, box) in zip(class_ids, confidences, boxes):\n         color = colors[int(classid) % len(colors)]\n         cv2.rectangle(frame, box, color, 2)\n         cv2.rectangle(frame, (box[0], box[1] - 20), (box[0] + box[2], box[1]), color, -1)\n         cv2.putText(frame, class_list[classid], (box[0], box[1] - 10), cv2.FONT_HERSHEY_SIMPLEX, .5, (0,0,0))\n    return frame\n\n\ndef get_files_in_folder():\n    folder_path = \"models\"\n    file_names = []\n    for filename in os.listdir(folder_path):\n        if os.path.isfile(os.path.join(folder_path, filename)):\n            file_names.append(filename)\n    return file_names\n\ndef get_classes(model):\n    class_name = model.split(\".\")[0]\n    classes = f\"{class_name}.txt\"\n    return classes\n\ndef 
filter_models(file_names):\n    onnx_models = []\n    for file in file_names:\n        if file.split(\".\")[-1] == \"onnx\":\n            onnx_models.append(file)    \n    return onnx_models"
  },
  {
    "path": "requirements.txt",
    "content": "# Usage: pip install -r requirements.txt\n# IMPORTANT: Use conda to install pywin32 in case of having errors with the package\n# Note: If you have issues with the win32 packages you can install them individualy with: pip install win32gui, win32ui, win32con\n# Base ----------------------------------------\nopencv-python\ncustomtkinter\npyautogui\npywin32\n"
  },
  {
    "path": "window_capture.py",
    "content": "import numpy as np\nimport win32gui, win32ui, win32con\n\nclass WindowCapture:\n    \n    #Properties\n    w = 0\n    h = 0\n    hwnd = None\n    offset_x = 0\n    offset_y = 0\n\n\n    def __init__(self, window_name=None, width=1024, height=768):\n        \n        if window_name is None:\n            self.hwnd = win32gui.GetDesktopWindow()\n        else:\n        #Call specific window to capture\n            self.hwnd = win32gui.FindWindow(None, window_name)\n            if not self.hwnd:\n                raise Exception(\"Window not found: {}\".format(window_name))\n        \n        #Define monitor dimentions\n        self.w = width #1366\n        self.h = height #768\n    \n    def get_screenshot(self):\n        #bmpfilenamename = \"out.bmp\" #set this\n\n        wDC = win32gui.GetWindowDC(self.hwnd)\n        dcObj = win32ui.CreateDCFromHandle(wDC)\n        cDC = dcObj.CreateCompatibleDC()\n        dataBitMap = win32ui.CreateBitmap()\n        dataBitMap.CreateCompatibleBitmap(dcObj, self.w, self.h)\n        cDC.SelectObject(dataBitMap)\n        cDC.BitBlt((0,0),(self.w, self.h) , dcObj, (0,0), win32con.SRCCOPY)\n\n        signedIntsArray = dataBitMap.GetBitmapBits(True)\n        img = np.fromstring(signedIntsArray, dtype = \"uint8\")\n        img.shape = (self.h,self.w,4)\n        #save screenshot\n        #dataBitMap.SaveBitmapFile(cDC, bmpfilenamename)\n\n        # Free Resources\n        dcObj.DeleteDC()\n        cDC.DeleteDC()\n        win32gui.ReleaseDC(self.hwnd, wDC)\n        win32gui.DeleteObject(dataBitMap.GetHandle())\n        \n        img = img[...,:3]\n        img = np.ascontiguousarray(img)\n        \n        return img\n    \n    @staticmethod\n    def list_window_names():\n        def winEnumHandler(hwnd, ctx):\n            if win32gui.IsWindowVisible(hwnd):\n                print(hex(hwnd), win32gui.GetWindowText(hwnd))\n        win32gui.EnumWindows(winEnumHandler, None)\n\n    def get_screen_position(self, pos):\n        
return (pos[0] + self.offset_x, pos[1] + self.offset_y)\n"
  }
]