[
  {
    "path": ".dockerignore",
    "content": ".cache/\ncudnn_windows/\nbitsandbytes_windows/\nbitsandbytes_windows_deprecated/\ndataset/\n__pycache__/\nvenv/\n**/.hadolint.yml\n**/*.log\n**/.git\n**/.gitignore\n**/.env\n**/.github\n**/.vscode\n**/*.ps1\nsd-scripts/"
  },
  {
    "path": "Dockerfile",
    "content": "# Base image with CUDA 12.2\nFROM nvidia/cuda:12.2.2-base-ubuntu22.04\n\n# Install pip if not already installed\nRUN apt-get update -y && apt-get install -y \\\n    python3-pip \\\n    python3-dev \\\n    git \\\n    build-essential  # Install dependencies for building extensions\n\n# Define environment variables for UID and GID and local timezone\nENV PUID=${PUID:-1000}\nENV PGID=${PGID:-1000}\n\n# Create a group with the specified GID\nRUN groupadd -g \"${PGID}\" appuser\n# Create a user with the specified UID and GID\nRUN useradd -m -s /bin/sh -u \"${PUID}\" -g \"${PGID}\" appuser\n\nWORKDIR /app\n\n# Get sd-scripts from kohya-ss and install them\nRUN git clone -b sd3 https://github.com/kohya-ss/sd-scripts && \\\n    cd sd-scripts && \\\n    pip install --no-cache-dir -r ./requirements.txt\n\n# Install main application dependencies\nCOPY ./requirements.txt ./requirements.txt\nRUN pip install --no-cache-dir -r ./requirements.txt\n\n# Install Torch, Torchvision, and Torchaudio for CUDA 12.2\nRUN pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu122/torch_stable.html\n\nRUN chown -R appuser:appuser /app\n\n# delete redundant requirements.txt and sd-scripts directory within the container\nRUN rm -r ./sd-scripts\nRUN rm ./requirements.txt\n\n#Run application as non-root\nUSER appuser\n\n# Copy fluxgym application code\nCOPY . ./fluxgym\n\nEXPOSE 7860\n\nENV GRADIO_SERVER_NAME=\"0.0.0.0\"\n\nWORKDIR /app/fluxgym\n\n# Run fluxgym Python application\nCMD [\"python3\", \"./app.py\"]"
  },
  {
    "path": "Dockerfile.cuda12.4",
    "content": "# Base image with CUDA 12.4\nFROM nvidia/cuda:12.4.1-base-ubuntu22.04\n\n# Install pip if not already installed\nRUN apt-get update -y && apt-get install -y \\\n    python3-pip \\\n    python3-dev \\\n    git \\\n    build-essential  # Install dependencies for building extensions\n\n# Define environment variables for UID and GID and local timezone\nENV PUID=${PUID:-1000}\nENV PGID=${PGID:-1000}\n\n# Create a group with the specified GID\nRUN groupadd -g \"${PGID}\" appuser\n# Create a user with the specified UID and GID\nRUN useradd -m -s /bin/sh -u \"${PUID}\" -g \"${PGID}\" appuser\n\nWORKDIR /app\n\n# Get sd-scripts from kohya-ss and install them\nRUN git clone -b sd3 https://github.com/kohya-ss/sd-scripts && \\\n    cd sd-scripts && \\\n    pip install --no-cache-dir -r ./requirements.txt\n\n# Install main application dependencies\nCOPY ./requirements.txt ./requirements.txt\nRUN pip install --no-cache-dir -r ./requirements.txt\n\n# Install Torch, Torchvision, and Torchaudio for CUDA 12.4\nRUN pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124\n\nRUN chown -R appuser:appuser /app\n\n# delete redundant requirements.txt and sd-scripts directory within the container\nRUN rm -r ./sd-scripts\nRUN rm ./requirements.txt\n\n#Run application as non-root\nUSER appuser\n\n# Copy fluxgym application code\nCOPY . ./fluxgym\n\nEXPOSE 7860\n\nENV GRADIO_SERVER_NAME=\"0.0.0.0\"\n\nWORKDIR /app/fluxgym\n\n# Run fluxgym Python application\nCMD [\"python3\", \"./app.py\"]"
  },
  {
    "path": "LICENSE",
    "content": "Copyright 2024 cocktailpeanut\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n"
  },
  {
    "path": "README.md",
    "content": "# Flux Gym\n\nDead simple web UI for training FLUX LoRA **with LOW VRAM (12GB/16GB/20GB) support.**\n\n- **Frontend:** The WebUI forked from [AI-Toolkit](https://github.com/ostris/ai-toolkit) (Gradio UI created by https://x.com/multimodalart)\n- **Backend:** The Training script powered by [Kohya Scripts](https://github.com/kohya-ss/sd-scripts)\n\nFluxGym supports 100% of Kohya sd-scripts features through an [Advanced](#advanced) tab, which is hidden by default.\n\n![screenshot.png](screenshot.png)\n\n---\n\n\n# What is this?\n\n1. I wanted a super simple UI for training Flux LoRAs\n2. The [AI-Toolkit](https://github.com/ostris/ai-toolkit) project is great, and the gradio UI contribution by [@multimodalart](https://x.com/multimodalart) is perfect, but the project only works for 24GB VRAM.\n3. [Kohya Scripts](https://github.com/kohya-ss/sd-scripts) are very flexible and powerful for training FLUX, but you need to run in terminal.\n4. What if you could have the simplicity of AI-Toolkit WebUI and the flexibility of Kohya Scripts?\n5. Flux Gym was born. Supports 12GB, 16GB, 20GB VRAMs, and extensible since it uses Kohya Scripts underneath.\n\n---\n\n# News\n\n- September 25: Docker support + Autodownload Models (No need to manually download models when setting up) + Support custom base models (not just flux-dev but anything, just need to include in the [models.yaml](models.yaml) file.\n- September 16: Added \"Publish to Huggingface\" + 100% Kohya sd-scripts feature support: https://x.com/cocktailpeanut/status/1835719701172756592\n- September 11: Automatic Sample Image Generation + Custom Resolution: https://x.com/cocktailpeanut/status/1833881392482066638\n\n---\n\n# Supported Models\n\n1. Flux1-dev\n2. Flux1-dev2pro (as explained here: https://medium.com/@zhiwangshi28/why-flux-lora-so-hard-to-train-and-how-to-overcome-it-a0c70bc59eaf)\n3. Flux1-schnell (Couldn't get high quality results, so not really recommended, but feel free to experiment with it)\n4. More?\n\nThe models are automatically downloaded when you start training with the model selected.\n\nYou can easily add more to the supported models list by editing the [models.yaml](models.yaml) file. If you want to share some interesting base models, please send a PR.\n\n---\n\n# How people are using Fluxgym\n\nHere are people using Fluxgym to locally train Lora sharing their experience:\n\nhttps://pinokio.computer/item?uri=https://github.com/cocktailpeanut/fluxgym\n\n\n# More Info\n\nTo learn more, check out this X thread: https://x.com/cocktailpeanut/status/1832084951115972653\n\n# Install\n\n## 1. One-Click Install\n\nYou can automatically install and launch everything locally with Pinokio 1-click launcher: https://pinokio.computer/item?uri=https://github.com/cocktailpeanut/fluxgym\n\n\n## 2. 
Install Manually\n\nFirst clone Fluxgym and kohya-ss/sd-scripts:\n\n```\ngit clone https://github.com/cocktailpeanut/fluxgym\ncd fluxgym\ngit clone -b sd3 https://github.com/kohya-ss/sd-scripts\n```\n\nYour folder structure will look like this:\n\n```\n/fluxgym\n  app.py\n  requirements.txt\n  /sd-scripts\n```\n\nNow activate a venv from the root `fluxgym` folder:\n\nIf you're on Windows:\n\n```\npython -m venv env\nenv\\Scripts\\activate\n```\n\nIf you're on Linux:\n\n```\npython -m venv env\nsource env/bin/activate\n```\n\nThis will create an `env` folder right below the `fluxgym` folder:\n\n```\n/fluxgym\n  app.py\n  requirements.txt\n  /sd-scripts\n  /env\n```\n\nNow go to the `sd-scripts` folder and install its dependencies into the activated environment:\n\n```\ncd sd-scripts\npip install -r requirements.txt\n```\n\nNow come back to the root folder and install the app dependencies:\n\n```\ncd ..\npip install -r requirements.txt\n```\n\nFinally, install PyTorch nightly:\n\n```\npip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121\n```\n\nOr, if you're on an NVIDIA RTX 50-series GPU (5090, etc.), you will need to install cu128 torch and update bitsandbytes to the latest version:\n\n```\npip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128\npip install -U bitsandbytes\n```\n\n\n# Start\n\nGo back to the root `fluxgym` folder and, with the venv activated, run:\n\n```\npython app.py\n```\n\n> Make sure to have the venv activated before running `python app.py`.\n>\n> Windows: `env\\Scripts\\activate`\n> Linux: `source env/bin/activate`\n\n## 3. Install via Docker\n\nFirst clone Fluxgym and kohya-ss/sd-scripts:\n\n```\ngit clone https://github.com/cocktailpeanut/fluxgym\ncd fluxgym\ngit clone -b sd3 https://github.com/kohya-ss/sd-scripts\n```\nCheck your user ID and group ID, and if they are not 1000, override them via the `PUID` and `PGID` environment variables. \nYou can find out what they are on Linux by running the following command: `id`\n\nNow build the image and run it via `docker compose`:\n```\ndocker compose up -d --build\n```\n\nOpen a web browser and go to the IP address of the computer/VM: http://localhost:7860\n\n# Usage\n\nThe usage is pretty straightforward:\n\n1. Enter the LoRA info\n2. Upload images and caption them (using the trigger word)\n3. Click \"start\".\n\nThat's all!\n\n![flow.gif](flow.gif)\n\n# Configuration\n\n## Sample Images\n\nBy default, Fluxgym doesn't generate any sample images during training.\n\nYou can, however, configure Fluxgym to automatically generate sample images every N steps. Here's what it looks like:\n\n![sample.png](sample.png)\n\nTo turn this on, just set the two fields:\n\n1. **Sample Image Prompts:** These prompts will be used to automatically generate images during training. If you want multiple prompts, separate each prompt with a new line.\n2. 
**Sample Image Every N Steps:** If your \"Expected training steps\" is 960 and your \"Sample Image Every N Steps\" is 100, the images will be generated at steps 100, 200, 300, 400, 500, 600, 700, 800, and 900, for EACH prompt.\n\n![sample_fields.png](sample_fields.png)\n\n## Advanced Sample Images\n\nThanks to the built-in syntax from [kohya/sd-scripts](https://github.com/kohya-ss/sd-scripts?tab=readme-ov-file#sample-image-generation-during-training), you can control exactly how the sample images are generated during the training phase:\n\nLet's say the trigger word is **hrld person**. Normally you would try sample prompts like:\n\n```\nhrld person is riding a bike\nhrld person is a body builder\nhrld person is a rock star\n```\n\nBut for every prompt you can include **advanced flags** to fully control the image generation process. For example, the `--d` flag lets you specify the SEED.\n\nSpecifying a seed means every sample image will use that exact seed, which means you can literally see the LoRA evolve. Here's an example usage:\n\n```\nhrld person is riding a bike --d 42\nhrld person is a body builder --d 42\nhrld person is a rock star --d 42\n```\n\nHere's what it looks like in the UI:\n\n![flags.png](flags.png)\n\nAnd here are the results:\n\n![seed.gif](seed.gif)\n\nIn addition to the `--d` flag, here are other flags you can use:\n\n\n- `--n`: Negative prompt up to the next option.\n- `--w`: Specifies the width of the generated image.\n- `--h`: Specifies the height of the generated image.\n- `--d`: Specifies the seed of the generated image.\n- `--l`: Specifies the CFG scale of the generated image.\n- `--s`: Specifies the number of steps in the generation.\n\nPrompt weighting such as `( )` and `[ ]` also works. (Learn more about [Attention/Emphasis](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#attentionemphasis))\n\n## Publishing to Huggingface\n\n1. Get your Huggingface Token from https://huggingface.co/settings/tokens\n2. Enter the token in the \"Huggingface Token\" field and click \"Login\". This will save the token text in a local file named `HF_TOKEN` (all local and private).\n3. Once you're logged in, you will be able to select a trained LoRA from the dropdown, edit the name if you want, and publish it to Huggingface.\n\n![publish_to_hf.png](publish_to_hf.png)\n\n\n## Advanced\n\nThe advanced tab is automatically constructed by parsing the launch flags available in the latest version of [kohya sd-scripts](https://github.com/kohya-ss/sd-scripts). This means Fluxgym is a full-fledged UI for using the Kohya scripts.\n\n> By default, the advanced tab is hidden. You can click the \"advanced\" accordion to expand it.\n\n![advanced.png](advanced.png)\n\n\n## Advanced Features\n\n### Uploading Caption Files\n\nYou can also upload caption files along with the image files. You just need to follow the convention:\n\n1. Every caption file must be a `.txt` file.\n2. Each caption file needs to have a corresponding image file with the same name.\n3. For example, if you have an image file named `img0.png`, the corresponding caption file must be `img0.txt`.\n"
  },
  {
    "path": "app-launch.sh",
    "content": "#!/usr/bin/env bash\n\ncd \"`dirname \"$0\"`\" || exit 1\n. env/bin/activate\npython app.py\n"
  },
  {
    "path": "app.py",
    "content": "import os\nimport sys\nos.environ[\"HF_HUB_ENABLE_HF_TRANSFER\"] = \"1\"\nos.environ['GRADIO_ANALYTICS_ENABLED'] = '0'\nsys.path.insert(0, os.getcwd())\nsys.path.append(os.path.join(os.path.dirname(__file__), 'sd-scripts'))\nimport subprocess\nimport gradio as gr\nfrom PIL import Image\nimport torch\nimport uuid\nimport shutil\nimport json\nimport yaml\nfrom slugify import slugify\nfrom transformers import AutoProcessor, AutoModelForCausalLM\nfrom gradio_logsview import LogsView, LogsViewRunner\nfrom huggingface_hub import hf_hub_download, HfApi\nfrom library import flux_train_utils, huggingface_util\nfrom argparse import Namespace\nimport train_network\nimport toml\nimport re\nMAX_IMAGES = 150\n\nwith open('models.yaml', 'r') as file:\n    models = yaml.safe_load(file)\n\ndef readme(base_model, lora_name, instance_prompt, sample_prompts):\n\n    # model license\n    model_config = models[base_model]\n    model_file = model_config[\"file\"]\n    base_model_name = model_config[\"base\"]\n    license = None\n    license_name = None\n    license_link = None\n    license_items = []\n    if \"license\" in model_config:\n        license = model_config[\"license\"]\n        license_items.append(f\"license: {license}\")\n    if \"license_name\" in model_config:\n        license_name = model_config[\"license_name\"]\n        license_items.append(f\"license_name: {license_name}\")\n    if \"license_link\" in model_config:\n        license_link = model_config[\"license_link\"]\n        license_items.append(f\"license_link: {license_link}\")\n    license_str = \"\\n\".join(license_items)\n    print(f\"license_items={license_items}\")\n    print(f\"license_str = {license_str}\")\n\n    # tags\n    tags = [ \"text-to-image\", \"flux\", \"lora\", \"diffusers\", \"template:sd-lora\", \"fluxgym\" ]\n\n    # widgets\n    widgets = []\n    sample_image_paths = []\n    output_name = slugify(lora_name)\n    samples_dir = resolve_path_without_quotes(f\"outputs/{output_name}/sample\")\n    try:\n        for filename in os.listdir(samples_dir):\n            # Filename Schema: [name]_[steps]_[index]_[timestamp].png\n            match = re.search(r\"_(\\d+)_(\\d+)_(\\d+)\\.png$\", filename)\n            if match:\n                steps, index, timestamp = int(match.group(1)), int(match.group(2)), int(match.group(3))\n                sample_image_paths.append((steps, index, f\"sample/{filename}\"))\n\n        # Sort by numeric index\n        sample_image_paths.sort(key=lambda x: x[0], reverse=True)\n\n        final_sample_image_paths = sample_image_paths[:len(sample_prompts)]\n        final_sample_image_paths.sort(key=lambda x: x[1])\n        for i, prompt in enumerate(sample_prompts):\n            _, _, image_path = final_sample_image_paths[i]\n            widgets.append(\n                {\n                    \"text\": prompt,\n                    \"output\": {\n                        \"url\": image_path\n                    },\n                }\n            )\n    except:\n        print(f\"no samples\")\n    dtype = \"torch.bfloat16\"\n    # Construct the README content\n    readme_content = f\"\"\"---\ntags:\n{yaml.dump(tags, indent=4).strip()}\n{\"widget:\" if os.path.isdir(samples_dir) else \"\"}\n{yaml.dump(widgets, indent=4).strip() if widgets else \"\"}\nbase_model: {base_model_name}\n{\"instance_prompt: \" + instance_prompt if instance_prompt else \"\"}\n{license_str}\n---\n\n# {lora_name}\n\nA Flux LoRA trained on a local computer with 
[Fluxgym](https://github.com/cocktailpeanut/fluxgym)\n\n<Gallery />\n\n## Trigger words\n\n{\"You should use `\" + instance_prompt + \"` to trigger the image generation.\" if instance_prompt else \"No trigger words defined.\"}\n\n## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.\n\nWeights for this model are available in Safetensors format.\n\n\"\"\"\n    return readme_content\n\ndef account_hf():\n    try:\n        with open(\"HF_TOKEN\", \"r\") as file:\n            token = file.read()\n            api = HfApi(token=token)\n            try:\n                account = api.whoami()\n                return { \"token\": token, \"account\": account['name'] }\n            except:\n                return None\n    except:\n        return None\n\n\"\"\"\nhf_logout.click(fn=logout_hf, outputs=[hf_token, hf_login, hf_logout, repo_owner])\n\"\"\"\ndef logout_hf():\n    os.remove(\"HF_TOKEN\")\n    global current_account\n    current_account = account_hf()\n    print(f\"current_account={current_account}\")\n    return gr.update(value=\"\"), gr.update(visible=True), gr.update(visible=False), gr.update(value=\"\", visible=False)\n\n\n\"\"\"\nhf_login.click(fn=login_hf, inputs=[hf_token], outputs=[hf_token, hf_login, hf_logout, repo_owner])\n\"\"\"\ndef login_hf(hf_token):\n    api = HfApi(token=hf_token)\n    try:\n        account = api.whoami()\n        if account != None:\n            if \"name\" in account:\n                with open(\"HF_TOKEN\", \"w\") as file:\n                    file.write(hf_token)\n                global current_account\n                current_account = account_hf()\n                return gr.update(visible=True), gr.update(visible=False), gr.update(visible=True), gr.update(value=current_account[\"account\"], visible=True)\n        return gr.update(), gr.update(), gr.update(), gr.update()\n    except:\n        print(f\"incorrect hf_token\")\n        return gr.update(), gr.update(), gr.update(), gr.update()\n\ndef upload_hf(base_model, lora_rows, repo_owner, repo_name, repo_visibility, hf_token):\n    src = lora_rows\n    repo_id = f\"{repo_owner}/{repo_name}\"\n    gr.Info(f\"Uploading to Huggingface. 
Please Stand by...\", duration=None)\n    args = Namespace(\n        huggingface_repo_id=repo_id,\n        huggingface_repo_type=\"model\",\n        huggingface_repo_visibility=repo_visibility,\n        huggingface_path_in_repo=\"\",\n        huggingface_token=hf_token,\n        async_upload=False\n    )\n    print(f\"upload_hf args={args}\")\n    huggingface_util.upload(args=args, src=src)\n    gr.Info(f\"[Upload Complete] https://huggingface.co/{repo_id}\", duration=None)\n\ndef load_captioning(uploaded_files, concept_sentence):\n    uploaded_images = [file for file in uploaded_files if not file.endswith('.txt')]\n    txt_files = [file for file in uploaded_files if file.endswith('.txt')]\n    txt_files_dict = {os.path.splitext(os.path.basename(txt_file))[0]: txt_file for txt_file in txt_files}\n    updates = []\n    if len(uploaded_images) <= 1:\n        raise gr.Error(\n            \"Please upload at least 2 images to train your model (the ideal number with default settings is between 4-30)\"\n        )\n    elif len(uploaded_images) > MAX_IMAGES:\n        raise gr.Error(f\"For now, only {MAX_IMAGES} or less images are allowed for training\")\n    # Update for the captioning_area\n    # for _ in range(3):\n    updates.append(gr.update(visible=True))\n    # Update visibility and image for each captioning row and image\n    for i in range(1, MAX_IMAGES + 1):\n        # Determine if the current row and image should be visible\n        visible = i <= len(uploaded_images)\n\n        # Update visibility of the captioning row\n        updates.append(gr.update(visible=visible))\n\n        # Update for image component - display image if available, otherwise hide\n        image_value = uploaded_images[i - 1] if visible else None\n        updates.append(gr.update(value=image_value, visible=visible))\n\n        corresponding_caption = False\n        if(image_value):\n            base_name = os.path.splitext(os.path.basename(image_value))[0]\n            if base_name in txt_files_dict:\n                with open(txt_files_dict[base_name], 'r') as file:\n                    corresponding_caption = file.read()\n\n        # Update value of captioning area\n        text_value = corresponding_caption if visible and corresponding_caption else concept_sentence if visible and concept_sentence else None\n        updates.append(gr.update(value=text_value, visible=visible))\n\n    # Update for the sample caption area\n    updates.append(gr.update(visible=True))\n    updates.append(gr.update(visible=True))\n\n    return updates\n\ndef hide_captioning():\n    return gr.update(visible=False), gr.update(visible=False)\n\ndef resize_image(image_path, output_path, size):\n    with Image.open(image_path) as img:\n        width, height = img.size\n        if width < height:\n            new_width = size\n            new_height = int((size/width) * height)\n        else:\n            new_height = size\n            new_width = int((size/height) * width)\n        print(f\"resize {image_path} : {new_width}x{new_height}\")\n        img_resized = img.resize((new_width, new_height), Image.Resampling.LANCZOS)\n        img_resized.save(output_path)\n\ndef create_dataset(destination_folder, size, *inputs):\n    print(\"Creating dataset\")\n    images = inputs[0]\n    if not os.path.exists(destination_folder):\n        os.makedirs(destination_folder)\n\n    for index, image in enumerate(images):\n        # copy the images to the datasets folder\n        new_image_path = shutil.copy(image, destination_folder)\n\n        # if it's 
a caption text file skip the next bit\n        ext = os.path.splitext(new_image_path)[-1].lower()\n        if ext == '.txt':\n            continue\n\n        # resize the images\n        resize_image(new_image_path, new_image_path, size)\n\n        # copy the captions\n\n        original_caption = inputs[index + 1]\n\n        image_file_name = os.path.basename(new_image_path)\n        caption_file_name = os.path.splitext(image_file_name)[0] + \".txt\"\n        caption_path = resolve_path_without_quotes(os.path.join(destination_folder, caption_file_name))\n        print(f\"image_path={new_image_path}, caption_path = {caption_path}, original_caption={original_caption}\")\n        # if caption_path exists, do not write\n        if os.path.exists(caption_path):\n            print(f\"{caption_path} already exists. use the existing .txt file\")\n        else:\n            print(f\"{caption_path} create a .txt caption file\")\n            with open(caption_path, 'w') as file:\n                file.write(original_caption)\n\n    print(f\"destination_folder {destination_folder}\")\n    return destination_folder\n\n\ndef run_captioning(images, concept_sentence, *captions):\n    print(f\"run_captioning\")\n    print(f\"concept sentence {concept_sentence}\")\n    print(f\"captions {captions}\")\n    #Load internally to not consume resources for training\n    device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n    print(f\"device={device}\")\n    torch_dtype = torch.float16\n    model = AutoModelForCausalLM.from_pretrained(\n        \"multimodalart/Florence-2-large-no-flash-attn\", torch_dtype=torch_dtype, trust_remote_code=True\n    ).to(device)\n    processor = AutoProcessor.from_pretrained(\"multimodalart/Florence-2-large-no-flash-attn\", trust_remote_code=True)\n\n    captions = list(captions)\n    for i, image_path in enumerate(images):\n        print(captions[i])\n        if isinstance(image_path, str):  # If image is a file path\n            image = Image.open(image_path).convert(\"RGB\")\n\n        prompt = \"<DETAILED_CAPTION>\"\n        inputs = processor(text=prompt, images=image, return_tensors=\"pt\").to(device, torch_dtype)\n        print(f\"inputs {inputs}\")\n\n        generated_ids = model.generate(\n            input_ids=inputs[\"input_ids\"], pixel_values=inputs[\"pixel_values\"], max_new_tokens=1024, num_beams=3\n        )\n        print(f\"generated_ids {generated_ids}\")\n\n        generated_text = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]\n        print(f\"generated_text: {generated_text}\")\n        parsed_answer = processor.post_process_generation(\n            generated_text, task=prompt, image_size=(image.width, image.height)\n        )\n        print(f\"parsed_answer = {parsed_answer}\")\n        caption_text = parsed_answer[\"<DETAILED_CAPTION>\"].replace(\"The image shows \", \"\")\n        print(f\"caption_text = {caption_text}, concept_sentence={concept_sentence}\")\n        if concept_sentence:\n            caption_text = f\"{concept_sentence} {caption_text}\"\n        captions[i] = caption_text\n\n        yield captions\n    model.to(\"cpu\")\n    del model\n    del processor\n    if torch.cuda.is_available():\n        torch.cuda.empty_cache()\n\ndef recursive_update(d, u):\n    for k, v in u.items():\n        if isinstance(v, dict) and v:\n            d[k] = recursive_update(d.get(k, {}), v)\n        else:\n            d[k] = v\n    return d\n\ndef download(base_model):\n    model = models[base_model]\n    model_file = 
model[\"file\"]\n    repo = model[\"repo\"]\n\n    # download unet\n    if base_model == \"flux-dev\" or base_model == \"flux-schnell\":\n        unet_folder = \"models/unet\"\n    else:\n        unet_folder = f\"models/unet/{repo}\"\n    unet_path = os.path.join(unet_folder, model_file)\n    if not os.path.exists(unet_path):\n        os.makedirs(unet_folder, exist_ok=True)\n        gr.Info(f\"Downloading base model: {base_model}. Please wait. (You can check the terminal for the download progress)\", duration=None)\n        print(f\"download {base_model}\")\n        hf_hub_download(repo_id=repo, local_dir=unet_folder, filename=model_file)\n\n    # download vae\n    vae_folder = \"models/vae\"\n    vae_path = os.path.join(vae_folder, \"ae.sft\")\n    if not os.path.exists(vae_path):\n        os.makedirs(vae_folder, exist_ok=True)\n        gr.Info(f\"Downloading vae\")\n        print(f\"downloading ae.sft...\")\n        hf_hub_download(repo_id=\"cocktailpeanut/xulf-dev\", local_dir=vae_folder, filename=\"ae.sft\")\n\n    # download clip\n    clip_folder = \"models/clip\"\n    clip_l_path = os.path.join(clip_folder, \"clip_l.safetensors\")\n    if not os.path.exists(clip_l_path):\n        os.makedirs(clip_folder, exist_ok=True)\n        gr.Info(f\"Downloading clip...\")\n        print(f\"download clip_l.safetensors\")\n        hf_hub_download(repo_id=\"comfyanonymous/flux_text_encoders\", local_dir=clip_folder, filename=\"clip_l.safetensors\")\n\n    # download t5xxl\n    t5xxl_path = os.path.join(clip_folder, \"t5xxl_fp16.safetensors\")\n    if not os.path.exists(t5xxl_path):\n        print(f\"download t5xxl_fp16.safetensors\")\n        gr.Info(f\"Downloading t5xxl...\")\n        hf_hub_download(repo_id=\"comfyanonymous/flux_text_encoders\", local_dir=clip_folder, filename=\"t5xxl_fp16.safetensors\")\n\n\ndef resolve_path(p):\n    current_dir = os.path.dirname(os.path.abspath(__file__))\n    norm_path = os.path.normpath(os.path.join(current_dir, p))\n    return f\"\\\"{norm_path}\\\"\"\ndef resolve_path_without_quotes(p):\n    current_dir = os.path.dirname(os.path.abspath(__file__))\n    norm_path = os.path.normpath(os.path.join(current_dir, p))\n    return norm_path\n\ndef gen_sh(\n    base_model,\n    output_name,\n    resolution,\n    seed,\n    workers,\n    learning_rate,\n    network_dim,\n    max_train_epochs,\n    save_every_n_epochs,\n    timestep_sampling,\n    guidance_scale,\n    vram,\n    sample_prompts,\n    sample_every_n_steps,\n    *advanced_components\n):\n\n    print(f\"gen_sh: network_dim:{network_dim}, max_train_epochs={max_train_epochs}, save_every_n_epochs={save_every_n_epochs}, timestep_sampling={timestep_sampling}, guidance_scale={guidance_scale}, vram={vram}, sample_prompts={sample_prompts}, sample_every_n_steps={sample_every_n_steps}\")\n\n    output_dir = resolve_path(f\"outputs/{output_name}\")\n    sample_prompts_path = resolve_path(f\"outputs/{output_name}/sample_prompts.txt\")\n\n    line_break = \"\\\\\"\n    file_type = \"sh\"\n    if sys.platform == \"win32\":\n        line_break = \"^\"\n        file_type = \"bat\"\n\n    ############# Sample args ########################\n    sample = \"\"\n    if len(sample_prompts) > 0 and sample_every_n_steps > 0:\n        sample = f\"\"\"--sample_prompts={sample_prompts_path} --sample_every_n_steps=\"{sample_every_n_steps}\" {line_break}\"\"\"\n\n\n    ############# Optimizer args ########################\n#    if vram == \"8G\":\n#        optimizer = f\"\"\"--optimizer_type adafactor {line_break}\n#    
--optimizer_args \"relative_step=False\" \"scale_parameter=False\" \"warmup_init=False\" {line_break}\n#        --split_mode {line_break}\n#        --network_args \"train_blocks=single\" {line_break}\n#        --lr_scheduler constant_with_warmup {line_break}\n#        --max_grad_norm 0.0 {line_break}\"\"\"\n    if vram == \"16G\":\n        # 16G VRAM\n        optimizer = f\"\"\"--optimizer_type adafactor {line_break}\n  --optimizer_args \"relative_step=False\" \"scale_parameter=False\" \"warmup_init=False\" {line_break}\n  --lr_scheduler constant_with_warmup {line_break}\n  --max_grad_norm 0.0 {line_break}\"\"\"\n    elif vram == \"12G\":\n      # 12G VRAM\n        optimizer = f\"\"\"--optimizer_type adafactor {line_break}\n  --optimizer_args \"relative_step=False\" \"scale_parameter=False\" \"warmup_init=False\" {line_break}\n  --split_mode {line_break}\n  --network_args \"train_blocks=single\" {line_break}\n  --lr_scheduler constant_with_warmup {line_break}\n  --max_grad_norm 0.0 {line_break}\"\"\"\n    else:\n        # 20G+ VRAM\n        optimizer = f\"--optimizer_type adamw8bit {line_break}\"\n\n\n    #######################################################\n    model_config = models[base_model]\n    model_file = model_config[\"file\"]\n    repo = model_config[\"repo\"]\n    if base_model == \"flux-dev\" or base_model == \"flux-schnell\":\n        model_folder = \"models/unet\"\n    else:\n        model_folder = f\"models/unet/{repo}\"\n    model_path = os.path.join(model_folder, model_file)\n    pretrained_model_path = resolve_path(model_path)\n\n    clip_path = resolve_path(\"models/clip/clip_l.safetensors\")\n    t5_path = resolve_path(\"models/clip/t5xxl_fp16.safetensors\")\n    ae_path = resolve_path(\"models/vae/ae.sft\")\n    sh = f\"\"\"accelerate launch {line_break}\n  --mixed_precision bf16 {line_break}\n  --num_cpu_threads_per_process 1 {line_break}\n  sd-scripts/flux_train_network.py {line_break}\n  --pretrained_model_name_or_path {pretrained_model_path} {line_break}\n  --clip_l {clip_path} {line_break}\n  --t5xxl {t5_path} {line_break}\n  --ae {ae_path} {line_break}\n  --cache_latents_to_disk {line_break}\n  --save_model_as safetensors {line_break}\n  --sdpa --persistent_data_loader_workers {line_break}\n  --max_data_loader_n_workers {workers} {line_break}\n  --seed {seed} {line_break}\n  --gradient_checkpointing {line_break}\n  --mixed_precision bf16 {line_break}\n  --save_precision bf16 {line_break}\n  --network_module networks.lora_flux {line_break}\n  --network_dim {network_dim} {line_break}\n  {optimizer}{sample}\n  --learning_rate {learning_rate} {line_break}\n  --cache_text_encoder_outputs {line_break}\n  --cache_text_encoder_outputs_to_disk {line_break}\n  --fp8_base {line_break}\n  --highvram {line_break}\n  --max_train_epochs {max_train_epochs} {line_break}\n  --save_every_n_epochs {save_every_n_epochs} {line_break}\n  --dataset_config {resolve_path(f\"outputs/{output_name}/dataset.toml\")} {line_break}\n  --output_dir {output_dir} {line_break}\n  --output_name {output_name} {line_break}\n  --timestep_sampling {timestep_sampling} {line_break}\n  --discrete_flow_shift 3.1582 {line_break}\n  --model_prediction_type raw {line_break}\n  --guidance_scale {guidance_scale} {line_break}\n  --loss_type l2 {line_break}\"\"\"\n   \n\n\n    ############# Advanced args ########################\n    global advanced_component_ids\n    global original_advanced_component_values\n   \n    # check dirty\n    print(f\"original_advanced_component_values = 
{original_advanced_component_values}\")\n    advanced_flags = []\n    for i, current_value in enumerate(advanced_components):\n#        print(f\"compare {advanced_component_ids[i]}: old={original_advanced_component_values[i]}, new={current_value}\")\n        if original_advanced_component_values[i] != current_value:\n            # dirty\n            if current_value == True:\n                # Boolean\n                advanced_flags.append(advanced_component_ids[i])\n            else:\n                # string\n                advanced_flags.append(f\"{advanced_component_ids[i]} {current_value}\")\n\n    if len(advanced_flags) > 0:\n        advanced_flags_str = f\" {line_break}\\n  \".join(advanced_flags)\n        sh = sh + \"\\n  \" + advanced_flags_str\n\n    return sh\n\ndef gen_toml(\n  dataset_folder,\n  resolution,\n  class_tokens,\n  num_repeats\n):\n    toml = f\"\"\"[general]\nshuffle_caption = false\ncaption_extension = '.txt'\nkeep_tokens = 1\n\n[[datasets]]\nresolution = {resolution}\nbatch_size = 1\nkeep_tokens = 1\n\n  [[datasets.subsets]]\n  image_dir = '{resolve_path_without_quotes(dataset_folder)}'\n  class_tokens = '{class_tokens}'\n  num_repeats = {num_repeats}\"\"\"\n    return toml\n\ndef update_total_steps(max_train_epochs, num_repeats, images):\n    try:\n        num_images = len(images)\n        total_steps = max_train_epochs * num_images * num_repeats\n        print(f\"max_train_epochs={max_train_epochs} num_images={num_images}, num_repeats={num_repeats}, total_steps={total_steps}\")\n        return gr.update(value = total_steps)\n    except:\n        print(\"\")\n\ndef set_repo(lora_rows):\n    selected_name = os.path.basename(lora_rows)\n    return gr.update(value=selected_name)\n\ndef get_loras():\n    try:\n        outputs_path = resolve_path_without_quotes(f\"outputs\")\n        files = os.listdir(outputs_path)\n        folders = [os.path.join(outputs_path, item) for item in files if os.path.isdir(os.path.join(outputs_path, item)) and item != \"sample\"]\n        folders.sort(key=lambda file: os.path.getctime(file), reverse=True)\n        return folders\n    except Exception as e:\n        return []\n\ndef get_samples(lora_name):\n    output_name = slugify(lora_name)\n    try:\n        samples_path = resolve_path_without_quotes(f\"outputs/{output_name}/sample\")\n        files = [os.path.join(samples_path, file) for file in os.listdir(samples_path)]\n        files.sort(key=lambda file: os.path.getctime(file), reverse=True)\n        return files\n    except:\n        return []\n\ndef start_training(\n    base_model,\n    lora_name,\n    train_script,\n    train_config,\n    sample_prompts,\n):\n    # write custom script and toml\n    if not os.path.exists(\"models\"):\n        os.makedirs(\"models\", exist_ok=True)\n    if not os.path.exists(\"outputs\"):\n        os.makedirs(\"outputs\", exist_ok=True)\n    output_name = slugify(lora_name)\n    output_dir = resolve_path_without_quotes(f\"outputs/{output_name}\")\n    if not os.path.exists(output_dir):\n        os.makedirs(output_dir, exist_ok=True)\n\n    download(base_model)\n\n    file_type = \"sh\"\n    if sys.platform == \"win32\":\n        file_type = \"bat\"\n\n    sh_filename = f\"train.{file_type}\"\n    sh_filepath = resolve_path_without_quotes(f\"outputs/{output_name}/{sh_filename}\")\n    with open(sh_filepath, 'w', encoding=\"utf-8\") as file:\n        file.write(train_script)\n    gr.Info(f\"Generated train script at {sh_filename}\")\n\n\n    dataset_path = 
resolve_path_without_quotes(f\"outputs/{output_name}/dataset.toml\")\n    with open(dataset_path, 'w', encoding=\"utf-8\") as file:\n        file.write(train_config)\n    gr.Info(f\"Generated dataset.toml\")\n\n    sample_prompts_path = resolve_path_without_quotes(f\"outputs/{output_name}/sample_prompts.txt\")\n    with open(sample_prompts_path, 'w', encoding='utf-8') as file:\n        file.write(sample_prompts)\n    gr.Info(f\"Generated sample_prompts.txt\")\n\n    # Train\n    if sys.platform == \"win32\":\n        command = sh_filepath\n    else:\n        command = f\"bash \\\"{sh_filepath}\\\"\"\n\n    # Use Popen to run the command and capture output in real-time\n    env = os.environ.copy()\n    env['PYTHONIOENCODING'] = 'utf-8'\n    env['LOG_LEVEL'] = 'DEBUG'\n    runner = LogsViewRunner()\n    cwd = os.path.dirname(os.path.abspath(__file__))\n    gr.Info(f\"Started training\")\n    yield from runner.run_command([command], cwd=cwd)\n    yield runner.log(f\"Runner: {runner}\")\n\n    # Generate Readme\n    config = toml.loads(train_config)\n    concept_sentence = config['datasets'][0]['subsets'][0]['class_tokens']\n    print(f\"concept_sentence={concept_sentence}\")\n    print(f\"lora_name {lora_name}, concept_sentence={concept_sentence}, output_name={output_name}\")\n    sample_prompts_path = resolve_path_without_quotes(f\"outputs/{output_name}/sample_prompts.txt\")\n    with open(sample_prompts_path, \"r\", encoding=\"utf-8\") as f:\n        lines = f.readlines()\n    sample_prompts = [line.strip() for line in lines if len(line.strip()) > 0 and line[0] != \"#\"]\n    md = readme(base_model, lora_name, concept_sentence, sample_prompts)\n    readme_path = resolve_path_without_quotes(f\"outputs/{output_name}/README.md\")\n    with open(readme_path, \"w\", encoding=\"utf-8\") as f:\n        f.write(md)\n\n    gr.Info(f\"Training Complete. 
Check the outputs folder for the LoRA files.\", duration=None)\n\n\ndef update(\n    base_model,\n    lora_name,\n    resolution,\n    seed,\n    workers,\n    class_tokens,\n    learning_rate,\n    network_dim,\n    max_train_epochs,\n    save_every_n_epochs,\n    timestep_sampling,\n    guidance_scale,\n    vram,\n    num_repeats,\n    sample_prompts,\n    sample_every_n_steps,\n    *advanced_components,\n):\n    output_name = slugify(lora_name)\n    dataset_folder = str(f\"datasets/{output_name}\")\n    sh = gen_sh(\n        base_model,\n        output_name,\n        resolution,\n        seed,\n        workers,\n        learning_rate,\n        network_dim,\n        max_train_epochs,\n        save_every_n_epochs,\n        timestep_sampling,\n        guidance_scale,\n        vram,\n        sample_prompts,\n        sample_every_n_steps,\n        *advanced_components,\n    )\n    toml = gen_toml(\n        dataset_folder,\n        resolution,\n        class_tokens,\n        num_repeats\n    )\n    return gr.update(value=sh), gr.update(value=toml), dataset_folder\n\n\"\"\"\ndemo.load(fn=loaded, js=js, outputs=[hf_token, hf_login, hf_logout, hf_account])\n\"\"\"\ndef loaded():\n    global current_account\n    current_account = account_hf()\n    print(f\"current_account={current_account}\")\n    if current_account != None:\n        return gr.update(value=current_account[\"token\"]), gr.update(visible=False), gr.update(visible=True), gr.update(value=current_account[\"account\"], visible=True)\n    else:\n        return gr.update(value=\"\"), gr.update(visible=True), gr.update(visible=False), gr.update(value=\"\", visible=False)\n\ndef update_sample(concept_sentence):\n    return gr.update(value=concept_sentence)\n\ndef refresh_publish_tab():\n    loras = get_loras()\n    return gr.Dropdown(label=\"Trained LoRAs\", choices=loras)\n\ndef init_advanced():\n    # if basic_args\n    basic_args = {\n        'pretrained_model_name_or_path',\n        'clip_l',\n        't5xxl',\n        'ae',\n        'cache_latents_to_disk',\n        'save_model_as',\n        'sdpa',\n        'persistent_data_loader_workers',\n        'max_data_loader_n_workers',\n        'seed',\n        'gradient_checkpointing',\n        'mixed_precision',\n        'save_precision',\n        'network_module',\n        'network_dim',\n        'learning_rate',\n        'cache_text_encoder_outputs',\n        'cache_text_encoder_outputs_to_disk',\n        'fp8_base',\n        'highvram',\n        'max_train_epochs',\n        'save_every_n_epochs',\n        'dataset_config',\n        'output_dir',\n        'output_name',\n        'timestep_sampling',\n        'discrete_flow_shift',\n        'model_prediction_type',\n        'guidance_scale',\n        'loss_type',\n        'optimizer_type',\n        'optimizer_args',\n        'lr_scheduler',\n        'sample_prompts',\n        'sample_every_n_steps',\n        'max_grad_norm',\n        'split_mode',\n        'network_args'\n    }\n\n    # generate a UI config\n    # if not in basic_args, create a simple form\n    parser = train_network.setup_parser()\n    flux_train_utils.add_flux_train_arguments(parser)\n    args_info = {}\n    for action in parser._actions:\n        if action.dest != 'help':  # Skip the default help argument\n            # if the dest is included in basic_args\n            args_info[action.dest] = {\n                \"action\": action.option_strings,  # Option strings like '--use_8bit_adam'\n                \"type\": action.type,              # Type of the argument\n      
          \"help\": action.help,              # Help message\n                \"default\": action.default,        # Default value, if any\n                \"required\": action.required       # Whether the argument is required\n            }\n    temp = []\n    for key in args_info:\n        temp.append({ 'key': key, 'action': args_info[key] })\n    temp.sort(key=lambda x: x['key'])\n    advanced_component_ids = []\n    advanced_components = []\n    for item in temp:\n        key = item['key']\n        action = item['action']\n        if key in basic_args:\n            print(\"\")\n        else:\n            action_type = str(action['type'])\n            component = None\n            with gr.Column(min_width=300):\n                if action_type == \"None\":\n                    # radio\n                    component = gr.Checkbox()\n    #            elif action_type == \"<class 'str'>\":\n    #                component = gr.Textbox()\n    #            elif action_type == \"<class 'int'>\":\n    #                component = gr.Number(precision=0)\n    #            elif action_type == \"<class 'float'>\":\n    #                component = gr.Number()\n    #            elif \"int_or_float\" in action_type:\n    #                component = gr.Number()\n                else:\n                    component = gr.Textbox(value=\"\")\n                if component != None:\n                    component.interactive = True\n                    component.elem_id = action['action'][0]\n                    component.label = component.elem_id\n                    component.elem_classes = [\"advanced\"]\n                if action['help'] != None:\n                    component.info = action['help']\n            advanced_components.append(component)\n            advanced_component_ids.append(component.elem_id)\n    return advanced_components, advanced_component_ids\n\n\ntheme = gr.themes.Monochrome(\n    text_size=gr.themes.Size(lg=\"18px\", md=\"15px\", sm=\"13px\", xl=\"22px\", xs=\"12px\", xxl=\"24px\", xxs=\"9px\"),\n    font=[gr.themes.GoogleFont(\"Source Sans Pro\"), \"ui-sans-serif\", \"system-ui\", \"sans-serif\"],\n)\ncss = \"\"\"\n@keyframes rotate {\n    0% {\n        transform: rotate(0deg);\n    }\n    100% {\n        transform: rotate(360deg);\n    }\n}\n#advanced_options .advanced:nth-child(even) { background: rgba(0,0,100,0.04) !important; }\nh1{font-family: georgia; font-style: italic; font-weight: bold; font-size: 30px; letter-spacing: -1px;}\nh3{margin-top: 0}\n.tabitem{border: 0px}\n.group_padding{}\nnav{position: fixed; top: 0; left: 0; right: 0; z-index: 1000; text-align: center; padding: 10px; box-sizing: border-box; display: flex; align-items: center; backdrop-filter: blur(10px); }\nnav button { background: none; color: firebrick; font-weight: bold; border: 2px solid firebrick; padding: 5px 10px; border-radius: 5px; font-size: 14px; }\nnav img { height: 40px; width: 40px; border-radius: 40px; }\nnav img.rotate { animation: rotate 2s linear infinite; }\n.flexible { flex-grow: 1; }\n.tast-details { margin: 10px 0 !important; }\n.toast-wrap { bottom: var(--size-4) !important; top: auto !important; border: none !important; backdrop-filter: blur(10px); }\n.toast-title, .toast-text, .toast-icon, .toast-close { color: black !important; font-size: 14px; }\n.toast-body { border: none !important; }\n#terminal { box-shadow: none !important; margin-bottom: 25px; background: rgba(0,0,0,0.03); }\n#terminal .generating { border: none !important; }\n#terminal label { position: absolute !important; 
}\n.tabs { margin-top: 50px; }\n.hidden { display: none !important; }\n.codemirror-wrapper .cm-line { font-size: 12px !important; }\nlabel { font-weight: bold !important; }\n#start_training.clicked { background: silver; color: black; }\n\"\"\"\n\njs = \"\"\"\nfunction() {\n    let autoscroll = document.querySelector(\"#autoscroll\")\n    if (window.iidxx) {\n        window.clearInterval(window.iidxx);\n    }\n    window.iidxx = window.setInterval(function() {\n        let text=document.querySelector(\".codemirror-wrapper .cm-line\").innerText.trim()\n        let img = document.querySelector(\"#logo\")\n        if (text.length > 0) {\n            autoscroll.classList.remove(\"hidden\")\n            if (autoscroll.classList.contains(\"on\")) {\n                autoscroll.textContent = \"Autoscroll ON\"\n                window.scrollTo(0, document.body.scrollHeight, { behavior: \"smooth\" });\n                img.classList.add(\"rotate\")\n            } else {\n                autoscroll.textContent = \"Autoscroll OFF\"\n                img.classList.remove(\"rotate\")\n            }\n        }\n    }, 500);\n    console.log(\"autoscroll\", autoscroll)\n    autoscroll.addEventListener(\"click\", (e) => {\n        autoscroll.classList.toggle(\"on\")\n    })\n    function debounce(fn, delay) {\n        let timeoutId;\n        return function(...args) {\n            clearTimeout(timeoutId);\n            timeoutId = setTimeout(() => fn(...args), delay);\n        };\n    }\n\n    function handleClick() {\n        console.log(\"refresh\")\n        document.querySelector(\"#refresh\").click();\n    }\n    const debouncedClick = debounce(handleClick, 1000);\n    document.addEventListener(\"input\", debouncedClick);\n\n    document.querySelector(\"#start_training\").addEventListener(\"click\", (e) => {\n      e.target.classList.add(\"clicked\")\n      e.target.innerHTML = \"Training...\"\n    })\n\n}\n\"\"\"\n\ncurrent_account = account_hf()\nprint(f\"current_account={current_account}\")\n\nwith gr.Blocks(elem_id=\"app\", theme=theme, css=css, fill_width=True) as demo:\n    with gr.Tabs() as tabs:\n        with gr.TabItem(\"Gym\"):\n            output_components = []\n            with gr.Row():\n                gr.HTML(\"\"\"<nav>\n            <img id='logo' src='/file=icon.png' width='80' height='80'>\n            <div class='flexible'></div>\n            <button id='autoscroll' class='on hidden'></button>\n        </nav>\n        \"\"\")\n            with gr.Row(elem_id='container'):\n                with gr.Column():\n                    gr.Markdown(\n                        \"\"\"# Step 1. 
LoRA Info\n        <p style=\"margin-top:0\">Configure your LoRA train settings.</p>\n        \"\"\", elem_classes=\"group_padding\")\n                    lora_name = gr.Textbox(\n                        label=\"The name of your LoRA\",\n                        info=\"This has to be a unique name\",\n                        placeholder=\"e.g.: Persian Miniature Painting style, Cat Toy\",\n                    )\n                    concept_sentence = gr.Textbox(\n                        elem_id=\"--concept_sentence\",\n                        label=\"Trigger word/sentence\",\n                        info=\"Trigger word or sentence to be used\",\n                        placeholder=\"uncommon word like p3rs0n or trtcrd, or sentence like 'in the style of CNSTLL'\",\n                        interactive=True,\n                    )\n                    model_names = list(models.keys())\n                    print(f\"model_names={model_names}\")\n                    base_model = gr.Dropdown(label=\"Base model (edit the models.yaml file to add more to this list)\", choices=model_names, value=model_names[0])\n                    vram = gr.Radio([\"20G\", \"16G\", \"12G\" ], value=\"20G\", label=\"VRAM\", interactive=True)\n                    num_repeats = gr.Number(value=10, precision=0, label=\"Repeat trains per image\", interactive=True)\n                    max_train_epochs = gr.Number(label=\"Max Train Epochs\", value=16, interactive=True)\n                    total_steps = gr.Number(0, interactive=False, label=\"Expected training steps\")\n                    sample_prompts = gr.Textbox(\"\", lines=5, label=\"Sample Image Prompts (Separate with new lines)\", interactive=True)\n                    sample_every_n_steps = gr.Number(0, precision=0, label=\"Sample Image Every N Steps\", interactive=True)\n                    resolution = gr.Number(value=512, precision=0, label=\"Resize dataset images\")\n                with gr.Column():\n                    gr.Markdown(\n                        \"\"\"# Step 2. 
Dataset\n        <p style=\"margin-top:0\">Make sure the captions include the trigger word.</p>\n        \"\"\", elem_classes=\"group_padding\")\n                    with gr.Group():\n                        images = gr.File(\n                            file_types=[\"image\", \".txt\"],\n                            label=\"Upload your images\",\n                            #info=\"If you want, you can also manually upload caption files that match the image names (example: img0.png => img0.txt)\",\n                            file_count=\"multiple\",\n                            interactive=True,\n                            visible=True,\n                            scale=1,\n                        )\n                    with gr.Group(visible=False) as captioning_area:\n                        do_captioning = gr.Button(\"Add AI captions with Florence-2\")\n                        output_components.append(captioning_area)\n                        #output_components = [captioning_area]\n                        caption_list = []\n                        for i in range(1, MAX_IMAGES + 1):\n                            locals()[f\"captioning_row_{i}\"] = gr.Row(visible=False)\n                            with locals()[f\"captioning_row_{i}\"]:\n                                locals()[f\"image_{i}\"] = gr.Image(\n                                    type=\"filepath\",\n                                    width=111,\n                                    height=111,\n                                    min_width=111,\n                                    interactive=False,\n                                    scale=2,\n                                    show_label=False,\n                                    show_share_button=False,\n                                    show_download_button=False,\n                                )\n                                locals()[f\"caption_{i}\"] = gr.Textbox(\n                                    label=f\"Caption {i}\", scale=15, interactive=True\n                                )\n\n                            output_components.append(locals()[f\"captioning_row_{i}\"])\n                            output_components.append(locals()[f\"image_{i}\"])\n                            output_components.append(locals()[f\"caption_{i}\"])\n                            caption_list.append(locals()[f\"caption_{i}\"])\n                with gr.Column():\n                    gr.Markdown(\n                        \"\"\"# Step 3. 
Train\n        <p style=\"margin-top:0\">Press start to start training.</p>\n        \"\"\", elem_classes=\"group_padding\")\n                    refresh = gr.Button(\"Refresh\", elem_id=\"refresh\", visible=False)\n                    start = gr.Button(\"Start training\", visible=False, elem_id=\"start_training\")\n                    output_components.append(start)\n                    train_script = gr.Textbox(label=\"Train script\", max_lines=100, interactive=True)\n                    train_config = gr.Textbox(label=\"Train config\", max_lines=100, interactive=True)\n            with gr.Accordion(\"Advanced options\", elem_id='advanced_options', open=False):\n                with gr.Row():\n                    with gr.Column(min_width=300):\n                        seed = gr.Number(label=\"--seed\", info=\"Seed\", value=42, interactive=True)\n                    with gr.Column(min_width=300):\n                        workers = gr.Number(label=\"--max_data_loader_n_workers\", info=\"Number of Workers\", value=2, interactive=True)\n                    with gr.Column(min_width=300):\n                        learning_rate = gr.Textbox(label=\"--learning_rate\", info=\"Learning Rate\", value=\"8e-4\", interactive=True)\n                    with gr.Column(min_width=300):\n                        save_every_n_epochs = gr.Number(label=\"--save_every_n_epochs\", info=\"Save every N epochs\", value=4, interactive=True)\n                    with gr.Column(min_width=300):\n                        guidance_scale = gr.Number(label=\"--guidance_scale\", info=\"Guidance Scale\", value=1.0, interactive=True)\n                    with gr.Column(min_width=300):\n                        timestep_sampling = gr.Textbox(label=\"--timestep_sampling\", info=\"Timestep Sampling\", value=\"shift\", interactive=True)\n                    with gr.Column(min_width=300):\n                        network_dim = gr.Number(label=\"--network_dim\", info=\"LoRA Rank\", value=4, minimum=4, maximum=128, step=4, interactive=True)\n                    advanced_components, advanced_component_ids = init_advanced()\n            with gr.Row():\n                terminal = LogsView(label=\"Train log\", elem_id=\"terminal\")\n            with gr.Row():\n                gallery = gr.Gallery(get_samples, inputs=[lora_name], label=\"Samples\", every=10, columns=6)\n\n        with gr.TabItem(\"Publish\") as publish_tab:\n            hf_token = gr.Textbox(label=\"Huggingface Token\")\n            hf_login = gr.Button(\"Login\")\n            hf_logout = gr.Button(\"Logout\")\n            with gr.Row() as row:\n                gr.Markdown(\"**LoRA**\")\n                gr.Markdown(\"**Upload**\")\n            loras = get_loras()\n            with gr.Row():\n                lora_rows = refresh_publish_tab()\n                with gr.Column():\n                    with gr.Row():\n                        repo_owner = gr.Textbox(label=\"Account\", interactive=False)\n                        repo_name = gr.Textbox(label=\"Repository Name\")\n                    repo_visibility = gr.Textbox(label=\"Repository Visibility ('public' or 'private')\", value=\"public\")\n                    upload_button = gr.Button(\"Upload to HuggingFace\")\n                    upload_button.click(\n                        fn=upload_hf,\n                        inputs=[\n                            base_model,\n                            lora_rows,\n                            repo_owner,\n                            repo_name,\n                            
repo_visibility,\n                            hf_token,\n                        ]\n                    )\n            hf_login.click(fn=login_hf, inputs=[hf_token], outputs=[hf_token, hf_login, hf_logout, repo_owner])\n            hf_logout.click(fn=logout_hf, outputs=[hf_token, hf_login, hf_logout, repo_owner])\n\n\n    publish_tab.select(refresh_publish_tab, outputs=lora_rows)\n    lora_rows.select(fn=set_repo, inputs=[lora_rows], outputs=[repo_name])\n\n    dataset_folder = gr.State()\n\n    listeners = [\n        base_model,\n        lora_name,\n        resolution,\n        seed,\n        workers,\n        concept_sentence,\n        learning_rate,\n        network_dim,\n        max_train_epochs,\n        save_every_n_epochs,\n        timestep_sampling,\n        guidance_scale,\n        vram,\n        num_repeats,\n        sample_prompts,\n        sample_every_n_steps,\n        *advanced_components\n    ]\n    advanced_component_ids = [x.elem_id for x in advanced_components]\n    original_advanced_component_values = [comp.value for comp in advanced_components]\n    images.upload(\n        load_captioning,\n        inputs=[images, concept_sentence],\n        outputs=output_components\n    )\n    images.delete(\n        load_captioning,\n        inputs=[images, concept_sentence],\n        outputs=output_components\n    )\n    images.clear(\n        hide_captioning,\n        outputs=[captioning_area, start]\n    )\n    max_train_epochs.change(\n        fn=update_total_steps,\n        inputs=[max_train_epochs, num_repeats, images],\n        outputs=[total_steps]\n    )\n    num_repeats.change(\n        fn=update_total_steps,\n        inputs=[max_train_epochs, num_repeats, images],\n        outputs=[total_steps]\n    )\n    images.upload(\n        fn=update_total_steps,\n        inputs=[max_train_epochs, num_repeats, images],\n        outputs=[total_steps]\n    )\n    images.delete(\n        fn=update_total_steps,\n        inputs=[max_train_epochs, num_repeats, images],\n        outputs=[total_steps]\n    )\n    images.clear(\n        fn=update_total_steps,\n        inputs=[max_train_epochs, num_repeats, images],\n        outputs=[total_steps]\n    )\n    concept_sentence.change(fn=update_sample, inputs=[concept_sentence], outputs=sample_prompts)\n    start.click(fn=create_dataset, inputs=[dataset_folder, resolution, images] + caption_list, outputs=dataset_folder).then(\n        fn=start_training,\n        inputs=[\n            base_model,\n            lora_name,\n            train_script,\n            train_config,\n            sample_prompts,\n        ],\n        outputs=terminal,\n    )\n    do_captioning.click(fn=run_captioning, inputs=[images, concept_sentence] + caption_list, outputs=caption_list)\n    demo.load(fn=loaded, js=js, outputs=[hf_token, hf_login, hf_logout, repo_owner])\n    refresh.click(update, inputs=listeners, outputs=[train_script, train_config, dataset_folder])\nif __name__ == \"__main__\":\n    cwd = os.path.dirname(os.path.abspath(__file__))\n    demo.launch(debug=True, show_error=True, allowed_paths=[cwd])\n"
  },
  {
    "path": "docker-compose.yml",
    "content": "services:\n\n  fluxgym:\n    build:\n      context: .\n      # change the dockerfile to Dockerfile.cuda12.4 if you are running CUDA 12.4 drivers otherwise leave as is\n      dockerfile: Dockerfile\n    image: fluxgym\n    container_name: fluxgym\n    ports:\n      - 7860:7860\n    environment:\n      - PUID=${PUID:-1000}\n      - PGID=${PGID:-1000}\n    volumes:\n      - /etc/localtime:/etc/localtime:ro\n      - /etc/timezone:/etc/timezone:ro\n      - ./:/app/fluxgym\n    stop_signal: SIGKILL\n    tty: true\n    deploy:\n      resources:\n        reservations:\n          devices:\n          - driver: nvidia\n            count: all\n            capabilities: [gpu]\n    restart: unless-stopped"
  },
  {
    "path": "install.js",
    "content": "module.exports = {\n  requires: {\n    bundle: \"ai\",\n  },\n  run: [\n    {\n      method: \"shell.run\",\n      params: {\n        venv: \"env\",\n        message: [\n          \"git config --global --add safe.directory '*'\",\n          \"git clone -b sd3 https://github.com/kohya-ss/sd-scripts\"\n        ]\n      }\n    },\n    {\n      method: \"shell.run\",\n      params: {\n        path: \"sd-scripts\",\n        venv: \"../env\",\n        message: [\n          \"uv pip install -r requirements.txt\",\n        ]\n      }\n    },\n    {\n      method: \"shell.run\",\n      params: {\n        venv: \"env\",\n        message: [\n          \"uv pip uninstall diffusers[torch] torch\",\n          \"uv pip install -r requirements.txt\",\n          \"uv pip install -U bitsandbytes hf-xet\"\n        ]\n      }\n    },\n    {\n      method: \"script.start\",\n      params: {\n        uri: \"torch.js\",\n        params: {\n          venv: \"env\",\n          // xformers: true\n        }\n      }\n    },\n    {\n      method: \"fs.link\",\n      params: {\n        drive: {\n          vae: \"models/vae\",\n          clip: \"models/clip\",\n          unet: \"models/unet\",\n          loras: \"outputs\",\n        },\n        peers: [\n          \"https://github.com/pinokiofactory/stable-diffusion-webui-forge.git\",\n          \"https://github.com/pinokiofactory/comfy.git\",\n          \"https://github.com/pinokiofactory/MagicQuill.git\",\n          \"https://github.com/cocktailpeanutlabs/comfyui.git\",\n          \"https://github.com/cocktailpeanutlabs/fooocus.git\",\n          \"https://github.com/cocktailpeanutlabs/automatic1111.git\",\n          \"https://github.com/6Morpheus6/forge-neo.git\"\n        ]\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "link.js",
    "content": "module.exports = {\n  run: [\n    {\n      method: \"fs.link\",\n      params: {\n        venv: \"app/env\"\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "models/.gitkeep",
    "content": ""
  },
  {
    "path": "models/clip/.gitkeep",
    "content": ""
  },
  {
    "path": "models/unet/.gitkeep",
    "content": ""
  },
  {
    "path": "models/vae/.gitkeep",
    "content": ""
  },
  {
    "path": "models.yaml",
    "content": "# Add your own model here\n# <name that will show up on the dropdown>:\n#   repo: <the huggingface repo ID to pull from>\n#   base: <the model used to run inference with (The Huggingface \"Inference API\" widget will use this to generate demo images)>\n#   license: <follow the other examples. Any model inherited from DEV should use the dev license, schenll is apache-2.0>\n#   license_name: <follow the other examples. only needed for dev inherited models>\n#   license_link: <follow the other examples. only needed for dev inherited models>\n#   file: <the file name within the huggingface repo>\nflux-dev:\n    repo: cocktailpeanut/xulf-dev\n    base: black-forest-labs/FLUX.1-dev\n    license: other\n    license_name: flux-1-dev-non-commercial-license\n    license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md\n    file: flux1-dev.sft\nflux-schnell:\n    repo: black-forest-labs/FLUX.1-schnell\n    base: black-forest-labs/FLUX.1-schnell\n    license: apache-2.0\n    file: flux1-schnell.safetensors\nbdsqlsz/flux1-dev2pro-single:\n    repo: bdsqlsz/flux1-dev2pro-single\n    base: black-forest-labs/FLUX.1-dev\n    license: other\n    license_name: flux-1-dev-non-commercial-license\n    license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md\n    file: flux1-dev2pro.safetensors\n"
  },
  {
    "path": "outputs/.gitkeep",
    "content": ""
  },
  {
    "path": "pinokio.js",
    "content": "const path = require('path')\nmodule.exports = {\n  version: \"3.7\",\n  title: \"fluxgym\",\n  description: \"[NVIDIA Only] Dead simple web UI for training FLUX LoRA with LOW VRAM support (From 12GB)\",\n  icon: \"icon.png\",\n  menu: async (kernel, info) => {\n    let installed = info.exists(\"env\")\n    let running = {\n      install: info.running(\"install.js\"),\n      start: info.running(\"start.js\"),\n      update: info.running(\"update.js\"),\n      reset: info.running(\"reset.js\"),\n      link: info.running(\"link.js\")\n    }\n    if (running.install) {\n      return [{\n        default: true,\n        icon: \"fa-solid fa-plug\",\n        text: \"Installing\",\n        href: \"install.js\",\n      }]\n    } else if (running.update) {\n      return [{\n        default: true,\n        icon: 'fa-solid fa-terminal',\n        text: \"Updating\",\n        href: \"update.js\",\n      }]\n    } else if (installed) {\n      if (running.start) {\n        let local = info.local(\"start.js\")\n        if (local && local.url) {\n          return [{\n            default: true,\n            icon: \"fa-solid fa-rocket\",\n            text: \"Open Web UI\",\n            href: local.url,\n            popout: true\n          }, {\n            icon: 'fa-solid fa-terminal',\n            text: \"Terminal\",\n            href: \"start.js\",\n          }, {\n            icon: \"fa-solid fa-flask\",\n            text: \"Outputs\",\n            href: \"outputs?fs\"\n          }]\n        } else {\n          return [{\n            default: true,\n            icon: 'fa-solid fa-terminal',\n            text: \"Terminal\",\n            href: \"start.js\",\n          }]\n        }\n      } else if (running.reset) {\n        return [{\n          default: true,\n          icon: 'fa-solid fa-terminal',\n          text: \"Resetting\",\n          href: \"reset.js\",\n        }]\n      } else if (running.link) {\n        return [{\n          default: true,\n          icon: 'fa-solid fa-terminal',\n          text: \"Deduplicating\",\n          href: \"link.js\",\n        }]\n      } else {\n        return [{\n          default: true,\n          icon: \"fa-solid fa-power-off\",\n          text: \"Start\",\n          href: \"start.js\",\n        }, {\n          icon: \"fa-solid fa-flask\",\n          text: \"Outputs\",\n          href: \"sd-scripts/fluxgym/outputs?fs\"\n        }, {\n          icon: \"fa-solid fa-plug\",\n          text: \"Update\",\n          href: \"update.js\",\n        }, {\n          icon: \"fa-solid fa-plug\",\n          text: \"Install\",\n          href: \"install.js\",\n        }, {\n          icon: \"fa-solid fa-file-zipper\",\n          text: \"<div><strong>Save Disk Space</strong><div>Deduplicates redundant library files</div></div>\",\n          href: \"link.js\",\n        }, {\n          icon: \"fa-regular fa-circle-xmark\",\n          text: \"<div><strong>Reset</strong><div>Revert to pre-install state</div></div>\",\n          href: \"reset.js\",\n          confirm: \"Are you sure you wish to reset the app?\"\n        }]\n      }\n    } else if (!running.update) {\n      return [{\n        default: true,\n        icon: \"fa-solid fa-plug\",\n        text: \"Install\",\n        href: \"install.js\",\n      }]\n    }\n  }\n}\n"
  },
  {
    "path": "pinokio_meta.json",
    "content": "{\n  \"posts\": [\n    \"https://x.com/cocktailpeanut/status/1851721405408166064\",\n    \"https://x.com/cocktailpeanut/status/1835719701172756592\",\n    \"https://x.com/LikeToasters/status/1834258975384092858\",\n    \"https://x.com/cocktailpeanut/status/1834245329627009295\",\n    \"https://x.com/jkch0205/status/1834003420132614450\",\n    \"https://x.com/huwhitememes/status/1834074992209699132\",\n    \"https://x.com/GorillaRogueGam/status/1834148656791888139\",\n    \"https://x.com/cocktailpeanut/status/1833964839519068303\",\n    \"https://x.com/cocktailpeanut/status/1833935061907079521\",\n    \"https://x.com/cocktailpeanut/status/1833940728881242135\",\n    \"https://x.com/cocktailpeanut/status/1833881392482066638\",\n    \"https://x.com/Alone1Moon/status/1833348850662445369\",\n    \"https://x.com/_f_ai_9/status/1833485349995397167\",\n    \"https://x.com/intocryptoast/status/1833061082862412186\",\n    \"https://x.com/cocktailpeanut/status/1833888423716827321\",\n    \"https://x.com/cocktailpeanut/status/1833884852992516596\",\n    \"https://x.com/cocktailpeanut/status/1833885335077417046\",\n    \"https://x.com/NiwonArt/status/1833565746624139650\",\n    \"https://x.com/cocktailpeanut/status/1833884361986380117\",\n    \"https://x.com/NiwonArt/status/1833599399764889685\",\n    \"https://x.com/LikeToasters/status/1832934391217045913\",\n    \"https://x.com/cocktailpeanut/status/1832924887456817415\",\n    \"https://x.com/cocktailpeanut/status/1832927154536902897\",\n    \"https://x.com/YabaiHamster/status/1832697724690386992\",\n    \"https://x.com/cocktailpeanut/status/1832747889497366706\",\n    \"https://x.com/PhotogenicWeekE/status/1832720544959185202\",\n    \"https://x.com/zuzaritt/status/1832748542164652390\",\n    \"https://x.com/foxyy4i/status/1832764883710185880\",\n    \"https://x.com/waynedahlberg/status/1832226132999213095\",\n    \"https://x.com/PhotoGarrido/status/1832214644515041770\",\n    \"https://x.com/cocktailpeanut/status/1832787205774786710\",\n    \"https://x.com/cocktailpeanut/status/1832151307198541961\",\n    \"https://x.com/cocktailpeanut/status/1832145996014612735\",\n    \"https://x.com/cocktailpeanut/status/1832084951115972653\",\n    \"https://x.com/cocktailpeanut/status/1832091112086843684\"\n  ],\n  \"links\": [{\n    \"type\": \"bitcoin\",\n    \"value\": \"bc1qx90z3ce9qz4p2pnt06gd0ytntl86qw4d6qv39k\"\n  }, {\n    \"title\": \"X\",\n    \"value\": \"https://x.com/cocktailpeanut\"\n  }, {\n    \"title\": \"Github\",\n    \"value\": \"https://github.com/cocktailpeanut\"\n  }, {\n    \"title\": \"Discord\",\n    \"value\": \"https://discord.gg/TQdNwadtE4\"\n  }]\n}\n"
  },
  {
    "path": "requirements.txt",
    "content": "safetensors\ngit+https://github.com/huggingface/diffusers.git\ngradio_logsview@https://huggingface.co/spaces/cocktailpeanut/gradio_logsview/resolve/main/gradio_logsview-0.0.17-py3-none-any.whl\ntransformers==4.49.0\nlycoris-lora==1.8.3\nflatten_json\npyyaml\noyaml\ntensorboard\nkornia\ninvisible-watermark\neinops\naccelerate\ntoml\nalbumentations\npydantic\nomegaconf\nk-diffusion\nopen_clip_torch\ntimm\nprodigyopt\ncontrolnet_aux==0.0.7\npython-dotenv\nbitsandbytes\nhf_transfer\nlpips\npytorch_fid\noptimum-quanto\nsentencepiece\nhuggingface_hub\npeft==0.17.1\ngradio\npython-slugify\nimagesize\npydantic==2.9.2\n"
  },
  {
    "path": "reset.js",
    "content": "module.exports = {\n  run: [{\n    method: \"fs.rm\",\n    params: {\n      path: \"sd-scripts\"\n    }\n  }, {\n    method: \"fs.rm\",\n    params: {\n      path: \"env\"\n    }\n  }]\n}\n"
  },
  {
    "path": "start.js",
    "content": "module.exports = {\n  requires: {\n    bundle: \"ai\"\n  },\n  daemon: true,\n  run: [\n    {\n      method: \"shell.run\",\n      params: {\n        venv: \"env\",\n        env: {\n          LOG_LEVEL: \"DEBUG\",\n          CUDA_VISIBLE_DEVICES: \"0\"\n        },\n        message: [\n          \"python app.py\",\n        ],\n        on: [{\n          \"event\": \"/http:\\\\/\\\\/[^\\\\s\\\\/]+:\\\\d{2,5}(?=[^\\\\w]|$)/\",\n          \"done\": true\n        }]\n      }\n    },\n    {\n      method: \"local.set\",\n      params: {\n        url: \"{{input.event[0]}}\"\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "torch.js",
    "content": "module.exports = {\n  run: [\n    // windows nvidia\n    {\n      \"when\": \"{{platform === 'win32' && gpu === 'nvidia'}}\",\n      \"method\": \"shell.run\",\n      \"params\": {\n        \"venv\": \"{{args && args.venv ? args.venv : null}}\",\n        \"path\": \"{{args && args.path ? args.path : '.'}}\",\n        \"message\": \"uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128 --force-reinstall --no-deps\"\n\n      }\n    },\n    // windows amd\n    {\n      \"when\": \"{{platform === 'win32' && gpu === 'amd'}}\",\n      \"method\": \"shell.run\",\n      \"params\": {\n        \"venv\": \"{{args && args.venv ? args.venv : null}}\",\n        \"path\": \"{{args && args.path ? args.path : '.'}}\",\n        \"message\": \"uv pip install torch-directml torch torchvision torchaudio --force-reinstall --no-deps\"\n      }\n    },\n    // windows cpu\n    {\n      \"when\": \"{{platform === 'win32' && (gpu !== 'nvidia' && gpu !== 'amd')}}\",\n      \"method\": \"shell.run\",\n      \"params\": {\n        \"venv\": \"{{args && args.venv ? args.venv : null}}\",\n        \"path\": \"{{args && args.path ? args.path : '.'}}\",\n        \"message\": \"uv pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu --force-reinstall --no-deps\"\n      }\n    },\n    // mac\n    {\n      \"when\": \"{{platform === 'darwin'}}\",\n      \"method\": \"shell.run\",\n      \"params\": {\n        \"venv\": \"{{args && args.venv ? args.venv : null}}\",\n        \"path\": \"{{args && args.path ? args.path : '.'}}\",\n        \"message\": \"uv pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu --force-reinstall --no-deps\"\n      }\n    },\n    // linux nvidia\n    {\n      \"when\": \"{{platform === 'linux' && gpu === 'nvidia'}}\",\n      \"method\": \"shell.run\",\n      \"params\": {\n        \"venv\": \"{{args && args.venv ? args.venv : null}}\",\n        \"path\": \"{{args && args.path ? args.path : '.'}}\",\n        \"message\": \"uv pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121 --force-reinstall\"\n      }\n    },\n    // linux rocm (amd)\n    {\n      \"when\": \"{{platform === 'linux' && gpu === 'amd'}}\",\n      \"method\": \"shell.run\",\n      \"params\": {\n        \"venv\": \"{{args && args.venv ? args.venv : null}}\",\n        \"path\": \"{{args && args.path ? args.path : '.'}}\",\n        \"message\": \"uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.4 --force-reinstall --no-deps\"\n      }\n    },\n    // linux cpu\n    {\n      \"when\": \"{{platform === 'linux' && (gpu !== 'amd' && gpu !=='nvidia')}}\",\n      \"method\": \"shell.run\",\n      \"params\": {\n        \"venv\": \"{{args && args.venv ? args.venv : null}}\",\n        \"path\": \"{{args && args.path ? args.path : '.'}}\",\n        \"message\": \"uv pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu --force-reinstall --no-deps\"\n      }\n    }\n  ]\n}\n"
  },
  {
    "path": "update.js",
    "content": "module.exports = {\n  run: [{\n    method: \"shell.run\",\n    params: {\n      message: \"git pull\"\n    }\n  }, {\n    method: \"shell.run\",\n    params: {\n      path: \"sd-scripts\",\n      message: \"git pull\"\n    }\n  }, {\n    method: \"fs.rm\",\n    params: {\n      path: \"env\"\n    }\n  }, {\n    method: \"shell.run\",\n    params: {\n      path: \"sd-scripts\",\n      venv: \"../env\",\n      message: [\n        \"uv pip install -r requirements.txt\",\n      ]\n    }\n  }, {\n    method: \"shell.run\",\n    params: {\n      venv: \"env\",\n      message: [\n        \"uv pip uninstall diffusers[torch] torch\",\n        \"uv pip install -r requirements.txt\",\n      ]\n    }\n  }, {\n    method: \"script.start\",\n    params: {\n      uri: \"torch.js\",\n      params: {\n        venv: \"env\",\n        // xformers: true   // uncomment this line if your project requires xformers\n      }\n    }\n  }]\n}\n"
  }
]