[
  {
    "path": "README.md",
    "content": "# TouchDiffusion\n<a href=\"https://discord.com/invite/wNW8xkEjrf\"><img src=\"https://discord.com/api/guilds/838923088997122100/widget.png?style=shield\" alt=\"Discord Shield\"/></a>\n\nTouchDesigner implementation for real-time Stable Diffusion interactive generation with [StreamDiffusion](https://github.com/cumulo-autumn/StreamDiffusion).\n\n**Benchmarks with stabilityai/sd-turbo, 512x512 and 1 batch size.**\n\n| GPU | FPS |\n| --- | --- |\n| 4090 | 55-60 FPS |\n| 4080 | 47 FPS |\n| 3090ti | 37 FPS |\n| 3090 | 30-32 FPS |\n| 4070 Laptop | 24 FPS |\n| 3060 12GB | 16 FPS |\n\n## Disclaimer\n**Notice:** This repository is in an early testing phase and may undergo significant changes. Use it at your own risk. \n\n## Usage\n> [!TIP]\n> TouchDiffusion can be installed in multiple ways. **Portable version** have prebuild dendencies, so it prefered way to install or **Manuall install** is step by step instruction.\n\n#### Portable version:\nIncludes preinstalled configurations, ensuring everything is readily available for immediate use.\n1. Download and extract [archive](https://boosty.to/vjschool/posts/39931cd6-b9c5-4c27-93ff-d7a09b0918c5?share=post_link)\n2. Run ```webui.bat```. It will provide url to web interface (ex. ```http://127.0.0.1:7860```)\n3. Open ```install & update``` tab and run ```Update dependencies```.\n   \n#### Manuall install:\nYou can follow [YouTube tutorial](https://youtu.be/3WqUrWfCX1A)\n\nRequired TouchDesigner 2023 & Python 3.11\n1. Install [Python 3.11](https://www.python.org/downloads/release/python-3118/)\n2. Install [Git](https://git-scm.com/downloads)\n3. Install [CUDA Toolkit](https://developer.nvidia.com/cuda-11-8-0-download-archive) 11.8 (required PC restart)\n4. Download [TouchDiffusion](https://github.com/olegchomp/TouchDiffusion/archive/refs/heads/main.zip).\n5. Open ```webui.bat``` with text editor and set path to Python 3.11 in ```set PYTHON_PATH=```. (ex. 
```set PYTHON_PATH=\"C:\\Program Files\\Python311\\python.exe\"```)\n6. Run ```webui.bat```. After installation it will provide url to web interface (ex. ```http://127.0.0.1:7860```)\n7. Open ```install & update``` tab and run ```Update dependencies```. (could take ~10 minutes, depending on your internet connection)\n8. If you get pop up window with error related to .dll, run ```Fix pop up```\n9. Restart webui.bat\n\n#### Accelerate model:\nModels in ```.safetensors``` format must be in ```models\\checkpoints``` folder. (as for sd_turbo, it  will be auto-downloaded).\n\n**Internet connection required, while making engines.**\n\n1) Run ```webui.bat```\n2) Select model type. \n3) Select model.\n4) Set width, height and amount of sampling steps (Batch size)\n5) Select acceleration lora if available.\n6) Run ```Make engine``` and wait for acceleration to finish. (could take ~10 minutes, depending on your hardware)\n\n#### TouchDesigner inference:\n1. Add **TouchDiffusion.tox** to project\n2. On ```Settings``` page change path to ```TouchDiffusion``` folder (same as where webui.bat).\n3. Save and restart TouchDesigner project.\n4. On ```Settings``` page select Engine and click **Load Engine**.\n5. Connect animated TOP to input. Component cook only if input updates. \n\n#### Known issues / Roadmap:\n- [x] Fix Re-init. Sometimes required to restart TouchDesigner for initializing site-packages.\n- [ ] Code clean-up and rework.\n- [x] Custom resolution (for now fixed 512x512)\n- [ ] CFG not affecting image\n- [ ] Add Lora\n- [x] Add Hyper Lora support\n- [ ] Add ControlNet support\n- [ ] Add SDXL support\n\n## Acknowledgement\nBased on the following projects:\n* [StreamDiffusion](https://github.com/cumulo-autumn/StreamDiffusion) - Pipeline-Level Solution for Real-Time Interactive Generation\n* [TopArray](https://github.com/IntentDev/TopArray) - Interaction between Python/PyTorch tensor operations and TouchDesigner TOPs.\n"
  },
  {
    "path": "TouchDiffusionExt.py",
    "content": "from TDStoreTools import StorageManager\r\nimport TDFunctions as TDF\r\nimport numpy as np\r\nimport torch\r\nimport os \r\nimport webbrowser\r\nimport json\r\nfrom datetime import datetime\r\nimport webbrowser\r\n\r\ntry:\r\n\tfrom StreamDiffusion.utils.wrapper import StreamDiffusionWrapper\r\nexcept Exception as e:\r\n\tcurrent_time = datetime.now()\r\n\tformated_time = current_time.strftime(\"%H:%M:%S\")\r\n\top('fifo1').appendRow([formated_time, 'Error', e])\r\n\r\n\r\nclass TouchDiffusionExt:\r\n\t\"\"\"\r\n\tDefaultExt description\r\n\t\"\"\"\r\n\tdef __init__(self, ownerComp):\r\n\t\tself.ownerComp = ownerComp\r\n\r\n\t\tself.source = op('null1')\r\n\t\tself.device = \"cuda\"\r\n\t\tself.to_tensor = TopArrayInterface(self.source)\r\n\t\tself.stream_toparray = torch.cuda.current_stream(device=self.device)\r\n\t\tself.rgba_tensor = torch.zeros((512, 512, 4), dtype=torch.float32).to(self.device) #512,768\r\n\t\tself.rgba_tensor[..., 3] = 0\r\n\t\tself.output_interface = TopCUDAInterface(512,512,4,np.float32) #768,512\r\n\t\tself.stream = None\r\n\r\n\tdef activate_stream(self):\r\n\t\tself.update_size()\r\n\r\n\t\tacceleration_lora = op('parameter1')['Accelerationlora',1].val\r\n\t\tif acceleration_lora == 'LCM':\r\n\t\t\tuse_lcm_lora = True\r\n\t\telif acceleration_lora == 'HyperSD':\r\n\t\t\tuse_hyper_lora = True\r\n\t\telse:\r\n\t\t\tuse_lcm_lora = False\r\n\t\t\tuse_hyper_lora = False\r\n\t\t\r\n\t\ttry:\r\n\t\t\tself.stream = StreamDiffusionWrapper(\r\n\t\t\t\tmodel_id_or_path=f\"{op('parameter1')['Checkpoint',1].val}\",\r\n\t\t\t\tlora_dict=op('parameter1')['Loralist',1].val,\r\n\t\t\t\tt_index_list=self.generate_t_index_list(),\r\n\t\t\t\tframe_buffer_size=1,\r\n\t\t\t\twidth= int(op('parameter1')['Sizex',1]),\r\n\t\t\t\theight=int(op('parameter1')['Sizey',1]),\r\n\t\t\t\twarmup=0,\r\n\t\t\t\tacceleration=\"tensorrt\",\r\n\t\t\t\tmode= 
op('parameter1')['Checkpointmode',1].val,\r\n\t\t\t\tuse_denoising_batch=True,\r\n\t\t\t\tcfg_type=\"self\",\r\n\t\t\t\tseed=int(op('parameter1')['Seed',1]),\r\n\t\t\t\tuse_lcm_lora=use_lcm_lora,\r\n\t\t\t\tuse_hyper_lora=use_hyper_lora,\r\n\t\t\t\toutput_type='pt',\r\n\t\t\t\tmodel_type=op('parameter1')['Checkpointtype',1].val,\r\n\t\t\t\ttouchdiffusion=True,\r\n\t\t\t)\r\n\r\n\t\t\tself.stream.prepare(\r\n\t\t\t\tprompt=parent().par.Prompt.val,\r\n\t\t\t\tnegative_prompt=parent().par.Negprompt.val,\r\n\t\t\t\tguidance_scale=parent().par.Cfgscale.val,\r\n\t\t\t\tdelta=parent().par.Deltamult.val,\r\n\t\t\t\tt_index_list=self.update_denoising_strength()\r\n\t\t\t)\r\n\r\n\t\t\tself.fifolog('Status', 'Engine activated')\r\n\t\texcept Exception as e:\r\n\t\t\tself.fifolog('Error', e)\r\n\r\n\tdef generate(self, scriptOp):\r\n\t\tstream = self.stream\r\n\t\tself.to_tensor.update(self.stream_toparray.cuda_stream)\r\n\t\timage = torch.as_tensor(self.to_tensor, device=self.device)\r\n\t\timage_tensor = self.preprocess_image(image)\r\n\r\n\t\tif hasattr(self.stream, 'batch_size'):\r\n\t\t\t# Feed the frame once per denoising batch slot (one call fewer when batching)\r\n\t\t\tlast_element = 1 if stream.batch_size != 1 else 0\r\n\t\t\tfor _ in range(stream.batch_size - last_element):\r\n\t\t\t\toutput_image = stream(image=image_tensor)\r\n\r\n\t\t\toutput_tensor = self.postprocess_image(output_image)\r\n\t\t\tscriptOp.copyCUDAMemory(\r\n\t\t\t\toutput_tensor.data_ptr(),\r\n\t\t\t\tself.output_interface.size,\r\n\t\t\t\tself.output_interface.mem_shape)\r\n\r\n\tdef update_size(self):\r\n\t\twidth = int(op('parameter1')['Sizex',1])\r\n\t\theight = int(op('parameter1')['Sizey',1])\r\n\t\tself.rgba_tensor = torch.zeros((height, width, 4), dtype=torch.float32).to(self.device)\r\n\t\tself.rgba_tensor[..., 3] = 0\r\n\t\tself.output_interface = TopCUDAInterface(width, height, 4, np.float32)\r\n\r\n\tdef preprocess_image(self, image):\r\n\t\timage = torch.flip(image, [1])\r\n\t\timage = 
torch.clamp(image, 0, 1)\r\n\t\timage = image[:3, :, :]\r\n\t\t_, h, w = image.shape\r\n\t\t# Round height/width down to a multiple of 32 (computed but not currently applied)\r\n\t\th, w = map(lambda x: x - x % 32, (h, w))\r\n\t\timage = image.unsqueeze(0)\r\n\t\treturn image\r\n\r\n\tdef postprocess_image(self, image):\r\n\t\timage = torch.flip(image, [1])\r\n\t\timage = image.permute(1, 2, 0)\r\n\t\tself.rgba_tensor[..., :3] = image\r\n\t\treturn self.rgba_tensor\r\n\r\n\tdef acceleration_mode(self):\r\n\t\tturbo = False\r\n\t\tlcm = False\r\n\t\tacceleration_mode = parent().par.Acceleration.val\r\n\t\tif acceleration_mode == 'LCM':\r\n\t\t\tlcm = True\r\n\t\tif acceleration_mode == 'sd_turbo':\r\n\t\t\tturbo = True\r\n\r\n\t\treturn lcm, turbo\r\n\r\n\tdef update_engines(self):\r\n\t\t# Engine folder names encode metadata as '--'-separated fields\r\n\t\tmenuNames = []\r\n\t\tmenuLabels = []\r\n\t\tfor root, dirs, files in os.walk('engines'):\r\n\t\t\tif 'unet.engine' in files:\r\n\t\t\t\tfolder_name = os.path.basename(root)\r\n\t\t\t\tsplit_folder_name = folder_name.split('--')\r\n\t\t\t\tif len(split_folder_name) >= 10:\r\n\t\t\t\t\tname = [split_folder_name[0],\r\n\t\t\t\t\t\tsplit_folder_name[2],\r\n\t\t\t\t\t\tsplit_folder_name[3],\r\n\t\t\t\t\t\tsplit_folder_name[5]]\r\n\t\t\t\t\tname = '-'.join(name)\r\n\t\t\t\t\tmenuLabels.append(name)\r\n\t\t\t\t\tmenuNames.append(folder_name)\r\n\r\n\t\tparent().par.Enginelist.menuNames = menuNames\r\n\t\tparent().par.Enginelist.menuLabels = menuLabels\r\n\t\tself.update_selected_engine()\r\n\r\n\tdef update_selected_engine(self):\r\n\t\ttry:\r\n\t\t\tvals = parent().par.Enginelist.val.split('--')\r\n\t\t\tparent().par.Checkpoint = vals[0]\r\n\t\t\tparent().par.Checkpointtype = vals[1]\r\n\t\t\tparent().par.Accelerationlora = vals[4]\r\n\t\t\tparent().par.Checkpointmode = vals[7]\r\n\t\t\tparent().par.Controlnet = vals[9]\r\n\t\t\tparent().par.Loralist = vals[8]\r\n\t\t\tparent().par.Sizex = vals[2]\r\n\t\t\tparent().par.Sizey = 
vals[3]\r\n\t\t\tparent().par.Batchsizex = vals[5]\r\n\t\t\tparent().par.Batchsizey = vals[6]\r\n\t\texcept Exception:\r\n\t\t\tparent().par.Checkpoint, parent().par.Checkpointtype, parent().par.Accelerationlora = '', '', ''\r\n\t\t\tparent().par.Checkpointmode, parent().par.Controlnet, parent().par.Loralist = '', '', ''\r\n\t\t\tparent().par.Sizex, parent().par.Sizey, parent().par.Batchsizex, parent().par.Batchsizey = 0, 0, 0, 0\r\n\r\n\tdef update_prompt(self):\r\n\t\tprompt = parent().par.Prompt.val\r\n\t\tself.stream.touchdiffusion_prompt(prompt)\r\n\r\n\tdef prompt_to_str(self):\r\n\t\tprompt_list = []\r\n\t\tseq = parent().seq.Promptblock\r\n\t\tenable_weights = parent().par.Enableweight\r\n\t\tfor block in seq.blocks:\r\n\t\t\tif block.par.Weight.val > 0:\r\n\t\t\t\tif enable_weights:\r\n\t\t\t\t\tprompt_with_weight = f'({block.par.Prompt.val}){block.par.Weight.val}'\r\n\t\t\t\telse:\r\n\t\t\t\t\tprompt_with_weight = block.par.Prompt.val\r\n\r\n\t\t\t\tprompt_list.append(prompt_with_weight)\r\n\r\n\t\tprompt_str = \", \".join(prompt_list)\r\n\t\treturn prompt_str\r\n\r\n\tdef update_scheduler(self):\r\n\t\tt_index_list = []\r\n\t\tseq = parent().seq.Schedulerblock\r\n\t\tfor block in seq.blocks:\r\n\t\t\t# append the parameter value, not the Par object\r\n\t\t\tt_index_list.append(block.par.Step.eval())\r\n\t\tself.stream.touchdiffusion_scheduler(t_index_list)\r\n\r\n\tdef update_denoising_strength(self):\r\n\t\tamount = parent().par.Denoise\r\n\t\tmode = parent().par.Denoisemode\r\n\t\tt_index_list = self.stream.touchdiffusion_generate_t_index_list(amount, mode)\r\n\t\treturn t_index_list\r\n\r\n\tdef generate_t_index_list(self):\r\n\t\t# One denoising step index per batch slot: [0, 1, ..., batchsize-1]\r\n\t\tbatchsize = int(op('parameter1')['Batchsizex',1])\r\n\t\treturn list(range(batchsize))\r\n\r\n\tdef update_cfg_setting(self):\r\n\t\tguidance_scale = parent().par.Cfgscale\r\n\t\tdelta = 
parent().par.Deltamult.val\r\n\t\tself.stream.touchdiffusion_update_cfg_setting(guidance_scale=guidance_scale, delta=delta)\r\n\r\n\tdef update_noise(self):\r\n\t\tseed = parent().par.Seed.val\r\n\t\tself.stream.touchdiffusion_update_noise(seed=seed)\r\n\r\n\tdef parexec_onValueChange(self, par, prev):\r\n\t\tif hasattr(self.stream, 'batch_size'):\r\n\t\t\tif par.name == 'Prompt':\r\n\t\t\t\tself.update_prompt()\r\n\t\t\telif par.name == 'Denoise':\r\n\t\t\t\tself.update_denoising_strength()\r\n\t\t\telif par.name == 'Cfgscale':\r\n\t\t\t\tself.update_cfg_setting()\r\n\t\t\telif par.name == 'Seed':\r\n\t\t\t\tself.update_noise()\r\n\r\n\tdef parexec_onPulse(self, par):\r\n\t\tif par.name == 'Loadengine':\r\n\t\t\tself.activate_stream()\r\n\t\telif par.name == 'Refreshenginelist':\r\n\t\t\tself.update_engines()\r\n\t\telif par.name.startswith('Url'):\r\n\t\t\tself.about(par.name)\r\n\r\n\tdef fifolog(self, status, message):\r\n\t\tcurrent_time = datetime.now()\r\n\t\tformatted_time = current_time.strftime(\"%H:%M:%S\")\r\n\t\top('fifo1').appendRow([formatted_time, status, message])\r\n\r\n\tdef about(self, endpoint):\r\n\t\tif endpoint == 'Urlg':\r\n\t\t\twebbrowser.open('https://github.com/olegchomp/TouchDiffusion', new=2)\r\n\t\telif endpoint == 'Urld':\r\n\t\t\twebbrowser.open('https://discord.gg/wNW8xkEjrf', new=2)\r\n\t\telif endpoint == 'Urlt':\r\n\t\t\twebbrowser.open('https://www.youtube.com/vjschool', new=2)\r\n\t\telif endpoint == 'Urla':\r\n\t\t\twebbrowser.open('https://olegcho.mp/', new=2)\r\n\t\telif endpoint == 'Urldonate':\r\n\t\t\twebbrowser.open('https://boosty.to/vjschool/', new=2)\r\n\r\nclass TopCUDAInterface:\r\n\tdef __init__(self, width, height, num_comps, dtype):\r\n\t\tself.mem_shape = CUDAMemoryShape()\r\n\t\tself.mem_shape.width = width\r\n\t\tself.mem_shape.height = height\r\n\t\tself.mem_shape.numComps = num_comps\r\n\t\tself.mem_shape.dataType = dtype\r\n\t\tself.bytes_per_comp = np.dtype(dtype).itemsize\r\n\t\tself.size = width * height * 
num_comps * self.bytes_per_comp\r\n\r\nclass TopArrayInterface:\r\n\tdef __init__(self, top, stream=0):\r\n\t\tself.top = top\r\n\t\tmem = top.cudaMemory(stream=stream)\r\n\t\tself.w, self.h = mem.shape.width, mem.shape.height\r\n\t\tself.num_comps = mem.shape.numComps\r\n\t\tself.dtype = mem.shape.dataType\r\n\t\tshape = (mem.shape.numComps, self.h, self.w)\r\n\t\tdtype_info = {'descr': [('', '<f4')], 'num_bytes': 4}\r\n\t\tdtype_descr = dtype_info['descr']\r\n\t\tnum_bytes = dtype_info['num_bytes']\r\n\t\tnum_bytes_px = num_bytes * mem.shape.numComps\r\n\t\t\r\n\t\tself.__cuda_array_interface__ = {\r\n\t\t\t\"version\": 3,\r\n\t\t\t\"shape\": shape,\r\n\t\t\t\"typestr\": dtype_descr[0][1],\r\n\t\t\t\"descr\": dtype_descr,\r\n\t\t\t\"stream\": stream,\r\n\t\t\t\"strides\": (num_bytes, num_bytes_px * self.w, num_bytes_px),\r\n\t\t\t\"data\": (mem.ptr, False),\r\n\t\t}\r\n\r\n\tdef update(self, stream=0):\r\n\t\tmem = self.top.cudaMemory(stream=stream)\r\n\t\tself.__cuda_array_interface__['stream'] = stream\r\n\t\tself.__cuda_array_interface__['data'] = (mem.ptr, False)\r\n\t\treturn"
  },
  {
    "path": "webui.bat",
    "content": "@echo off\r\n\r\nset PYTHON_PATH=\r\n\r\nREM Check if Git is installed\r\nwhere git > nul 2>&1\r\nif %errorlevel% neq 0 (\r\n    echo Git is not installed or not found in the system PATH.\r\n    pause\r\n    exit /b 1\r\n) else (\r\n    echo Git is installed.\r\n)\r\n\r\nif not exist .venv (\r\n    echo Creating .venv directory...\r\n    %PYTHON_PATH% -m venv \".venv\" || (\r\n        echo Failed to create virtual environment.\r\n        pause\r\n        exit /b 1\r\n    )\r\n\r\n    echo Activating virtual environment...\r\n    call .venv\\Scripts\\activate || (\r\n        echo Failed to activate virtual environment.\r\n        pause\r\n        exit /b 1\r\n    )\r\n\r\n    echo Installing dependencies...\r\n    python -m pip install --upgrade pip || (\r\n        echo Failed to update pip.\r\n        pause\r\n        exit /b 1\r\n    )\r\n\r\n    echo Installing dependencies...\r\n    pip install gradio || (\r\n        echo Failed to install gradio.\r\n        pause\r\n        exit /b 1\r\n    )\r\n    \r\n    if not exist StreamDiffusion (\r\n        echo Downloading StreamDiffusion...\r\n        git clone https://github.com/olegchomp/StreamDiffusion || (\r\n            echo Failed to download StreamDiffusion\r\n            pause\r\n            exit /b 1\r\n        )\r\n    )\r\n    \r\n    echo Installation complete.\r\n    \r\n    echo Launching WebUI...\r\n    python StreamDiffusion\\webui.py || (\r\n        echo No launch file found\r\n        pause\r\n        exit /b 1\r\n    )\r\n\r\n) else (\r\n    echo Activating virtual environment...\r\n    call .venv\\Scripts\\activate.bat || (\r\n        echo Failed to activate virtual environment.\r\n        pause\r\n        exit /b 1\r\n    )\r\n    \r\n    if not exist StreamDiffusion (\r\n        echo Downloading StreamDiffusion...\r\n        git clone https://github.com/olegchomp/StreamDiffusion || (\r\n            echo Failed to download StreamDiffusion\r\n            pause\r\n            exit /b 
1\r\n        )\r\n    )\r\n  \r\n    echo Launching WebUI...\r\n    python StreamDiffusion\\webui.py || (\r\n        echo No launch file found\r\n        pause\r\n        exit /b 1\r\n    )\r\n)\r\n\r\npause\r\n"
  }
]