Repository: camenduru/Diffutoon-jupyter
Branch: main
Commit: 6ac80dc7a2fe
Files: 3
Total size: 24.0 KB
Directory structure:
gitextract_11r30370/
├── Diffutoon_color_jupyter.ipynb
├── Diffutoon_jupyter.ipynb
└── README.md
================================================
FILE CONTENTS
================================================
================================================
FILE: Diffutoon_color_jupyter.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github"
},
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/Diffutoon-jupyter/blob/main/Diffutoon_color_jupyter.ipynb)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "VjYy0F2gZIPR"
},
"outputs": [],
"source": [
"# https://github.com/modelscope/DiffSynth-Studio/blob/main/examples/Diffutoon/Diffutoon.ipynb modified\n",
"\n",
"%cd /content\n",
"!git clone https://github.com/Artiprocher/DiffSynth-Studio\n",
"%cd /content/DiffSynth-Studio\n",
"\n",
"!pip install -q einops transformers controlnet-aux==0.0.7 sentencepiece imageio imageio-ffmpeg\n",
"\n",
"!apt -y install -qq aria2\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://civitai.com/api/download/models/229575 -d /content/DiffSynth-Studio/models/stable_diffusion -o aingdiffusion_v12.safetensors\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15_v2.ckpt -d /content/DiffSynth-Studio/models/AnimateDiff -o mm_sd_v15_v2.ckpt\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart.pth -d /content/DiffSynth-Studio/models/ControlNet -o control_v11p_sd15_lineart.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile.pth -d /content/DiffSynth-Studio/models/ControlNet -o control_v11f1e_sd15_tile.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth.pth -d /content/DiffSynth-Studio/models/ControlNet -o control_v11f1p_sd15_depth.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge.pth -d /content/DiffSynth-Studio/models/ControlNet -o control_v11p_sd15_softedge.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/Annotators/resolve/main/dpt_hybrid-midas-501f0c75.pt -d /content/DiffSynth-Studio/models/Annotators -o dpt_hybrid-midas-501f0c75.pt\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/Annotators/resolve/main/ControlNetHED.pth -d /content/DiffSynth-Studio/models/Annotators -o ControlNetHED.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/Annotators/resolve/main/sk_model.pth -d /content/DiffSynth-Studio/models/Annotators -o sk_model.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/Annotators/resolve/main/sk_model2.pth -d /content/DiffSynth-Studio/models/Annotators -o sk_model2.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M \"https://civitai.com/api/download/models/25820?type=Model&format=PickleTensor&size=full&fp=fp16\" -d /content/DiffSynth-Studio/models/textual_inversion -o verybadimagenegative_v1.3.pt\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/camenduru/Diffutoon/resolve/main/input_video.mp4 -d /content -o input_video.mp4"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"config_stage_1_template = {\n",
" \"models\": {\n",
" \"model_list\": [\n",
" \"models/stable_diffusion/aingdiffusion_v12.safetensors\",\n",
" \"models/ControlNet/control_v11p_sd15_softedge.pth\",\n",
" \"models/ControlNet/control_v11f1p_sd15_depth.pth\"\n",
" ],\n",
" \"textual_inversion_folder\": \"models/textual_inversion\",\n",
" \"device\": \"cuda\",\n",
" \"lora_alphas\": [],\n",
" \"controlnet_units\": [\n",
" {\n",
" \"processor_id\": \"softedge\",\n",
" \"model_path\": \"models/ControlNet/control_v11p_sd15_softedge.pth\",\n",
" \"scale\": 0.5\n",
" },\n",
" {\n",
" \"processor_id\": \"depth\",\n",
" \"model_path\": \"models/ControlNet/control_v11f1p_sd15_depth.pth\",\n",
" \"scale\": 0.5\n",
" }\n",
" ]\n",
" },\n",
" \"data\": {\n",
" \"input_frames\": {\n",
" \"video_file\": \"/content/input_video.mp4\",\n",
" \"image_folder\": None,\n",
" \"height\": 512,\n",
" \"width\": 512,\n",
" \"start_frame_id\": 0,\n",
" \"end_frame_id\": 30\n",
" },\n",
" \"controlnet_frames\": [\n",
" {\n",
" \"video_file\": \"/content/input_video.mp4\",\n",
" \"image_folder\": None,\n",
" \"height\": 512,\n",
" \"width\": 512,\n",
" \"start_frame_id\": 0,\n",
" \"end_frame_id\": 30\n",
" },\n",
" {\n",
" \"video_file\": \"/content/input_video.mp4\",\n",
" \"image_folder\": None,\n",
" \"height\": 512,\n",
" \"width\": 512,\n",
" \"start_frame_id\": 0,\n",
" \"end_frame_id\": 30\n",
" }\n",
" ],\n",
" \"output_folder\": \"data/examples/diffutoon_edit/color_video\",\n",
" \"fps\": 25\n",
" },\n",
" \"smoother_configs\": [\n",
" {\n",
" \"processor_type\": \"FastBlend\",\n",
" \"config\": {}\n",
" }\n",
" ],\n",
" \"pipeline\": {\n",
" \"seed\": 0,\n",
" \"pipeline_inputs\": {\n",
" \"prompt\": \"best quality, perfect anime illustration, orange clothes, night, a girl is dancing, smile, solo, black silk stockings\",\n",
" \"negative_prompt\": \"verybadimagenegative_v1.3\",\n",
" \"cfg_scale\": 7.0,\n",
" \"clip_skip\": 1,\n",
" \"denoising_strength\": 0.9,\n",
" \"num_inference_steps\": 20,\n",
" \"animatediff_batch_size\": 8,\n",
" \"animatediff_stride\": 4,\n",
" \"unet_batch_size\": 8,\n",
" \"controlnet_batch_size\": 8,\n",
" \"cross_frame_attention\": True,\n",
" \"smoother_progress_ids\": [-1],\n",
" # The following parameters will be overwritten. You don't need to modify them.\n",
" \"input_frames\": [],\n",
" \"num_frames\": 30,\n",
" \"width\": 512,\n",
" \"height\": 512,\n",
" \"controlnet_frames\": []\n",
" }\n",
" }\n",
"}\n",
"\n",
"from diffsynth import SDVideoPipelineRunner\n",
"\n",
"config_stage_1 = config_stage_1_template.copy()\n",
"config_stage_1[\"data\"][\"input_frames\"] = {\n",
" \"video_file\": \"/content/input_video.mp4\",\n",
" \"image_folder\": None,\n",
" \"height\": 512,\n",
" \"width\": 512,\n",
" \"start_frame_id\": 0,\n",
" \"end_frame_id\": 30\n",
"}\n",
"config_stage_1[\"data\"][\"controlnet_frames\"] = [config_stage_1[\"data\"][\"input_frames\"], config_stage_1[\"data\"][\"input_frames\"]]\n",
"config_stage_1[\"data\"][\"output_folder\"] = \"/content/color_video\"\n",
"config_stage_1[\"data\"][\"fps\"] = 25\n",
"config_stage_1[\"pipeline\"][\"pipeline_inputs\"][\"prompt\"] = \"best quality, perfect anime illustration, orange clothes, night, a girl is dancing, smile, solo, black silk stockings\"\n",
"\n",
"runner = SDVideoPipelineRunner()\n",
"runner.run(config_stage_1)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# If running on a T4 GPU, restart the runtime at this point to free GPU memory, then run the next cell."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"config_stage_2_template = {\n",
" \"models\": {\n",
" \"model_list\": [\n",
" \"models/stable_diffusion/aingdiffusion_v12.safetensors\",\n",
" \"models/AnimateDiff/mm_sd_v15_v2.ckpt\",\n",
" \"models/ControlNet/control_v11f1e_sd15_tile.pth\",\n",
" \"models/ControlNet/control_v11p_sd15_lineart.pth\"\n",
" ],\n",
" \"textual_inversion_folder\": \"models/textual_inversion\",\n",
" \"device\": \"cuda\",\n",
" \"lora_alphas\": [],\n",
" \"controlnet_units\": [\n",
" {\n",
" \"processor_id\": \"tile\",\n",
" \"model_path\": \"models/ControlNet/control_v11f1e_sd15_tile.pth\",\n",
" \"scale\": 0.5\n",
" },\n",
" {\n",
" \"processor_id\": \"lineart\",\n",
" \"model_path\": \"models/ControlNet/control_v11p_sd15_lineart.pth\",\n",
" \"scale\": 0.5\n",
" }\n",
" ]\n",
" },\n",
" \"data\": {\n",
" \"input_frames\": {\n",
" \"video_file\": \"/content/input_video.mp4\",\n",
" \"image_folder\": None,\n",
" \"height\": 1024,\n",
" \"width\": 1024,\n",
" \"start_frame_id\": 0,\n",
" \"end_frame_id\": 30\n",
" },\n",
" \"controlnet_frames\": [\n",
" {\n",
" \"video_file\": \"/content/input_video.mp4\",\n",
" \"image_folder\": None,\n",
" \"height\": 1024,\n",
" \"width\": 1024,\n",
" \"start_frame_id\": 0,\n",
" \"end_frame_id\": 30\n",
" },\n",
" {\n",
" \"video_file\": \"/content/input_video.mp4\",\n",
" \"image_folder\": None,\n",
" \"height\": 1024,\n",
" \"width\": 1024,\n",
" \"start_frame_id\": 0,\n",
" \"end_frame_id\": 30\n",
" }\n",
" ],\n",
" \"output_folder\": \"/content/output\",\n",
" \"fps\": 25\n",
" },\n",
" \"pipeline\": {\n",
" \"seed\": 0,\n",
" \"pipeline_inputs\": {\n",
" \"prompt\": \"best quality, perfect anime illustration, light, a girl is dancing, smile, solo\",\n",
" \"negative_prompt\": \"verybadimagenegative_v1.3\",\n",
" \"cfg_scale\": 7.0,\n",
" \"clip_skip\": 2,\n",
" \"denoising_strength\": 1.0,\n",
" \"num_inference_steps\": 10,\n",
" \"animatediff_batch_size\": 16,\n",
" \"animatediff_stride\": 8,\n",
" \"unet_batch_size\": 1,\n",
" \"controlnet_batch_size\": 1,\n",
" \"cross_frame_attention\": False,\n",
" # The following parameters will be overwritten. You don't need to modify them.\n",
" \"input_frames\": [],\n",
" \"num_frames\": 30,\n",
" \"width\": 1536,\n",
" \"height\": 1536,\n",
" \"controlnet_frames\": []\n",
" }\n",
" }\n",
"}\n",
"\n",
"from diffsynth import SDVideoPipelineRunner\n",
"\n",
"config_stage_2 = config_stage_2_template.copy()\n",
"config_stage_2[\"data\"][\"input_frames\"] = {\n",
" \"video_file\": \"/content/input_video.mp4\",\n",
" \"image_folder\": None,\n",
" \"height\": 1024,\n",
" \"width\": 1024,\n",
" \"start_frame_id\": 0,\n",
" \"end_frame_id\": 30\n",
"}\n",
"config_stage_2[\"data\"][\"controlnet_frames\"][0] = {\n",
" \"video_file\": \"/content/color_video/video.mp4\",\n",
" \"image_folder\": None,\n",
" \"height\": config_stage_2[\"data\"][\"input_frames\"][\"height\"],\n",
" \"width\": config_stage_2[\"data\"][\"input_frames\"][\"width\"],\n",
" \"start_frame_id\": None,\n",
" \"end_frame_id\": None\n",
"}\n",
"config_stage_2[\"data\"][\"controlnet_frames\"][1] = config_stage_2[\"data\"][\"input_frames\"]\n",
"config_stage_2[\"data\"][\"output_folder\"] = \"/content/edit_video\"\n",
"config_stage_2[\"data\"][\"fps\"] = 25\n",
"\n",
"runner = SDVideoPipelineRunner()\n",
"runner.run(config_stage_2)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import moviepy.editor\n",
"moviepy.editor.ipython_display(\"/content/edit_video/video.mp4\")"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
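The two config cells above implement Diffutoon's two-stage flow: stage 1 renders a low-resolution color video with softedge + depth ControlNets, and stage 2 upscales it with tile + lineart ControlNets, pointing the tile unit at the stage-1 output. As a minimal standalone sketch of that hand-off (`make_frame_source` and `wire_stage_2` are hypothetical helpers, not part of DiffSynth-Studio):

```python
import copy

def make_frame_source(video_file, height, width, start=0, end=30):
    # Frame-source dict in the shape the notebook's configs use
    # (hypothetical helper mirroring the repeated literal dicts above).
    return {
        "video_file": video_file,
        "image_folder": None,
        "height": height,
        "width": width,
        "start_frame_id": start,
        "end_frame_id": end,
    }

def wire_stage_2(template, stage_1_output_folder, input_video):
    """Point stage 2's tile ControlNet at the stage-1 color video.

    Uses deepcopy so the shared template is never mutated; the notebook's
    shallow .copy() only works because each template is used once.
    """
    config = copy.deepcopy(template)
    frames = make_frame_source(input_video, 1024, 1024)
    config["data"]["input_frames"] = frames
    # Unit 0 (tile) follows the stage-1 output; unit 1 (lineart) follows
    # the original input video, matching the notebook cell above.
    color = make_frame_source(f"{stage_1_output_folder}/video.mp4",
                              frames["height"], frames["width"],
                              start=None, end=None)
    config["data"]["controlnet_frames"] = [color, frames]
    return config
```

The resulting dict can then be passed to `SDVideoPipelineRunner().run(...)` exactly as in the cell above; the deepcopy is the one behavioral difference from the notebook, and it only matters if you reuse the template.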
================================================
FILE: Diffutoon_jupyter.ipynb
================================================
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github"
},
"source": [
"[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/Diffutoon-jupyter/blob/main/Diffutoon_jupyter.ipynb)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "VjYy0F2gZIPR"
},
"outputs": [],
"source": [
"# https://github.com/modelscope/DiffSynth-Studio/blob/main/examples/Diffutoon/Diffutoon.ipynb modified\n",
"\n",
"%cd /content\n",
"!git clone https://github.com/Artiprocher/DiffSynth-Studio\n",
"%cd /content/DiffSynth-Studio\n",
"\n",
"!pip install -q einops transformers controlnet-aux==0.0.7 sentencepiece imageio imageio-ffmpeg\n",
"\n",
"!apt -y install -qq aria2\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://civitai.com/api/download/models/229575 -d /content/DiffSynth-Studio/models/stable_diffusion -o aingdiffusion_v12.safetensors\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/guoyww/animatediff/resolve/main/mm_sd_v15_v2.ckpt -d /content/DiffSynth-Studio/models/AnimateDiff -o mm_sd_v15_v2.ckpt\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_lineart.pth -d /content/DiffSynth-Studio/models/ControlNet -o control_v11p_sd15_lineart.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11f1e_sd15_tile.pth -d /content/DiffSynth-Studio/models/ControlNet -o control_v11f1e_sd15_tile.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11f1p_sd15_depth.pth -d /content/DiffSynth-Studio/models/ControlNet -o control_v11f1p_sd15_depth.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/ControlNet-v1-1/resolve/main/control_v11p_sd15_softedge.pth -d /content/DiffSynth-Studio/models/ControlNet -o control_v11p_sd15_softedge.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/Annotators/resolve/main/dpt_hybrid-midas-501f0c75.pt -d /content/DiffSynth-Studio/models/Annotators -o dpt_hybrid-midas-501f0c75.pt\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/Annotators/resolve/main/ControlNetHED.pth -d /content/DiffSynth-Studio/models/Annotators -o ControlNetHED.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/Annotators/resolve/main/sk_model.pth -d /content/DiffSynth-Studio/models/Annotators -o sk_model.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/lllyasviel/Annotators/resolve/main/sk_model2.pth -d /content/DiffSynth-Studio/models/Annotators -o sk_model2.pth\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M \"https://civitai.com/api/download/models/25820?type=Model&format=PickleTensor&size=full&fp=fp16\" -d /content/DiffSynth-Studio/models/textual_inversion -o verybadimagenegative_v1.3.pt\n",
"!aria2c --console-log-level=error -c -x 16 -s 16 -k 1M https://huggingface.co/camenduru/Diffutoon/resolve/main/input_video.mp4 -d /content -o input_video.mp4"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"config_stage_2_template = {\n",
" \"models\": {\n",
" \"model_list\": [\n",
" \"models/stable_diffusion/aingdiffusion_v12.safetensors\",\n",
" \"models/AnimateDiff/mm_sd_v15_v2.ckpt\",\n",
" \"models/ControlNet/control_v11f1e_sd15_tile.pth\",\n",
" \"models/ControlNet/control_v11p_sd15_lineart.pth\"\n",
" ],\n",
" \"textual_inversion_folder\": \"models/textual_inversion\",\n",
" \"device\": \"cuda\",\n",
" \"lora_alphas\": [],\n",
" \"controlnet_units\": [\n",
" {\n",
" \"processor_id\": \"tile\",\n",
" \"model_path\": \"models/ControlNet/control_v11f1e_sd15_tile.pth\",\n",
" \"scale\": 0.5\n",
" },\n",
" {\n",
" \"processor_id\": \"lineart\",\n",
" \"model_path\": \"models/ControlNet/control_v11p_sd15_lineart.pth\",\n",
" \"scale\": 0.5\n",
" }\n",
" ]\n",
" },\n",
" \"data\": {\n",
" \"input_frames\": {\n",
" \"video_file\": \"/content/input_video.mp4\",\n",
" \"image_folder\": None,\n",
" \"height\": 1024,\n",
" \"width\": 1024,\n",
" \"start_frame_id\": 0,\n",
" \"end_frame_id\": 30\n",
" },\n",
" \"controlnet_frames\": [\n",
" {\n",
" \"video_file\": \"/content/input_video.mp4\",\n",
" \"image_folder\": None,\n",
" \"height\": 1024,\n",
" \"width\": 1024,\n",
" \"start_frame_id\": 0,\n",
" \"end_frame_id\": 30\n",
" },\n",
" {\n",
" \"video_file\": \"/content/input_video.mp4\",\n",
" \"image_folder\": None,\n",
" \"height\": 1024,\n",
" \"width\": 1024,\n",
" \"start_frame_id\": 0,\n",
" \"end_frame_id\": 30\n",
" }\n",
" ],\n",
" \"output_folder\": \"/content/output\",\n",
" \"fps\": 25\n",
" },\n",
" \"pipeline\": {\n",
" \"seed\": 0,\n",
" \"pipeline_inputs\": {\n",
" \"prompt\": \"best quality, perfect anime illustration, light, a girl is dancing, smile, solo\",\n",
" \"negative_prompt\": \"verybadimagenegative_v1.3\",\n",
" \"cfg_scale\": 7.0,\n",
" \"clip_skip\": 2,\n",
" \"denoising_strength\": 1.0,\n",
" \"num_inference_steps\": 10,\n",
" \"animatediff_batch_size\": 16,\n",
" \"animatediff_stride\": 8,\n",
" \"unet_batch_size\": 1,\n",
" \"controlnet_batch_size\": 1,\n",
" \"cross_frame_attention\": False,\n",
" # The following parameters will be overwritten. You don't need to modify them.\n",
" \"input_frames\": [],\n",
" \"num_frames\": 30,\n",
" \"width\": 1536,\n",
" \"height\": 1536,\n",
" \"controlnet_frames\": []\n",
" }\n",
" }\n",
"}"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from diffsynth import SDVideoPipelineRunner\n",
"\n",
"config = config_stage_2_template.copy()\n",
"config[\"data\"][\"input_frames\"] = {\n",
" \"video_file\": \"/content/input_video.mp4\",\n",
" \"image_folder\": None,\n",
" \"height\": 1024,\n",
" \"width\": 1024,\n",
" \"start_frame_id\": 0,\n",
" \"end_frame_id\": 30\n",
"}\n",
"config[\"data\"][\"controlnet_frames\"] = [config[\"data\"][\"input_frames\"], config[\"data\"][\"input_frames\"]]\n",
"config[\"data\"][\"output_folder\"] = \"/content/toon_video\"\n",
"config[\"data\"][\"fps\"] = 25\n",
"\n",
"runner = SDVideoPipelineRunner()\n",
"runner.run(config)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import moviepy.editor\n",
"moviepy.editor.ipython_display(\"/content/toon_video/video.mp4\")"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
================================================
FILE: README.md
================================================
🐣 Please follow me for new updates https://twitter.com/camenduru <br />
🔥 Please join our discord server https://discord.gg/k5BwmmvJJU <br />
🥳 Please join my patreon community https://patreon.com/camenduru <br />
### 🍊 Jupyter Notebook
| Notebook | Info |
| --- | --- |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/Diffutoon-jupyter/blob/main/Diffutoon_jupyter.ipynb) | Diffutoon_jupyter |
| [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/Diffutoon-jupyter/blob/main/Diffutoon_color_jupyter.ipynb) | Diffutoon_color_jupyter |
### 🧬 Code
https://github.com/modelscope/DiffSynth-Studio/tree/main/examples/Diffutoon
### 📄 Paper
https://arxiv.org/abs/2401.16224
### 🌐 Page
https://ecnu-cilab.github.io/DiffutoonProjectPage/
### 🖼 Output
Diffutoon_color_jupyter
https://github.com/camenduru/Diffutoon-jupyter/assets/54370274/e8741f4b-8925-488b-95b6-3812d219af06
### 🏢 Sponsor
https://runpod.io