Repository: cantrell/stable-diffusion-api-server
Branch: main
Commit: 3f5092603bcf
Files: 9
Total size: 35.0 KB
Directory structure:
gitextract_bkz26mir/
├── .gitignore
├── LICENSE
├── README.md
├── client_test.py
├── config-custom-models.json
├── config.json
├── environment-m1.yaml
├── environment.yaml
└── server.py
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
token.txt
__pycache__
================================================
FILE: LICENSE
================================================
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
================================================
FILE: README.md
================================================
# Stable Diffusion API Server
A local inference REST API server for the [Stable Diffusion Photoshop plugin](https://christiancantrell.com/#ai-ml). (Also a generic Stable Diffusion REST API for whatever you want.)
The API server currently supports:
1. Stable Diffusion weights automatically downloaded from Hugging Face.
1. Custom fine-tuned models in the Hugging Face diffusers file format like those created with [DreamBooth](https://github.com/XavierXiao/Dreambooth-Stable-Diffusion).
(Note that loading checkpoint files directly is not currently supported, but you can easily convert `.ckpt` files into the diffusers format using the aptly named [`convert_original_stable_diffusion_to_diffusers.py`](https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_stable_diffusion_to_diffusers.py) script.)
The server will run on Windows and Linux machines with NVIDIA GPUs, and on M1 Macs. M1 Mac support using MPS (Metal Performance Shaders) is highly experimental (and not easy to configure) but it does work, and it will get better over time.
If you can swing it, use a dedicated Linux box for best results. Performance on Windows is also very good, but I recommend a dedicated machine with no other apps running. You can run Photoshop on the same machine if you have to, but Photoshop will claim some of your GPU memory, which is good for the Photoshop user experience but bad for inference performance.
**Note that this project uses the content safety filter.**
## Installation
🤞 If anyone wants to make a detailed installation video, I would love to embed it right here. 🤞
### Windows, Linux, and Mac Instructions
1. Install Python.
1. Install [Conda](https://conda.io/projects/conda/en/latest/user-guide/install/download.html).
1. Download this repo.
1. cd into the repo's directory.
1. Set up a [Conda](https://conda.io) environment named `sd-api-server` by running the following command:
Windows and Linux:
**(Note that the '%' character below is meant to denote the command prompt; do not include it when copying and pasting.)**
```
% conda env create -f environment.yaml
```
M1 Macs:
```
% conda env create -f environment-m1.yaml
```
Then activate the Conda environment:
```
% conda activate sd-api-server
```
If you are updating the server, make sure to update your Conda environment (using the platform-specific `yaml` file):
```
% conda env update -f environment.yaml
% conda activate sd-api-server
```
If you want to remove an old environment and create it from scratch (using the platform-specific `yaml` file):
```
% conda env remove -n sd-api-server
% conda env create -f environment.yaml
% conda activate sd-api-server
```
### Hugging Face Configuration
There are two things you need to configure with Hugging Face in order to run the Stable Diffusion model locally:
1. You need to [agree to share your username and email address with Hugging Face](https://huggingface.co/CompVis/stable-diffusion-v1-4) in order to access the model.
1. You also need to set up [a Hugging Face token](https://huggingface.co/settings/tokens). Once you've created a read-only token, copy and paste it into the `config.json` file as the value to the `hf_token` key (and don't forget to save the file).
Windows and Linux users, you're good to go! All you have to do now is start the server:
```
% python3 server.py
```
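Once the server is running, you can verify it from another terminal via the `/ping` endpoint (this assumes the default port `1337` set at the bottom of `server.py`):
```
% curl http://localhost:1337/ping
{"status": "success"}
```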
### M1 Mac Additional Instructions
Note that this is highly experimental, and may not work for you. But it will probably get easier with the next release of [PyTorch](https://pytorch.org/).
#### Method 1: Nightly Builds
In Terminal, at the Conda prompt, **with the `sd-api-server` environment activated**:
```
% conda install pytorch torchvision -c pytorch-nightly
% conda deactivate
% conda activate sd-api-server
% python3 server.py
```
You might have noticed that you just installed a nightly build of PyTorch and Torchvision. Nightly builds come with neither warranties nor guarantees. If your server starts and you can generate images, you just won the nightly Lottery! If not, you can play again tomorrow. This is a temporary situation and probably won't be necessary with the next release of PyTorch.
#### Method 2: Environment Variables
If the nightly build didn't work for you (or if you're simply allergic to nightly builds), you can tell PyTorch to use the CPU in addition to MPS. If you already installed the nightly build, remove your Conda environment using the command above, start all over again, skip the nightly build step, and try this (with the `sd-api-server` environment active):
```
% conda env config vars set PYTORCH_ENABLE_MPS_FALLBACK=1
% conda activate sd-api-server
% python3 server.py
```
If you get the message `ModuleNotFoundError: No module named 'flask'`, it probably means you're using the wrong version of Python. If you used `python3` then try `python`. If you used `python` then try `python3`. (These are the joys of old versions of Python being preinstalled on Macs.)
## Configuring Custom Models
If you want to use the server (and the Photoshop plugin) with custom-trained models, the first thing you need is the custom-trained models themselves. Training them is beyond the scope of this README, but here are some resources:
- [My custom fork of the DreamBooth repo](https://github.com/cantrell/Dreambooth-Stable-Diffusion-Tweaked) (dramatically simplified).
- [A DreamBooth Stable Diffusion Colab notebook](https://colab.research.google.com/github/ShivamShrirao/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb) (much easier than training locally).
- [A good YouTube tutorial on using the Colab notebook](https://www.youtube.com/watch?v=FaLTztGGueQ).
- [The original DreamBooth paper](https://arxiv.org/abs/2208.12242).
Loading checkpoint files directly is not currently supported, but you can easily convert `.ckpt` files into the diffusers format using the aptly named [`convert_original_stable_diffusion_to_diffusers.py`](https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_stable_diffusion_to_diffusers.py) script.
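As a rough sketch of that conversion step (flag names may vary with your diffusers version, so check the script's `--help` output first), the invocation looks something like this:
```
% python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path my_model.ckpt --dump_path /path/to/output/directory
```
The `--dump_path` directory is what you would then point `model_path` at in your `config.json`.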
Once you have the models trained, the rest is easy. All you have to do is:
1. Replace your `config.json` file with the `config-custom-models.json` template (rename `config-custom-models.json` to `config.json`).
1. Make sure you copy and paste your Hugging Face token into the new `config.json` file.
1. Fill in the `custom_models` array of the config file appropriately.
Here's an explanation of what the key/value pairs mean:
- `model_path`: The full path to the directory which contains the `model_index.json` file (just the directory; don't include the file itself). **Do not** escape spaces, but **do** escape backslashes with backslashes (e.g. `G:\\My Drive\\stable_diffusion_weights\\MyCustomModelOutput`).
- `ui_label`: The name of the model as you want it to appear in the Photoshop plugin.
- `url_path`: A unique, URL-friendly value that will be used as the endpoint path (see the REST API section below).
- `requires_safety_checker`: Whether or not your custom model expects the safety checker. For models in the Hugging Face diffusers file format, this will be true; for models converted from checkpoint files into the diffusers format, this will probably be false.
Once your config file is ready, (re)start the server. If the Photoshop plugin is already loaded, you may need to restart it (or you can just click on the 'Reload Plugin' link in the lower right-hand corner of the 'Generate' tab).
Note that the `custom_models` section of the `config.json` file is an array, which means you can include as many custom models as you want. Here's what it should look like with more than one custom-trained model:
```
{
    "hf_token": "your_hugging_face_token",
    "custom_models": [
        {
            "model_path": "/path/to/directory/containing/model_index.json",
            "ui_label": "My First Model",
            "url_path": "my_first_model",
            "requires_safety_checker": true
        },
        {
            "model_path": "/path/to/another/directory/containing/model_index.json",
            "ui_label": "My Second Model",
            "url_path": "my_second_model",
            "requires_safety_checker": true
        }
    ]
}
```
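A stray comma or bracket will make the server fail at startup with a JSON parse error, so it's worth sanity-checking your edits with `python3 -m json.tool config.json`, which prints the parsed file or points at the first invalid character.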
To see your custom models in the Generate tab of the Stable Diffusion Photoshop plugin, make sure you've configured your local inference server in the API Key tab.
## REST API
Note that all `POST` requests use the `application/x-www-form-urlencoded` content type, and all images are base64 encoded strings.
`GET /ping`
#### Response
```
{'status':'success'}
```
`GET /custom_models`
#### Response
```
[
    {
        "model_path": "/path/to/directory/containing/model_index.json",
        "ui_label": "My First Model",
        "url_path": "my_first_model",
        "requires_safety_checker": true | false
    },
    ...
]
```
(If no custom models are configured, you will get back an empty array.)
`POST /txt2img`
Parameters:
- `prompt`: A text description.
- `seed`: A numeric seed.
- `num_outputs`: The number of images you want to get back.
- `width`: The width of your results.
- `height`: The height of your results.
- `num_inference_steps`: The number of steps (more steps mean higher quality).
- `guidance_scale`: Prompt strength.
#### Response
```
{
'status':'success | failure',
'message':'Only if there was a failure',
'images': [
{
'base64': 'base64EncodedImage==',
'seed': 123456789,
'mime_type': 'image/png',
'nsfw': true | false
}
]
}
```
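For example, here is a minimal Python client for `/txt2img` (a sketch assuming the server is running locally on the default port `1337`, as in the bundled `client_test.py`):
```
import base64
from io import BytesIO

import requests
from PIL import Image

# All parameters are sent as ordinary form fields (application/x-www-form-urlencoded).
response = requests.post(
    'http://localhost:1337/txt2img',
    data={
        'prompt': 'a photo of a dog sitting on a bench',
        'width': '512',
        'height': '512',
        'num_inference_steps': '50',
        'guidance_scale': '7.5',
        'num_outputs': '1',
        'seed': '0',
    },
).json()

if response['status'] == 'success':
    for i, result in enumerate(response['images']):
        # Each result is a base64-encoded PNG.
        image = Image.open(BytesIO(base64.b64decode(result['base64'])))
        image.save(f'output_{i}.png')
else:
    print(response.get('message', 'Request failed.'))
```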
`POST /img2img`
Parameters:
- `prompt`: A text description.
- `seed`: A numeric seed.
- `num_outputs`: The number of images you want to get back.
- `num_inference_steps`: The number of steps (more steps mean higher quality).
- `guidance_scale`: Prompt strength.
- `init_image`: The initial input image.
- `strength`: The image strength.
#### Response
```
{
'status':'success | failure',
'message':'Only if there was a failure',
'images': [
{
'base64': 'base64EncodedImage==',
'seed': 123456789,
'mime_type': 'image/png',
'nsfw': true | false
}
]
}
```
`POST /masking`
Parameters:
- `prompt`: A text description.
- `seed`: A numeric seed.
- `num_outputs`: The number of images you want to get back.
- `num_inference_steps`: The number of steps (more steps mean higher quality).
- `guidance_scale`: Prompt strength.
- `init_image`: The initial input image.
- `strength`: The image strength.
- `mask_image`: A mask representing the pixels to replace.
#### Response
```
{
'status':'success | failure',
'message':'Only if there was a failure',
'images': [
{
'base64': 'base64EncodedImage==',
'seed': 123456789,
'mime_type': 'image/png',
'nsfw': true | false
}
]
}
```
`POST /custom/<url_path>`
`url_path` refers to the `url_path` key/value pair you defined in your `config.json` file.
Parameters:
- `prompt`: A text description.
- `seed`: A numeric seed.
- `num_outputs`: The number of images you want to get back.
- `width`: The width of your results.
- `height`: The height of your results.
- `num_inference_steps`: The number of steps (more steps mean higher quality).
- `guidance_scale`: Prompt strength.
#### Response
```
{
'status':'success | failure',
'message':'Only if there was a failure',
'images': [
{
'base64': 'base64EncodedImage==',
'seed': 123456789,
'mime_type': 'image/png',
'nsfw': true | false
}
]
}
```
================================================
FILE: client_test.py
================================================
import json
import requests
import base64
from PIL import Image
from io import BytesIO
import matplotlib.pyplot as plt
def load_image_from_path(img_path):
img = Image.open( img_path )
return img
def load_image_from_url(img_url):
res = requests.get( img_url )
img = Image.open( BytesIO( res.content ) )
return img
def resize_image_preserve_aspect(img_pil, w):
wp = ( w / float( img_pil.size[0] ) )
hs = int( float( img_pil.size[1] ) * float( wp ) )
    return img_pil.resize( ( w, hs ), Image.LANCZOS )  # Image.ANTIALIAS is deprecated (removed in Pillow 10); LANCZOS is the same filter
def pil_to_b64(input):
buffer = BytesIO()
input.save( buffer, 'PNG' )
output = base64.b64encode( buffer.getvalue() ).decode( 'utf-8' ).replace( '\n', '' )
buffer.close()
return output
def b64_to_pil(input):
output = Image.open( BytesIO( base64.b64decode( input ) ) )
return output
def test_txt2img():
ENDPOINT = "http://localhost:1337/txt2img"
data = {
'prompt':'a photo of a dog sitting on a bench',
'width':str( 512 ),
'height':str( 512 ),
'num_inference_steps':str( 100 ),
'guidance_scale':str( 7.5 ),
'num_outputs':str( 2 ),
'seed':str( 0 ),
}
response = json.loads( requests.post( url=ENDPOINT, data=data ).text )
if 'status' in response:
if response[ 'status' ] == 'success':
images = response[ 'images' ]
for i, image in enumerate( images ):
plt.imshow( b64_to_pil( image['base64'] ) )
plt.show( block=True )
plt.pause( 10 )
plt.close()
def test_img2img():
ENDPOINT = "http://localhost:1337/img2img"
IMG_URL = 'https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg'
#IMG_URL = 'https://minitravellers.co.uk/wp-content/uploads/2019/05/40612080213_81852c19fc_k.jpg'
data = {
'prompt':'a family of pixar characters on vacation in new york',
'init_image':pil_to_b64( resize_image_preserve_aspect( load_image_from_url( IMG_URL ).convert( 'RGB' ), 512 ) ),
'num_inference_steps':str( 100 ),
'guidance_scale':str( 7.5 ),
'num_outputs':str( 2 ),
'seed':str( 0 ),
'strength':str( 0.5 ),
'eta':str( 0.0 ),
}
response = json.loads( requests.post( url=ENDPOINT, data=data ).text )
if 'status' in response:
if response[ 'status' ] == 'success':
images = response[ 'images' ]
for i, image in enumerate( images ):
plt.imshow( b64_to_pil( image['base64'] ) )
plt.show( block=True )
plt.pause( 10 )
plt.close()
def test_inpaint():
ENDPOINT = "http://localhost:1337/masking"
IMG_URL = 'https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png'
MSK_URL = 'https://raw.githubusercontent.com/CompVis/stable-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png'
data = {
'prompt':'a cat sitting on a bench',
'init_image':pil_to_b64( resize_image_preserve_aspect( load_image_from_url( IMG_URL ).convert( 'RGB' ), 512 ) ),
'mask_image':pil_to_b64( resize_image_preserve_aspect( load_image_from_url( MSK_URL ).convert( 'RGB' ), 512 ) ),
'num_inference_steps':str( 100 ),
'guidance_scale':str( 7.5 ),
'num_outputs':str( 2 ),
'seed':str( 0 ),
'strength':str( 0.8 ),
'eta':str( 0.0 ),
}
response = json.loads( requests.post( url=ENDPOINT, data=data ).text )
if 'status' in response:
if response[ 'status' ] == 'success':
images = response[ 'images' ]
for i, image in enumerate( images ):
plt.imshow( b64_to_pil( image['base64'] ) )
plt.show( block=True )
plt.pause( 10 )
plt.close()
# Run tests:
# test_txt2img()
# test_img2img()
test_inpaint()
================================================
FILE: config-custom-models.json
================================================
{
"hf_token": "your_hugging_face_token",
"custom_models": [
{
"model_path": "/path/to/directory/containing/model_index.json",
"ui_label": "My Custom Model",
"url_path": "my_custom_model",
"requires_safety_checker": true
}
]
}
================================================
FILE: config.json
================================================
{
"hf_token": "your_hugging_face_token"
}
================================================
FILE: environment-m1.yaml
================================================
name: sd-api-server
channels:
- pytorch
- defaults
dependencies:
- python=3.8.5
- pip=20.3
- pytorch=1.12.1
- torchvision=0.13.1
- numpy=1.19.2
- pip:
- diffusers
- transformers==4.19.2
- Pillow
- Flask
================================================
FILE: environment.yaml
================================================
name: sd-api-server
channels:
- pytorch
- defaults
dependencies:
- python=3.8.5
- pip=20.3
- cudatoolkit=11.3
- pytorch=1.12.1
- torchvision=0.13.1
- numpy=1.19.2
- pip:
- diffusers
- transformers==4.19.2
- Pillow
- Flask
================================================
FILE: server.py
================================================
import re
import json
import flask
import sys
import base64
from PIL import Image
from io import BytesIO
import torch
import diffusers
##################################################
# Utils
def retrieve_param(key, data, cast, default):
    # Read an optional request parameter from the supplied form data,
    # casting it to the expected type; fall back to the default if absent.
    if key in data:
        return cast( data[ key ] )
    return default
def pil_to_b64(input):
buffer = BytesIO()
input.save( buffer, 'PNG' )
output = base64.b64encode( buffer.getvalue() ).decode( 'utf-8' ).replace( '\n', '' )
buffer.close()
return output
def b64_to_pil(input):
output = Image.open( BytesIO( base64.b64decode( input ) ) )
return output
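# The 'context' argument distinguishes where the device will be used: the
# diffusion pipeline ('engine') can run on MPS, but torch.Generator does not
# support the MPS device in this PyTorch release, so every other context
# (notably 'generator') falls back to CPU when CUDA is unavailable.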
def get_compute_platform(context):
try:
import torch
if torch.cuda.is_available():
return 'cuda'
elif torch.backends.mps.is_available() and context == 'engine':
return 'mps'
else:
return 'cpu'
except ImportError:
return 'cpu'
##################################################
# Engines
class Engine(object):
def __init__(self):
pass
def process(self, kwargs):
return []
class EngineStableDiffusion(Engine):
def __init__(self, pipe, sibling=None, custom_model_path=None, requires_safety_checker=True):
super().__init__()
        if sibling is None:
self.engine = pipe.from_pretrained( 'runwayml/stable-diffusion-v1-5', use_auth_token=hf_token.strip() )
elif custom_model_path:
if requires_safety_checker:
self.engine = diffusers.StableDiffusionPipeline.from_pretrained(custom_model_path,
safety_checker=sibling.engine.safety_checker,
feature_extractor=sibling.engine.feature_extractor)
else:
self.engine = diffusers.StableDiffusionPipeline.from_pretrained(custom_model_path,
feature_extractor=sibling.engine.feature_extractor)
else:
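            # Reuse the components the txt2img sibling already loaded (VAE, text
            # encoder, tokenizer, UNet, scheduler, safety checker) so each extra
            # pipeline shares one set of weights instead of holding its own copy.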
self.engine = pipe(
vae=sibling.engine.vae,
text_encoder=sibling.engine.text_encoder,
tokenizer=sibling.engine.tokenizer,
unet=sibling.engine.unet,
scheduler=sibling.engine.scheduler,
safety_checker=sibling.engine.safety_checker,
feature_extractor=sibling.engine.feature_extractor
)
self.engine.to( get_compute_platform('engine') )
def process(self, kwargs):
output = self.engine( **kwargs )
return {'image': output.images[0], 'nsfw':output.nsfw_content_detected[0]}
class EngineManager(object):
def __init__(self):
self.engines = {}
def has_engine(self, name):
return ( name in self.engines )
def add_engine(self, name, engine):
if self.has_engine( name ):
return False
self.engines[ name ] = engine
return True
def get_engine(self, name):
if not self.has_engine( name ):
return None
engine = self.engines[ name ]
return engine
##################################################
# App
# Load and parse the config file:
try:
    with open( 'config.json', 'r' ) as config_file:
        config = json.loads( config_file.read() )
except OSError:
    sys.exit( 'config.json not found.' )
hf_token = config.get( 'hf_token' )
if not hf_token:
    sys.exit( 'No Hugging Face token found in config.json.' )
custom_models = config['custom_models'] if 'custom_models' in config else []
# Initialize app:
app = flask.Flask( __name__ )
# Initialize engine manager:
manager = EngineManager()
# Add supported engines to manager:
manager.add_engine( 'txt2img', EngineStableDiffusion( diffusers.StableDiffusionPipeline, sibling=None ) )
manager.add_engine( 'img2img', EngineStableDiffusion( diffusers.StableDiffusionImg2ImgPipeline, sibling=manager.get_engine( 'txt2img' ) ) )
manager.add_engine( 'masking', EngineStableDiffusion( diffusers.StableDiffusionInpaintPipeline, sibling=manager.get_engine( 'txt2img' ) ) )
for custom_model in custom_models:
manager.add_engine( custom_model['url_path'],
EngineStableDiffusion( diffusers.StableDiffusionPipeline, sibling=manager.get_engine( 'txt2img' ),
custom_model_path=custom_model['model_path'],
requires_safety_checker=custom_model['requires_safety_checker'] ) )
# Define routes:
@app.route('/ping', methods=['GET'])
def stable_ping():
return flask.jsonify( {'status':'success'} )
@app.route('/custom_models', methods=['GET'])
def stable_custom_models():
    # custom_models defaults to an empty list, so this always serializes cleanly.
    return flask.jsonify( custom_models )
@app.route('/txt2img', methods=['POST'])
def stable_txt2img():
return _generate('txt2img')
@app.route('/img2img', methods=['POST'])
def stable_img2img():
return _generate('img2img')
@app.route('/masking', methods=['POST'])
def stable_masking():
return _generate('masking')
@app.route('/custom/<path:model>', methods=['POST'])
def stable_custom(model):
return _generate('txt2img', model)
def _generate(task, engine=None):
# Retrieve engine:
    if engine is None:
engine = task
engine = manager.get_engine( engine )
# Prepare output container:
output_data = {}
# Handle request:
try:
seed = retrieve_param( 'seed', flask.request.form, int, 0 )
count = retrieve_param( 'num_outputs', flask.request.form, int, 1 )
total_results = []
for i in range( count ):
            if seed == 0:
                generator = torch.Generator( device=get_compute_platform('generator') )
                new_seed = generator.seed()  # seeds randomly and returns the seed it chose
            else:
                generator = torch.Generator( device=get_compute_platform('generator') ).manual_seed( seed )
                new_seed = seed  # generator.seed() would discard the manual seed, so report the requested one
prompt = flask.request.form[ 'prompt' ]
args_dict = {
'prompt' : [ prompt ],
'num_inference_steps' : retrieve_param( 'num_inference_steps', flask.request.form, int, 100 ),
'guidance_scale' : retrieve_param( 'guidance_scale', flask.request.form, float, 7.5 ),
'eta' : retrieve_param( 'eta', flask.request.form, float, 0.0 ),
'generator' : generator
}
if (task == 'txt2img'):
args_dict[ 'width' ] = retrieve_param( 'width', flask.request.form, int, 512 )
args_dict[ 'height' ] = retrieve_param( 'height', flask.request.form, int, 512 )
if (task == 'img2img' or task == 'masking'):
init_img_b64 = flask.request.form[ 'init_image' ]
init_img_b64 = re.sub( '^data:image/png;base64,', '', init_img_b64 )
init_img_pil = b64_to_pil( init_img_b64 )
args_dict[ 'init_image' ] = init_img_pil
args_dict[ 'strength' ] = retrieve_param( 'strength', flask.request.form, float, 0.7 )
if (task == 'masking'):
mask_img_b64 = flask.request.form[ 'mask_image' ]
mask_img_b64 = re.sub( '^data:image/png;base64,', '', mask_img_b64 )
mask_img_pil = b64_to_pil( mask_img_b64 )
args_dict[ 'mask_image' ] = mask_img_pil
# Perform inference:
pipeline_output = engine.process( args_dict )
pipeline_output[ 'seed' ] = new_seed
total_results.append( pipeline_output )
# Prepare response
output_data[ 'status' ] = 'success'
images = []
for result in total_results:
images.append({
'base64' : pil_to_b64( result['image'].convert( 'RGB' ) ),
'seed' : result['seed'],
'mime_type': 'image/png',
'nsfw': result['nsfw']
})
output_data[ 'images' ] = images
except RuntimeError as e:
output_data[ 'status' ] = 'failure'
output_data[ 'message' ] = 'A RuntimeError occurred. You probably ran out of GPU memory. Check the server logs for more details.'
print(str(e))
return flask.jsonify( output_data )
if __name__ == '__main__':
app.run( host='0.0.0.0', port=1337, debug=False )
SYMBOL INDEX (30 symbols across 2 files)
FILE: client_test.py
function load_image_from_path (line 8) | def load_image_from_path(img_path):
function load_image_from_url (line 12) | def load_image_from_url(img_url):
function resize_image_preserve_aspect (line 17) | def resize_image_preserve_aspect(img_pil, w):
function pil_to_b64 (line 22) | def pil_to_b64(input):
function b64_to_pil (line 29) | def b64_to_pil(input):
function test_txt2img (line 33) | def test_txt2img():
function test_img2img (line 61) | def test_img2img():
function test_inpaint (line 89) | def test_inpaint():
FILE: server.py
function retrieve_param (line 18) | def retrieve_param(key, data, cast, default):
function pil_to_b64 (line 25) | def pil_to_b64(input):
function b64_to_pil (line 32) | def b64_to_pil(input):
function get_compute_platform (line 36) | def get_compute_platform(context):
class Engine (line 51) | class Engine(object):
method __init__ (line 52) | def __init__(self):
method process (line 55) | def process(self, kwargs):
class EngineStableDiffusion (line 58) | class EngineStableDiffusion(Engine):
method __init__ (line 59) | def __init__(self, pipe, sibling=None, custom_model_path=None, require...
method process (line 83) | def process(self, kwargs):
class EngineManager (line 87) | class EngineManager(object):
method __init__ (line 88) | def __init__(self):
method has_engine (line 91) | def has_engine(self, name):
method add_engine (line 94) | def add_engine(self, name, engine):
method get_engine (line 100) | def get_engine(self, name):
function stable_ping (line 142) | def stable_ping():
function stable_custom_models (line 146) | def stable_custom_models():
function stable_txt2img (line 153) | def stable_txt2img():
function stable_img2img (line 157) | def stable_img2img():
function stable_masking (line 161) | def stable_masking():
function stable_custom (line 165) | def stable_custom(model):
function _generate (line 168) | def _generate(task, engine=None):