Repository: Amorano/Jovimetrix
Branch: main
Commit: a28214a01507
Files: 41
Total size: 267.6 KB
Directory structure:
Jovimetrix/
├── .gitattributes
├── .github/
│ └── workflows/
│ └── publish_action.yml
├── .gitignore
├── LICENSE
├── NOTICE
├── README.md
├── __init__.py
├── core/
│ ├── __init__.py
│ ├── adjust.py
│ ├── anim.py
│ ├── calc.py
│ ├── color.py
│ ├── compose.py
│ ├── create.py
│ ├── trans.py
│ ├── utility/
│ │ ├── __init__.py
│ │ ├── batch.py
│ │ ├── info.py
│ │ └── io.py
│ └── vars.py
├── node_list.json
├── pyproject.toml
├── requirements.txt
└── web/
├── core.js
├── fun.js
├── jovi_metrix.css
├── nodes/
│ ├── akashic.js
│ ├── array.js
│ ├── delay.js
│ ├── flatten.js
│ ├── graph.js
│ ├── lerp.js
│ ├── op_binary.js
│ ├── op_unary.js
│ ├── queue.js
│ ├── route.js
│ ├── stack.js
│ ├── stringer.js
│ └── value.js
├── util.js
└── widget_vector.js
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitattributes
================================================
# Auto detect text files and perform LF normalization
* text=auto
================================================
FILE: .github/workflows/publish_action.yml
================================================
name: Publish to Comfy registry
on:
  workflow_dispatch:
  push:
    branches:
      - main
    paths:
      - "pyproject.toml"
permissions:
  issues: write
jobs:
  publish-node:
    name: Publish Custom Node to registry
    runs-on: ubuntu-latest
    if: ${{ github.repository_owner == 'Amorano' }}
    steps:
      - name: Check out code
        uses: actions/checkout@v4
      - name: Publish Custom Node
        uses: Comfy-Org/publish-node-action@v1
        with:
          personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }}
================================================
FILE: .gitignore
================================================
__pycache__
*.py[cod]
*$py.class
_*/
glsl/*
*.code-workspace
.vscode
config.json
ignore.txt
.env
.venv
.DS_Store
*.egg-info
*.bak
checkpoints
results
backup
node_modules
*-lock.json
*.config.mjs
package.json
_TODO*.*
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2023 Alexander G. Morano
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
GO NUTS; JUST TRY NOT TO DO IT IN YOUR HEAD.
================================================
FILE: NOTICE
================================================
This project includes code concepts from the MTB Nodes project (MIT)
https://github.com/melMass/comfy_mtb
This project includes code concepts from the ComfyUI-Custom-Scripts project (MIT)
https://github.com/pythongosssss/ComfyUI-Custom-Scripts
This project includes code concepts from the KJNodes for ComfyUI project (GPL 3.0)
https://github.com/kijai/ComfyUI-KJNodes
This project includes code concepts from the UE Nodes project (Apache 2.0)
https://github.com/chrisgoringe/cg-use-everywhere
This project includes code concepts from the WAS Node Suite project (MIT)
https://github.com/WASasquatch/was-node-suite-comfyui
This project includes code concepts from the rgthree-comfy project (MIT)
https://github.com/rgthree/rgthree-comfy
This project includes code concepts from the FizzNodes project (MIT)
https://github.com/FizzleDorf/ComfyUI_FizzNodes
================================================
FILE: README.md
================================================
If you use a virtual environment (venv), make sure it is activated before installation. Then install the requirements with the command:
```
pip install -r requirements.txt
```
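Assuming a POSIX shell, the venv step might look like this (the `.venv` directory name is only an example):

```shell
# Create and activate an isolated environment before installing
# (sketch; the .venv directory name is arbitrary).
python3 -m venv .venv
. .venv/bin/activate
python -c 'import sys; print(sys.prefix)'  # confirms the venv is active
```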
# WHERE TO FIND ME
You can find me on [Discord](https://discord.gg/62TJaZ3Z5r).
================================================
FILE: __init__.py
================================================
"""
██ ██████ ██ ██ ██ ███ ███ ███████ ████████ ██████ ██ ██ ██
██ ██ ██ ██ ██ ██ ████ ████ ██ ██ ██ ██ ██ ██ ██
██ ██ ██ ██ ██ ██ ██ ████ ██ █████ ██ ██████ ██ ███
██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██
█████ ██████ ████ ██ ██ ██ ███████ ██ ██ ██ ██ ██ ██
Animation, Image Compositing & Procedural Creation
@title: Jovimetrix
@author: Alexander G. Morano
@category: Compositing
@reference: https://github.com/Amorano/Jovimetrix
@tags: adjust, animate, compose, compositing, composition, device, flow, video,
mask, shape, animation, logic
@description: Animation via tick. Parameter manipulation with wave generator.
Unary and Binary math support. Value convert int/float/bool, VectorN and Image,
Mask types. Shape mask generator. Stack images, do channel ops, split, merge
and randomize arrays and batches. Load images & video from anywhere. Dynamic
bus routing. Save output anywhere! Flatten, crop, transform; check
colorblindness or linear interpolate values.
@node list:
TickNode, TickSimpleNode, WaveGeneratorNode
BitSplitNode, ComparisonNode, LerpNode, OPUnaryNode, OPBinaryNode, StringerNode, SwizzleNode,
ColorBlindNode, ColorMatchNode, ColorKMeansNode, ColorTheoryNode, GradientMapNode,
AdjustNode, BlendNode, FilterMaskNode, PixelMergeNode, PixelSplitNode, PixelSwapNode, ThresholdNode,
ConstantNode, ShapeNode, TextNode,
CropNode, FlattenNode, StackNode, TransformNode,
ArrayNode, QueueNode, QueueTooNode,
AkashicNode, GraphNode, ImageInfoNode,
DelayNode, ExportNode, RouteNode, SaveOutputNode
ValueNode, Vector2Node, Vector3Node, Vector4Node,
"""
__author__ = "Alexander G. Morano"
__email__ = "amorano@gmail.com"
from pathlib import Path
from cozy_comfyui import \
logger
from cozy_comfyui.node import \
loader
JOV_DOCKERENV = False
try:
with open('/proc/1/cgroup', 'rt') as f:
content = f.read()
JOV_DOCKERENV = any(x in content for x in ['docker', 'kubepods', 'containerd'])
except FileNotFoundError:
pass
if JOV_DOCKERENV:
logger.info("RUNNING IN A DOCKER CONTAINER")
# ==============================================================================
# === GLOBAL ===
# ==============================================================================
PACKAGE = "JOVIMETRIX"
WEB_DIRECTORY = "./web"
ROOT = Path(__file__).resolve().parent
NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS = loader(ROOT,
PACKAGE,
"core",
f"{PACKAGE} 🔺🟩🔵",
False)
================================================
FILE: core/__init__.py
================================================
from enum import Enum
class EnumFillOperation(Enum):
DEFAULT = 0
FILL_ZERO = 20
FILL_ALL = 10
================================================
FILE: core/adjust.py
================================================
""" Jovimetrix - Adjust """
import sys
from enum import Enum
from typing import Any
from typing_extensions import override
import comfy.model_management
from comfy_api.latest import ComfyExtension, io
from comfy.utils import ProgressBar
from cozy_comfyui import \
InputType, RGBAMaskType, EnumConvertType, \
deep_merge, parse_param, zip_longest_fill
from cozy_comfyui.lexicon import \
Lexicon
from cozy_comfyui.node import \
COZY_TYPE_IMAGE as COZY_TYPE_IMAGEv3, \
CozyImageNode as CozyImageNodev3
from cozy_comfyui.node import \
COZY_TYPE_IMAGE, \
CozyImageNode
from cozy_comfyui.image.adjust import \
EnumAdjustBlur, EnumAdjustColor, EnumAdjustEdge, EnumAdjustMorpho, \
image_contrast, image_brightness, image_equalize, image_gamma, \
image_exposure, image_pixelate, image_pixelscale, \
image_posterize, image_quantize, image_sharpen, image_morphology, \
image_emboss, image_blur, image_edge, image_color, \
image_autolevel, image_autolevel_histogram
from cozy_comfyui.image.channel import \
channel_solid
from cozy_comfyui.image.compose import \
image_levels
from cozy_comfyui.image.convert import \
tensor_to_cv, cv_to_tensor_full, image_mask, image_mask_add
from cozy_comfyui.image.misc import \
image_stack
# ==============================================================================
# === GLOBAL ===
# ==============================================================================
JOV_CATEGORY = "ADJUST"
# ==============================================================================
# === ENUMERATION ===
# ==============================================================================
class EnumAutoLevel(Enum):
MANUAL = 10
AUTO = 20
HISTOGRAM = 30
class EnumAdjustLight(Enum):
EXPOSURE = 10
GAMMA = 20
BRIGHTNESS = 30
CONTRAST = 40
EQUALIZE = 50
class EnumAdjustPixel(Enum):
PIXELATE = 10
PIXELSCALE = 20
QUANTIZE = 30
POSTERIZE = 40
# ==============================================================================
# === CLASS ===
# ==============================================================================
class AdjustBlurNode(CozyImageNode):
NAME = "ADJUST: BLUR (JOV)"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Enhance and modify images with various blur effects.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.FUNCTION: (EnumAdjustBlur._member_names_, {
"default": EnumAdjustBlur.BLUR.name,}),
Lexicon.RADIUS: ("INT", {
"default": 3, "min": 3}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustBlur, EnumAdjustBlur.BLUR.name)
radius = parse_param(kw, Lexicon.RADIUS, EnumConvertType.INT, 3)
params = list(zip_longest_fill(pA, op, radius))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, op, radius) in enumerate(params):
pA = channel_solid() if pA is None else tensor_to_cv(pA)
# height, width = pA.shape[:2]
pA = image_blur(pA, op, radius)
#pA = image_blend(pA, img_new, mask)
images.append(cv_to_tensor_full(pA))
pbar.update_absolute(idx)
return image_stack(images)
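As a rough illustration of what one blur pass does (this is not the `image_blur` implementation, which dispatches on `EnumAdjustBlur`), a naive box blur over a single-channel float image could be sketched as:

```python
import numpy as np

def box_blur_sketch(img, radius=1):
    """Naive box blur: each pixel becomes the mean of its
    (2*radius+1)^2 neighborhood, edge-padded. Illustrative only."""
    r = radius
    padded = np.pad(img.astype(float), r, mode="edge")
    h, w = img.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + 2*r + 1, x:x + 2*r + 1].mean()
    return out
```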
class AdjustColorNode(CozyImageNode):
NAME = "ADJUST: COLOR (JOV)"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Enhance and modify images with various color adjustments.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.FUNCTION: (EnumAdjustColor._member_names_, {
"default": EnumAdjustColor.RGB.name,}),
Lexicon.VEC: ("VEC3", {
"default": (0,0,0), "mij": -1, "maj": 1, "step": 0.025})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustColor, EnumAdjustColor.RGB.name)
vec = parse_param(kw, Lexicon.VEC, EnumConvertType.VEC3, (0,0,0))
params = list(zip_longest_fill(pA, op, vec))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, op, vec) in enumerate(params):
pA = channel_solid() if pA is None else tensor_to_cv(pA)
pA = image_color(pA, op, vec[0], vec[1], vec[2])
images.append(cv_to_tensor_full(pA))
pbar.update_absolute(idx)
return image_stack(images)
class AdjustEdgeNode(CozyImageNode):
NAME = "ADJUST: EDGE (JOV)"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Enhanced edge detection.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.FUNCTION: (EnumAdjustEdge._member_names_, {
"default": EnumAdjustEdge.CANNY.name,}),
Lexicon.RADIUS: ("INT", {
"default": 1, "min": 1}),
Lexicon.ITERATION: ("INT", {
"default": 1, "min": 1, "max": 1000}),
Lexicon.LOHI: ("VEC2", {
"default": (0, 1), "mij": 0, "maj": 1, "step": 0.01})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustEdge, EnumAdjustEdge.CANNY.name)
radius = parse_param(kw, Lexicon.RADIUS, EnumConvertType.INT, 1)
count = parse_param(kw, Lexicon.ITERATION, EnumConvertType.INT, 1)
lohi = parse_param(kw, Lexicon.LOHI, EnumConvertType.VEC2, (0,1))
params = list(zip_longest_fill(pA, op, radius, count, lohi))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, op, radius, count, lohi) in enumerate(params):
pA = channel_solid() if pA is None else tensor_to_cv(pA)
alpha = image_mask(pA)
pA = image_edge(pA, op, radius, count, lohi[0], lohi[1])
pA = image_mask_add(pA, alpha)
images.append(cv_to_tensor_full(pA))
pbar.update_absolute(idx)
return image_stack(images)
class AdjustEmbossNode(CozyImageNode):
NAME = "ADJUST: EMBOSS (JOV)"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Emboss boss mode.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.HEADING: ("FLOAT", {
"default": -45, "min": -sys.float_info.max, "max": sys.float_info.max, "step": 0.1}),
Lexicon.ELEVATION: ("FLOAT", {
"default": 45, "min": -sys.float_info.max, "max": sys.float_info.max, "step": 0.1}),
Lexicon.DEPTH: ("FLOAT", {
"default": 10, "min": 0, "max": sys.float_info.max, "step": 0.1,
"tooltip": "Depth perceived from the light angles above"}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
heading = parse_param(kw, Lexicon.HEADING, EnumConvertType.FLOAT, -45)
elevation = parse_param(kw, Lexicon.ELEVATION, EnumConvertType.FLOAT, 45)
depth = parse_param(kw, Lexicon.DEPTH, EnumConvertType.FLOAT, 10)
params = list(zip_longest_fill(pA, heading, elevation, depth))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, heading, elevation, depth) in enumerate(params):
pA = channel_solid() if pA is None else tensor_to_cv(pA)
alpha = image_mask(pA)
pA = image_emboss(pA, heading, elevation, depth)
pA = image_mask_add(pA, alpha)
images.append(cv_to_tensor_full(pA))
pbar.update_absolute(idx)
return image_stack(images)
class AdjustLevelNode(CozyImageNode):
NAME = "ADJUST: LEVELS (JOV)"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Manually or automatically adjust image levels so that the darkest pixel becomes
black and the brightest pixel becomes white, enhancing overall contrast.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.LMH: ("VEC3", {
"default": (0,0.5,1), "mij": 0, "maj": 1, "step": 0.01,
"label": ["LOW", "MID", "HIGH"]}),
Lexicon.RANGE: ("VEC2", {
"default": (0, 1), "mij": 0, "maj": 1, "step": 0.01,
"label": ["IN", "OUT"]}),
Lexicon.MODE: (EnumAutoLevel._member_names_, {
"default": EnumAutoLevel.MANUAL.name,
"tooltip": "Autolevel linearly or with Histogram bin values, per channel"
}),
"clip": ("FLOAT", {
"default": 0.5, "min": 0, "max": 1.0, "step": 0.01
})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
LMH = parse_param(kw, Lexicon.LMH, EnumConvertType.VEC3, (0,0.5,1))
inout = parse_param(kw, Lexicon.RANGE, EnumConvertType.VEC2, (0,1))
mode = parse_param(kw, Lexicon.MODE, EnumAutoLevel, EnumAutoLevel.MANUAL.name)
clip = parse_param(kw, "clip", EnumConvertType.FLOAT, 0.5, 0, 1)
params = list(zip_longest_fill(pA, LMH, inout, mode, clip))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, LMH, inout, mode, clip) in enumerate(params):
pA = channel_solid() if pA is None else tensor_to_cv(pA)
'''
h, s, v = hsv
img_new = image_hsv(img_new, h, s, v)
'''
match mode:
case EnumAutoLevel.MANUAL:
low, mid, high = LMH
start, end = inout
pA = image_levels(pA, low, mid, high, start, end)
case EnumAutoLevel.AUTO:
pA = image_autolevel(pA)
case EnumAutoLevel.HISTOGRAM:
pA = image_autolevel_histogram(pA, clip)
images.append(cv_to_tensor_full(pA))
pbar.update_absolute(idx)
return image_stack(images)
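The manual path maps a low/mid/high triple through an in/out range, while the automatic path is conceptually a min/max stretch. A minimal sketch of the latter (assumed behavior, not the `image_autolevel` source):

```python
import numpy as np

def autolevel_sketch(img):
    """Stretch a float image so its darkest value maps to 0 (black)
    and its brightest to 1 (white)."""
    lo, hi = float(img.min()), float(img.max())
    if hi <= lo:  # flat image: nothing to stretch
        return np.zeros_like(img)
    return (img - lo) / (hi - lo)
```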
class AdjustLightNode(CozyImageNode):
NAME = "ADJUST: LIGHT (JOV)"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Tonal adjustments. They can be applied individually or all at the same time in order: brightness, contrast, histogram equalization, exposure, and gamma correction.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.BRIGHTNESS: ("FLOAT", {
"default": 0.5, "min": 0, "max": 1, "step": 0.01}),
Lexicon.CONTRAST: ("FLOAT", {
"default": 0, "min": -1, "max": 1, "step": 0.01}),
Lexicon.EQUALIZE: ("BOOLEAN", {
"default": False}),
Lexicon.EXPOSURE: ("FLOAT", {
"default": 1, "min": -8, "max": 8, "step": 0.01}),
Lexicon.GAMMA: ("FLOAT", {
"default": 1, "min": 0, "max": 8, "step": 0.01}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
brightness = parse_param(kw, Lexicon.BRIGHTNESS, EnumConvertType.FLOAT, 0.5)
contrast = parse_param(kw, Lexicon.CONTRAST, EnumConvertType.FLOAT, 0)
equalize = parse_param(kw, Lexicon.EQUALIZE, EnumConvertType.BOOLEAN, False)
exposure = parse_param(kw, Lexicon.EXPOSURE, EnumConvertType.FLOAT, 1)
gamma = parse_param(kw, Lexicon.GAMMA, EnumConvertType.FLOAT, 1)
params = list(zip_longest_fill(pA, brightness, contrast, equalize, exposure, gamma))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, brightness, contrast, equalize, exposure, gamma) in enumerate(params):
pA = channel_solid() if pA is None else tensor_to_cv(pA)
alpha = image_mask(pA)
brightness = 2. * (brightness - 0.5)
if brightness != 0:
pA = image_brightness(pA, brightness)
if contrast != 0:
pA = image_contrast(pA, contrast)
if equalize:
pA = image_equalize(pA)
if exposure != 1:
pA = image_exposure(pA, exposure)
if gamma != 1:
pA = image_gamma(pA, gamma)
'''
h, s, v = hsv
img_new = image_hsv(img_new, h, s, v)
l, m, h = level
img_new = image_levels(img_new, l, h, m, gamma)
'''
pA = image_mask_add(pA, alpha)
images.append(cv_to_tensor_full(pA))
pbar.update_absolute(idx)
return image_stack(images)
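For reference, common textbook forms of two of these tonal ops on a [0, 1] float image; the exact formulas live in `cozy_comfyui.image.adjust` and may differ (this sketch treats exposure 0 as neutral):

```python
import numpy as np

def light_sketch(img, exposure=0.0, gamma=1.0):
    """Apply exposure (in stops, 0 = no change) then gamma correction
    (1 = no change) to a float image in [0, 1]. Illustrative only."""
    out = np.clip(img * (2.0 ** exposure), 0.0, 1.0)  # each stop doubles light
    if gamma > 0:
        out = out ** (1.0 / gamma)                    # standard gamma curve
    return out
```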
class AdjustMorphNode(CozyImageNode):
NAME = "ADJUST: MORPHOLOGY (JOV)"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Operations based on the image shape.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.FUNCTION: (EnumAdjustMorpho._member_names_, {
"default": EnumAdjustMorpho.DILATE.name,}),
Lexicon.RADIUS: ("INT", {
"default": 1, "min": 1}),
Lexicon.ITERATION: ("INT", {
"default": 1, "min": 1, "max": 1000}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustMorpho, EnumAdjustMorpho.DILATE.name)
kernel = parse_param(kw, Lexicon.RADIUS, EnumConvertType.INT, 1)
count = parse_param(kw, Lexicon.ITERATION, EnumConvertType.INT, 1)
params = list(zip_longest_fill(pA, op, kernel, count))
images: list[Any] = []
pbar = ProgressBar(len(params))
for idx, (pA, op, kernel, count) in enumerate(params):
pA = channel_solid() if pA is None else tensor_to_cv(pA)
alpha = image_mask(pA)
pA = image_morphology(pA, op, kernel, count)
pA = image_mask_add(pA, alpha)
images.append(cv_to_tensor_full(pA))
pbar.update_absolute(idx)
return image_stack(images)
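Morphological ops slide a kernel over the image; dilation, the default here, takes the neighborhood maximum. A standalone sketch (not the `image_morphology` implementation):

```python
import numpy as np

def dilate_sketch(img, radius=1, iterations=1):
    """Grayscale dilation: each pixel becomes the max of its
    (2*radius+1)^2 neighborhood, repeated `iterations` times."""
    out = img.astype(float)
    h, w = out.shape
    for _ in range(iterations):
        padded = np.pad(out, radius, mode="edge")
        # one shifted view per kernel offset; max across them is dilation
        windows = [padded[dy:dy + h, dx:dx + w]
                   for dy in range(2 * radius + 1)
                   for dx in range(2 * radius + 1)]
        out = np.max(windows, axis=0)
    return out
```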
class AdjustPixelNode(CozyImageNode):
NAME = "ADJUST: PIXEL (JOV)"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Pixel-level transformations. The val parameter controls the intensity or resolution of the effect, depending on the operation.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.FUNCTION: (EnumAdjustPixel._member_names_, {
"default": EnumAdjustPixel.PIXELATE.name,}),
Lexicon.VALUE: ("FLOAT", {
"default": 0, "min": 0, "max": 1, "step": 0.01})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustPixel, EnumAdjustPixel.PIXELATE.name)
val = parse_param(kw, Lexicon.VALUE, EnumConvertType.FLOAT, 0)
params = list(zip_longest_fill(pA, op, val))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, op, val) in enumerate(params):
pA = channel_solid() if pA is None else tensor_to_cv(pA, chan=4)
alpha = image_mask(pA)
match op:
case EnumAdjustPixel.PIXELATE:
pA = image_pixelate(pA, val / 2.)
case EnumAdjustPixel.PIXELSCALE:
pA = image_pixelscale(pA, val)
case EnumAdjustPixel.QUANTIZE:
pA = image_quantize(pA, val)
case EnumAdjustPixel.POSTERIZE:
pA = image_posterize(pA, val)
pA = image_mask_add(pA, alpha)
images.append(cv_to_tensor_full(pA))
pbar.update_absolute(idx)
return image_stack(images)
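Posterize and quantize both reduce tonal resolution; for example, posterizing a [0, 1] float image to N levels (illustrative only; the node maps its 0..1 `val` onto a level count internally):

```python
import numpy as np

def posterize_sketch(img, levels):
    """Snap a [0, 1] float image to `levels` evenly spaced tones."""
    levels = max(2, int(levels))  # fewer than 2 tones is degenerate
    return np.round(img * (levels - 1)) / (levels - 1)
```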
class AdjustSharpenNode(CozyImageNode):
NAME = "ADJUST: SHARPEN (JOV)"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Sharpen the pixels of an image.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.AMOUNT: ("FLOAT", {
"default": 0, "min": 0, "max": 1, "step": 0.01}),
Lexicon.THRESHOLD: ("FLOAT", {
"default": 0, "min": 0, "max": 1, "step": 0.01})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
amount = parse_param(kw, Lexicon.AMOUNT, EnumConvertType.FLOAT, 0)
threshold = parse_param(kw, Lexicon.THRESHOLD, EnumConvertType.FLOAT, 0)
params = list(zip_longest_fill(pA, amount, threshold))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, amount, threshold) in enumerate(params):
pA = channel_solid() if pA is None else tensor_to_cv(pA)
pA = image_sharpen(pA, amount / 2., threshold=threshold / 25.5)
images.append(cv_to_tensor_full(pA))
pbar.update_absolute(idx)
return image_stack(images)
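Sharpening of this kind is classically an unsharp mask: add back a scaled copy of the detail (original minus blurred). A sketch with the blurred image supplied by the caller (hypothetical helper, not the `image_sharpen` source):

```python
import numpy as np

def unsharp_sketch(img, blurred, amount):
    """Unsharp mask: out = img + amount * (img - blurred), clipped to [0, 1]."""
    return np.clip(img + amount * (img - blurred), 0.0, 1.0)
```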
class AdjustSharpenNodev3(CozyImageNodev3):
@classmethod
def define_schema(cls, **kwarg) -> io.Schema:
schema = super().define_schema(**kwarg)
schema.display_name = "ADJUST: SHARPEN (JOV)"
schema.category = JOV_CATEGORY
schema.description = "Sharpen the pixels of an image."
schema.inputs.extend([
io.MultiType.Input(
id=Lexicon.IMAGE[0],
types=COZY_TYPE_IMAGEv3,
display_name=Lexicon.IMAGE[0],
optional=True,
tooltip=Lexicon.IMAGE[1]
),
io.Float.Input(
id=Lexicon.AMOUNT[0],
display_name=Lexicon.AMOUNT[0],
optional=True,
default= 0,
min=0,
max=1,
step=0.01,
tooltip=Lexicon.AMOUNT[1]
),
io.Float.Input(
id=Lexicon.THRESHOLD[0],
display_name=Lexicon.THRESHOLD[0],
optional=True,
default= 0,
min=0,
max=1,
step=0.01,
tooltip=Lexicon.THRESHOLD[1]
)
])
return schema
@classmethod
def execute(cls, *arg, **kw) -> io.NodeOutput:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
amount = parse_param(kw, Lexicon.AMOUNT, EnumConvertType.FLOAT, 0)
threshold = parse_param(kw, Lexicon.THRESHOLD, EnumConvertType.FLOAT, 0)
params = list(zip_longest_fill(pA, amount, threshold))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, amount, threshold) in enumerate(params):
pA = channel_solid() if pA is None else tensor_to_cv(pA)
pA = image_sharpen(pA, amount / 2., threshold=threshold / 25.5)
images.append(cv_to_tensor_full(pA))
pbar.update_absolute(idx)
return io.NodeOutput(image_stack(images))
class AdjustExtension(ComfyExtension):
@override
async def get_node_list(self) -> list[type[io.ComfyNode]]:
return [
AdjustSharpenNodev3
]
async def comfy_entrypoint() -> AdjustExtension:
return AdjustExtension()
================================================
FILE: core/anim.py
================================================
""" Jovimetrix - Animation """
import sys
import numpy as np
from comfy.utils import ProgressBar
from cozy_comfyui import \
InputType, EnumConvertType, \
deep_merge, parse_param, zip_longest_fill
from cozy_comfyui.lexicon import \
Lexicon
from cozy_comfyui.node import \
CozyBaseNode
from cozy_comfyui.maths.ease import \
EnumEase, \
ease_op
from cozy_comfyui.maths.norm import \
EnumNormalize, \
norm_op
from cozy_comfyui.maths.wave import \
EnumWave, \
wave_op
from cozy_comfyui.maths.series import \
seriesLinear
# ==============================================================================
# === GLOBAL ===
# ==============================================================================
JOV_CATEGORY = "ANIMATION"
# ==============================================================================
# === CLASS ===
# ==============================================================================
class ResultObject(object):
def __init__(self, *arg, **kw) -> None:
self.frame = []
self.lin = []
self.fixed = []
self.trigger = []
self.batch = []
class TickNode(CozyBaseNode):
NAME = "TICK (JOV) ⏱"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = ("FLOAT", "FLOAT", "FLOAT", "FLOAT", "FLOAT")
RETURN_NAMES = ("VALUE", "LINEAR", "EASED", "SCALAR_LIN", "SCALAR_EASE")
OUTPUT_IS_LIST = (True, True, True, True, True,)
OUTPUT_TOOLTIPS = (
"List of values",
"Normalized values",
"Eased values",
"Scalar normalized values",
"Scalar eased values",
)
DESCRIPTION = """
Value generator emitting normalized values based on a time interval.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
# forces a MOD on CYCLE
Lexicon.START: ("FLOAT", {
"default": 0, "min": -sys.maxsize, "max": sys.maxsize
}),
# interval between frames
Lexicon.STEP: ("FLOAT", {
"default": 0, "min": -sys.float_info.max, "max": sys.float_info.max, "precision": 3,
"tooltip": "Amount to add to each frame per tick"
}),
# how many frames to dump....
Lexicon.COUNT: ("INT", {
"default": 1, "min": 1, "max": 1500
}),
Lexicon.LOOP: ("INT", {
"default": 0, "min": 0, "max": sys.maxsize,
"tooltip": "What value before looping starts. 0 means linear playback (no loop point)"
}),
Lexicon.PINGPONG: ("BOOLEAN", {
"default": False
}),
Lexicon.EASE: (EnumEase._member_names_, {
"default": EnumEase.LINEAR.name}),
Lexicon.NORMALIZE: (EnumNormalize._member_names_, {
"default": EnumNormalize.MINMAX2.name}),
Lexicon.SCALAR: ("FLOAT", {
"default": 1, "min": 0, "max": sys.float_info.max
})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> tuple[float, ...]:
"""
Generates a series of numbers with various options including:
- Custom start value (supporting floating point and negative numbers)
- Custom step value (supporting floating point and negative numbers)
- Fixed number of frames
- Custom loop point (series restarts after reaching this many steps)
- Ping-pong option (reverses direction at end points)
- Support for easing functions
- Normalized output 0..1, -1..1, L2 or ZScore
"""
start = parse_param(kw, Lexicon.START, EnumConvertType.FLOAT, 0)[0]
step = parse_param(kw, Lexicon.STEP, EnumConvertType.FLOAT, 0)[0]
count = parse_param(kw, Lexicon.COUNT, EnumConvertType.INT, 1, 1, 1500)[0]
loop = parse_param(kw, Lexicon.LOOP, EnumConvertType.INT, 0, 0)[0]
pingpong = parse_param(kw, Lexicon.PINGPONG, EnumConvertType.BOOLEAN, False)[0]
ease = parse_param(kw, Lexicon.EASE, EnumEase, EnumEase.LINEAR.name)[0]
normalize = parse_param(kw, Lexicon.NORMALIZE, EnumNormalize, EnumNormalize.MINMAX2.name)[0]
scalar = parse_param(kw, Lexicon.SCALAR, EnumConvertType.FLOAT, 1, 0)[0]
if step == 0:
step = 1
cycle = seriesLinear(start, step, count, loop, pingpong)
linear = norm_op(normalize, np.array(cycle))
eased = ease_op(ease, linear, len(linear))
scalar_linear = linear * scalar
scalar_eased = eased * scalar
return (
cycle,
linear.tolist(),
eased.tolist(),
scalar_linear.tolist(),
scalar_eased.tolist(),
)
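`seriesLinear` is imported from `cozy_comfyui.maths.series`; a self-contained sketch of the behavior the docstring describes (a hypothetical reimplementation, names illustrative):

```python
def series_linear_sketch(start, step, count, loop=0, pingpong=False):
    """Emit `count` values from `start`, advancing by `step`; wrap at
    `loop` (0 = no loop point) or reflect at the ends when `pingpong`."""
    out = []
    val, direction = start, 1
    for _ in range(count):
        out.append(val)
        nxt = val + direction * step
        if loop:
            if pingpong:
                if nxt >= loop or nxt < 0:  # bounce off either end
                    direction = -direction
                    nxt = val + direction * step
            else:
                nxt %= loop  # wrap back to the start
        val = nxt
    return out
```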
class WaveGeneratorNode(CozyBaseNode):
NAME = "WAVE GEN (JOV) 🌊"
NAME_PRETTY = "WAVE GEN (JOV) 🌊"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = ("FLOAT", "INT", )
RETURN_NAMES = ("FLOAT", "INT", )
DESCRIPTION = """
Produce waveforms like sine, square, or sawtooth with adjustable frequency, amplitude, phase, and offset. It's handy for creating oscillating patterns or controlling animation dynamics. This node emits both continuous floating-point values and integer representations of the generated waves.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.WAVE: (EnumWave._member_names_, {
"default": EnumWave.SIN.name}),
Lexicon.FREQ: ("FLOAT", {
"default": 1, "min": 0, "max": sys.float_info.max, "step": 0.01,}),
Lexicon.AMP: ("FLOAT", {
"default": 1, "min": 0, "max": sys.float_info.max, "step": 0.01,}),
Lexicon.PHASE: ("FLOAT", {
"default": 0, "min": 0, "max": 1, "step": 0.01}),
Lexicon.OFFSET: ("FLOAT", {
"default": 0, "min": 0, "max": 1, "step": 0.001}),
Lexicon.TIME: ("FLOAT", {
"default": 0, "min": 0, "max": sys.float_info.max, "step": 0.0001}),
Lexicon.INVERT: ("BOOLEAN", {
"default": False}),
Lexicon.ABSOLUTE: ("BOOLEAN", {
"default": False,}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> tuple[float, int]:
op = parse_param(kw, Lexicon.WAVE, EnumWave, EnumWave.SIN.name)
freq = parse_param(kw, Lexicon.FREQ, EnumConvertType.FLOAT, 1, 0)
amp = parse_param(kw, Lexicon.AMP, EnumConvertType.FLOAT, 1, 0)
phase = parse_param(kw, Lexicon.PHASE, EnumConvertType.FLOAT, 0, 0)
shift = parse_param(kw, Lexicon.OFFSET, EnumConvertType.FLOAT, 0, 0)
delta_time = parse_param(kw, Lexicon.TIME, EnumConvertType.FLOAT, 0, 0)
invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
absolute = parse_param(kw, Lexicon.ABSOLUTE, EnumConvertType.BOOLEAN, False)
results = []
params = list(zip_longest_fill(op, freq, amp, phase, shift, delta_time, invert, absolute))
pbar = ProgressBar(len(params))
for idx, (op, freq, amp, phase, shift, delta_time, invert, absolute) in enumerate(params):
# freq = 1. / freq
val = wave_op(op, phase, freq, amp, shift, delta_time)
if invert:
val = -val
if absolute:
val = np.abs(val)
val = max(-sys.float_info.max, min(val, sys.float_info.max))
results.append([val, int(val)])
pbar.update_absolute(idx)
return *list(zip(*results)),
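A standalone sketch of the per-sample evaluation, mirroring the `wave_op(op, phase, freq, amp, shift, delta_time)` call order above (names and formulas are the common textbook ones, not necessarily the `cozy_comfyui.maths.wave` source):

```python
import math

def wave_sketch(shape, phase, freq, amp, offset, t):
    """Evaluate one sample of a parametric wave at time `t`."""
    x = 2.0 * math.pi * (freq * t + phase)
    if shape == "SIN":
        y = math.sin(x)
    elif shape == "SQUARE":
        y = 1.0 if math.sin(x) >= 0 else -1.0
    elif shape == "SAW":
        frac = (freq * t + phase) % 1.0  # position within the cycle
        y = 2.0 * frac - 1.0
    else:
        raise ValueError(shape)
    return amp * y + offset
```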
'''
class TickOldNode(CozyBaseNode):
NAME = "TICK OLD (JOV) ⏱"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = ("INT", "FLOAT", "FLOAT", COZY_TYPE_ANY, COZY_TYPE_ANY,)
RETURN_NAMES = ("VAL", "LINEAR", "FPS", "TRIGGER", "BATCH",)
OUTPUT_IS_LIST = (True, False, False, False, False,)
OUTPUT_TOOLTIPS = (
"Current value for the configured tick as ComfyUI List",
"Normalized tick value (0..1) based on BPM and Loop",
"Current 'frame' in the tick based on FPS setting",
"Based on the BPM settings, on beat hit, output the input at '⚡'",
"Current batch of values for the configured tick as standard list which works in other Jovimetrix nodes",
)
DESCRIPTION = """
A timer and frame counter, emitting pulses or signals based on time intervals. It allows precise synchronization and control over animation sequences, with options to adjust FPS, BPM, and loop points. This node is useful for generating time-based events or driving animations with rhythmic precision.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
# data to pass on a pulse of the loop
Lexicon.TRIGGER: (COZY_TYPE_ANY, {
"default": None,
"tooltip": "Output to send when beat (BPM setting) is hit"
}),
# forces a MOD on CYCLE
Lexicon.START: ("INT", {
"default": 0, "min": 0, "max": sys.maxsize,
}),
Lexicon.LOOP: ("INT", {
"default": 0, "min": 0, "max": sys.maxsize,
"tooltip": "Number of frames before looping starts. 0 means continuous playback (no loop point)"
}),
Lexicon.FPS: ("INT", {
"default": 24, "min": 1
}),
Lexicon.BPM: ("INT", {
"default": 120, "min": 1, "max": 60000,
"tooltip": "BPM trigger rate to send the input. If input is empty, TRUE is sent on trigger"
}),
Lexicon.NOTE: ("INT", {
"default": 4, "min": 1, "max": 256,
"tooltip": "Number of beats per measure. Quarter note is 4, Eighth is 8, 16 is 16, etc."}),
# how many frames to dump....
Lexicon.BATCH: ("INT", {
"default": 1, "min": 1, "max": 32767,
"tooltip": "Number of frames wanted"
}),
Lexicon.STEP: ("INT", {
"default": 0, "min": 0, "max": sys.maxsize
}),
}
})
return Lexicon._parse(d)
def run(self, ident, **kw) -> tuple[int, float, float, Any]:
passthru = parse_param(kw, Lexicon.TRIGGER, EnumConvertType.ANY, None)[0]
stride = parse_param(kw, Lexicon.STEP, EnumConvertType.INT, 0)[0]
loop = parse_param(kw, Lexicon.LOOP, EnumConvertType.INT, 0)[0]
start = parse_param(kw, Lexicon.START, EnumConvertType.INT, self.__frame)[0]
if loop != 0:
self.__frame %= loop
fps = parse_param(kw, Lexicon.FPS, EnumConvertType.INT, 24, 1)[0]
bpm = parse_param(kw, Lexicon.BPM, EnumConvertType.INT, 120, 1)[0]
divisor = parse_param(kw, Lexicon.NOTE, EnumConvertType.INT, 4, 1)[0]
beat = 60. / max(1., bpm) / divisor
batch = parse_param(kw, Lexicon.BATCH, EnumConvertType.INT, 1, 1)[0]
step_fps = 1. / max(1., float(fps))
trigger = None
results = ResultObject()
pbar = ProgressBar(batch)
step = stride if stride != 0 else max(1, loop / batch)
for idx in range(batch):
trigger = False
lin = start if loop == 0 else start / loop
fixed_step = math.fmod(start * step_fps, fps)
if (math.fmod(fixed_step, beat) == 0):
trigger = [passthru]
if loop != 0:
start %= loop
results.frame.append(start)
results.lin.append(float(lin))
results.fixed.append(float(fixed_step))
results.trigger.append(trigger)
results.batch.append(start)
start += step
pbar.update_absolute(idx)
return (results.frame, results.lin, results.fixed, results.trigger, results.batch,)
'''
================================================
FILE: core/calc.py
================================================
""" Jovimetrix - Calculation """
import sys
import math
import struct
from enum import Enum
from typing import Any
from collections import Counter
import torch
from scipy.special import gamma
from comfy.utils import ProgressBar
from cozy_comfyui import \
logger, \
TensorType, InputType, EnumConvertType, \
deep_merge, parse_dynamic, parse_param, parse_value, zip_longest_fill
from cozy_comfyui.lexicon import \
Lexicon
from cozy_comfyui.node import \
COZY_TYPE_ANY, COZY_TYPE_NUMERICAL, COZY_TYPE_FULL, \
CozyBaseNode
# ==============================================================================
# === GLOBAL ===
# ==============================================================================
JOV_CATEGORY = "CALC"
# ==============================================================================
# === ENUMERATION ===
# ==============================================================================
class EnumBinaryOperation(Enum):
ADD = 0
SUBTRACT = 1
MULTIPLY = 2
DIVIDE = 3
DIVIDE_FLOOR = 4
MODULUS = 5
POWER = 6
# TERNARY WITHOUT THE NEED
MAXIMUM = 20
MINIMUM = 21
# VECTOR
DOT_PRODUCT = 30
CROSS_PRODUCT = 31
# MATRIX
# BITS
# BIT_NOT = 39
BIT_AND = 60
BIT_NAND = 61
BIT_OR = 62
BIT_NOR = 63
BIT_XOR = 64
BIT_XNOR = 65
BIT_LSHIFT = 66
BIT_RSHIFT = 67
# GROUP
UNION = 80
INTERSECTION = 81
DIFFERENCE = 82
# WEIRD ONES
BASE = 90
class EnumComparison(Enum):
EQUAL = 0
NOT_EQUAL = 1
LESS_THAN = 2
LESS_THAN_EQUAL = 3
GREATER_THAN = 4
GREATER_THAN_EQUAL = 5
# LOGIC
# NOT = 10
AND = 20
NAND = 21
OR = 22
NOR = 23
XOR = 24
XNOR = 25
# TYPE
IS = 80
IS_NOT = 81
# GROUPS
IN = 82
NOT_IN = 83
class EnumConvertString(Enum):
SPLIT = 10
JOIN = 30
FIND = 40
REPLACE = 50
SLICE = 70 # start - end - step = -1, -1, 1
class EnumSwizzle(Enum):
A_X = 0
A_Y = 10
A_Z = 20
A_W = 30
B_X = 9
B_Y = 11
B_Z = 21
B_W = 31
CONSTANT = 40
class EnumUnaryOperation(Enum):
ABS = 0
FLOOR = 1
CEIL = 2
SQRT = 3
SQUARE = 4
LOG = 5
LOG10 = 6
SIN = 7
COS = 8
TAN = 9
NEGATE = 10
RECIPROCAL = 12
FACTORIAL = 14
EXP = 16
# COMPOUND
MINIMUM = 20
MAXIMUM = 21
MEAN = 22
MEDIAN = 24
MODE = 26
MAGNITUDE = 30
NORMALIZE = 32
# LOGICAL
NOT = 40
# BITWISE
BIT_NOT = 45
COS_H = 60
SIN_H = 62
TAN_H = 64
RADIANS = 70
DEGREES = 72
GAMMA = 80
IS_EVEN = 90
IS_ODD = 91
# Dictionary to map each operation to its corresponding function
OP_UNARY = {
EnumUnaryOperation.ABS: lambda x: math.fabs(x),
EnumUnaryOperation.FLOOR: lambda x: math.floor(x),
EnumUnaryOperation.CEIL: lambda x: math.ceil(x),
EnumUnaryOperation.SQRT: lambda x: math.sqrt(x),
EnumUnaryOperation.SQUARE: lambda x: math.pow(x, 2),
EnumUnaryOperation.LOG: lambda x: math.log(x) if x != 0 else -math.inf,
EnumUnaryOperation.LOG10: lambda x: math.log10(x) if x != 0 else -math.inf,
EnumUnaryOperation.SIN: lambda x: math.sin(x),
EnumUnaryOperation.COS: lambda x: math.cos(x),
EnumUnaryOperation.TAN: lambda x: math.tan(x),
EnumUnaryOperation.NEGATE: lambda x: -x,
EnumUnaryOperation.RECIPROCAL: lambda x: 1 / x if x != 0 else 0,
EnumUnaryOperation.FACTORIAL: lambda x: math.factorial(abs(int(x))),
EnumUnaryOperation.EXP: lambda x: math.exp(x),
EnumUnaryOperation.NOT: lambda x: not x,
EnumUnaryOperation.BIT_NOT: lambda x: ~int(x),
EnumUnaryOperation.IS_EVEN: lambda x: x % 2 == 0,
EnumUnaryOperation.IS_ODD: lambda x: x % 2 == 1,
EnumUnaryOperation.COS_H: lambda x: math.cosh(x),
EnumUnaryOperation.SIN_H: lambda x: math.sinh(x),
EnumUnaryOperation.TAN_H: lambda x: math.tanh(x),
EnumUnaryOperation.RADIANS: lambda x: math.radians(x),
EnumUnaryOperation.DEGREES: lambda x: math.degrees(x),
EnumUnaryOperation.GAMMA: lambda x: gamma(x) if x > 0 else 0,
}
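The table above is a plain dispatch dictionary: each enum member maps to a one-argument lambda. A minimal standalone sketch of the same pattern (the trimmed `MiniUnary` enum and `apply_unary` helper are illustrative names, not part of this module):

```python
import math
from enum import Enum

# Hypothetical trimmed-down stand-in for EnumUnaryOperation.
class MiniUnary(Enum):
    ABS = 0
    SQRT = 3
    NEGATE = 10

# Dispatch table: enum member -> single-argument function.
MINI_OP = {
    MiniUnary.ABS: lambda x: math.fabs(x),
    MiniUnary.SQRT: lambda x: math.sqrt(x),
    MiniUnary.NEGATE: lambda x: -x,
}

def apply_unary(op: MiniUnary, value: float) -> float:
    # Look the operation up and apply it to a single scalar.
    return MINI_OP[op](value)
```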
# ==============================================================================
# === SUPPORT ===
# ==============================================================================
def to_bits(value: Any):
if isinstance(value, int):
return bin(value)[2:]
elif isinstance(value, float):
packed = struct.pack('>d', value)
return ''.join(f'{byte:08b}' for byte in packed)
elif isinstance(value, str):
return ''.join(f'{ord(c):08b}' for c in value)
else:
raise TypeError(f"Unsupported type: {type(value)}")
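`to_bits` handles each type differently: integers use their natural binary width, floats expand to their full 64-bit IEEE-754 bit pattern, and strings concatenate 8-bit character codes. A standalone sketch mirroring that behavior (`to_bits_sketch` is a hypothetical name used here for illustration):

```python
import struct
from typing import Any

def to_bits_sketch(value: Any) -> str:
    # int -> natural-width binary, no leading zeros
    if isinstance(value, int):
        return bin(value)[2:]
    # float -> full 64-bit IEEE-754 pattern (big-endian double)
    if isinstance(value, float):
        packed = struct.pack('>d', value)
        return ''.join(f'{byte:08b}' for byte in packed)
    # str -> concatenated 8-bit character codes
    if isinstance(value, str):
        return ''.join(f'{ord(c):08b}' for c in value)
    raise TypeError(f"Unsupported type: {type(value)}")
```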
def vector_swap(pA: Any, pB: Any, swap_x: EnumSwizzle, swap_y:EnumSwizzle,
swap_z:EnumSwizzle, swap_w:EnumSwizzle, default:list[float]) -> list[float]:
"""Swap out a vector's values with another vector's values, or a constant fill."""
def parse(target, targetB, swap, val) -> float:
if swap == EnumSwizzle.CONSTANT:
return val
if swap in [EnumSwizzle.B_X, EnumSwizzle.B_Y, EnumSwizzle.B_Z, EnumSwizzle.B_W]:
target = targetB
swap = int(swap.value / 10)
return target[swap]
while len(pA) < 4:
pA.append(0)
while len(pB) < 4:
pB.append(0)
while len(default) < 4:
default.append(0)
return [
parse(pA, pB, swap_x, default[0]),
parse(pA, pB, swap_y, default[1]),
parse(pA, pB, swap_z, default[2]),
parse(pA, pB, swap_w, default[3])
]
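The lane-selection rule in `vector_swap` leans on the enum values: the lane index is recovered as `int(value / 10)` (with `B_X = 9` so it still floors to lane 0), `B_*` members read from the second vector, and `CONSTANT` falls back to the supplied default. A standalone sketch of just that rule (`Swiz` and `pick` are illustrative names):

```python
from enum import Enum

class Swiz(Enum):
    # Mirrors EnumSwizzle: the lane index is encoded in the value and
    # recovered with int(value / 10); B_X is 9 so it still maps to lane 0.
    A_X = 0; A_Y = 10; A_Z = 20; A_W = 30
    B_X = 9; B_Y = 11; B_Z = 21; B_W = 31
    CONSTANT = 40

def pick(a: list, b: list, swap: Swiz, fallback: float) -> float:
    # CONSTANT ignores both vectors and returns the default value.
    if swap == Swiz.CONSTANT:
        return fallback
    # B_* members read from the second vector, everything else from the first.
    source = b if swap in (Swiz.B_X, Swiz.B_Y, Swiz.B_Z, Swiz.B_W) else a
    return source[int(swap.value / 10)]
```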
# ==============================================================================
# === CLASS ===
# ==============================================================================
class BitSplitNode(CozyBaseNode):
NAME = "BIT SPLIT (JOV) ⭄"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = (COZY_TYPE_ANY, "BOOLEAN",)
RETURN_NAMES = ("BIT", "BOOL",)
OUTPUT_IS_LIST = (True, True,)
OUTPUT_TOOLTIPS = (
"Bits as Numerical output (0 or 1)",
"Bits as Boolean output (True or False)"
)
DESCRIPTION = """
Split an input into separate bits.
BOOL, INT and FLOAT are split on their numeric value;
STRING is treated as a list of characters;
IMAGE and MASK return a TRUE bit for every non-black pixel, as a stream of bits covering all pixels in the image.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.VALUE: (COZY_TYPE_NUMERICAL, {
"default": None,
"tooltip": "Value to convert into bits"}),
Lexicon.BITS: ("INT", {
"default": 8, "min": 0, "max": 64,
"tooltip": "Number of output bits requested"})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> tuple[list[int], list[bool]]:
value = parse_param(kw, Lexicon.VALUE, EnumConvertType.LIST, 0)
bits = parse_param(kw, Lexicon.BITS, EnumConvertType.INT, 8)
params = list(zip_longest_fill(value, bits))
pbar = ProgressBar(len(params))
results = []
for idx, (value, bits) in enumerate(params):
bit_repr = to_bits(value[0])[::-1]
if bits > 0:
if len(bit_repr) > bits:
bit_repr = bit_repr[0:bits]
else:
bit_repr = bit_repr.ljust(bits, '0')
int_bits = []
bool_bits = []
for b in bit_repr:
bit = int(b)
int_bits.append(bit)
bool_bits.append(bool(bit))
results.append([int_bits, bool_bits])
pbar.update_absolute(idx)
return *list(zip(*results)),
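The split above emits bits least-significant first (the `[::-1]` reversal), then truncates or zero-pads to the requested width. A standalone sketch of that ordering for integers only (`split_bits` is a hypothetical helper, not part of the node API):

```python
def split_bits(value: int, width: int = 8) -> tuple[list[int], list[bool]]:
    # Bits are emitted least-significant first (reversing the bin()
    # string), then truncated or zero-padded to the requested width.
    bits = bin(value)[2:][::-1]
    bits = bits[:width] if len(bits) > width else bits.ljust(width, '0')
    int_bits = [int(b) for b in bits]
    return int_bits, [bool(b) for b in int_bits]
```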
class ComparisonNode(CozyBaseNode):
NAME = "COMPARISON (JOV) 🕵🏽"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = (COZY_TYPE_ANY, COZY_TYPE_ANY,)
RETURN_NAMES = ("OUT", "VAL",)
OUTPUT_IS_LIST = (True, True,)
OUTPUT_TOOLTIPS = (
"Outputs the PASS or FAIL input depending on the evaluation",
"The comparison result value"
)
DESCRIPTION = """
Evaluates two inputs (A and B) with a specified comparison operator and optional values for successful and failed comparisons. The node performs the specified operation element-wise between corresponding elements of A and B. If the comparison is successful for all elements, it returns the success value; otherwise, it returns the failure value. The node supports various comparison operators such as EQUAL, GREATER_THAN, LESS_THAN, AND, OR, IS, IN, etc.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IN_A: (COZY_TYPE_NUMERICAL, {
"default": 0,
"tooltip":"First value to compare"}),
Lexicon.IN_B: (COZY_TYPE_NUMERICAL, {
"default": 0,
"tooltip":"Second value to compare"}),
Lexicon.SUCCESS: (COZY_TYPE_ANY, {
"default": 0,
"tooltip": "Sent to OUT on a successful condition"}),
Lexicon.FAIL: (COZY_TYPE_ANY, {
"default": 0,
"tooltip": "Sent to OUT on a failure condition"}),
Lexicon.FUNCTION: (EnumComparison._member_names_, {
"default": EnumComparison.EQUAL.name,
"tooltip": "Comparison function. Sends the data in PASS on successful comparison to OUT, otherwise sends the value in FAIL"}),
Lexicon.SWAP: ("BOOLEAN", {
"default": False,
"tooltip": "Reverse the A and B inputs"}),
Lexicon.INVERT: ("BOOLEAN", {
"default": False,
"tooltip": "Reverse the PASS and FAIL inputs"}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> tuple[Any, Any]:
in_a = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, 0)
in_b = parse_param(kw, Lexicon.IN_B, EnumConvertType.ANY, 0)
size = max(len(in_a), len(in_b))
good = parse_param(kw, Lexicon.SUCCESS, EnumConvertType.ANY, 0)[:size]
fail = parse_param(kw, Lexicon.FAIL, EnumConvertType.ANY, 0)[:size]
op = parse_param(kw, Lexicon.FUNCTION, EnumComparison, EnumComparison.EQUAL.name)[:size]
swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False)[:size]
invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)[:size]
params = list(zip_longest_fill(in_a, in_b, good, fail, op, swap, invert))
pbar = ProgressBar(len(params))
vals = []
results = []
for idx, (A, B, good, fail, op, swap, invert) in enumerate(params):
if not isinstance(A, (tuple, list,)):
A = [A]
if not isinstance(B, (tuple, list,)):
B = [B]
size = min(4, max(len(A), len(B))) - 1
typ = [EnumConvertType.FLOAT, EnumConvertType.VEC2, EnumConvertType.VEC3, EnumConvertType.VEC4][size]
val_a = parse_value(A, typ, [A[-1]] * size)
if not isinstance(val_a, (list,)):
val_a = [val_a]
val_b = parse_value(B, typ, [B[-1]] * size)
if not isinstance(val_b, (list,)):
val_b = [val_b]
if swap:
val_a, val_b = val_b, val_a
match op:
case EnumComparison.EQUAL:
val = [a == b for a, b in zip(val_a, val_b)]
case EnumComparison.GREATER_THAN:
val = [a > b for a, b in zip(val_a, val_b)]
case EnumComparison.GREATER_THAN_EQUAL:
val = [a >= b for a, b in zip(val_a, val_b)]
case EnumComparison.LESS_THAN:
val = [a < b for a, b in zip(val_a, val_b)]
case EnumComparison.LESS_THAN_EQUAL:
val = [a <= b for a, b in zip(val_a, val_b)]
case EnumComparison.NOT_EQUAL:
val = [a != b for a, b in zip(val_a, val_b)]
# LOGIC
# case EnumBinaryOperation.NOT = 10
case EnumComparison.AND:
val = [a and b for a, b in zip(val_a, val_b)]
case EnumComparison.NAND:
val = [not(a and b) for a, b in zip(val_a, val_b)]
case EnumComparison.OR:
val = [a or b for a, b in zip(val_a, val_b)]
case EnumComparison.NOR:
val = [not(a or b) for a, b in zip(val_a, val_b)]
case EnumComparison.XOR:
val = [(a and not b) or (not a and b) for a, b in zip(val_a, val_b)]
case EnumComparison.XNOR:
val = [not((a and not b) or (not a and b)) for a, b in zip(val_a, val_b)]
# IDENTITY
case EnumComparison.IS:
val = [a is b for a, b in zip(val_a, val_b)]
case EnumComparison.IS_NOT:
val = [a is not b for a, b in zip(val_a, val_b)]
# GROUP
case EnumComparison.IN:
val = [a in val_b for a in val_a]
case EnumComparison.NOT_IN:
val = [a not in val_b for a in val_a]
output = all([bool(v) for v in val])
if invert:
output = not output
output = good if output else fail
results.append([output, val])
pbar.update_absolute(idx)
outs, vals = zip(*results)
if isinstance(outs[0], (TensorType,)):
if len(outs) > 1:
outs = torch.stack(outs)
else:
outs = outs[0].unsqueeze(0)
outs = [outs]
else:
outs = list(outs)
return outs, *vals,
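The node reduces the element-wise comparison with `all(...)` and only then picks between the success and failure payloads, optionally inverted. That core logic, sketched standalone (`compare_all` and `select` are illustrative names, not part of the node API):

```python
def compare_all(val_a: list, val_b: list, op) -> bool:
    # Element-wise comparison reduced with all(): the node only "passes"
    # when every component pair satisfies the operator.
    return all(op(a, b) for a, b in zip(val_a, val_b))

def select(val_a: list, val_b: list, op, good, fail, invert: bool = False):
    # Pick the success or failure payload, with optional inversion.
    ok = compare_all(val_a, val_b, op)
    if invert:
        ok = not ok
    return good if ok else fail
```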
class LerpNode(CozyBaseNode):
NAME = "LERP (JOV) 🔰"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = (COZY_TYPE_ANY,)
RETURN_NAMES = ("❔",)
OUTPUT_IS_LIST = (True,)
OUTPUT_TOOLTIPS = (
"Output can vary depending on the type chosen in the TYPE parameter",
)
DESCRIPTION = """
Calculate linear interpolation between two values or vectors based on a blending factor (alpha).
The node accepts optional start (IN_A) and end (IN_B) points, a blending factor (FLOAT), and various input types for both start and end points, such as single values (X, Y), 2-value vectors (IN_A2, IN_B2), 3-value vectors (IN_A3, IN_B3), and 4-value vectors (IN_A4, IN_B4).
Additionally, you can specify the easing function (EASE) and the desired output type (TYPE). It supports various easing functions for smoother transitions.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IN_A: (COZY_TYPE_NUMERICAL, {
"tooltip": "Custom Start Point"}),
Lexicon.IN_B: (COZY_TYPE_NUMERICAL, {
"tooltip": "Custom End Point"}),
Lexicon.ALPHA: ("VEC4", {
"default": (0.5, 0.5, 0.5, 0.5), "mij": 0, "maj": 1,}),
Lexicon.TYPE: (EnumConvertType._member_names_[:6], {
"default": EnumConvertType.FLOAT.name,
"tooltip": "Output type desired from resultant operation"}),
Lexicon.DEFAULT_A: ("VEC4", {
"default": (0, 0, 0, 0)}),
Lexicon.DEFAULT_B: ("VEC4", {
"default": (1,1,1,1)})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> tuple[Any, Any]:
A = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, 0)
B = parse_param(kw, Lexicon.IN_B, EnumConvertType.ANY, 0)
alpha = parse_param(kw, Lexicon.ALPHA,EnumConvertType.VEC4, (0.5,0.5,0.5,0.5))
typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.FLOAT.name)
a_xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0))
b_xyzw = parse_param(kw, Lexicon.DEFAULT_B, EnumConvertType.VEC4, (1, 1, 1, 1))
values = []
params = list(zip_longest_fill(A, B, alpha, typ, a_xyzw, b_xyzw))
pbar = ProgressBar(len(params))
for idx, (A, B, alpha, typ, a_xyzw, b_xyzw) in enumerate(params):
size = int(typ.value / 10)
if A is None:
A = a_xyzw[:size]
if B is None:
B = b_xyzw[:size]
val_a = parse_value(A, EnumConvertType.VEC4, a_xyzw)
val_b = parse_value(B, EnumConvertType.VEC4, b_xyzw)
alpha = parse_value(alpha, EnumConvertType.VEC4, alpha)
if size > 1:
val_a = val_a[:size + 1]
val_b = val_b[:size + 1]
else:
val_a = [val_a[0]]
val_b = [val_b[0]]
val = [val_b[x] * alpha[x] + val_a[x] * (1 - alpha[x]) for x in range(size)]
convert = int if "INT" in typ.name else float
ret = []
for v in val:
try:
ret.append(convert(v))
except OverflowError:
ret.append(0)
except Exception:
ret.append(0)
val = ret[0] if size == 1 else ret[:size+1]
values.append(val)
pbar.update_absolute(idx)
return [values]
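The interpolation itself is the standard component-wise blend `b * alpha + a * (1 - alpha)`. A standalone sketch (`lerp` here is an illustrative helper, not the node's API):

```python
def lerp(a: list[float], b: list[float], alpha: list[float]) -> list[float]:
    # Component-wise blend: alpha 0 returns a, alpha 1 returns b.
    return [bv * t + av * (1.0 - t) for av, bv, t in zip(a, b, alpha)]
```

Each component gets its own alpha, which is why the node takes a VEC4 blending factor rather than a single scalar.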
class OPUnaryNode(CozyBaseNode):
NAME = "OP UNARY (JOV) 🎲"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = (COZY_TYPE_ANY,)
RETURN_NAMES = ("❔",)
OUTPUT_IS_LIST = (True,)
OUTPUT_TOOLTIPS = (
"Output type will match the input type",
)
DESCRIPTION = """
Perform single function operations like absolute value, mean, median, mode, magnitude, normalization, maximum, or minimum on input values.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
typ = EnumConvertType._member_names_[:6]
d = deep_merge(d, {
"optional": {
Lexicon.IN_A: (COZY_TYPE_FULL, {
"default": 0}),
Lexicon.FUNCTION: (EnumUnaryOperation._member_names_, {
"default": EnumUnaryOperation.ABS.name}),
Lexicon.TYPE: (typ, {
"default": EnumConvertType.FLOAT.name,}),
Lexicon.DEFAULT_A: ("VEC4", {
"default": (0,0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max,
"precision": 2,
"label": ["X", "Y", "Z", "W"]})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> tuple[bool]:
results = []
A = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, 0)
op = parse_param(kw, Lexicon.FUNCTION, EnumUnaryOperation, EnumUnaryOperation.ABS.name)
out = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.FLOAT.name)
a_xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0))
params = list(zip_longest_fill(A, op, out, a_xyzw))
pbar = ProgressBar(len(params))
for idx, (A, op, out, a_xyzw) in enumerate(params):
if not isinstance(A, (list, tuple,)):
A = [A]
best_type = [EnumConvertType.FLOAT, EnumConvertType.VEC2, EnumConvertType.VEC3, EnumConvertType.VEC4][len(A)-1]
val = parse_value(A, best_type, a_xyzw)
val = parse_value(val, EnumConvertType.VEC4, a_xyzw)
match op:
case EnumUnaryOperation.MEAN:
val = [sum(val) / len(val)]
case EnumUnaryOperation.MEDIAN:
val = [sorted(val)[len(val) // 2]]
case EnumUnaryOperation.MODE:
counts = Counter(val)
val = [max(counts, key=counts.get)]
case EnumUnaryOperation.MAGNITUDE:
val = [math.sqrt(sum(x ** 2 for x in val))]
case EnumUnaryOperation.NORMALIZE:
if len(val) == 1:
val = [1]
else:
m = math.sqrt(sum(x ** 2 for x in val))
if m > 0:
val = [v / m for v in val]
else:
val = [0] * len(val)
case EnumUnaryOperation.MAXIMUM:
val = [max(val)]
case EnumUnaryOperation.MINIMUM:
val = [min(val)]
case _:
# Apply unary operation to each item in the list
ret = []
for v in val:
try:
v = OP_UNARY[op](v)
except Exception as e:
logger.error(f"{e} :: {op}")
v = 0
ret.append(v)
val = ret
val = parse_value(val, out, 0)
results.append(val)
pbar.update_absolute(idx)
return (results,)
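Two of the compound operations above, MAGNITUDE and NORMALIZE, reduce or rescale the whole vector rather than mapping each component independently. Sketched standalone (function names are illustrative):

```python
import math

def magnitude(vec: list[float]) -> float:
    # Euclidean length of the vector.
    return math.sqrt(sum(x * x for x in vec))

def normalize(vec: list[float]) -> list[float]:
    # Scale to unit length; an all-zero vector stays zero, matching
    # the node's guard against division by zero.
    m = magnitude(vec)
    return [x / m for x in vec] if m > 0 else [0] * len(vec)
```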
class OPBinaryNode(CozyBaseNode):
NAME = "OP BINARY (JOV) 🌟"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = (COZY_TYPE_ANY,)
RETURN_NAMES = ("❔",)
OUTPUT_IS_LIST = (True,)
OUTPUT_TOOLTIPS = (
"Output type will match the input type",
)
DESCRIPTION = """
Execute binary operations like addition, subtraction, multiplication, division, and bitwise operations on input values, supporting various data types and vector sizes.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
names_convert = EnumConvertType._member_names_[:6]
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IN_A: (COZY_TYPE_FULL, {
"default": None}),
Lexicon.IN_B: (COZY_TYPE_FULL, {
"default": None}),
Lexicon.FUNCTION: (EnumBinaryOperation._member_names_, {
"default": EnumBinaryOperation.ADD.name,}),
Lexicon.TYPE: (names_convert, {
"default": names_convert[2],
"tooltip":"Output type desired from resultant operation"}),
Lexicon.SWAP: ("BOOLEAN", {
"default": False}),
Lexicon.DEFAULT_A: ("VEC4", {
"default": (0,0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max,
"label": ["X", "Y", "Z", "W"]}),
Lexicon.DEFAULT_B: ("VEC4", {
"default": (0,0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max,
"label": ["X", "Y", "Z", "W"]})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> tuple[bool]:
results = []
A = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, None)
B = parse_param(kw, Lexicon.IN_B, EnumConvertType.ANY, None)
op = parse_param(kw, Lexicon.FUNCTION, EnumBinaryOperation, EnumBinaryOperation.ADD.name)
typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.FLOAT.name)
swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False)
a_xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0))
b_xyzw = parse_param(kw, Lexicon.DEFAULT_B, EnumConvertType.VEC4, (0, 0, 0, 0))
params = list(zip_longest_fill(A, B, a_xyzw, b_xyzw, op, typ, swap))
pbar = ProgressBar(len(params))
for idx, (A, B, a_xyzw, b_xyzw, op, typ, swap) in enumerate(params):
if not isinstance(A, (list, tuple,)):
A = [A]
if not isinstance(B, (list, tuple,)):
B = [B]
size = min(3, max(len(A)-1, len(B)-1))
best_type = [EnumConvertType.FLOAT, EnumConvertType.VEC2, EnumConvertType.VEC3, EnumConvertType.VEC4][size]
val_a = parse_value(A, best_type, a_xyzw)
val_a = parse_value(val_a, EnumConvertType.VEC4, a_xyzw)
val_b = parse_value(B, best_type, b_xyzw)
val_b = parse_value(val_b, EnumConvertType.VEC4, b_xyzw)
if swap:
val_a, val_b = val_b, val_a
size = max(1, int(typ.value / 10))
val_a = val_a[:size+1]
val_b = val_b[:size+1]
match op:
# VECTOR
case EnumBinaryOperation.DOT_PRODUCT:
val = [sum(a * b for a, b in zip(val_a, val_b))]
case EnumBinaryOperation.CROSS_PRODUCT:
val = [0, 0, 0]
if len(val_a) < 3 or len(val_b) < 3:
logger.warning("Cross product only defined for 3D vectors")
else:
val = [
val_a[1] * val_b[2] - val_a[2] * val_b[1],
val_a[2] * val_b[0] - val_a[0] * val_b[2],
val_a[0] * val_b[1] - val_a[1] * val_b[0]
]
# ARITHMETIC
case EnumBinaryOperation.ADD:
val = [sum(pair) for pair in zip(val_a, val_b)]
case EnumBinaryOperation.SUBTRACT:
val = [a - b for a, b in zip(val_a, val_b)]
case EnumBinaryOperation.MULTIPLY:
val = [a * b for a, b in zip(val_a, val_b)]
case EnumBinaryOperation.DIVIDE:
val = [a / b if b != 0 else 0 for a, b in zip(val_a, val_b)]
case EnumBinaryOperation.DIVIDE_FLOOR:
val = [a // b if b != 0 else 0 for a, b in zip(val_a, val_b)]
case EnumBinaryOperation.MODULUS:
val = [a % b if b != 0 else 0 for a, b in zip(val_a, val_b)]
case EnumBinaryOperation.POWER:
val = [a ** b if b >= 0 else 0 for a, b in zip(val_a, val_b)]
case EnumBinaryOperation.MAXIMUM:
val = [max(a, val_b[i]) for i, a in enumerate(val_a)]
case EnumBinaryOperation.MINIMUM:
# val = min(val_a, val_b)
val = [min(a, val_b[i]) for i, a in enumerate(val_a)]
# BITS
# case EnumBinaryOperation.BIT_NOT:
case EnumBinaryOperation.BIT_AND:
val = [int(a) & int(b) for a, b in zip(val_a, val_b)]
case EnumBinaryOperation.BIT_NAND:
val = [~(int(a) & int(b)) for a, b in zip(val_a, val_b)]
case EnumBinaryOperation.BIT_OR:
val = [int(a) | int(b) for a, b in zip(val_a, val_b)]
case EnumBinaryOperation.BIT_NOR:
val = [~(int(a) | int(b)) for a, b in zip(val_a, val_b)]
case EnumBinaryOperation.BIT_XOR:
val = [int(a) ^ int(b) for a, b in zip(val_a, val_b)]
case EnumBinaryOperation.BIT_XNOR:
val = [~(int(a) ^ int(b)) for a, b in zip(val_a, val_b)]
case EnumBinaryOperation.BIT_LSHIFT:
val = [int(a) << int(b) if b >= 0 else 0 for a, b in zip(val_a, val_b)]
case EnumBinaryOperation.BIT_RSHIFT:
val = [int(a) >> int(b) if b >= 0 else 0 for a, b in zip(val_a, val_b)]
# GROUP
case EnumBinaryOperation.UNION:
val = list(set(val_a) | set(val_b))
case EnumBinaryOperation.INTERSECTION:
val = list(set(val_a) & set(val_b))
case EnumBinaryOperation.DIFFERENCE:
val = list(set(val_a) - set(val_b))
# WEIRD
case EnumBinaryOperation.BASE:
val = list(set(val_a) - set(val_b))
# cast into correct type....
default = val
if len(val) == 0:
default = [0]
val = parse_value(val, typ, default)
results.append(val)
pbar.update_absolute(idx)
return (results,)
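The vector branches compute the usual dot and cross products component-wise, with the cross product guarded to 3-component inputs just as the node warns. A standalone sketch (`dot` and `cross` are illustrative names):

```python
def dot(a: list[float], b: list[float]) -> list[float]:
    # Dot product collapses both vectors to a single scalar.
    return [sum(x * y for x, y in zip(a, b))]

def cross(a: list[float], b: list[float]) -> list[float]:
    # Cross product is only defined for 3-component vectors; shorter
    # inputs fall back to a zero vector, matching the node's warning path.
    if len(a) < 3 or len(b) < 3:
        return [0, 0, 0]
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
```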
class StringerNode(CozyBaseNode):
NAME = "STRINGER (JOV) 🪀"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = ("STRING", "INT",)
RETURN_NAMES = ("STRING", "COUNT",)
OUTPUT_IS_LIST = (True, False,)
DESCRIPTION = """
Manipulate strings by splitting, joining, finding, replacing, or slicing.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
# split, join, replace, trim/lift
Lexicon.FUNCTION: (EnumConvertString._member_names_, {
"default": EnumConvertString.SPLIT.name}),
Lexicon.KEY: ("STRING", {
"default":"", "dynamicPrompt":False,
"tooltip": "Delimiter (SPLIT/JOIN) or string to use as search string (FIND/REPLACE)."}),
Lexicon.REPLACE: ("STRING", {
"default":"", "dynamicPrompt":False}),
Lexicon.RANGE: ("VEC3", {
"default":(0, -1, 1), "int": True,
"tooltip": "Start, End and Step. Values will clip to the actual list size(s)."}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> tuple[TensorType, ...]:
# gather all dynamic string inputs into a single flat list
data_list = parse_dynamic(kw, Lexicon.STRING, EnumConvertType.ANY, "")
if data_list is None:
logger.warning("no data for list")
return ([], 0)
op = parse_param(kw, Lexicon.FUNCTION, EnumConvertString, EnumConvertString.SPLIT.name)[0]
key = parse_param(kw, Lexicon.KEY, EnumConvertType.STRING, "")[0]
replace = parse_param(kw, Lexicon.REPLACE, EnumConvertType.STRING, "")[0]
stenst = parse_param(kw, Lexicon.RANGE, EnumConvertType.VEC3INT, (0, -1, 1))[0]
results = []
match op:
case EnumConvertString.SPLIT:
results = data_list
if key != "":
results = []
for d in data_list:
d = [key if len(r) == 0 else r for r in d.split(key)]
results.extend(d)
case EnumConvertString.JOIN:
results = [key.join(data_list)]
case EnumConvertString.FIND:
results = [r for r in data_list if r.find(key) > -1]
case EnumConvertString.REPLACE:
results = data_list
if key != "":
results = [r.replace(key, replace) for r in data_list]
case EnumConvertString.SLICE:
start, end, step = stenst
for x in data_list:
# clip per-string so one short entry does not clobber the
# requested range for the strings that follow it
s = len(x) if start < 0 else min(max(0, start), len(x))
e = len(x) if end < 0 else min(max(0, end), len(x))
if step != 0:
results.append(x[s:e:step])
else:
results.append(x)
return (results, len(results),)
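The SLICE branch clips the requested start/end to each string's length, treats negative values as "to the end", and passes the string through unchanged when step is 0. Sketched standalone (`clipped_slice` is a hypothetical name):

```python
def clipped_slice(text: str, start: int, end: int, step: int) -> str:
    # Negative start/end mean "to the end of the string"; in-range values
    # are clipped to the string length; step 0 passes the input through.
    s = len(text) if start < 0 else min(max(0, start), len(text))
    e = len(text) if end < 0 else min(max(0, end), len(text))
    return text[s:e:step] if step != 0 else text
```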
class SwizzleNode(CozyBaseNode):
NAME = "SWIZZLE (JOV) 😵"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = (COZY_TYPE_ANY,)
RETURN_NAMES = ("❔",)
OUTPUT_IS_LIST = (True,)
DESCRIPTION = """
Swap components between two vectors based on specified swizzle patterns and values. It provides flexibility in rearranging vector elements dynamically.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
names_convert = EnumConvertType._member_names_[3:6]
d = deep_merge(d, {
"optional": {
Lexicon.IN_A: (COZY_TYPE_NUMERICAL, {}),
Lexicon.IN_B: (COZY_TYPE_NUMERICAL, {}),
Lexicon.TYPE: (names_convert, {
"default": names_convert[0]}),
Lexicon.SWAP_X: (EnumSwizzle._member_names_, {
"default": EnumSwizzle.A_X.name,}),
Lexicon.SWAP_Y: (EnumSwizzle._member_names_, {
"default": EnumSwizzle.A_Y.name,}),
Lexicon.SWAP_Z: (EnumSwizzle._member_names_, {
"default": EnumSwizzle.A_Z.name,}),
Lexicon.SWAP_W: (EnumSwizzle._member_names_, {
"default": EnumSwizzle.A_W.name,}),
Lexicon.DEFAULT: ("VEC4", {
"default": (0,0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> tuple[float, ...]:
pA = parse_param(kw, Lexicon.IN_A, EnumConvertType.LIST, None)
pB = parse_param(kw, Lexicon.IN_B, EnumConvertType.LIST, None)
typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.VEC2.name)
swap_x = parse_param(kw, Lexicon.SWAP_X, EnumSwizzle, EnumSwizzle.A_X.name)
swap_y = parse_param(kw, Lexicon.SWAP_Y, EnumSwizzle, EnumSwizzle.A_Y.name)
swap_z = parse_param(kw, Lexicon.SWAP_Z, EnumSwizzle, EnumSwizzle.A_Z.name)
swap_w = parse_param(kw, Lexicon.SWAP_W, EnumSwizzle, EnumSwizzle.A_W.name)
default = parse_param(kw, Lexicon.DEFAULT, EnumConvertType.VEC4, (0, 0, 0, 0))
params = list(zip_longest_fill(pA, pB, typ, swap_x, swap_y, swap_z, swap_w, default))
results = []
pbar = ProgressBar(len(params))
for idx, (pA, pB, typ, swap_x, swap_y, swap_z, swap_w, default) in enumerate(params):
default = list(default)
pA = list(pA) if pA is not None else []
pB = list(pB) if pB is not None else []
pA = pA + default[len(pA):]
pB = pB + default[len(pB):]
val = vector_swap(pA, pB, swap_x, swap_y, swap_z, swap_w, default)
val = parse_value(val, typ, val)
results.append(val)
pbar.update_absolute(idx)
return (results,)
================================================
FILE: core/color.py
================================================
""" Jovimetrix - Color """
from enum import Enum
import cv2
import torch
from comfy.utils import ProgressBar
from cozy_comfyui import \
IMAGE_SIZE_MIN, \
InputType, RGBAMaskType, EnumConvertType, TensorType, \
deep_merge, parse_param, zip_longest_fill
from cozy_comfyui.lexicon import \
Lexicon
from cozy_comfyui.node import \
COZY_TYPE_IMAGE, \
CozyBaseNode, CozyImageNode
from cozy_comfyui.image.adjust import \
image_invert
from cozy_comfyui.image.color import \
EnumCBDeficiency, EnumCBSimulator, EnumColorMap, EnumColorTheory, \
color_lut_full, color_lut_match, color_lut_palette, \
color_lut_tonal, color_lut_visualize, color_match_reinhard, \
color_theory, color_blind, color_top_used, image_gradient_expand, \
image_gradient_map
from cozy_comfyui.image.channel import \
channel_solid
from cozy_comfyui.image.compose import \
EnumScaleMode, EnumInterpolation, \
image_scalefit
from cozy_comfyui.image.convert import \
tensor_to_cv, cv_to_tensor, cv_to_tensor_full, image_mask, image_mask_add
from cozy_comfyui.image.misc import \
image_stack
# ==============================================================================
# === GLOBAL ===
# ==============================================================================
JOV_CATEGORY = "COLOR"
# ==============================================================================
# === ENUMERATION ===
# ==============================================================================
class EnumColorMatchMode(Enum):
REINHARD = 30
LUT = 10
# HISTOGRAM = 20
class EnumColorMatchMap(Enum):
USER_MAP = 0
PRESET_MAP = 10
# ==============================================================================
# === CLASS ===
# ==============================================================================
class ColorBlindNode(CozyImageNode):
NAME = "COLOR BLIND (JOV) 👁🗨"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Simulate color blindness effects on images. You can select various types of color deficiencies, adjust the severity of the effect, and apply the simulation using different simulators. This node is ideal for accessibility testing and design adjustments, ensuring inclusivity in your visual content.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.DEFICIENCY: (EnumCBDeficiency._member_names_, {
"default": EnumCBDeficiency.PROTAN.name,}),
Lexicon.SOLVER: (EnumCBSimulator._member_names_, {
"default": EnumCBSimulator.AUTOSELECT.name,})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
deficiency = parse_param(kw, Lexicon.DEFICIENCY, EnumCBDeficiency, EnumCBDeficiency.PROTAN.name)
simulator = parse_param(kw, Lexicon.SOLVER, EnumCBSimulator, EnumCBSimulator.AUTOSELECT.name)
severity = parse_param(kw, Lexicon.VALUE, EnumConvertType.FLOAT, 1)
params = list(zip_longest_fill(pA, deficiency, simulator, severity))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, deficiency, simulator, severity) in enumerate(params):
pA = channel_solid() if pA is None else tensor_to_cv(pA)
pA = color_blind(pA, deficiency, simulator, severity)
images.append(cv_to_tensor_full(pA))
pbar.update_absolute(idx)
return image_stack(images)
class ColorMatchNode(CozyImageNode):
NAME = "COLOR MATCH (JOV) 💞"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Adjust the color scheme of one image to match another with the Color Match Node. Choose from various color matching LUTs or Reinhard matching. You can specify a custom user color map, the number of colors, and whether to flip or invert the images.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE_SOURCE: (COZY_TYPE_IMAGE, {}),
Lexicon.IMAGE_TARGET: (COZY_TYPE_IMAGE, {}),
Lexicon.MODE: (EnumColorMatchMode._member_names_, {
"default": EnumColorMatchMode.REINHARD.name,
"tooltip": "Match colors from an image or built-in (LUT), Histogram lookups or Reinhard method"}),
Lexicon.MAP: (EnumColorMatchMap._member_names_, {
"default": EnumColorMatchMap.USER_MAP.name, }),
Lexicon.COLORMAP: (EnumColorMap._member_names_, {
"default": EnumColorMap.HSV.name,}),
Lexicon.VALUE: ("INT", {
"default": 255, "min": 0, "max": 255,
"tooltip":"The number of colors to use from the LUT during the remap. Will quantize the LUT range."}),
Lexicon.SWAP: ("BOOLEAN", {
"default": False,}),
Lexicon.INVERT: ("BOOLEAN", {
"default": False,}),
Lexicon.MATTE: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE_SOURCE, EnumConvertType.IMAGE, None)
pB = parse_param(kw, Lexicon.IMAGE_TARGET, EnumConvertType.IMAGE, None)
mode = parse_param(kw, Lexicon.MODE, EnumColorMatchMode, EnumColorMatchMode.REINHARD.name)
cmap = parse_param(kw, Lexicon.MAP, EnumColorMatchMap, EnumColorMatchMap.USER_MAP.name)
colormap = parse_param(kw, Lexicon.COLORMAP, EnumColorMap, EnumColorMap.HSV.name)
num_colors = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 255)
swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False)
invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4, (0, 0, 0, 255), 0, 255)
params = list(zip_longest_fill(pA, pB, mode, cmap, colormap, num_colors, swap, invert, matte))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, pB, mode, cmap, colormap, num_colors, swap, invert, matte) in enumerate(params):
if swap:
pA, pB = pB, pA
mask = None
if pA is None:
pA = channel_solid()
else:
pA = tensor_to_cv(pA)
if pA.ndim == 3 and pA.shape[2] == 4:
mask = image_mask(pA)
# h, w = pA.shape[:2]
if pB is None:
pB = channel_solid()
else:
pB = tensor_to_cv(pB)
match mode:
case EnumColorMatchMode.LUT:
if cmap == EnumColorMatchMap.PRESET_MAP:
pB = None
pA = color_lut_match(pA, colormap.value, pB, num_colors)
case EnumColorMatchMode.REINHARD:
pA = color_match_reinhard(pA, pB)
if invert:
pA = image_invert(pA, 1)
if mask is not None:
pA = image_mask_add(pA, mask)
images.append(cv_to_tensor_full(pA, matte))
pbar.update_absolute(idx)
return image_stack(images)
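The REINHARD mode above is statistical color transfer: shift each channel of the source so its mean and standard deviation match the target's. A minimal standalone sketch of the idea (`reinhard_transfer` is an illustrative name, not the library's `color_match_reinhard`; real implementations usually work in a perceptual space such as LAB, plain RGB is used here for brevity):

```python
import numpy as np

def reinhard_transfer(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Match per-channel mean and std of `source` to `target` (uint8 images)."""
    src = source.astype(np.float64)
    tgt = target.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s_mean, s_std = src[..., c].mean(), src[..., c].std()
        t_mean, t_std = tgt[..., c].mean(), tgt[..., c].std()
        # avoid divide-by-zero on flat channels
        scale = t_std / s_std if s_std > 1e-8 else 1.0
        out[..., c] = (src[..., c] - s_mean) * scale + t_mean
    return np.clip(out, 0, 255).astype(np.uint8)

src = np.full((4, 4, 3), 100, dtype=np.uint8)
tgt = np.full((4, 4, 3), 200, dtype=np.uint8)
matched = reinhard_transfer(src, tgt)
```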
class ColorKMeansNode(CozyBaseNode):
NAME = "COLOR MEANS (JOV) 〰️"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = ("IMAGE", "IMAGE", "IMAGE", "JLUT", "IMAGE",)
RETURN_NAMES = ("IMAGE", "PALETTE", "GRADIENT", "LUT", "RGB", )
OUTPUT_TOOLTIPS = (
"Sequence of top-K colors. Count depends on value in `VAL`.",
"Simple Tone palette based on result top-K colors. Width is taken from input.",
"Gradient of top-K colors.",
"Full 3D LUT of the image mapped to the resultant top-K colors chosen.",
"Visualization of full 3D .cube LUT in JLUT output"
)
DESCRIPTION = """
The top-K colors, ordered from most to least used, as a strip, tonal palette, and full 3D LUT.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.VALUE: ("INT", {
"default": 12, "min": 1, "max": 255,
"tooltip": "The top K colors to select"}),
Lexicon.SIZE: ("INT", {
"default": 32, "min": 1, "max": 256,
"tooltip": "Height of the tones in the strip. Width is based on input"}),
Lexicon.COUNT: ("INT", {
"default": 33, "min": 1, "max": 255,
"tooltip": "Number of nodes to use in interpolation of full LUT (256 is every pixel)"}),
Lexicon.WH: ("VEC2", {
"default": (256, 256), "mij":IMAGE_SIZE_MIN, "int": True,
"label": ["W", "H"]
}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
kcolors = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 12, 1, 255)
lut_height = parse_param(kw, Lexicon.SIZE, EnumConvertType.INT, 32, 1, 256)
nodes = parse_param(kw, Lexicon.COUNT, EnumConvertType.INT, 33, 1, 255)
wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (256, 256), IMAGE_SIZE_MIN)
params = list(zip_longest_fill(pA, kcolors, nodes, lut_height, wihi))
top_colors = []
lut_tonal = []
lut_full = []
lut_visualized = []
gradients = []
pbar = ProgressBar(len(params))
for idx, (pA, kcolors, nodes, lut_height, wihi) in enumerate(params):
if pA is None:
pA = channel_solid()
pA = tensor_to_cv(pA)
colors = color_top_used(pA, kcolors)
# size down to 1px strip then expand to 256 for full gradient
top_colors.extend([cv_to_tensor(channel_solid(*wihi, color=c)) for c in colors])
lut = color_lut_tonal(colors, width=pA.shape[1], height=lut_height)
lut_tonal.append(cv_to_tensor(lut))
full = color_lut_full(colors, nodes)
lut_full.append(torch.from_numpy(full))
lut = color_lut_visualize(full, wihi[1])
lut_visualized.append(cv_to_tensor(lut))
palette = color_lut_palette(colors, 1)
gradient = image_gradient_expand(palette)
gradient = cv2.resize(gradient, wihi)
gradients.append(cv_to_tensor(gradient))
pbar.update_absolute(idx)
return torch.stack(top_colors), torch.stack(lut_tonal), torch.stack(gradients), lut_full, torch.stack(lut_visualized),
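Top-K color extraction as done by `color_top_used` is typically a k-means clustering of the pixels, with clusters sorted by population. A naive self-contained sketch (not the node's actual implementation; assumes `k` does not exceed the number of distinct colors, and initializes from the unique colors for determinism):

```python
import numpy as np

def top_k_colors(image: np.ndarray, k: int, iters: int = 10) -> np.ndarray:
    """Cluster pixels with naive k-means; return centers sorted most-used first."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    # deterministic init: first k distinct colors (assumes k <= distinct count)
    centers = np.unique(pixels, axis=0)[:k]
    for _ in range(iters):
        # assign each pixel to its nearest center
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for i in range(k):
            members = pixels[labels == i]
            if len(members):
                centers[i] = members.mean(axis=0)
    counts = np.bincount(labels, minlength=k)
    order = counts.argsort()[::-1]          # most-used cluster first
    return centers[order].astype(np.uint8)

img = np.zeros((2, 4, 3), dtype=np.uint8)
img[:, :3] = (255, 0, 0)   # six red pixels
img[:, 3:] = (0, 0, 255)   # two blue pixels
top = top_k_colors(img, 2)
```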
class ColorTheoryNode(CozyBaseNode):
NAME = "COLOR THEORY (JOV) 🛞"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = ("IMAGE", "IMAGE", "IMAGE", "IMAGE", "IMAGE")
RETURN_NAMES = ("C1", "C2", "C3", "C4", "C5")
DESCRIPTION = """
Generate a color harmony based on the selected scheme.
Supported schemes include complementary, analogous, triadic, tetradic, and more.
Users can customize the angle of separation for color calculations, offering flexibility in color manipulation and exploration of different color palettes.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.SCHEME: (EnumColorTheory._member_names_, {
"default": EnumColorTheory.COMPLIMENTARY.name}),
Lexicon.VALUE: ("INT", {
"default": 45, "min": -90, "max": 90,
"tooltip": "Custom angle of separation to use when calculating colors"}),
Lexicon.INVERT: ("BOOLEAN", {
"default": False})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> tuple[list[TensorType], list[TensorType]]:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
scheme = parse_param(kw, Lexicon.SCHEME, EnumColorTheory, EnumColorTheory.COMPLIMENTARY.name)
value = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 45, -90, 90)
invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
params = list(zip_longest_fill(pA, scheme, value, invert))
images = []
pbar = ProgressBar(len(params))
for idx, (img, scheme, value, invert) in enumerate(params):
img = channel_solid() if img is None else tensor_to_cv(img)
img = color_theory(img, value, scheme)
if invert:
img = [image_invert(s, 1) for s in img]
images.append([cv_to_tensor(a) for a in img])
pbar.update_absolute(idx)
return image_stack(images)
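Color harmonies are rotations of a color's hue around the HSV wheel: a complement sits 180 degrees away, analogous colors at small offsets like plus or minus 45 degrees. A minimal sketch with the standard library (`rotate_hue` is an illustrative helper, not part of the node's API):

```python
import colorsys

def rotate_hue(rgb: tuple, degrees: float) -> tuple:
    """Rotate a color's hue on the HSV wheel; complementary = 180 degrees."""
    h, s, v = colorsys.rgb_to_hsv(*(c / 255 for c in rgb))
    h = (h + degrees / 360) % 1.0
    return tuple(round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v))

red = (255, 0, 0)
complement = rotate_hue(red, 180)                     # cyan
analogous = (rotate_hue(red, -45), rotate_hue(red, 45))
```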
class GradientMapNode(CozyImageNode):
NAME = "GRADIENT MAP (JOV) 🇲🇺"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Remaps an input image using a gradient lookup table (LUT).
The gradient image will be translated into a single row lookup table.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {
"tooltip": "Image to remap with gradient input"}),
Lexicon.GRADIENT: (COZY_TYPE_IMAGE, {
"tooltip": "Look up table (LUT) to remap the input image in `IMAGE`"}),
Lexicon.REVERSE: ("BOOLEAN", {
"default": False,
"tooltip": "Reverse the gradient from left-to-right"}),
Lexicon.MODE: (EnumScaleMode._member_names_, {
"default": EnumScaleMode.MATTE.name,}),
Lexicon.WH: ("VEC2", {
"default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True,
"label": ["W", "H"] }),
Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
"default": EnumInterpolation.LANCZOS4.name,}),
Lexicon.MATTE: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
gradient = parse_param(kw, Lexicon.GRADIENT, EnumConvertType.IMAGE, None)
reverse = parse_param(kw, Lexicon.REVERSE, EnumConvertType.BOOLEAN, False)
mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)
wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)
sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)
matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
images = []
params = list(zip_longest_fill(pA, gradient, reverse, mode, sample, wihi, matte))
pbar = ProgressBar(len(params))
for idx, (pA, gradient, reverse, mode, sample, wihi, matte) in enumerate(params):
pA = channel_solid() if pA is None else tensor_to_cv(pA)
mask = None
if pA.ndim == 3 and pA.shape[2] == 4:
mask = image_mask(pA)
gradient = channel_solid() if gradient is None else tensor_to_cv(gradient)
pA = image_gradient_map(pA, gradient)
if mode != EnumScaleMode.MATTE:
w, h = wihi
pA = image_scalefit(pA, w, h, mode, sample)
if mask is not None:
pA = image_mask_add(pA, mask)
images.append(cv_to_tensor_full(pA, matte))
pbar.update_absolute(idx)
return image_stack(images)
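A gradient map reduces the gradient image to a single-row 256-entry lookup table, then indexes it by the input's luminance. The core operation is one fancy-indexing step (a sketch of the idea, not the node's `image_gradient_map`):

```python
import numpy as np

def gradient_map(image: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Remap an image through a (256, 3) RGB lookup table by luminance."""
    # luminance of the input selects a column of the 1x256 gradient strip
    gray = image.mean(axis=2).astype(np.uint8) if image.ndim == 3 else image
    return np.asarray(lut, dtype=np.uint8)[gray]

lut = np.zeros((256, 3), dtype=np.uint8)
lut[:, 0] = np.arange(256)                 # black -> red ramp
img = np.full((2, 2), 200, dtype=np.uint8)
mapped = gradient_map(img, lut)
```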
================================================
FILE: core/compose.py
================================================
""" Jovimetrix - Composition """
import numpy as np
from comfy.utils import ProgressBar
from cozy_comfyui import \
IMAGE_SIZE_MIN, \
InputType, RGBAMaskType, EnumConvertType, \
deep_merge, parse_param, zip_longest_fill
from cozy_comfyui.lexicon import \
Lexicon
from cozy_comfyui.node import \
COZY_TYPE_IMAGE, \
CozyBaseNode, CozyImageNode
from cozy_comfyui.image import \
EnumImageType
from cozy_comfyui.image.adjust import \
EnumThreshold, EnumThresholdAdapt, \
image_histogram2, image_invert, image_filter, image_threshold
from cozy_comfyui.image.channel import \
EnumPixelSwizzle, \
channel_merge, channel_solid, channel_swap
from cozy_comfyui.image.compose import \
EnumBlendType, EnumScaleMode, EnumScaleInputMode, EnumInterpolation, \
image_resize, \
image_scalefit, image_split, image_blend, image_matte
from cozy_comfyui.image.convert import \
image_mask, image_convert, tensor_to_cv, cv_to_tensor, cv_to_tensor_full
from cozy_comfyui.image.misc import \
image_by_size, image_minmax, image_stack
# ==============================================================================
# === GLOBAL ===
# ==============================================================================
JOV_CATEGORY = "COMPOSE"
# ==============================================================================
# === CLASS ===
# ==============================================================================
class BlendNode(CozyImageNode):
NAME = "BLEND (JOV) ⚗️"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Combine two input images using various blending modes, such as normal, screen, multiply, overlay, etc. It also supports alpha blending and masking to achieve complex compositing effects. This node is essential for creating layered compositions and adding visual richness to images.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE_BACK: (COZY_TYPE_IMAGE, {}),
Lexicon.IMAGE_FORE: (COZY_TYPE_IMAGE, {}),
Lexicon.MASK: (COZY_TYPE_IMAGE, {
"tooltip": "Optional Mask for Alpha Blending. If empty, it will use the ALPHA of the FOREGROUND"}),
Lexicon.FUNCTION: (EnumBlendType._member_names_, {
"default": EnumBlendType.NORMAL.name,}),
Lexicon.ALPHA: ("FLOAT", {
"default": 1, "min": 0, "max": 1, "step": 0.01,}),
Lexicon.SWAP: ("BOOLEAN", {
"default": False}),
Lexicon.INVERT: ("BOOLEAN", {
"default": False, "tooltip": "Invert the mask input"}),
Lexicon.MODE: (EnumScaleMode._member_names_, {
"default": EnumScaleMode.MATTE.name,}),
Lexicon.WH: ("VEC2", {
"default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True,
"label": ["W", "H"]}),
Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
"default": EnumInterpolation.LANCZOS4.name,}),
Lexicon.MATTE: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,}),
Lexicon.INPUT: (EnumScaleInputMode._member_names_, {
"default": EnumScaleInputMode.NONE.name,}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
back = parse_param(kw, Lexicon.IMAGE_BACK, EnumConvertType.IMAGE, None)
fore = parse_param(kw, Lexicon.IMAGE_FORE, EnumConvertType.IMAGE, None)
mask = parse_param(kw, Lexicon.MASK, EnumConvertType.MASK, None)
func = parse_param(kw, Lexicon.FUNCTION, EnumBlendType, EnumBlendType.NORMAL.name)
alpha = parse_param(kw, Lexicon.ALPHA, EnumConvertType.FLOAT, 1)
swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False)
invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)
wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)
sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)
matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
inputMode = parse_param(kw, Lexicon.INPUT, EnumScaleInputMode, EnumScaleInputMode.NONE.name)
params = list(zip_longest_fill(back, fore, mask, func, alpha, swap, invert, mode, wihi, sample, matte, inputMode))
images = []
pbar = ProgressBar(len(params))
for idx, (back, fore, mask, func, alpha, swap, invert, mode, wihi, sample, matte, inputMode) in enumerate(params):
if swap:
back, fore = fore, back
width, height = IMAGE_SIZE_MIN, IMAGE_SIZE_MIN
if back is None:
if fore is None:
if mask is None:
if mode != EnumScaleMode.MATTE:
width, height = wihi
else:
height, width = mask.shape[:2]
else:
height, width = fore.shape[:2]
else:
height, width = back.shape[:2]
if back is None:
back = channel_solid(width, height, matte)
else:
back = tensor_to_cv(back)
#matted = pixel_eval(matte)
#back = image_matte(back, matted)
if fore is None:
clear = list(matte[:3]) + [0]
fore = channel_solid(width, height, clear)
else:
fore = tensor_to_cv(fore)
if mask is None:
mask = image_mask(fore, 255)
else:
mask = tensor_to_cv(mask, 1)
if invert:
mask = 255 - mask
if inputMode != EnumScaleInputMode.NONE:
# get the min/max of back, fore; and mask?
imgs = [back, fore]
_, w, h = image_by_size(imgs)
back = image_scalefit(back, w, h, inputMode, sample, matte)
fore = image_scalefit(fore, w, h, inputMode, sample, matte)
mask = image_scalefit(mask, w, h, inputMode, sample)
back = image_scalefit(back, w, h, EnumScaleMode.RESIZE_MATTE, sample, matte)
fore = image_scalefit(fore, w, h, EnumScaleMode.RESIZE_MATTE, sample, (0,0,0,255))
mask = image_scalefit(mask, w, h, EnumScaleMode.RESIZE_MATTE, sample, (255,255,255,255))
img = image_blend(back, fore, mask, func, alpha)
mask = image_mask(img)
if mode != EnumScaleMode.MATTE:
width, height = wihi
img = image_scalefit(img, width, height, mode, sample, matte)
img = cv_to_tensor_full(img, matte)
#img = [cv_to_tensor(back), cv_to_tensor(fore), cv_to_tensor(mask, True)]
images.append(img)
pbar.update_absolute(idx)
return image_stack(images)
class FilterMaskNode(CozyImageNode):
NAME = "FILTER MASK (JOV) 🤿"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Create masks based on specific color ranges within an image. Specify the color range using start and end values and an optional fuzziness factor to adjust the range. This node allows for precise color-based mask creation, ideal for tasks like object isolation, background removal, or targeted color adjustments.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.START: ("VEC3", {
"default": (128, 128, 128), "rgb": True}),
Lexicon.RANGE: ("BOOLEAN", {
"default": False,
"tooltip": "Use an end point (start->end) when calculating the filter range"}),
Lexicon.END: ("VEC3", {
"default": (128, 128, 128), "rgb": True}),
Lexicon.FUZZ: ("VEC3", {
"default": (0.5,0.5,0.5), "mij":0, "maj":1,}),
Lexicon.MATTE: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
start = parse_param(kw, Lexicon.START, EnumConvertType.VEC3INT, (128,128,128), 0, 255)
use_range = parse_param(kw, Lexicon.RANGE, EnumConvertType.BOOLEAN, False)
end = parse_param(kw, Lexicon.END, EnumConvertType.VEC3INT, (128,128,128), 0, 255)
fuzz = parse_param(kw, Lexicon.FUZZ, EnumConvertType.VEC3, (0.5,0.5,0.5), 0, 1)
matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
params = list(zip_longest_fill(pA, start, use_range, end, fuzz, matte))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, start, use_range, end, fuzz, matte) in enumerate(params):
img = np.zeros((IMAGE_SIZE_MIN, IMAGE_SIZE_MIN, 3), dtype=np.uint8) if pA is None else tensor_to_cv(pA)
img, mask = image_filter(img, start, end, fuzz, use_range)
if img.shape[2] == 3:
alpha_channel = np.zeros((img.shape[0], img.shape[1], 1), dtype=img.dtype)
img = np.concatenate((img, alpha_channel), axis=2)
img[..., 3] = mask[:,:]
images.append(cv_to_tensor_full(img, matte))
pbar.update_absolute(idx)
return image_stack(images)
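The filter is a per-channel range test: a pixel passes if every channel lies between the start and end colors, widened by the fuzz factor. A self-contained sketch of the mask computation (illustrative only; the node's `image_filter` also returns the filtered image):

```python
import numpy as np

def color_range_mask(image: np.ndarray, start, end,
                     fuzz=(0.0, 0.0, 0.0)) -> np.ndarray:
    """255 where every channel lies within [start - fuzz*255, end + fuzz*255]."""
    lo = np.array(start, dtype=np.float64) - np.array(fuzz) * 255
    hi = np.array(end, dtype=np.float64) + np.array(fuzz) * 255
    inside = (image >= lo) & (image <= hi)
    return (inside.all(axis=2) * 255).astype(np.uint8)

img = np.array([[[128, 128, 128], [0, 0, 0]]], dtype=np.uint8)
m = color_range_mask(img, (128, 128, 128), (128, 128, 128), (0.1, 0.1, 0.1))
```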
class HistogramNode(CozyImageNode):
NAME = "HISTOGRAM (JOV)"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
The Histogram Node generates a histogram representation of the input image, showing the distribution of pixel intensity values across different bins. This visualization is useful for understanding the overall brightness and contrast characteristics of an image. Additionally, the node performs histogram normalization, which adjusts the pixel values to enhance the contrast of the image. Histogram normalization can be helpful for improving the visual quality of images or preparing them for further image processing tasks.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {
"tooltip": "Pixel Data (RGBA, RGB or Grayscale)"}),
Lexicon.WH: ("VEC2", {
"default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True,
"label": ["W", "H"]}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)
params = list(zip_longest_fill(pA, wihi))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, wihi) in enumerate(params):
pA = tensor_to_cv(pA) if pA is not None else channel_solid()
hist_img = image_histogram2(pA, bins=256)
width, height = wihi
hist_img = image_resize(hist_img, width, height, EnumInterpolation.NEAREST)
images.append(cv_to_tensor_full(hist_img))
pbar.update_absolute(idx)
return image_stack(images)
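The histogram itself is a bin count over pixel intensities, normalized so the tallest bin fills the plot. A sketch of the counting step the node visualizes (not the actual `image_histogram2` rendering code):

```python
import numpy as np

def luminance_histogram(image: np.ndarray, bins: int = 256) -> np.ndarray:
    """Per-intensity pixel counts, normalized so the peak bin equals 1.0."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    counts, _ = np.histogram(gray, bins=bins, range=(0, 256))
    return counts / max(counts.max(), 1)

img = np.full((4, 4), 10, dtype=np.uint8)   # every pixel has intensity 10
hist = luminance_histogram(img)
```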
class PixelMergeNode(CozyImageNode):
NAME = "PIXEL MERGE (JOV) 🫂"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Combines individual color channels (red, green, blue) along with an optional mask channel to create a composite image.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.CHAN_RED: (COZY_TYPE_IMAGE, {}),
Lexicon.CHAN_GREEN: (COZY_TYPE_IMAGE, {}),
Lexicon.CHAN_BLUE: (COZY_TYPE_IMAGE, {}),
Lexicon.CHAN_ALPHA: (COZY_TYPE_IMAGE, {}),
Lexicon.MATTE: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,}),
Lexicon.FLIP: ("VEC4", {
"default": (0,0,0,0), "mij":0, "maj":1, "step": 0.01,
"tooltip": "Invert specific input prior to merging. R, G, B, A."}),
Lexicon.INVERT: ("BOOLEAN", {
"default": False,})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
rgba = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
R = parse_param(kw, Lexicon.CHAN_RED, EnumConvertType.MASK, None)
G = parse_param(kw, Lexicon.CHAN_GREEN, EnumConvertType.MASK, None)
B = parse_param(kw, Lexicon.CHAN_BLUE, EnumConvertType.MASK, None)
A = parse_param(kw, Lexicon.CHAN_ALPHA, EnumConvertType.MASK, None)
matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
flip = parse_param(kw, Lexicon.FLIP, EnumConvertType.VEC4, (0, 0, 0, 0), 0, 1)
invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
params = list(zip_longest_fill(rgba, R, G, B, A, matte, flip, invert))
images = []
pbar = ProgressBar(len(params))
for idx, (rgba, r, g, b, a, matte, flip, invert) in enumerate(params):
replace = r, g, b, a
if rgba is not None:
rgba = image_split(tensor_to_cv(rgba, chan=4))
img = [tensor_to_cv(replace[i]) if replace[i] is not None else x for i, x in enumerate(rgba)]
else:
img = [tensor_to_cv(x) if x is not None else x for x in replace]
_, _, w_max, h_max = image_minmax(img)
for i, x in enumerate(img):
if x is None:
x = np.full((h_max, w_max, 1), matte[i], dtype=np.uint8)
else:
x = image_convert(x, 1)
x = image_scalefit(x, w_max, h_max, EnumScaleMode.ASPECT)
if flip[i] != 0:
x = image_invert(x, flip[i])
img[i] = x
img = channel_merge(img)
#if invert == True:
# img = image_invert(img, 1)
images.append(cv_to_tensor_full(img, matte))
pbar.update_absolute(idx)
return image_stack(images)
class PixelSplitNode(CozyBaseNode):
NAME = "PIXEL SPLIT (JOV) 💔"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = ("MASK", "MASK", "MASK", "MASK", "IMAGE")
RETURN_NAMES = ("❤️", "💚", "💙", "🤍", "RGB")
OUTPUT_TOOLTIPS = (
"Single channel output of Red Channel.",
"Single channel output of Green Channel",
"Single channel output of Blue Channel",
"Single channel output of Alpha Channel",
"RGB pack of the input",
)
DESCRIPTION = """
Split an input into individual color channels (red, green, blue, alpha).
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
images = []
pbar = ProgressBar(len(pA))
for idx, pA in enumerate(pA):
pA = channel_solid(chan=EnumImageType.RGBA) if pA is None else tensor_to_cv(pA, chan=4)
out = [cv_to_tensor(x, True) for x in image_split(pA)] + [cv_to_tensor(image_convert(pA, 3))]
images.append(out)
pbar.update_absolute(idx)
return image_stack(images)
class PixelSwapNode(CozyImageNode):
NAME = "PIXEL SWAP (JOV) 🔃"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Swap pixel values between two input images based on specified channel swizzle operations. Options include pixel inputs, swap operations for red, green, blue, and alpha channels, and constant values for each channel. The swap operations allow for flexible pixel manipulation by determining the source of each channel in the output image, whether it be from the first image, the second image, or a constant value.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE_SOURCE: (COZY_TYPE_IMAGE, {}),
Lexicon.IMAGE_TARGET: (COZY_TYPE_IMAGE, {}),
Lexicon.SWAP_R: (EnumPixelSwizzle._member_names_, {
"default": EnumPixelSwizzle.RED_A.name,}),
Lexicon.SWAP_G: (EnumPixelSwizzle._member_names_, {
"default": EnumPixelSwizzle.GREEN_A.name,}),
Lexicon.SWAP_B: (EnumPixelSwizzle._member_names_, {
"default": EnumPixelSwizzle.BLUE_A.name,}),
Lexicon.SWAP_A: (EnumPixelSwizzle._member_names_, {
"default": EnumPixelSwizzle.ALPHA_A.name,}),
Lexicon.MATTE: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE_SOURCE, EnumConvertType.IMAGE, None)
pB = parse_param(kw, Lexicon.IMAGE_TARGET, EnumConvertType.IMAGE, None)
swap_r = parse_param(kw, Lexicon.SWAP_R, EnumPixelSwizzle, EnumPixelSwizzle.RED_A.name)
swap_g = parse_param(kw, Lexicon.SWAP_G, EnumPixelSwizzle, EnumPixelSwizzle.GREEN_A.name)
swap_b = parse_param(kw, Lexicon.SWAP_B, EnumPixelSwizzle, EnumPixelSwizzle.BLUE_A.name)
swap_a = parse_param(kw, Lexicon.SWAP_A, EnumPixelSwizzle, EnumPixelSwizzle.ALPHA_A.name)
matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
params = list(zip_longest_fill(pA, pB, swap_r, swap_g, swap_b, swap_a, matte))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, pB, swap_r, swap_g, swap_b, swap_a, matte) in enumerate(params):
if pA is None:
if pB is None:
out = channel_solid()
images.append(cv_to_tensor_full(out))
pbar.update_absolute(idx)
continue
h, w = pB.shape[:2]
pA = channel_solid(w, h)
else:
h, w = pA.shape[:2]
pA = tensor_to_cv(pA)
pA = image_convert(pA, 4)
pB = tensor_to_cv(pB) if pB is not None else channel_solid(w, h)
pB = image_convert(pB, 4)
pB = image_matte(pB, (0,0,0,0), w, h)
pB = image_scalefit(pB, w, h, EnumScaleMode.CROP)
out = channel_swap(pA, pB, (swap_r, swap_g, swap_b, swap_a), matte)
images.append(cv_to_tensor_full(out))
pbar.update_absolute(idx)
return image_stack(images)
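A swizzle builds each output channel from a chosen source channel of either input image. A compact sketch of the idea behind `channel_swap`, using a string spec instead of the node's per-channel enums (lowercase picks from image A, uppercase from image B; `swizzle` is an illustrative name):

```python
import numpy as np

def swizzle(a: np.ndarray, b: np.ndarray, spec: str = "rgba") -> np.ndarray:
    """Assemble an RGBA image, routing each output channel per `spec`."""
    chan = {"r": 0, "g": 1, "b": 2, "a": 3}
    out = np.empty_like(a)
    for i, s in enumerate(spec):
        src = b if s.isupper() else a      # uppercase = take from image B
        out[..., i] = src[..., chan[s.lower()]]
    return out

a = np.zeros((1, 1, 4), dtype=np.uint8); a[0, 0] = (10, 20, 30, 40)
b = np.zeros((1, 1, 4), dtype=np.uint8); b[0, 0] = (50, 60, 70, 80)
out = swizzle(a, b, "rgbA")   # RGB from A, alpha from B
```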
class ThresholdNode(CozyImageNode):
NAME = "THRESHOLD (JOV) 📉"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Define a range and apply it to an image for segmentation and feature extraction. Choose from various threshold modes, such as binary and adaptive, and adjust the threshold value and block size to suit your needs. You can also invert the resulting mask if necessary. This node is versatile for a variety of image processing tasks.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.ADAPT: ( EnumThresholdAdapt._member_names_, {
"default": EnumThresholdAdapt.ADAPT_NONE.name,}),
Lexicon.FUNCTION: ( EnumThreshold._member_names_, {
"default": EnumThreshold.BINARY.name}),
Lexicon.THRESHOLD: ("FLOAT", {
"default": 0.5, "min": 0, "max": 1, "step": 0.005}),
Lexicon.SIZE: ("INT", {
"default": 3, "min": 3, "max": 103}),
Lexicon.INVERT: ("BOOLEAN", {
"default": False,
"tooltip": "Invert the mask input"})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
mode = parse_param(kw, Lexicon.FUNCTION, EnumThreshold, EnumThreshold.BINARY.name)
adapt = parse_param(kw, Lexicon.ADAPT, EnumThresholdAdapt, EnumThresholdAdapt.ADAPT_NONE.name)
threshold = parse_param(kw, Lexicon.THRESHOLD, EnumConvertType.FLOAT, 0.5, 0, 1)
block = parse_param(kw, Lexicon.SIZE, EnumConvertType.INT, 3, 3, 103)
invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
params = list(zip_longest_fill(pA, mode, adapt, threshold, block, invert))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, mode, adapt, th, block, invert) in enumerate(params):
pA = tensor_to_cv(pA) if pA is not None else channel_solid()
pA = image_threshold(pA, th, mode, adapt, block)
if invert:
pA = image_invert(pA, 1)
images.append(cv_to_tensor_full(pA))
pbar.update_absolute(idx)
return image_stack(images)
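Binary thresholding, the default mode above, maps every pixel at or above the cutoff to white and the rest to black; the FLOAT threshold in [0, 1] scales to the 0-255 intensity range. A minimal sketch (the node's `image_threshold` additionally supports adaptive modes with a block size):

```python
import numpy as np

def binary_threshold(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Pixels at or above threshold * 255 become 255, the rest become 0."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    return np.where(gray >= threshold * 255, 255, 0).astype(np.uint8)

img = np.array([[100, 200]], dtype=np.uint8)
out = binary_threshold(img, 0.5)   # cutoff at 127.5
```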
================================================
FILE: core/create.py
================================================
""" Jovimetrix - Creation """
import numpy as np
from PIL import ImageFont
from skimage.filters import gaussian
from comfy.utils import ProgressBar
from cozy_comfyui import \
IMAGE_SIZE_MIN, \
InputType, EnumConvertType, RGBAMaskType, \
deep_merge, parse_param, zip_longest_fill
from cozy_comfyui.lexicon import \
Lexicon
from cozy_comfyui.node import \
COZY_TYPE_IMAGE, \
CozyImageNode
from cozy_comfyui.image import \
EnumImageType
from cozy_comfyui.image.adjust import \
image_invert
from cozy_comfyui.image.channel import \
channel_solid
from cozy_comfyui.image.compose import \
EnumEdge, EnumScaleMode, EnumInterpolation, \
image_rotate, image_scalefit, image_transform, image_translate, image_blend
from cozy_comfyui.image.convert import \
image_convert, pil_to_cv, cv_to_tensor, cv_to_tensor_full, tensor_to_cv, \
image_mask, image_mask_add, image_mask_binary
from cozy_comfyui.image.misc import \
image_stack
from cozy_comfyui.image.shape import \
EnumShapes, \
shape_ellipse, shape_polygon, shape_quad
from cozy_comfyui.image.text import \
EnumAlignment, EnumJustify, \
font_names, text_autosize, text_draw
# ==============================================================================
# === GLOBAL ===
# ==============================================================================
JOV_CATEGORY = "CREATE"
# ==============================================================================
# === CLASS ===
# ==============================================================================
class ConstantNode(CozyImageNode):
NAME = "CONSTANT (JOV) 🟪"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Generate a constant image or mask of a specified size and color. It can be used to create solid color backgrounds or matte images for compositing with other visual elements. The node allows you to define the desired width and height of the output and specify the RGBA color value for the constant output. Additionally, you can input an optional image to use as a matte with the selected color.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {
"tooltip":"Optional Image to Matte with Selected Color"}),
Lexicon.MASK: (COZY_TYPE_IMAGE, {
"tooltip":"Override Image mask"}),
Lexicon.COLOR: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,
"tooltip": "Constant Color to Output"}),
Lexicon.MODE: (EnumScaleMode._member_names_, {
"default": EnumScaleMode.MATTE.name,}),
Lexicon.WH: ("VEC2", {
"default": (512, 512), "mij": 1, "int": True,
"label": ["W", "H"],}),
Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
"default": EnumInterpolation.LANCZOS4.name,})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
mask = parse_param(kw, Lexicon.MASK, EnumConvertType.MASK, None)
matte = parse_param(kw, Lexicon.COLOR, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)
wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), 1)
sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)
images = []
params = list(zip_longest_fill(pA, mask, matte, mode, wihi, sample))
pbar = ProgressBar(len(params))
for idx, (pA, mask, matte, mode, wihi, sample) in enumerate(params):
width, height = wihi
w, h = width, height
if pA is None:
pA = channel_solid(width, height, (0,0,0,255))
else:
pA = tensor_to_cv(pA)
pA = image_convert(pA, 4)
h, w = pA.shape[:2]
if mask is None:
mask = image_mask(pA, 0)
else:
mask = tensor_to_cv(mask, invert=1, chan=1)
mask = image_scalefit(mask, w, h, matte=(0,0,0,255), mode=EnumScaleMode.FIT)
pB = channel_solid(w, h, matte)
pA = image_blend(pB, pA, mask)
#mask = image_invert(mask, 1)
pA = image_mask_add(pA, mask)
if mode != EnumScaleMode.MATTE:
pA = image_scalefit(pA, width, height, mode, sample, matte)
images.append(cv_to_tensor_full(pA, matte))
pbar.update_absolute(idx)
return image_stack(images)
class ShapeNode(CozyImageNode):
NAME = "SHAPE GEN (JOV) ✨"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Create n-sided polygons. These shapes can be customized by adjusting parameters such as size, color, position, rotation angle, and edge blur. The node provides options to specify the shape type, the number of sides for polygons, the RGBA color value for the main shape, and the RGBA color value for the background. Additionally, you can control the width and height of the output images, the position offset, and the amount of edge blur applied to the shapes.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.SHAPE: (EnumShapes._member_names_, {
"default": EnumShapes.CIRCLE.name}),
Lexicon.SIDES: ("INT", {
"default": 3, "min": 3, "max": 100}),
Lexicon.COLOR: ("VEC4", {
"default": (255, 255, 255, 255), "rgb": True,
"tooltip": "Main Shape Color"}),
Lexicon.MATTE: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,}),
Lexicon.WH: ("VEC2", {
"default": (256, 256), "mij":IMAGE_SIZE_MIN, "int": True,
"label": ["W", "H"],}),
Lexicon.XY: ("VEC2", {
"default": (0, 0,), "mij": -1, "maj": 1,
"label": ["X", "Y"]}),
Lexicon.ANGLE: ("FLOAT", {
"default": 0, "min": -180, "max": 180, "step": 0.01,}),
Lexicon.SIZE: ("VEC2", {
"default": (1, 1), "mij": 0, "maj": 1,
"label": ["X", "Y"]}),
Lexicon.EDGE: (EnumEdge._member_names_, {
"default": EnumEdge.CLIP.name}),
Lexicon.BLUR: ("FLOAT", {
"default": 0, "min": 0, "step": 0.01,}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
shape = parse_param(kw, Lexicon.SHAPE, EnumShapes, EnumShapes.CIRCLE.name)
sides = parse_param(kw, Lexicon.SIDES, EnumConvertType.INT, 3, 3)
color = parse_param(kw, Lexicon.COLOR, EnumConvertType.VEC4INT, (255, 255, 255, 255), 0, 255)
matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (256, 256), IMAGE_SIZE_MIN)
offset = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0), -1, 1)
angle = parse_param(kw, Lexicon.ANGLE, EnumConvertType.FLOAT, 0, -180, 180)
size = parse_param(kw, Lexicon.SIZE, EnumConvertType.VEC2, (1, 1), 0, 1, zero=0.001)
edge = parse_param(kw, Lexicon.EDGE, EnumEdge, EnumEdge.CLIP.name)
blur = parse_param(kw, Lexicon.BLUR, EnumConvertType.FLOAT, 0, 0)
params = list(zip_longest_fill(shape, sides, color, matte, wihi, offset, angle, size, edge, blur))
images = []
pbar = ProgressBar(len(params))
for idx, (shape, sides, color, matte, wihi, offset, angle, size, edge, blur) in enumerate(params):
width, height = wihi
sizeX, sizeY = size
fill = color[:3][::-1]
match shape:
case EnumShapes.SQUARE:
rgb = shape_quad(width, height, sizeX, sizeY, fill)
case EnumShapes.CIRCLE:
rgb = shape_ellipse(width, height, sizeX, sizeY, fill)
case EnumShapes.POLYGON:
rgb = shape_polygon(width, height, sizeX, sides, fill)
rgb = pil_to_cv(rgb)
rgb = image_transform(rgb, offset, angle, edge=edge)
mask = image_mask_binary(rgb)
if blur > 0:
# @TODO: Do blur on larger canvas to remove wrap bleed.
rgb = (gaussian(rgb, sigma=blur, channel_axis=2) * 255).astype(np.uint8)
mask = (gaussian(mask, sigma=blur, channel_axis=2) * 255).astype(np.uint8)
mask = (mask * (color[3] / 255.)).astype(np.uint8)
back = list(matte[:3]) + [255]
canvas = np.full((height, width, 4), back, dtype=rgb.dtype)
rgba = image_blend(canvas, rgb, mask)
rgba = image_mask_add(rgba, mask)
rgb = image_convert(rgba, 3)
images.append([cv_to_tensor(rgba), cv_to_tensor(rgb), cv_to_tensor(mask, True)])
pbar.update_absolute(idx)
return image_stack(images)
class TextNode(CozyImageNode):
NAME = "TEXT GEN (JOV) 📝"
CATEGORY = JOV_CATEGORY
FONTS = font_names()
FONT_NAMES = sorted(FONTS.keys())
DESCRIPTION = """
Generates images containing text based on parameters such as font, size, alignment, color, and position. Users can input custom text messages, select fonts from a list of available options, adjust font size, and specify the alignment and justification of the text. Additionally, the node provides options for auto-sizing text to fit within specified dimensions, controlling letter-by-letter rendering, and applying edge effects such as clipping and inversion.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.STRING: ("STRING", {
"default": "jovimetrix", "multiline": True,
"dynamicPrompts": False,
"tooltip": "Your Message"}),
Lexicon.FONT: (cls.FONT_NAMES, {
"default": cls.FONT_NAMES[0]}),
Lexicon.LETTER: ("BOOLEAN", {
"default": False,}),
Lexicon.AUTOSIZE: ("BOOLEAN", {
"default": False,
"tooltip": "Scale based on Width & Height"}),
Lexicon.COLOR: ("VEC4", {
"default": (255, 255, 255, 255), "rgb": True,
"tooltip": "Color of the letters"}),
Lexicon.MATTE: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,}),
Lexicon.COLUMNS: ("INT", {
"default": 0, "min": 0}),
# if auto on, hide these...
Lexicon.SIZE: ("INT", {
"default": 16, "min": 8}),
Lexicon.ALIGN: (EnumAlignment._member_names_, {
"default": EnumAlignment.CENTER.name,}),
Lexicon.JUSTIFY: (EnumJustify._member_names_, {
"default": EnumJustify.CENTER.name,}),
Lexicon.MARGIN: ("INT", {
"default": 0, "min": -1024, "max": 1024,}),
Lexicon.SPACING: ("INT", {
"default": 0, "min": -1024, "max": 1024}),
Lexicon.WH: ("VEC2", {
"default": (256, 256), "mij":IMAGE_SIZE_MIN, "int": True,
"label": ["W", "H"],}),
Lexicon.XY: ("VEC2", {
"default": (0, 0,), "mij": -1, "maj": 1,
"label": ["X", "Y"],
"tooltip":"Offset the position"}),
Lexicon.ANGLE: ("FLOAT", {
"default": 0, "step": 0.01,}),
Lexicon.EDGE: (EnumEdge._member_names_, {
"default": EnumEdge.CLIP.name}),
Lexicon.INVERT: ("BOOLEAN", {
"default": False,
"tooltip": "Invert the mask input"})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
full_text = parse_param(kw, Lexicon.STRING, EnumConvertType.STRING, "jovimetrix")
font_idx = parse_param(kw, Lexicon.FONT, EnumConvertType.STRING, self.FONT_NAMES[0])
autosize = parse_param(kw, Lexicon.AUTOSIZE, EnumConvertType.BOOLEAN, False)
letter = parse_param(kw, Lexicon.LETTER, EnumConvertType.BOOLEAN, False)
color = parse_param(kw, Lexicon.COLOR, EnumConvertType.VEC4INT, (255,255,255,255))
matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0,0,0,255))
columns = parse_param(kw, Lexicon.COLUMNS, EnumConvertType.INT, 0)
font_size = parse_param(kw, Lexicon.SIZE, EnumConvertType.INT, 16, 8)
align = parse_param(kw, Lexicon.ALIGN, EnumAlignment, EnumAlignment.CENTER.name)
justify = parse_param(kw, Lexicon.JUSTIFY, EnumJustify, EnumJustify.CENTER.name)
margin = parse_param(kw, Lexicon.MARGIN, EnumConvertType.INT, 0)
line_spacing = parse_param(kw, Lexicon.SPACING, EnumConvertType.INT, 0)
wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (256, 256), IMAGE_SIZE_MIN)
pos = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0))
angle = parse_param(kw, Lexicon.ANGLE, EnumConvertType.FLOAT, 0)
edge = parse_param(kw, Lexicon.EDGE, EnumEdge, EnumEdge.CLIP.name)
invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
images = []
params = list(zip_longest_fill(full_text, font_idx, autosize, letter, color,
matte, columns, font_size, align, justify, margin,
line_spacing, wihi, pos, angle, edge, invert))
pbar = ProgressBar(len(params))
for idx, (full_text, font_idx, autosize, letter, color, matte, columns,
font_size, align, justify, margin, line_spacing, wihi, pos,
angle, edge, invert) in enumerate(params):
width, height = wihi
font_name = self.FONTS[font_idx]
full_text = str(full_text)
if letter:
full_text = full_text.replace('\n', '')
if autosize:
_, font_size = text_autosize(full_text[0].upper(), font_name, width, height)[:2]
margin = 0
line_spacing = 0
else:
if autosize:
wm = width - margin * 2
hm = height - margin * 2 - line_spacing
columns = 0 if columns == 0 else columns * 2 + 2
full_text, font_size = text_autosize(full_text, font_name, wm, hm, columns)[:2]
full_text = [full_text]
font_size = int(font_size * 2.5)
font = ImageFont.truetype(font_name, font_size)
for ch in full_text:
img = text_draw(ch, font, width, height, align, justify, margin, line_spacing, color)
img = image_rotate(img, angle, edge=edge)
img = image_translate(img, pos, edge=edge)
if invert:
img = image_invert(img, 1)
images.append(cv_to_tensor_full(img, matte))
pbar.update_absolute(idx)
return image_stack(images)
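The generator nodes above all follow the same batching pattern: every parameter is parsed into a list, the lists are zipped with `zip_longest_fill`, and shorter lists repeat their last value so each output image gets a full parameter set. Below is a minimal, hypothetical re-implementation of that fill semantic for illustration only; the real helper ships with `cozy_comfyui` and may differ in detail.

```python
# Hypothetical sketch of the zip_longest_fill semantics used by the nodes
# above: shorter parameter lists repeat their final value so every item in
# the longest list receives a complete tuple of parameters.
from itertools import zip_longest

_SENTINEL = object()

def zip_longest_fill_sketch(*lists):
    rows = []
    last = [None] * len(lists)
    for row in zip_longest(*lists, fillvalue=_SENTINEL):
        filled = []
        for i, v in enumerate(row):
            if v is _SENTINEL:
                v = last[i]          # exhausted list: reuse its last value
            else:
                last[i] = v
            filled.append(v)
        rows.append(tuple(filled))
    return rows

params = zip_longest_fill_sketch([1, 2, 3], ["a"], [True, False])
# three rows; the short lists are padded with their final element
```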
================================================
FILE: core/trans.py
================================================
""" Jovimetrix - Transform """
import sys
from enum import Enum
from comfy.utils import ProgressBar
from cozy_comfyui import \
logger, \
IMAGE_SIZE_MIN, \
InputType, RGBAMaskType, EnumConvertType, \
deep_merge, parse_param, parse_dynamic, zip_longest_fill
from cozy_comfyui.lexicon import \
Lexicon
from cozy_comfyui.node import \
COZY_TYPE_IMAGE, \
CozyImageNode, CozyBaseNode
from cozy_comfyui.image.channel import \
channel_solid
from cozy_comfyui.image.convert import \
tensor_to_cv, cv_to_tensor_full, cv_to_tensor, image_mask, image_mask_add
from cozy_comfyui.image.compose import \
EnumOrientation, EnumEdge, EnumMirrorMode, EnumScaleMode, EnumInterpolation, \
image_edge_wrap, image_mirror, image_scalefit, image_transform, \
image_crop, image_crop_center, image_crop_polygonal, image_stacker, \
image_flatten
from cozy_comfyui.image.misc import \
image_stack
from cozy_comfyui.image.mapping import \
EnumProjection, \
remap_fisheye, remap_perspective, remap_polar, remap_sphere
# ==============================================================================
# === GLOBAL ===
# ==============================================================================
JOV_CATEGORY = "TRANSFORM"
# ==============================================================================
# === ENUMERATION ===
# ==============================================================================
class EnumCropMode(Enum):
CENTER = 20
XY = 0
FREE = 10
# ==============================================================================
# === CLASS ===
# ==============================================================================
class CropNode(CozyImageNode):
NAME = "CROP (JOV) ✂️"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Extract a portion of an input image or resize it. It supports various cropping modes, including center cropping, custom XY cropping, and free-form polygonal cropping. This node is useful for preparing image data for specific tasks or extracting regions of interest.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.FUNCTION: (EnumCropMode._member_names_, {
"default": EnumCropMode.CENTER.name}),
Lexicon.XY: ("VEC2", {
"default": (0, 0), "mij": 0, "maj": 1,
"label": ["X", "Y"]}),
Lexicon.WH: ("VEC2", {
"default": (512, 512), "mij": IMAGE_SIZE_MIN, "int": True,
"label": ["W", "H"]}),
Lexicon.TLTR: ("VEC4", {
"default": (0, 0, 0, 1), "mij": 0, "maj": 1,
"label": ["TOP", "LEFT", "TOP", "RIGHT"],}),
Lexicon.BLBR: ("VEC4", {
"default": (1, 0, 1, 1), "mij": 0, "maj": 1,
"label": ["BOTTOM", "LEFT", "BOTTOM", "RIGHT"],}),
Lexicon.MATTE: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
func = parse_param(kw, Lexicon.FUNCTION, EnumCropMode, EnumCropMode.CENTER.name)
# XY values of 1 or less are treated as normalized scalars; values over 1 as integer pixel sizes
xy = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0,))
wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)
tltr = parse_param(kw, Lexicon.TLTR, EnumConvertType.VEC4, (0, 0, 0, 1,))
blbr = parse_param(kw, Lexicon.BLBR, EnumConvertType.VEC4, (1, 0, 1, 1,))
matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
params = list(zip_longest_fill(pA, func, xy, wihi, tltr, blbr, matte))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, func, xy, wihi, tltr, blbr, matte) in enumerate(params):
width, height = wihi
pA = tensor_to_cv(pA) if pA is not None else channel_solid(width, height)
alpha = None
if pA.ndim == 3 and pA.shape[2] == 4:
alpha = image_mask(pA)
if func == EnumCropMode.FREE:
x1, y1, x2, y2 = tltr
x4, y4, x3, y3 = blbr
points = (x1 * width, y1 * height), (x2 * width, y2 * height), \
(x3 * width, y3 * height), (x4 * width, y4 * height)
pA = image_crop_polygonal(pA, points)
if alpha is not None:
alpha = image_crop_polygonal(alpha, points)
pA[..., 3] = alpha[..., 0]
elif func == EnumCropMode.XY:
pA = image_crop(pA, width, height, xy)
else:
pA = image_crop_center(pA, width, height)
images.append(cv_to_tensor_full(pA, matte))
pbar.update_absolute(idx)
return image_stack(images)
class FlattenNode(CozyImageNode):
NAME = "FLATTEN (JOV) ⬇️"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Combine multiple input images into a single image by summing their pixel values. This operation is useful for merging multiple layers or images into one composite image, such as combining different elements of a design or merging masks. Users can specify the blending mode and interpolation method to control how the images are combined. Additionally, a matte can be applied to adjust the transparency of the final composite image.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.MODE: (EnumScaleMode._member_names_, {
"default": EnumScaleMode.MATTE.name,}),
Lexicon.WH: ("VEC2", {
"default": (512, 512), "mij":1, "int": True,
"label": ["W", "H"]}),
Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
"default": EnumInterpolation.LANCZOS4.name,}),
Lexicon.MATTE: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,}),
Lexicon.OFFSET: ("VEC2", {
"default": (0, 0), "mij":0, "int": True,
"label": ["X", "Y"]}),
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
imgs = parse_dynamic(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
if imgs is None:
logger.warning("no images to flatten")
return ()
# be less dumb when merging
pA = [tensor_to_cv(i) for i in imgs]
mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)[0]
wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), 1)[0]
sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)[0]
matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0]
offset = parse_param(kw, Lexicon.OFFSET, EnumConvertType.VEC2INT, (0, 0), 0)[0]
w, h = wihi
x, y = offset
pA = image_flatten(pA, x, y, w, h, mode=mode, sample=sample)
pA = [cv_to_tensor_full(pA, matte)]
return image_stack(pA)
class SplitNode(CozyBaseNode):
NAME = "SPLIT (JOV) 🎭"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = ("IMAGE", "IMAGE",)
RETURN_NAMES = ("IMAGEA", "IMAGEB",)
OUTPUT_TOOLTIPS = (
"Left/Top image",
"Right/Bottom image"
)
DESCRIPTION = """
Split an image into two images, horizontally or vertically, at a position given as a percentage of the image's width or height.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.VALUE: ("FLOAT", {
"default": 0.5, "min": 0, "max": 1, "step": 0.001
}),
Lexicon.FLIP: ("BOOLEAN", {
"default": False,
"tooltip": "Horizontal split (False) or Vertical split (True)"
}),
Lexicon.MODE: (EnumScaleMode._member_names_, {
"default": EnumScaleMode.MATTE.name,}),
Lexicon.WH: ("VEC2", {
"default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True,
"label": ["W", "H"]}),
Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
"default": EnumInterpolation.LANCZOS4.name,}),
Lexicon.MATTE: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
percent = parse_param(kw, Lexicon.VALUE, EnumConvertType.FLOAT, 0.5, 0, 1)
flip = parse_param(kw, Lexicon.FLIP, EnumConvertType.BOOLEAN, False)
mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)
wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)
sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)
matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
params = list(zip_longest_fill(pA, percent, flip, mode, wihi, sample, matte))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, percent, flip, mode, wihi, sample, matte) in enumerate(params):
w, h = wihi
pA = channel_solid(w, h, matte) if pA is None else tensor_to_cv(pA)
if flip:
size = pA.shape[1]
percent = max(1, min(size-1, int(size * percent)))
image_a = pA[:, :percent]
image_b = pA[:, percent:]
else:
size = pA.shape[0]
percent = max(1, min(size-1, int(size * percent)))
image_a = pA[:percent, :]
image_b = pA[percent:, :]
if mode != EnumScaleMode.MATTE:
image_a = image_scalefit(image_a, w, h, mode, sample)
image_b = image_scalefit(image_b, w, h, mode, sample)
images.append([cv_to_tensor(img) for img in [image_a, image_b]])
pbar.update_absolute(idx)
return image_stack(images)
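The split point in SplitNode.run above is the percentage converted to a pixel index and pinned to the range [1, size - 1], so neither half can ever be empty. A standalone sketch of that clamp (the function name is illustrative, not part of the codebase):

```python
# Sketch of SplitNode's split-point clamp: convert the percentage to a
# pixel index and pin it so each side keeps at least one row/column.
def split_point(size: int, percent: float) -> int:
    return max(1, min(size - 1, int(size * percent)))

# e.g. along a 512-pixel edge
assert split_point(512, 0.5) == 256
assert split_point(512, 0.0) == 1    # never an empty left/top side
assert split_point(512, 1.0) == 511  # never an empty right/bottom side
```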
class StackNode(CozyImageNode):
NAME = "STACK (JOV) ➕"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Merge multiple input images into a single composite image by stacking them along a specified axis.
Options include axis, stride, scaling mode, width and height, interpolation method, and matte color.
The axis parameter allows for horizontal, vertical, or grid stacking of images, while stride sets how many images fill a row before wrapping to the next.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.AXIS: (EnumOrientation._member_names_, {
"default": EnumOrientation.GRID.name,}),
Lexicon.STEP: ("INT", {
"default": 1, "min": 0,
"tooltip":"How many images are placed before a new row starts (stride)"}),
Lexicon.MODE: (EnumScaleMode._member_names_, {
"default": EnumScaleMode.MATTE.name,}),
Lexicon.WH: ("VEC2", {
"default": (512, 512), "mij": IMAGE_SIZE_MIN, "int": True,
"label": ["W", "H"]}),
Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
"default": EnumInterpolation.LANCZOS4.name,}),
Lexicon.MATTE: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
images = parse_dynamic(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
if images is None or len(images) == 0:
logger.warning("no images to stack")
return ()
images = [tensor_to_cv(i) for i in images]
axis = parse_param(kw, Lexicon.AXIS, EnumOrientation, EnumOrientation.GRID.name)[0]
stride = parse_param(kw, Lexicon.STEP, EnumConvertType.INT, 1, 0)[0]
mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)[0]
wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)[0]
sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)[0]
matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0]
img = image_stacker(images, axis, stride) #, matte)
if mode != EnumScaleMode.MATTE:
w, h = wihi
img = image_scalefit(img, w, h, mode, sample)
rgba, rgb, mask = cv_to_tensor_full(img, matte)
return rgba.unsqueeze(0), rgb.unsqueeze(0), mask.unsqueeze(0)
class TransformNode(CozyImageNode):
NAME = "TRANSFORM (JOV) 🏝️"
CATEGORY = JOV_CATEGORY
DESCRIPTION = """
Apply various geometric transformations to images, including translation, rotation, scaling, mirroring, tiling and perspective projection. It offers extensive control over image manipulation to achieve desired visual effects.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES(prompt=True, dynprompt=True)
d = deep_merge(d, {
"optional": {
Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
Lexicon.MASK: (COZY_TYPE_IMAGE, {
"tooltip": "Override Image mask"}),
Lexicon.XY: ("VEC2", {
"default": (0, 0,), "mij": -1, "maj": 1,
"label": ["X", "Y"]}),
Lexicon.ANGLE: ("FLOAT", {
"default": 0, "min": -sys.float_info.max, "max": sys.float_info.max, "step": 0.1,}),
Lexicon.SIZE: ("VEC2", {
"default": (1, 1), "mij": 0.001,
"label": ["X", "Y"]}),
Lexicon.TILE: ("VEC2", {
"default": (1, 1), "mij": 1,
"label": ["X", "Y"]}),
Lexicon.EDGE: (EnumEdge._member_names_, {
"default": EnumEdge.CLIP.name}),
Lexicon.MIRROR: (EnumMirrorMode._member_names_, {
"default": EnumMirrorMode.NONE.name}),
Lexicon.PIVOT: ("VEC2", {
"default": (0.5, 0.5), "mij": 0, "maj": 1, "step": 0.01,
"label": ["X", "Y"]}),
Lexicon.PROJECTION: (EnumProjection._member_names_, {
"default": EnumProjection.NORMAL.name}),
Lexicon.TLTR: ("VEC4", {
"default": (0, 0, 1, 0), "mij": 0, "maj": 1, "step": 0.005,
"label": ["TOP", "LEFT", "TOP", "RIGHT"],}),
Lexicon.BLBR: ("VEC4", {
"default": (0, 1, 1, 1), "mij": 0, "maj": 1, "step": 0.005,
"label": ["BOTTOM", "LEFT", "BOTTOM", "RIGHT"],}),
Lexicon.STRENGTH: ("FLOAT", {
"default": 1, "min": 0, "max": 1, "step": 0.005}),
Lexicon.MODE: (EnumScaleMode._member_names_, {
"default": EnumScaleMode.MATTE.name,}),
Lexicon.WH: ("VEC2", {
"default": (512, 512), "mij": IMAGE_SIZE_MIN, "int": True,
"label": ["W", "H"]}),
Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
"default": EnumInterpolation.LANCZOS4.name,}),
Lexicon.MATTE: ("VEC4", {
"default": (0, 0, 0, 255), "rgb": True,})
}
})
return Lexicon._parse(d)
def run(self, **kw) -> RGBAMaskType:
pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
mask = parse_param(kw, Lexicon.MASK, EnumConvertType.IMAGE, None)
offset = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0), -1, 1)
angle = parse_param(kw, Lexicon.ANGLE, EnumConvertType.FLOAT, 0)
size = parse_param(kw, Lexicon.SIZE, EnumConvertType.VEC2, (1, 1), 0.001)
edge = parse_param(kw, Lexicon.EDGE, EnumEdge, EnumEdge.CLIP.name)
mirror = parse_param(kw, Lexicon.MIRROR, EnumMirrorMode, EnumMirrorMode.NONE.name)
mirror_pivot = parse_param(kw, Lexicon.PIVOT, EnumConvertType.VEC2, (0.5, 0.5), 0, 1)
tile_xy = parse_param(kw, Lexicon.TILE, EnumConvertType.VEC2, (1, 1), 1)
proj = parse_param(kw, Lexicon.PROJECTION, EnumProjection, EnumProjection.NORMAL.name)
tltr = parse_param(kw, Lexicon.TLTR, EnumConvertType.VEC4, (0, 0, 1, 0), 0, 1)
blbr = parse_param(kw, Lexicon.BLBR, EnumConvertType.VEC4, (0, 1, 1, 1), 0, 1)
strength = parse_param(kw, Lexicon.STRENGTH, EnumConvertType.FLOAT, 1, 0, 1)
mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)
wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)
sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)
matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
params = list(zip_longest_fill(pA, mask, offset, angle, size, edge, tile_xy, mirror, mirror_pivot, proj, strength, tltr, blbr, mode, wihi, sample, matte))
images = []
pbar = ProgressBar(len(params))
for idx, (pA, mask, offset, angle, size, edge, tile_xy, mirror, mirror_pivot, proj, strength, tltr, blbr, mode, wihi, sample, matte) in enumerate(params):
pA = tensor_to_cv(pA) if pA is not None else channel_solid()
if mask is None:
mask = image_mask(pA, 255)
else:
mask = tensor_to_cv(mask)
pA = image_mask_add(pA, mask)
h, w = pA.shape[:2]
pA = image_transform(pA, offset, angle, size, sample, edge)
pA = image_crop_center(pA, w, h)
if mirror != EnumMirrorMode.NONE:
mpx, mpy = mirror_pivot
pA = image_mirror(pA, mirror, mpx, mpy)
pA = image_scalefit(pA, w, h, EnumScaleMode.FIT, sample)
tx, ty = tile_xy
if tx != 1. or ty != 1.:
pA = image_edge_wrap(pA, tx / 2 - 0.5, ty / 2 - 0.5)
pA = image_scalefit(pA, w, h, EnumScaleMode.FIT, sample)
match proj:
case EnumProjection.PERSPECTIVE:
x1, y1, x2, y2 = tltr
x4, y4, x3, y3 = blbr
sh, sw = pA.shape[:2]
x1, x2, x3, x4 = map(lambda x: x * sw, [x1, x2, x3, x4])
y1, y2, y3, y4 = map(lambda y: y * sh, [y1, y2, y3, y4])
pA = remap_perspective(pA, [[x1, y1], [x2, y2], [x3, y3], [x4, y4]])
case EnumProjection.SPHERICAL:
pA = remap_sphere(pA, strength)
case EnumProjection.FISHEYE:
pA = remap_fisheye(pA, strength)
case EnumProjection.POLAR:
pA = remap_polar(pA)
if proj != EnumProjection.NORMAL:
pA = image_scalefit(pA, w, h, EnumScaleMode.FIT, sample)
if mode != EnumScaleMode.MATTE:
w, h = wihi
pA = image_scalefit(pA, w, h, mode, sample)
images.append(cv_to_tensor_full(pA, matte))
pbar.update_absolute(idx)
return image_stack(images)
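Both CropNode (FREE mode) and TransformNode above unpack the two normalized corner vectors the same way: TLTR as (x1, y1, x2, y2) and BLBR as (x4, y4, x3, y3), then scale by the image dimensions. A minimal sketch of that corner mapping, assuming only what the unpacking in `run()` shows (the helper name here is hypothetical):

```python
# Sketch of the FREE-crop corner mapping: two normalized VEC4 inputs
# become four pixel-space points. Unpack order follows CropNode.run:
# tltr -> (x1, y1, x2, y2), blbr -> (x4, y4, x3, y3).
def free_crop_points(tltr, blbr, width, height):
    x1, y1, x2, y2 = tltr
    x4, y4, x3, y3 = blbr
    return ((x1 * width, y1 * height), (x2 * width, y2 * height),
            (x3 * width, y3 * height), (x4 * width, y4 * height))

# With the node defaults the points trace the full frame
pts = free_crop_points((0, 0, 0, 1), (1, 0, 1, 1), 512, 256)
```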
================================================
FILE: core/utility/__init__.py
================================================
================================================
FILE: core/utility/batch.py
================================================
""" Jovimetrix - Utility """
import os
import sys
import json
import glob
import random
from enum import Enum
from pathlib import Path
from itertools import zip_longest
from typing import Any
import torch
import numpy as np
from comfy.utils import ProgressBar
from nodes import interrupt_processing
from cozy_comfyui import \
logger, \
IMAGE_SIZE_MIN, \
InputType, EnumConvertType, TensorType, \
deep_merge, parse_dynamic, parse_param
from cozy_comfyui.lexicon import \
Lexicon
from cozy_comfyui.node import \
COZY_TYPE_ANY, \
CozyBaseNode
from cozy_comfyui.image import \
IMAGE_FORMATS
from cozy_comfyui.image.compose import \
EnumScaleMode, EnumInterpolation, \
image_matte, image_scalefit
from cozy_comfyui.image.convert import \
image_convert, cv_to_tensor, cv_to_tensor_full, tensor_to_cv
from cozy_comfyui.image.misc import \
image_by_size
from cozy_comfyui.image.io import \
image_load
from cozy_comfyui.api import \
parse_reset, comfy_api_post
from ... import \
ROOT
JOV_CATEGORY = "UTILITY/BATCH"
# ==============================================================================
# === ENUMERATION ===
# ==============================================================================
class EnumBatchMode(Enum):
MERGE = 30
PICK = 10
SLICE = 15
INDEX_LIST = 20
RANDOM = 5
# ==============================================================================
# === CLASS ===
# ==============================================================================
class ArrayNode(CozyBaseNode):
NAME = "ARRAY (JOV) 📚"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = (COZY_TYPE_ANY, "INT",)
RETURN_NAMES = ("ARRAY", "LENGTH",)
OUTPUT_IS_LIST = (True, True,)
OUTPUT_TOOLTIPS = (
"Output list from selected operation",
"Length of output list",
)
DESCRIPTION = """
Processes a batch of data based on the selected mode. Merge, pick, slice, random select, or index items. Can also reverse the order of items.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.MODE: (EnumBatchMode._member_names_, {
"default": EnumBatchMode.MERGE.name,
"tooltip": "Select a single index, specific range, custom index list or randomized"}),
Lexicon.RANGE: ("VEC3", {
"default": (0, 0, 1), "mij": 0, "int": True,
"tooltip": "The start, end and step for the range"}),
Lexicon.INDEX: ("STRING", {
"default": "",
"tooltip": "Comma separated list of indices to export"}),
Lexicon.COUNT: ("INT", {
"default": 0, "min": 0, "max": sys.maxsize,
"tooltip": "How many items to return"}),
Lexicon.REVERSE: ("BOOLEAN", {
"default": False,
"tooltip": "Reverse the calculated output list"}),
Lexicon.SEED: ("INT", {
"default": 0, "min": 0, "max": sys.maxsize}),
}
})
return Lexicon._parse(d)
@classmethod
def batched(cls, iterable, chunk_size, expand:bool=False, fill:Any=None) -> list[Any]:
if expand:
iterator = iter(iterable)
return zip_longest(*[iterator] * chunk_size, fillvalue=fill)
return [iterable[i: i + chunk_size] for i in range(0, len(iterable), chunk_size)]
def run(self, **kw) -> tuple[int, list]:
data_list = parse_dynamic(kw, Lexicon.DYNAMIC, EnumConvertType.ANY, None)
mode = parse_param(kw, Lexicon.MODE, EnumBatchMode, EnumBatchMode.MERGE.name)[0]
slice_range = parse_param(kw, Lexicon.RANGE, EnumConvertType.VEC3INT, (0, 0, 1))[0]
index = parse_param(kw, Lexicon.INDEX, EnumConvertType.STRING, "")[0]
count = parse_param(kw, Lexicon.COUNT, EnumConvertType.INT, 0, 0)[0]
reverse = parse_param(kw, Lexicon.REVERSE, EnumConvertType.BOOLEAN, False)[0]
seed = parse_param(kw, Lexicon.SEED, EnumConvertType.INT, 0, 0)[0]
data = []
# track latents since they need to be added back to Dict['samples']
output_type = None
for b in data_list:
if isinstance(b, dict) and "samples" in b:
# latents are batched in the x.samples key
if output_type and output_type != EnumConvertType.LATENT:
raise Exception(f"Cannot mix input types {output_type} vs {EnumConvertType.LATENT}")
data.extend(b["samples"])
output_type = EnumConvertType.LATENT
elif isinstance(b, TensorType):
if output_type and output_type not in (EnumConvertType.IMAGE, EnumConvertType.MASK):
raise Exception(f"Cannot mix input types {output_type} vs {EnumConvertType.IMAGE}")
if b.ndim == 4:
b = [i for i in b]
else:
b = [b]
for x in b:
if x.ndim == 2:
x = x.unsqueeze(-1)
data.append(x)
output_type = EnumConvertType.IMAGE
elif b is not None:
idx_type = type(b)
if output_type and output_type != idx_type:
raise Exception(f"Cannot mix input types {output_type} vs {idx_type}")
data.append(b)
if len(data) == 0:
logger.warning("no data for list")
return ([], [0])
if mode == EnumBatchMode.PICK:
start, end, step = slice_range
start = start if start < len(data) else -1
data = [data[start]]
elif mode == EnumBatchMode.SLICE:
start, end, step = slice_range
start = abs(start)
end = len(data) if end == 0 else abs(end+1)
if step == 0:
step = 1
elif step < 0:
data = data[::-1]
step = abs(step)
data = data[start:end:step]
elif mode == EnumBatchMode.RANDOM:
random.seed(seed)
if count == 0:
count = len(data)
else:
count = max(1, min(len(data), count))
data = random.sample(data, k=count)
elif mode == EnumBatchMode.INDEX_LIST:
junk = []
for x in index.split(','):
if '-' in x:
x = x.split('-')
for idx, v in enumerate(x):
try:
x[idx] = max(0, min(len(data)-1, int(v)))
except ValueError as e:
logger.error(e)
x[idx] = 0
if x[0] > x[1]:
tmp = list(range(x[0], x[1]-1, -1))
else:
tmp = list(range(x[0], x[1]+1))
junk.extend(tmp)
else:
idx = max(0, min(len(data)-1, int(x)))
junk.append(idx)
if len(junk) > 0:
data = [data[i] for i in junk]
if len(data) == 0:
logger.warning("no data for list")
return ([], [0])
# reverse before?
if reverse:
data.reverse()
# cut the list down first
if count > 0:
data = data[0:count]
size = len(data)
if output_type == EnumConvertType.IMAGE:
_, w, h = image_by_size(data)
result = []
for d in data:
w2, h2, cc = d.shape
if w != w2 or h != h2 or cc != 4:
d = tensor_to_cv(d)
d = image_convert(d, 4)
d = image_matte(d, (0,0,0,0), w, h)
d = cv_to_tensor(d)
d = d.unsqueeze(0)
result.append(d)
size = len(result)
data = torch.stack(result)
else:
data = [data]
return (data, [size],)
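ArrayNode's INDEX_LIST mode parses a comma-separated string where each entry is either a single index or an `a-b` range (descending ranges allowed), with every value clamped to the data bounds. A self-contained sketch of that parsing, mirroring the loop in `ArrayNode.run` but without its error fallback; the function name is illustrative:

```python
# Standalone sketch of ArrayNode's INDEX_LIST parsing: comma-separated
# entries, "a-b" ranges (descending allowed), all clamped to [0, length-1].
def parse_index_list(index: str, length: int) -> list[int]:
    out = []
    clamp = lambda v: max(0, min(length - 1, v))
    for part in index.split(','):
        if '-' in part:
            a, b = (clamp(int(v)) for v in part.split('-'))
            step = -1 if a > b else 1          # "3-1" walks backwards
            out.extend(range(a, b + step, step))
        else:
            out.append(clamp(int(part)))
    return out
```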
class BatchToList(CozyBaseNode):
NAME = "BATCH TO LIST (JOV)"
NAME_PRETTY = "BATCH TO LIST (JOV)"
CATEGORY = JOV_CATEGORY
RETURN_TYPES = (COZY_TYPE_ANY, )
RETURN_NAMES = ("LIST", )
DESCRIPTION = """
Convert a batch of values into a pure Python list of values.
"""
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
return deep_merge(d, {
"optional": {
Lexicon.BATCH: (COZY_TYPE_ANY, {}),
}
})
def run(self, **kw) -> tuple[list[Any]]:
batch = parse_param(kw, Lexicon.BATCH, EnumConvertType.LIST, [])
batch = [f[0] for f in batch]
return (batch,)
class QueueBaseNode(CozyBaseNode):
CATEGORY = JOV_CATEGORY
RETURN_TYPES = (COZY_TYPE_ANY, COZY_TYPE_ANY, "STRING", "INT", "INT", "BOOLEAN")
RETURN_NAMES = ("❔", "QUEUE", "CURRENT", "INDEX", "TOTAL", "TRIGGER", )
#OUTPUT_IS_LIST = (True, True, True, True, True, True,)
VIDEO_FORMATS = ['.wav', '.mp3', '.webm', '.mp4', '.avi', '.wmv', '.mkv', '.mov', '.mxf']
@classmethod
def IS_CHANGED(cls, **kw) -> float:
return float('nan')
@classmethod
def INPUT_TYPES(cls) -> InputType:
d = super().INPUT_TYPES()
d = deep_merge(d, {
"optional": {
Lexicon.QUEUE: ("STRING", {
"default": "./res/img/test-a.png", "multiline": True,
"tooltip": "Current items to process during Queue iteration"}),
Lexicon.RECURSE: ("BOOLEAN", {
"default": False,
"tooltip": "Recurse through all subdirectories found"}),
Lexicon.BATCH: ("BOOLEAN", {
"default": False,
"tooltip": "Load every loadable item in the Queue's list at once, e.g. batch load all images"}),
Lexicon.SELECT: ("INT", {
"default": 0, "min": 0,
"tooltip": "The index to use for the current queue item. 0 will move to the next item each queue run"}),
Lexicon.HOLD: ("BOOLEAN", {
"default": False,
"tooltip": "Hold the item at the current queue index"}),
Lexicon.STOP: ("BOOLEAN", {
"default": False,
"tooltip": "When the Queue is out of items, send a `HALT` to ComfyUI"}),
Lexicon.LOOP: ("BOOLEAN", {
"default": True,
"tooltip": "Whether the queue should loop. If `False` and runs remain after the queue is exhausted, the previous item is re-sent"}),
Lexicon.RESET: ("BOOLEAN", {
"default": False,
"tooltip": "Reset the queue back to index 1"}),
}
})
return Lexicon._parse(d)
def __init__(self) -> None:
self.__index = 0
self.__q = None
self.__index_last = None
self.__len = 0
self.__current = None
self.__previous = None
self.__ident = None
self.__last_q_value = {}
# consume the list into iterable items to load/process
def __parseQ(self, data: Any, recurse: bool=False) -> list[str]:
entries = []
for line in data.strip().split('\n'):
if len(line) == 0:
continue
data = [line]
if not line.lower().startswith("http"):
#