Repository: Amorano/Jovimetrix Branch: main Commit: a28214a01507 Files: 41 Total size: 267.6 KB Directory structure: gitextract_d5cdnh8o/ ├── .gitattributes ├── .github/ │ └── workflows/ │ └── publish_action.yml ├── .gitignore ├── LICENSE ├── NOTICE ├── README.md ├── __init__.py ├── core/ │ ├── __init__.py │ ├── adjust.py │ ├── anim.py │ ├── calc.py │ ├── color.py │ ├── compose.py │ ├── create.py │ ├── trans.py │ ├── utility/ │ │ ├── __init__.py │ │ ├── batch.py │ │ ├── info.py │ │ └── io.py │ └── vars.py ├── node_list.json ├── pyproject.toml ├── requirements.txt └── web/ ├── core.js ├── fun.js ├── jovi_metrix.css ├── nodes/ │ ├── akashic.js │ ├── array.js │ ├── delay.js │ ├── flatten.js │ ├── graph.js │ ├── lerp.js │ ├── op_binary.js │ ├── op_unary.js │ ├── queue.js │ ├── route.js │ ├── stack.js │ ├── stringer.js │ └── value.js ├── util.js └── widget_vector.js ================================================ FILE CONTENTS ================================================ ================================================ FILE: .gitattributes ================================================ # Auto detect text files and perform LF normalization * text=auto ================================================ FILE: .github/workflows/publish_action.yml ================================================ name: Publish to Comfy registry on: workflow_dispatch: push: branches: - main paths: - "pyproject.toml" permissions: issues: write jobs: publish-node: name: Publish Custom Node to registry runs-on: ubuntu-latest if: ${{ github.repository_owner == 'Amorano' }} steps: - name: Check out code uses: actions/checkout@v4 - name: Publish Custom Node uses: Comfy-Org/publish-node-action@v1 with: personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }} ================================================ FILE: .gitignore ================================================ __pycache__ *.py[cod] *$py.class _*/ glsl/* *.code-workspace .vscode config.json ignore.txt .env .venv .DS_Store *.egg-info 
*.bak checkpoints results backup node_modules *-lock.json *.config.mjs package.json _TODO*.* ================================================ FILE: LICENSE ================================================ MIT License Copyright (c) 2023 Alexander G. Morano Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. GO NUTS; JUST TRY NOT TO DO IT IN YOUR HEAD. 
================================================ FILE: NOTICE ================================================ This project includes code concepts from the MTB Nodes project (MIT) https://github.com/melMass/comfy_mtb This project includes code concepts from the ComfyUI-Custom-Scripts project (MIT) https://github.com/pythongosssss/ComfyUI-Custom-Scripts This project includes code concepts from the KJNodes for ComfyUI project (GPL 3.0) https://github.com/kijai/ComfyUI-KJNodes This project includes code concepts from the UE Nodes project (Apache 2.0) https://github.com/chrisgoringe/cg-use-everywhere This project includes code concepts from the WAS Node Suite project (MIT) https://github.com/WASasquatch/was-node-suite-comfyui This project includes code concepts from the rgthree-comfy project (MIT) https://github.com/rgthree/rgthree-comfy This project includes code concepts from the FizzNodes project (MIT) https://github.com/FizzleDorf/ComfyUI_FizzNodes ================================================ FILE: README.md ================================================ ComfyUI Nodes for procedural masking, live composition and video manipulation


JOVIMETRIX IS ONLY GUARANTEED TO SUPPORT COMFYUI 0.1.3+ and FRONTEND 1.2.40+
IF YOU NEED AN OLDER VERSION, PLEASE DO NOT UPDATE.

![KNIVES!](https://badgen.net/github/open-issues/amorano/jovimetrix) ![FORKS!](https://badgen.net/github/forks/amorano/jovimetrix)

# SPONSORSHIP

Please consider sponsoring me if you enjoy the results of my work, code, or documentation. Sponsorship is a good way to keep code development open and free.
| | | | |
|-|-|-|-|
| [![BE A GITHUB SPONSOR ❤️](https://img.shields.io/badge/sponsor-30363D?style=for-the-badge&logo=GitHub-Sponsors&logoColor=#EA4AAA)](https://github.com/sponsors/Amorano) | [![DIRECTLY SUPPORT ME VIA PAYPAL](https://img.shields.io/badge/PayPal-00457C?style=for-the-badge&logo=paypal&logoColor=white)](https://www.paypal.com/paypalme/onarom) | [![PATREON SUPPORTER](https://img.shields.io/badge/Patreon-F96854?style=for-the-badge&logo=patreon&logoColor=white)](https://www.patreon.com/joviex) | [![SUPPORT ME ON KO-FI!](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/alexandermorano) |
## HIGHLIGHTS

* 30 function `BLEND` node -- subtract, multiply and overlay like the best
* Vector support for 2, 3, 4 size tuples of integer or float type
* Specific RGB/RGBA color vector support that provides a color picker
* All Image inputs support RGBA, RGB or pure MASK input
* Full Text generation support using installed system fonts
* Basic parametric shape (Circle, Square, Polygon) generator
* `COLOR BLIND` check support
* `COLOR MATCH` against existing images or create a custom LUT
* Generate `COLOR THEORY` spreads from an existing image
* `COLOR MEANS` to generate palettes for existing images to keep other images in the same tonal ranges
* `PIXEL SPLIT` separate the channels of an image to manipulate and `PIXEL MERGE` them back together
* `STACK` a series of images into a new single image vertically, horizontally or in a grid
* Or `FLATTEN` a batch of images into a single image with each image subsequently added on top (slap comp)
* `VALUE` Node has conversion support for all ComfyUI types and some 3rd party types (2DCoords, Mixlab Layers)
* `LERP` node to linear interpolate all ComfyUI and Jovimetrix value types
* Automatic conversion of Mixlab Layer types into Image types
* Generic `ARRAY` that can Merge, Split, Select, Slice or Randomize a list of ANY type
* `STRINGER` node to perform specific string manipulation operations: Split, Join, Replace, Slice
* A `QUEUE` Node that supports recursing directories, filtering multiple file types and batch loading
* Use the `OP UNARY` and `OP BINARY` nodes to perform single and double type functions across all ComfyUI and Jovimetrix value types
* Manipulate vectors with the `SWIZZLE` node to swap their XYZW positions
* `DELAY` execution at certain parts in a workflow, with or without a timeout
* Generate curve data with the `TICK` and `WAVE GEN` nodes
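For illustration, the kind of interpolation the `LERP` node performs can be sketched in a few lines (a hypothetical standalone helper, not the node's actual implementation, which supports all ComfyUI and Jovimetrix value types):

```python
# Minimal sketch of linear interpolation over scalars and fixed-size vectors,
# in the spirit of the LERP node. Hypothetical helper, not the node's code.
def lerp(a, b, t: float):
    """Interpolate between a and b by t in [0, 1]; accepts numbers or tuples."""
    if isinstance(a, (tuple, list)):
        # interpolate component-wise, preserving the container type
        return type(a)(lerp(x, y, t) for x, y in zip(a, b))
    return a + (b - a) * t

print(lerp(0.0, 10.0, 0.25))                 # 2.5
print(lerp((0, 0, 0), (255, 128, 64), 0.5))  # (127.5, 64.0, 32.0)
```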

AS OF VERSION 2.0.0, THESE NODES HAVE MIGRATED TO OTHER, SMALLER PACKAGES

Migrated to [Jovi_GLSL](https://github.com/Amorano/Jovi_GLSL)
~~* GLSL shader support~~
~~* * `GLSL Node` provides raw access to Vertex and Fragment shaders~~
~~* * `Dynamic GLSL` dynamically convert existing GLSL script files into ComfyUI nodes at runtime~~
~~* * Over 20+ hand-written GLSL nodes to speed up specific tasks better done on the GPU (10x speedup in most cases)~~

Migrated to [Jovi_Capture](https://github.com/Amorano/Jovi_Capture)
~~* `STREAM READER` node to capture monitor, webcam or url media~~
~~* `STREAM WRITER` node to export media to a HTTP/HTTPS server for OBS or other 3rd party streaming software~~

Migrated to [Jovi_Spout](https://github.com/Amorano/Jovi_Spout)
~~* `SPOUT` streaming support *WINDOWS ONLY*~~

Migrated to [Jovi_MIDI](https://github.com/Amorano/Jovi_MIDI)
~~* `MIDI READER` Captures MIDI messages from an external MIDI device or controller~~
~~* `MIDI MESSAGE` Processes MIDI messages received from an external MIDI controller or device~~
~~* `MIDI FILTER` (advanced filter) to select messages from MIDI streams and devices~~
~~* `MIDI FILTER EZ` simpler interface to filter single messages from MIDI streams and devices~~

Migrated to [Jovi_Help](https://github.com/Amorano/Jovi_Help)
~~* Help System for *ALL NODES* that will auto-parse unknown nodes for their type data and descriptions~~

Migrated to [Jovi_Colorizer](https://github.com/Amorano/Jovi_Colorizer)
~~* Colorization for *ALL NODES* using their own node settings, their node group or via regex pattern matching~~

## UPDATES

DO NOT UPDATE JOVIMETRIX PAST VERSION 1.7.48 IF YOU DON'T WANT TO LOSE A BUNCH OF NODES

Nodes that have been removed are in various other packages now. You can install those specific packages to get the functionality back, but I have no way to migrate the actual connections -- you will need to do that manually.

**Nodes that have been migrated:**

* ALL MIDI NODES:
* * MIDIMessageNode
* * MIDIReaderNode
* * MIDIFilterNode
* * MIDIFilterEZNode

[Migrated to Jovi_MIDI](https://github.com/Amorano/Jovi_MIDI)

* ALL STREAMING NODES:
* * StreamReaderNode
* * StreamWriterNode

[Migrated to Jovi_Capture](https://github.com/Amorano/Jovi_Capture)

* * SpoutWriterNode

[Migrated to Jovi_Spout](https://github.com/Amorano/Jovi_Spout)

* ALL GLSL NODES:
* * GLSL
* * GLSL BLEND LINEAR
* * GLSL COLOR CONVERSION
* * GLSL COLOR PALETTE
* * GLSL CONICAL GRADIENT
* * GLSL DIRECTIONAL WARP
* * GLSL FILTER RANGE
* * GLSL GRAYSCALE
* * GLSL HSV ADJUST
* * GLSL INVERT
* * GLSL NORMAL
* * GLSL NORMAL BLEND
* * GLSL POSTERIZE
* * GLSL TRANSFORM

[Migrated to Jovi_GLSL](https://github.com/Amorano/Jovi_GLSL)

**2025/09/04** @2.1.25:
* Auto-level for `LEVEL` node
* `HISTOGRAM` node
* new support for cozy_comfy (v3+ comfy node spec)

**2025/08/15** @2.1.23:
* fixed regression in `FLATTEN` node

**2025/08/12** @2.1.22:
* tick allows for float/int start

**2025/08/03** @2.1.21:
* fixed css for `DELAY` node
* delay node timer extended to 150+ days
* all tooltips checked to be TUPLE entries

**2025/07/31** @2.1.20:
* support for tensors in `OP UNARY` or `OP BINARY`

**2025/07/27** @2.1.19:
* added `BATCH TO LIST` node
* `VECTOR` node(s) default step changed to 0.1

**2025/07/13** @2.1.18:
* allow numpy>=1.25.0

**2025/07/07** @2.1.17:
* updated to cozy_comfyui 0.0.39

**2025/07/04** @2.1.16:
* Type hint updates

**2025/06/28** @2.1.15:
* `GRAPH NODE` updated to use new mechanism in cozy_comfyui 0.0.37 for list of list parse on dynamics

**2025/06/18** @2.1.14:
* fixed resize_matte mode to use full mask/alpha

**2025/06/18** @2.1.13:
* allow hex codes for vectors
* updated to cozy_comfyui 0.0.36
**2025/06/07** @2.1.11:
* cleaned up image_convert for grayscale/mask
* updated to cozy_comfyui 0.0.35

**2025/06/06** @2.1.10:
* updated to comfy_cozy 0.0.34
* default width and height to 1
* removed old debug string
* akashic try to parse unicode emoji strings

**2025/06/02** @2.1.9:
* fixed dynamic nodes that already start with inputs (dynamic input wouldn't show up)
* patched Queue node to work with new `COMBO` style of inputs

**2025/05/29** @2.1.8:
* updated to comfy_cozy 0.0.32

**2025/05/27** @2.1.7:
* re-ranged all FLOAT to their maximum representations
* clerical cleanup for JS callbacks
* added `SPLIT` node to break images into vertical or horizontal slices

**2025/05/25** @2.1.6:
* loosened restriction for python 3.11+ to allow for 3.10+
* * I make zero guarantee that will actually let 3.10 work and I will not support 3.10

**2025/05/16** @2.1.5:
* Full compatibility with [ComfyMath Vector](https://github.com/evanspearman/ComfyMath) nodes
* Masks can be inverted at inputs
* `EnumScaleInputMode` for `BLEND` node to adjust inputs prior to operation
* Allow images or mask inputs in `CONSTANT` node to fall through
* `VALUE` nodes return all items as list, not just >1
* Added explicit MASK option for `PIXEL SPLIT` node
* Split `ADJUST` node into `BLUR`, `EDGE`, `LIGHT`, `PIXEL`
* Migrated most of image lib to cozy_comfyui
* widget_vector tweaked to disallow non-numerics
* widgetHookControl streamlined

**2025/05/08** @2.1.4:
* Support for NUMERICAL (bool, int, float, vecN) inputs on value inputs

**2025/05/08** @2.1.3:
* fixed for VEC* types using MIN/MAX

**2025/05/07** @2.1.2:
* `TICK` with normalization and new series generator

**2025/05/06** @2.1.1:
* fixed IS_CHANGED in graphnode
* updated `TICK SIMPLE` in situ of `TICK` to be inclusive of the end range
* migrated ease, normalization and wave functions to cozy_comfyui
* first pass preserving values in multi-type fields

**2025/05/05** @2.1.0:
* Cleaned up all node defaults
* Vector nodes aligned for list outputs
* Cleaned all emoji from input/output
* Clear all EnumConvertTypes and align with new comfy_cozy
* Lexicon defines come from Comfy_Cozy module
* `OP UNARY` fixed factorial
* Added fill array mode for `OP UNARY`
* removed `STEREOGRAM` and `STEREOSCOPIC` -- they were designed poorly

**2025/05/01** @2.0.11:
* unified widget_vector.js
* new comfy_cozy support
* auto-convert all VEC*INT -> VEC* float types
* readability for node definitions

**2025/04/24** @2.0.10:
* `SHAPE NODE` fixed for transparency blends when using blurred masks

**2025/04/24** @2.0.9:
* removed inversion in pixel splitter

**2025/04/23** @2.0.8:
* categories aligned to new comfy-cozy support

**2025/04/19** @2.0.7:
* all JS messages fixed

**2025/04/19** @2.0.6:
* fixed reset message from JS

**2025/04/19** @2.0.5:
* patched new frontend input mechanism for dynamic inputs
* reduced requirements
* removed old vector conversions waiting for new frontend mechanism

**2025/04/17** @2.0.4:
* fixed bug in resize_matte `MODE` that would fail when the matte was smaller than the input image
* migrated image_crop functions to cozy_comfyui

**2025/04/12** @2.0.0:
* REMOVED ALL STREAMING, MIDI and GLSL nodes for new packages, HELP System and Node Colorization system:
  [Jovi_Capture - Web camera, Monitor Capture, Window Capture](https://github.com/Amorano/Jovi_Capture)
  [Jovi_MIDI - MIDI capture and MIDI message parsing](https://github.com/Amorano/Jovi_MIDI)
  [Jovi_GLSL - GLSL Shaders](https://github.com/Amorano/Jovi_GLSL)
  [Jovi_Spout - SPOUT Streaming support](https://github.com/Amorano/Jovi_Spout)
  [Jovi_Colorizer - Node Colorization](https://github.com/Amorano/Jovi_Colorizer)
  [Jovi_Help - Node Help](https://github.com/Amorano/Jovi_Help)
* all nodes will accept `LIST` or `BATCH` and process as if all elements are in a list
* patched constant node to work with `MATTE_RESIZE`
* patched import loader to work with old/new comfyui
* missing array web node partial
* removed array and no one even noticed
* all inputs should be treated as a list even single elements []
* explicit vector node supports `TICK` node batch support output
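The "all inputs should be treated as a list even single elements" rule above can be sketched as a tiny normalization step (hypothetical helper, not the actual cozy_comfyui `parse_param` implementation):

```python
# Sketch of the "everything is a list" input rule: single values are wrapped
# so node code can always iterate. Hypothetical, not the cozy_comfyui code.
def as_list(value):
    if value is None:
        return [None]
    if isinstance(value, (list, tuple)):
        return list(value)
    return [value]

print(as_list(5))          # [5]
print(as_list([1, 2, 3]))  # [1, 2, 3]
print(as_list(None))       # [None]
```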
# INSTALLATION [Please see the wiki for advanced use of the environment variables used during startup](https://github.com/Amorano/Jovimetrix/wiki/B.-ASICS) ## COMFYUI MANAGER If you have [ComfyUI Manager](https://github.com/ltdrdata/ComfyUI-Manager) installed, simply search for Jovimetrix and install from the manager's database. ## MANUAL INSTALL Clone the repository into your ComfyUI custom_nodes directory. You can clone the repository with the command: ``` git clone https://github.com/Amorano/Jovimetrix.git ``` You can then install the requirements by using the command: ``` .\python_embed\python.exe -s -m pip install -r requirements.txt ``` If you are using a virtual environment (venv), make sure it is activated before installation. Then install the requirements with the command: ``` pip install -r requirements.txt ``` # WHERE TO FIND ME You can find me on [![DISCORD](https://dcbadge.vercel.app/api/server/62TJaZ3Z5r?style=flat-square)](https://discord.gg/62TJaZ3Z5r). ================================================ FILE: __init__.py ================================================ """ ██  ██████  ██  ██ ██ ███  ███ ███████ ████████ ██████  ██ ██  ██  ██ ██    ██ ██  ██ ██ ████  ████ ██         ██    ██   ██ ██  ██ ██   ██ ██  ██ ██  ██ ██ ██ ████ ██ █████  ██  ██████  ██   ███   ██ ██ ██  ██  ██  ██  ██ ██  ██  ██ ██     ██  ██   ██ ██  ██ ██   █████   ██████    ████   ██ ██      ██ ███████  ██  ██  ██ ██ ██   ██  Animation, Image Compositing & Procedural Creation @title: Jovimetrix @author: Alexander G. Morano @category: Compositing @reference: https://github.com/Amorano/Jovimetrix @tags: adjust, animate, compose, compositing, composition, device, flow, video, mask, shape, animation, logic @description: Animation via tick. Parameter manipulation with wave generator. Unary and Binary math support. Value convert int/float/bool, VectorN and Image, Mask types. Shape mask generator. Stack images, do channel ops, split, merge and randomize arrays and batches. 
Load images & video from anywhere. Dynamic bus routing. Save output anywhere! Flatten, crop, transform; check colorblindness or linear interpolate values. @node list: TickNode, TickSimpleNode, WaveGeneratorNode BitSplitNode, ComparisonNode, LerpNode, OPUnaryNode, OPBinaryNode, StringerNode, SwizzleNode, ColorBlindNode, ColorMatchNode, ColorKMeansNode, ColorTheoryNode, GradientMapNode, AdjustNode, BlendNode, FilterMaskNode, PixelMergeNode, PixelSplitNode, PixelSwapNode, ThresholdNode, ConstantNode, ShapeNode, TextNode, CropNode, FlattenNode, StackNode, TransformNode, ArrayNode, QueueNode, QueueTooNode, AkashicNode, GraphNode, ImageInfoNode, DelayNode, ExportNode, RouteNode, SaveOutputNode ValueNode, Vector2Node, Vector3Node, Vector4Node, """ __author__ = "Alexander G. Morano" __email__ = "amorano@gmail.com" from pathlib import Path from cozy_comfyui import \ logger from cozy_comfyui.node import \ loader JOV_DOCKERENV = False try: with open('/proc/1/cgroup', 'rt') as f: content = f.read() JOV_DOCKERENV = any(x in content for x in ['docker', 'kubepods', 'containerd']) except FileNotFoundError: pass if JOV_DOCKERENV: logger.info("RUNNING IN A DOCKER") # ============================================================================== # === GLOBAL === # ============================================================================== PACKAGE = "JOVIMETRIX" WEB_DIRECTORY = "./web" ROOT = Path(__file__).resolve().parent NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS = loader(ROOT, PACKAGE, "core", f"{PACKAGE} 🔺🟩🔵", False) ================================================ FILE: core/__init__.py ================================================ from enum import Enum class EnumFillOperation(Enum): DEFAULT = 0 FILL_ZERO = 20 FILL_ALL = 10 ================================================ FILE: core/adjust.py ================================================ """ Jovimetrix - Adjust """ import sys from enum import Enum from typing import Any from typing_extensions import override import 
comfy.model_management from comfy_api.latest import ComfyExtension, io from comfy.utils import ProgressBar from cozy_comfyui import \ InputType, RGBAMaskType, EnumConvertType, \ deep_merge, parse_param, zip_longest_fill from cozy_comfyui.lexicon import \ Lexicon from cozy_comfy.node import \ COZY_TYPE_IMAGE as COZY_TYPE_IMAGEv3, \ CozyImageNode as CozyImageNodev3 from cozy_comfyui.node import \ COZY_TYPE_IMAGE, \ CozyImageNode from cozy_comfyui.image.adjust import \ EnumAdjustBlur, EnumAdjustColor, EnumAdjustEdge, EnumAdjustMorpho, \ image_contrast, image_brightness, image_equalize, image_gamma, \ image_exposure, image_pixelate, image_pixelscale, \ image_posterize, image_quantize, image_sharpen, image_morphology, \ image_emboss, image_blur, image_edge, image_color, \ image_autolevel, image_autolevel_histogram from cozy_comfyui.image.channel import \ channel_solid from cozy_comfyui.image.compose import \ image_levels from cozy_comfyui.image.convert import \ tensor_to_cv, cv_to_tensor_full, image_mask, image_mask_add from cozy_comfyui.image.misc import \ image_stack # ============================================================================== # === GLOBAL === # ============================================================================== JOV_CATEGORY = "ADJUST" # ============================================================================== # === ENUMERATION === # ============================================================================== class EnumAutoLevel(Enum): MANUAL = 10 AUTO = 20 HISTOGRAM = 30 class EnumAdjustLight(Enum): EXPOSURE = 10 GAMMA = 20 BRIGHTNESS = 30 CONTRAST = 40 EQUALIZE = 50 class EnumAdjustPixel(Enum): PIXELATE = 10 PIXELSCALE = 20 QUANTIZE = 30 POSTERIZE = 40 # ============================================================================== # === CLASS === # ============================================================================== class AdjustBlurNode(CozyImageNode): NAME = "ADJUST: BLUR (JOV)" CATEGORY = JOV_CATEGORY DESCRIPTION = 
""" Enhance and modify images with various blur effects. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.FUNCTION: (EnumAdjustBlur._member_names_, { "default": EnumAdjustBlur.BLUR.name,}), Lexicon.RADIUS: ("INT", { "default": 3, "min": 3}), } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustBlur, EnumAdjustBlur.BLUR.name) radius = parse_param(kw, Lexicon.RADIUS, EnumConvertType.INT, 3) params = list(zip_longest_fill(pA, op, radius)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, op, radius) in enumerate(params): pA = channel_solid() if pA is None else tensor_to_cv(pA) # height, width = pA.shape[:2] pA = image_blur(pA, op, radius) #pA = image_blend(pA, img_new, mask) images.append(cv_to_tensor_full(pA)) pbar.update_absolute(idx) return image_stack(images) class AdjustColorNode(CozyImageNode): NAME = "ADJUST: COLOR (JOV)" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Enhance and modify images with various blur effects. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.FUNCTION: (EnumAdjustColor._member_names_, { "default": EnumAdjustColor.RGB.name,}), Lexicon.VEC: ("VEC3", { "default": (0,0,0), "mij": -1, "maj": 1, "step": 0.025}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustColor, EnumAdjustColor.RGB.name) vec = parse_param(kw, Lexicon.VEC, EnumConvertType.VEC3, (0,0,0)) params = list(zip_longest_fill(pA, op, vec)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, op, vec) in enumerate(params): pA = channel_solid() if pA is None else tensor_to_cv(pA) pA = image_color(pA, op, vec[0], vec[1], vec[2]) images.append(cv_to_tensor_full(pA)) pbar.update_absolute(idx) return image_stack(images) class AdjustEdgeNode(CozyImageNode): NAME = "ADJUST: EDGE (JOV)" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Enhanced edge detection. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.FUNCTION: (EnumAdjustEdge._member_names_, { "default": EnumAdjustEdge.CANNY.name,}), Lexicon.RADIUS: ("INT", { "default": 1, "min": 1}), Lexicon.ITERATION: ("INT", { "default": 1, "min": 1, "max": 1000}), Lexicon.LOHI: ("VEC2", { "default": (0, 1), "mij": 0, "maj": 1, "step": 0.01}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustEdge, EnumAdjustEdge.CANNY.name) radius = parse_param(kw, Lexicon.RADIUS, EnumConvertType.INT, 1) count = parse_param(kw, Lexicon.ITERATION, EnumConvertType.INT, 1) lohi = parse_param(kw, Lexicon.LOHI, EnumConvertType.VEC2, (0,1)) params = list(zip_longest_fill(pA, op, radius, count, lohi)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, op, radius, count, lohi) in enumerate(params): pA = channel_solid() if pA is None else tensor_to_cv(pA) alpha = image_mask(pA) pA = image_edge(pA, op, radius, count, lohi[0], lohi[1]) pA = image_mask_add(pA, alpha) images.append(cv_to_tensor_full(pA)) pbar.update_absolute(idx) return image_stack(images) class AdjustEmbossNode(CozyImageNode): NAME = "ADJUST: EMBOSS (JOV)" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Emboss boss mode. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.HEADING: ("FLOAT", { "default": -45, "min": -sys.float_info.max, "max": sys.float_info.max, "step": 0.1}), Lexicon.ELEVATION: ("FLOAT", { "default": 45, "min": -sys.float_info.max, "max": sys.float_info.max, "step": 0.1}), Lexicon.DEPTH: ("FLOAT", { "default": 10, "min": 0, "max": sys.float_info.max, "step": 0.1, "tooltip": "Depth perceived from the light angles above"}), } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) heading = parse_param(kw, Lexicon.HEADING, EnumConvertType.FLOAT, -45) elevation = parse_param(kw, Lexicon.ELEVATION, EnumConvertType.FLOAT, 45) depth = parse_param(kw, Lexicon.DEPTH, EnumConvertType.FLOAT, 10) params = list(zip_longest_fill(pA, heading, elevation, depth)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, heading, elevation, depth) in enumerate(params): pA = channel_solid() if pA is None else tensor_to_cv(pA) alpha = image_mask(pA) pA = image_emboss(pA, heading, elevation, depth) pA = image_mask_add(pA, alpha) images.append(cv_to_tensor_full(pA)) pbar.update_absolute(idx) return image_stack(images) class AdjustLevelNode(CozyImageNode): NAME = "ADJUST: LEVELS (JOV)" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Manual or automatic adjust image levels so that the darkest pixel becomes black and the brightest pixel becomes white, enhancing overall contrast. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.LMH: ("VEC3", { "default": (0,0.5,1), "mij": 0, "maj": 1, "step": 0.01, "label": ["LOW", "MID", "HIGH"]}), Lexicon.RANGE: ("VEC2", { "default": (0, 1), "mij": 0, "maj": 1, "step": 0.01, "label": ["IN", "OUT"]}), Lexicon.MODE: (EnumAutoLevel._member_names_, { "default": EnumAutoLevel.MANUAL.name, "tooltip": "Autolevel linearly or with Histogram bin values, per channel" }), "clip": ("FLOAT", { "default": 0.5, "min": 0, "max": 1.0, "step": 0.01 }) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) LMH = parse_param(kw, Lexicon.LMH, EnumConvertType.VEC3, (0,0.5,1)) inout = parse_param(kw, Lexicon.RANGE, EnumConvertType.VEC2, (0,1)) mode = parse_param(kw, Lexicon.MODE, EnumAutoLevel, EnumAutoLevel.AUTO.name) clip = parse_param(kw, "clip", EnumConvertType.FLOAT, 0.5, 0, 1) params = list(zip_longest_fill(pA, LMH, inout, mode, clip)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, LMH, inout, mode, clip) in enumerate(params): pA = channel_solid() if pA is None else tensor_to_cv(pA) ''' h, s, v = hsv img_new = image_hsv(img_new, h, s, v) ''' match mode: case EnumAutoLevel.MANUAL: low, mid, high = LMH start, end = inout pA = image_levels(pA, low, mid, high, start, end) case EnumAutoLevel.AUTO: pA = image_autolevel(pA) case EnumAutoLevel.HISTOGRAM: pA = image_autolevel_histogram(pA, clip) images.append(cv_to_tensor_full(pA)) pbar.update_absolute(idx) return image_stack(images) class AdjustLightNode(CozyImageNode): NAME = "ADJUST: LIGHT (JOV)" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Tonal adjustments. They can be applied individually or all at the same time in order: brightness, contrast, histogram equalization, exposure, and gamma correction. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.BRIGHTNESS: ("FLOAT", { "default": 0.5, "min": 0, "max": 1, "step": 0.01}), Lexicon.CONTRAST: ("FLOAT", { "default": 0, "min": -1, "max": 1, "step": 0.01}), Lexicon.EQUALIZE: ("BOOLEAN", { "default": False}), Lexicon.EXPOSURE: ("FLOAT", { "default": 1, "min": -8, "max": 8, "step": 0.01}), Lexicon.GAMMA: ("FLOAT", { "default": 1, "min": 0, "max": 8, "step": 0.01}), } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) brightness = parse_param(kw, Lexicon.BRIGHTNESS, EnumConvertType.FLOAT, 0.5) contrast = parse_param(kw, Lexicon.CONTRAST, EnumConvertType.FLOAT, 0) equalize = parse_param(kw, Lexicon.EQUALIZE, EnumConvertType.FLOAT, 0) exposure = parse_param(kw, Lexicon.EXPOSURE, EnumConvertType.FLOAT, 0) gamma = parse_param(kw, Lexicon.GAMMA, EnumConvertType.FLOAT, 0) params = list(zip_longest_fill(pA, brightness, contrast, equalize, exposure, gamma)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, brightness, contrast, equalize, exposure, gamma) in enumerate(params): pA = channel_solid() if pA is None else tensor_to_cv(pA) alpha = image_mask(pA) brightness = 2. * (brightness - 0.5) if brightness != 0: pA = image_brightness(pA, brightness) if contrast != 0: pA = image_contrast(pA, contrast) if equalize: pA = image_equalize(pA) if exposure != 1: pA = image_exposure(pA, exposure) if gamma != 1: pA = image_gamma(pA, gamma) ''' h, s, v = hsv img_new = image_hsv(img_new, h, s, v) l, m, h = level img_new = image_levels(img_new, l, h, m, gamma) ''' pA = image_mask_add(pA, alpha) images.append(cv_to_tensor_full(pA)) pbar.update_absolute(idx) return image_stack(images) class AdjustMorphNode(CozyImageNode): NAME = "ADJUST: MORPHOLOGY (JOV)" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Operations based on the image shape. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.FUNCTION: (EnumAdjustMorpho._member_names_, { "default": EnumAdjustMorpho.DILATE.name,}), Lexicon.RADIUS: ("INT", { "default": 1, "min": 1}), Lexicon.ITERATION: ("INT", { "default": 1, "min": 1, "max": 1000}), } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustMorpho, EnumAdjustMorpho.DILATE.name) kernel = parse_param(kw, Lexicon.RADIUS, EnumConvertType.INT, 1) count = parse_param(kw, Lexicon.ITERATION, EnumConvertType.INT, 1) params = list(zip_longest_fill(pA, op, kernel, count)) images: list[Any] = [] pbar = ProgressBar(len(params)) for idx, (pA, op, kernel, count) in enumerate(params): pA = channel_solid() if pA is None else tensor_to_cv(pA) alpha = image_mask(pA) pA = image_morphology(pA, op, kernel, count) pA = image_mask_add(pA, alpha) images.append(cv_to_tensor_full(pA)) pbar.update_absolute(idx) return image_stack(images) class AdjustPixelNode(CozyImageNode): NAME = "ADJUST: PIXEL (JOV)" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Pixel-level transformations. The val parameter controls the intensity or resolution of the effect, depending on the operation. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.FUNCTION: (EnumAdjustPixel._member_names_, { "default": EnumAdjustPixel.PIXELATE.name,}), Lexicon.VALUE: ("FLOAT", { "default": 0, "min": 0, "max": 1, "step": 0.01}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustPixel, EnumAdjustPixel.PIXELATE.name) val = parse_param(kw, Lexicon.VALUE, EnumConvertType.FLOAT, 0) params = list(zip_longest_fill(pA, op, val)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, op, val) in enumerate(params): pA = channel_solid() if pA is None else tensor_to_cv(pA, chan=4) alpha = image_mask(pA) match op: case EnumAdjustPixel.PIXELATE: pA = image_pixelate(pA, val / 2.) case EnumAdjustPixel.PIXELSCALE: pA = image_pixelscale(pA, val) case EnumAdjustPixel.QUANTIZE: pA = image_quantize(pA, val) case EnumAdjustPixel.POSTERIZE: pA = image_posterize(pA, val) pA = image_mask_add(pA, alpha) images.append(cv_to_tensor_full(pA)) pbar.update_absolute(idx) return image_stack(images) class AdjustSharpenNode(CozyImageNode): NAME = "ADJUST: SHARPEN (JOV)" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Sharpen the pixels of an image. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.AMOUNT: ("FLOAT", { "default": 0, "min": 0, "max": 1, "step": 0.01}), Lexicon.THRESHOLD: ("FLOAT", { "default": 0, "min": 0, "max": 1, "step": 0.01}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) amount = parse_param(kw, Lexicon.AMOUNT, EnumConvertType.FLOAT, 0) threshold = parse_param(kw, Lexicon.THRESHOLD, EnumConvertType.FLOAT, 0) params = list(zip_longest_fill(pA, amount, threshold)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, amount, threshold) in enumerate(params): pA = channel_solid() if pA is None else tensor_to_cv(pA) pA = image_sharpen(pA, amount / 2., threshold=threshold / 25.5) images.append(cv_to_tensor_full(pA)) pbar.update_absolute(idx) return image_stack(images) class AdjustSharpenNodev3(CozyImageNodev3): @classmethod def define_schema(cls, **kwarg) -> io.Schema: schema = super(**kwarg).define_schema() schema.display_name = "ADJUST: SHARPEN (JOV)" schema.category = JOV_CATEGORY schema.description = "Sharpen the pixels of an image." 
schema.inputs.extend([ io.MultiType.Input( id=Lexicon.IMAGE[0], types=COZY_TYPE_IMAGEv3, display_name=Lexicon.IMAGE[0], optional=True, tooltip=Lexicon.IMAGE[1] ), io.Float.Input( id=Lexicon.AMOUNT[0], display_name=Lexicon.AMOUNT[0], optional=True, default= 0, min=0, max=1, step=0.01, tooltip=Lexicon.AMOUNT[1] ), io.Float.Input( id=Lexicon.THRESHOLD[0], display_name=Lexicon.THRESHOLD[0], optional=True, default= 0, min=0, max=1, step=0.01, tooltip=Lexicon.THRESHOLD[1] ) ]) return schema @classmethod def execute(self, *arg, **kw) -> io.NodeOutput: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) amount = parse_param(kw, Lexicon.AMOUNT, EnumConvertType.FLOAT, 0) threshold = parse_param(kw, Lexicon.THRESHOLD, EnumConvertType.FLOAT, 0) params = list(zip_longest_fill(pA, amount, threshold)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, amount, threshold) in enumerate(params): pA = channel_solid() if pA is None else tensor_to_cv(pA) pA = image_sharpen(pA, amount / 2., threshold=threshold / 25.5) images.append(cv_to_tensor_full(pA)) pbar.update_absolute(idx) return io.NodeOutput(image_stack(images)) class AdjustExtension(ComfyExtension): @override async def get_node_list(self) -> list[type[io.ComfyNode]]: return [ AdjustSharpenNodev3 ] async def comfy_entrypoint() -> AdjustExtension: return AdjustExtension() ================================================ FILE: core/anim.py ================================================ """ Jovimetrix - Animation """ import sys import numpy as np from comfy.utils import ProgressBar from cozy_comfyui import \ InputType, EnumConvertType, \ deep_merge, parse_param, zip_longest_fill from cozy_comfyui.lexicon import \ Lexicon from cozy_comfyui.node import \ CozyBaseNode from cozy_comfyui.maths.ease import \ EnumEase, \ ease_op from cozy_comfyui.maths.norm import \ EnumNormalize, \ norm_op from cozy_comfyui.maths.wave import \ EnumWave, \ wave_op from cozy_comfyui.maths.series import \ seriesLinear # 
==============================================================================
# === GLOBAL ===
# ==============================================================================

JOV_CATEGORY = "ANIMATION"

# ==============================================================================
# === CLASS ===
# ==============================================================================

class ResultObject(object):
    def __init__(self, *arg, **kw) -> None:
        self.frame = []
        self.lin = []
        self.fixed = []
        self.trigger = []
        self.batch = []

class TickNode(CozyBaseNode):
    NAME = "TICK (JOV) ⏱"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = ("FLOAT", "FLOAT", "FLOAT", "FLOAT", "FLOAT")
    RETURN_NAMES = ("VALUE", "LINEAR", "EASED", "SCALAR_LIN", "SCALAR_EASE")
    OUTPUT_IS_LIST = (True, True, True, True, True,)
    OUTPUT_TOOLTIPS = (
        "List of values",
        "Normalized values",
        "Eased values",
        "Scalar normalized values",
        "Scalar eased values",
    )
    DESCRIPTION = """
Value generator with normalized values based on time interval.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                # forces a MOD on CYCLE
                Lexicon.START: ("FLOAT", {
                    "default": 0, "min": -sys.maxsize, "max": sys.maxsize }),
                # interval between frames
                Lexicon.STEP: ("FLOAT", {
                    "default": 0, "min": -sys.float_info.max, "max": sys.float_info.max,
                    "precision": 3,
                    "tooltip": "Amount to add to each frame per tick" }),
                # how many frames to dump....
                Lexicon.COUNT: ("INT", {
                    "default": 1, "min": 1, "max": 1500 }),
                Lexicon.LOOP: ("INT", {
                    "default": 0, "min": 0, "max": sys.maxsize,
                    "tooltip": "What value before looping starts.
0 means linear playback (no loop point)" }), Lexicon.PINGPONG: ("BOOLEAN", { "default": False }), Lexicon.EASE: (EnumEase._member_names_, { "default": EnumEase.LINEAR.name}), Lexicon.NORMALIZE: (EnumNormalize._member_names_, { "default": EnumNormalize.MINMAX2.name}), Lexicon.SCALAR: ("FLOAT", { "default": 1, "min": 0, "max": sys.float_info.max }) } }) return Lexicon._parse(d) def run(self, **kw) -> tuple[float, ...]: """ Generates a series of numbers with various options including: - Custom start value (supporting floating point and negative numbers) - Custom step value (supporting floating point and negative numbers) - Fixed number of frames - Custom loop point (series restarts after reaching this many steps) - Ping-pong option (reverses direction at end points) - Support for easing functions - Normalized output 0..1, -1..1, L2 or ZScore """ start = parse_param(kw, Lexicon.START, EnumConvertType.FLOAT, 0)[0] step = parse_param(kw, Lexicon.STEP, EnumConvertType.FLOAT, 0)[0] count = parse_param(kw, Lexicon.COUNT, EnumConvertType.INT, 1, 1, 1500)[0] loop = parse_param(kw, Lexicon.LOOP, EnumConvertType.INT, 0, 0)[0] pingpong = parse_param(kw, Lexicon.PINGPONG, EnumConvertType.BOOLEAN, False)[0] ease = parse_param(kw, Lexicon.EASE, EnumEase, EnumEase.LINEAR.name)[0] normalize = parse_param(kw, Lexicon.NORMALIZE, EnumNormalize, EnumNormalize.MINMAX1.name)[0] scalar = parse_param(kw, Lexicon.SCALAR, EnumConvertType.FLOAT, 1, 0)[0] if step == 0: step = 1 cycle = seriesLinear(start, step, count, loop, pingpong) linear = norm_op(normalize, np.array(cycle)) eased = ease_op(ease, linear, len(linear)) scalar_linear = linear * scalar scalar_eased = eased * scalar return ( cycle, linear.tolist(), eased.tolist(), scalar_linear.tolist(), scalar_eased.tolist(), ) class WaveGeneratorNode(CozyBaseNode): NAME = "WAVE GEN (JOV) 🌊" NAME_PRETTY = "WAVE GEN (JOV) 🌊" CATEGORY = JOV_CATEGORY RETURN_TYPES = ("FLOAT", "INT", ) RETURN_NAMES = ("FLOAT", "INT", ) DESCRIPTION = """ Produce 
waveforms like sine, square, or sawtooth with adjustable frequency, amplitude,
phase, and offset. It's handy for creating oscillating patterns or controlling
animation dynamics. This node emits both continuous floating-point values and
integer representations of the generated waves.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.WAVE: (EnumWave._member_names_, {
                    "default": EnumWave.SIN.name}),
                Lexicon.FREQ: ("FLOAT", {
                    "default": 1, "min": 0, "max": sys.float_info.max, "step": 0.01,}),
                Lexicon.AMP: ("FLOAT", {
                    "default": 1, "min": 0, "max": sys.float_info.max, "step": 0.01,}),
                Lexicon.PHASE: ("FLOAT", {
                    "default": 0, "min": 0, "max": 1, "step": 0.01}),
                Lexicon.OFFSET: ("FLOAT", {
                    "default": 0, "min": 0, "max": 1, "step": 0.001}),
                Lexicon.TIME: ("FLOAT", {
                    "default": 0, "min": 0, "max": sys.float_info.max, "step": 0.0001}),
                Lexicon.INVERT: ("BOOLEAN", {
                    "default": False}),
                Lexicon.ABSOLUTE: ("BOOLEAN", {
                    "default": False,}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[float, int]:
        op = parse_param(kw, Lexicon.WAVE, EnumWave, EnumWave.SIN.name)
        freq = parse_param(kw, Lexicon.FREQ, EnumConvertType.FLOAT, 1, 0)
        amp = parse_param(kw, Lexicon.AMP, EnumConvertType.FLOAT, 1, 0)
        phase = parse_param(kw, Lexicon.PHASE, EnumConvertType.FLOAT, 0, 0)
        shift = parse_param(kw, Lexicon.OFFSET, EnumConvertType.FLOAT, 0, 0)
        delta_time = parse_param(kw, Lexicon.TIME, EnumConvertType.FLOAT, 0, 0)
        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
        absolute = parse_param(kw, Lexicon.ABSOLUTE, EnumConvertType.BOOLEAN, False)
        results = []
        params = list(zip_longest_fill(op, freq, amp, phase, shift, delta_time, invert, absolute))
        pbar = ProgressBar(len(params))
        for idx, (op, freq, amp, phase, shift, delta_time, invert, absolute) in enumerate(params):
            # freq = 1. / freq
            if invert:
                # reciprocal amplitude; guard against amp == 0
                amp = 1. / amp if amp != 0 else 0
            val = wave_op(op, phase, freq, amp, shift, delta_time)
            if absolute:
                val = np.abs(val)
            val = max(-sys.float_info.max, min(val, sys.float_info.max))
            results.append([val, int(val)])
            pbar.update_absolute(idx)
        return *list(zip(*results)),

'''
class TickOldNode(CozyBaseNode):
    NAME = "TICK OLD (JOV) ⏱"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = ("INT", "FLOAT", "FLOAT", COZY_TYPE_ANY, COZY_TYPE_ANY,)
    RETURN_NAMES = ("VAL", "LINEAR", "FPS", "TRIGGER", "BATCH",)
    OUTPUT_IS_LIST = (True, False, False, False, False,)
    OUTPUT_TOOLTIPS = (
        "Current value for the configured tick as ComfyUI List",
        "Normalized tick value (0..1) based on BPM and Loop",
        "Current 'frame' in the tick based on FPS setting",
        "Based on the BPM settings, on beat hit, output the input at '⚡'",
        "Current batch of values for the configured tick as standard list which works in other Jovimetrix nodes",
    )
    DESCRIPTION = """
A timer and frame counter, emitting pulses or signals based on time intervals.
It allows precise synchronization and control over animation sequences, with
options to adjust FPS, BPM, and loop points. This node is useful for generating
time-based events or driving animations with rhythmic precision.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                # data to pass on a pulse of the loop
                Lexicon.TRIGGER: (COZY_TYPE_ANY, {
                    "default": None,
                    "tooltip": "Output to send when beat (BPM setting) is hit" }),
                # forces a MOD on CYCLE
                Lexicon.START: ("INT", {
                    "default": 0, "min": 0, "max": sys.maxsize, }),
                Lexicon.LOOP: ("INT", {
                    "default": 0, "min": 0, "max": sys.maxsize,
                    "tooltip": "Number of frames before looping starts. 0 means continuous playback (no loop point)" }),
                Lexicon.FPS: ("INT", {
                    "default": 24, "min": 1 }),
                Lexicon.BPM: ("INT", {
                    "default": 120, "min": 1, "max": 60000,
                    "tooltip": "BPM trigger rate to send the input.
If input is empty, TRUE is sent on trigger" }), Lexicon.NOTE: ("INT", { "default": 4, "min": 1, "max": 256, "tooltip": "Number of beats per measure. Quarter note is 4, Eighth is 8, 16 is 16, etc."}), # how many frames to dump.... Lexicon.BATCH: ("INT", { "default": 1, "min": 1, "max": 32767, "tooltip": "Number of frames wanted" }), Lexicon.STEP: ("INT", { "default": 0, "min": 0, "max": sys.maxsize }), } }) return Lexicon._parse(d) def run(self, ident, **kw) -> tuple[int, float, float, Any]: passthru = parse_param(kw, Lexicon.TRIGGER, EnumConvertType.ANY, None)[0] stride = parse_param(kw, Lexicon.STEP, EnumConvertType.INT, 0)[0] loop = parse_param(kw, Lexicon.LOOP, EnumConvertType.INT, 0)[0] start = parse_param(kw, Lexicon.START, EnumConvertType.INT, self.__frame)[0] if loop != 0: self.__frame %= loop fps = parse_param(kw, Lexicon.FPS, EnumConvertType.INT, 24, 1)[0] bpm = parse_param(kw, Lexicon.BPM, EnumConvertType.INT, 120, 1)[0] divisor = parse_param(kw, Lexicon.NOTE, EnumConvertType.INT, 4, 1)[0] beat = 60. / max(1., bpm) / divisor batch = parse_param(kw, Lexicon.BATCH, EnumConvertType.INT, 1, 1)[0] step_fps = 1. 
/ max(1., float(fps)) trigger = None results = ResultObject() pbar = ProgressBar(batch) step = stride if stride != 0 else max(1, loop / batch) for idx in range(batch): trigger = False lin = start if loop == 0 else start / loop fixed_step = math.fmod(start * step_fps, fps) if (math.fmod(fixed_step, beat) == 0): trigger = [passthru] if loop != 0: start %= loop results.frame.append(start) results.lin.append(float(lin)) results.fixed.append(float(fixed_step)) results.trigger.append(trigger) results.batch.append(start) start += step pbar.update_absolute(idx) return (results.frame, results.lin, results.fixed, results.trigger, results.batch,) ''' ================================================ FILE: core/calc.py ================================================ """ Jovimetrix - Calculation """ import sys import math import struct from enum import Enum from typing import Any from collections import Counter import torch from scipy.special import gamma from comfy.utils import ProgressBar from cozy_comfyui import \ logger, \ TensorType, InputType, EnumConvertType, \ deep_merge, parse_dynamic, parse_param, parse_value, zip_longest_fill from cozy_comfyui.lexicon import \ Lexicon from cozy_comfyui.node import \ COZY_TYPE_ANY, COZY_TYPE_NUMERICAL, COZY_TYPE_FULL, \ CozyBaseNode # ============================================================================== # === GLOBAL === # ============================================================================== JOV_CATEGORY = "CALC" # ============================================================================== # === ENUMERATION === # ============================================================================== class EnumBinaryOperation(Enum): ADD = 0 SUBTRACT = 1 MULTIPLY = 2 DIVIDE = 3 DIVIDE_FLOOR = 4 MODULUS = 5 POWER = 6 # TERNARY WITHOUT THE NEED MAXIMUM = 20 MINIMUM = 21 # VECTOR DOT_PRODUCT = 30 CROSS_PRODUCT = 31 # MATRIX # BITS # BIT_NOT = 39 BIT_AND = 60 BIT_NAND = 61 BIT_OR = 62 BIT_NOR = 63 BIT_XOR = 64 BIT_XNOR = 65 
BIT_LSHIFT = 66 BIT_RSHIFT = 67 # GROUP UNION = 80 INTERSECTION = 81 DIFFERENCE = 82 # WEIRD ONES BASE = 90 class EnumComparison(Enum): EQUAL = 0 NOT_EQUAL = 1 LESS_THAN = 2 LESS_THAN_EQUAL = 3 GREATER_THAN = 4 GREATER_THAN_EQUAL = 5 # LOGIC # NOT = 10 AND = 20 NAND = 21 OR = 22 NOR = 23 XOR = 24 XNOR = 25 # TYPE IS = 80 IS_NOT = 81 # GROUPS IN = 82 NOT_IN = 83 class EnumConvertString(Enum): SPLIT = 10 JOIN = 30 FIND = 40 REPLACE = 50 SLICE = 70 # start - end - step = -1, -1, 1 class EnumSwizzle(Enum): A_X = 0 A_Y = 10 A_Z = 20 A_W = 30 B_X = 9 B_Y = 11 B_Z = 21 B_W = 31 CONSTANT = 40 class EnumUnaryOperation(Enum): ABS = 0 FLOOR = 1 CEIL = 2 SQRT = 3 SQUARE = 4 LOG = 5 LOG10 = 6 SIN = 7 COS = 8 TAN = 9 NEGATE = 10 RECIPROCAL = 12 FACTORIAL = 14 EXP = 16 # COMPOUND MINIMUM = 20 MAXIMUM = 21 MEAN = 22 MEDIAN = 24 MODE = 26 MAGNITUDE = 30 NORMALIZE = 32 # LOGICAL NOT = 40 # BITWISE BIT_NOT = 45 COS_H = 60 SIN_H = 62 TAN_H = 64 RADIANS = 70 DEGREES = 72 GAMMA = 80 # IS_EVEN IS_EVEN = 90 IS_ODD = 91 # Dictionary to map each operation to its corresponding function OP_UNARY = { EnumUnaryOperation.ABS: lambda x: math.fabs(x), EnumUnaryOperation.FLOOR: lambda x: math.floor(x), EnumUnaryOperation.CEIL: lambda x: math.ceil(x), EnumUnaryOperation.SQRT: lambda x: math.sqrt(x), EnumUnaryOperation.SQUARE: lambda x: math.pow(x, 2), EnumUnaryOperation.LOG: lambda x: math.log(x) if x != 0 else -math.inf, EnumUnaryOperation.LOG10: lambda x: math.log10(x) if x != 0 else -math.inf, EnumUnaryOperation.SIN: lambda x: math.sin(x), EnumUnaryOperation.COS: lambda x: math.cos(x), EnumUnaryOperation.TAN: lambda x: math.tan(x), EnumUnaryOperation.NEGATE: lambda x: -x, EnumUnaryOperation.RECIPROCAL: lambda x: 1 / x if x != 0 else 0, EnumUnaryOperation.FACTORIAL: lambda x: math.factorial(abs(int(x))), EnumUnaryOperation.EXP: lambda x: math.exp(x), EnumUnaryOperation.NOT: lambda x: not x, EnumUnaryOperation.BIT_NOT: lambda x: ~int(x), EnumUnaryOperation.IS_EVEN: lambda x: x % 2 == 0, 
EnumUnaryOperation.IS_ODD: lambda x: x % 2 == 1, EnumUnaryOperation.COS_H: lambda x: math.cosh(x), EnumUnaryOperation.SIN_H: lambda x: math.sinh(x), EnumUnaryOperation.TAN_H: lambda x: math.tanh(x), EnumUnaryOperation.RADIANS: lambda x: math.radians(x), EnumUnaryOperation.DEGREES: lambda x: math.degrees(x), EnumUnaryOperation.GAMMA: lambda x: gamma(x) if x > 0 else 0, } # ============================================================================== # === SUPPORT === # ============================================================================== def to_bits(value: Any): if isinstance(value, int): return bin(value)[2:] elif isinstance(value, float): packed = struct.pack('>d', value) return ''.join(f'{byte:08b}' for byte in packed) elif isinstance(value, str): return ''.join(f'{ord(c):08b}' for c in value) else: raise TypeError(f"Unsupported type: {type(value)}") def vector_swap(pA: Any, pB: Any, swap_x: EnumSwizzle, swap_y:EnumSwizzle, swap_z:EnumSwizzle, swap_w:EnumSwizzle, default:list[float]) -> list[float]: """Swap out a vector's values with another vector's values, or a constant fill.""" def parse(target, targetB, swap, val) -> float: if swap == EnumSwizzle.CONSTANT: return val if swap in [EnumSwizzle.B_X, EnumSwizzle.B_Y, EnumSwizzle.B_Z, EnumSwizzle.B_W]: target = targetB swap = int(swap.value / 10) return target[swap] while len(pA) < 4: pA.append(0) while len(pB) < 4: pB.append(0) while len(default) < 4: default.append(0) return [ parse(pA, pB, swap_x, default[0]), parse(pA, pB, swap_y, default[1]), parse(pA, pB, swap_z, default[2]), parse(pA, pB, swap_w, default[3]) ] # ============================================================================== # === CLASS === # ============================================================================== class BitSplitNode(CozyBaseNode): NAME = "BIT SPLIT (JOV) ⭄" CATEGORY = JOV_CATEGORY RETURN_TYPES = (COZY_TYPE_ANY, "BOOLEAN",) RETURN_NAMES = ("BIT", "BOOL",) OUTPUT_IS_LIST = (True, True,) OUTPUT_TOOLTIPS = ( "Bits 
as Numerical output (0 or 1)",
        "Bits as Boolean output (True or False)"
    )
    DESCRIPTION = """
Split an input into separate bits. BOOL, INT and FLOAT use their numbers,
STRING is treated as a list of CHARACTER. IMAGE and MASK will return a TRUE
bit for any non-black pixel, as a stream of bits for all pixels in the image.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.VALUE: (COZY_TYPE_NUMERICAL, {
                    "default": None,
                    "tooltip": "Value to convert into bits"}),
                Lexicon.BITS: ("INT", {
                    "default": 8, "min": 0, "max": 64,
                    "tooltip": "Number of output bits requested"})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[list[int], list[bool]]:
        value = parse_param(kw, Lexicon.VALUE, EnumConvertType.LIST, 0)
        bits = parse_param(kw, Lexicon.BITS, EnumConvertType.INT, 8)
        params = list(zip_longest_fill(value, bits))
        pbar = ProgressBar(len(params))
        results = []
        for idx, (value, bits) in enumerate(params):
            bit_repr = to_bits(value[0])[::-1]
            if bits > 0:
                if len(bit_repr) > bits:
                    bit_repr = bit_repr[0:bits]
                else:
                    bit_repr = bit_repr.ljust(bits, '0')
            int_bits = []
            bool_bits = []
            for b in bit_repr:
                bit = int(b)
                int_bits.append(bit)
                bool_bits.append(bool(bit))
            results.append([int_bits, bool_bits])
            pbar.update_absolute(idx)
        return *list(zip(*results)),

class ComparisonNode(CozyBaseNode):
    NAME = "COMPARISON (JOV) 🕵🏽"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY, COZY_TYPE_ANY,)
    RETURN_NAMES = ("OUT", "VAL",)
    OUTPUT_IS_LIST = (True, True,)
    OUTPUT_TOOLTIPS = (
        "Outputs the input at PASS or FAIL depending on the evaluation",
        "The comparison result value"
    )
    DESCRIPTION = """
Evaluates two inputs (A and B) with a specified comparison operator and
optional values for successful and failed comparisons. The node performs the
specified operation element-wise between corresponding elements of A and B.
If the comparison is successful for all elements, it returns the success value; otherwise, it returns the failure value. The node supports various comparison operators such as EQUAL, GREATER_THAN, LESS_THAN, AND, OR, IS, IN, etc. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IN_A: (COZY_TYPE_NUMERICAL, { "default": 0, "tooltip":"First value to compare"}), Lexicon.IN_B: (COZY_TYPE_NUMERICAL, { "default": 0, "tooltip":"Second value to compare"}), Lexicon.SUCCESS: (COZY_TYPE_ANY, { "default": 0, "tooltip": "Sent to OUT on a successful condition"}), Lexicon.FAIL: (COZY_TYPE_ANY, { "default": 0, "tooltip": "Sent to OUT on a failure condition"}), Lexicon.FUNCTION: (EnumComparison._member_names_, { "default": EnumComparison.EQUAL.name, "tooltip": "Comparison function. Sends the data in PASS on successful comparison to OUT, otherwise sends the value in FAIL"}), Lexicon.SWAP: ("BOOLEAN", { "default": False, "tooltip": "Reverse the A and B inputs"}), Lexicon.INVERT: ("BOOLEAN", { "default": False, "tooltip": "Reverse the PASS and FAIL inputs"}), } }) return Lexicon._parse(d) def run(self, **kw) -> tuple[Any, Any]: in_a = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, 0) in_b = parse_param(kw, Lexicon.IN_B, EnumConvertType.ANY, 0) size = max(len(in_a), len(in_b)) good = parse_param(kw, Lexicon.SUCCESS, EnumConvertType.ANY, 0)[:size] fail = parse_param(kw, Lexicon.FAIL, EnumConvertType.ANY, 0)[:size] op = parse_param(kw, Lexicon.FUNCTION, EnumComparison, EnumComparison.EQUAL.name)[:size] swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False)[:size] invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)[:size] params = list(zip_longest_fill(in_a, in_b, good, fail, op, swap, invert)) pbar = ProgressBar(len(params)) vals = [] results = [] for idx, (A, B, good, fail, op, swap, invert) in enumerate(params): if not isinstance(A, (tuple, list,)): A = [A] if not isinstance(B, 
(tuple, list,)): B = [B] size = min(4, max(len(A), len(B))) - 1 typ = [EnumConvertType.FLOAT, EnumConvertType.VEC2, EnumConvertType.VEC3, EnumConvertType.VEC4][size] val_a = parse_value(A, typ, [A[-1]] * size) if not isinstance(val_a, (list,)): val_a = [val_a] val_b = parse_value(B, typ, [B[-1]] * size) if not isinstance(val_b, (list,)): val_b = [val_b] if swap: val_a, val_b = val_b, val_a match op: case EnumComparison.EQUAL: val = [a == b for a, b in zip(val_a, val_b)] case EnumComparison.GREATER_THAN: val = [a > b for a, b in zip(val_a, val_b)] case EnumComparison.GREATER_THAN_EQUAL: val = [a >= b for a, b in zip(val_a, val_b)] case EnumComparison.LESS_THAN: val = [a < b for a, b in zip(val_a, val_b)] case EnumComparison.LESS_THAN_EQUAL: val = [a <= b for a, b in zip(val_a, val_b)] case EnumComparison.NOT_EQUAL: val = [a != b for a, b in zip(val_a, val_b)] # LOGIC # case EnumBinaryOperation.NOT = 10 case EnumComparison.AND: val = [a and b for a, b in zip(val_a, val_b)] case EnumComparison.NAND: val = [not(a and b) for a, b in zip(val_a, val_b)] case EnumComparison.OR: val = [a or b for a, b in zip(val_a, val_b)] case EnumComparison.NOR: val = [not(a or b) for a, b in zip(val_a, val_b)] case EnumComparison.XOR: val = [(a and not b) or (not a and b) for a, b in zip(val_a, val_b)] case EnumComparison.XNOR: val = [not((a and not b) or (not a and b)) for a, b in zip(val_a, val_b)] # IDENTITY case EnumComparison.IS: val = [a is b for a, b in zip(val_a, val_b)] case EnumComparison.IS_NOT: val = [a is not b for a, b in zip(val_a, val_b)] # GROUP case EnumComparison.IN: val = [a in val_b for a in val_a] case EnumComparison.NOT_IN: val = [a not in val_b for a in val_a] output = all([bool(v) for v in val]) if invert: output = not output output = good if output == True else fail results.append([output, val]) pbar.update_absolute(idx) outs, vals = zip(*results) if isinstance(outs[0], (TensorType,)): if len(outs) > 1: outs = torch.stack(outs) else: outs = outs[0].unsqueeze(0) 
            outs = [outs]
        else:
            outs = list(outs)
        return outs, *vals,

class LerpNode(CozyBaseNode):
    NAME = "LERP (JOV) 🔰"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY,)
    RETURN_NAMES = ("❔",)
    OUTPUT_IS_LIST = (True,)
    OUTPUT_TOOLTIPS = (
        "Output can vary depending on the type chosen in the TYPE parameter",
    )
    DESCRIPTION = """
Calculate linear interpolation between two values or vectors based on a
blending factor (alpha). The node accepts optional start (IN_A) and end (IN_B)
points, a blending factor (FLOAT), and various input types for both start and
end points, such as single values (X, Y), 2-value vectors (IN_A2, IN_B2),
3-value vectors (IN_A3, IN_B3), and 4-value vectors (IN_A4, IN_B4).
Additionally, you can specify the easing function (EASE) and the desired
output type (TYPE). It supports various easing functions for smoother
transitions.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IN_A: (COZY_TYPE_NUMERICAL, {
                    "tooltip": "Custom Start Point"}),
                Lexicon.IN_B: (COZY_TYPE_NUMERICAL, {
                    "tooltip": "Custom End Point"}),
                Lexicon.ALPHA: ("VEC4", {
                    "default": (0.5, 0.5, 0.5, 0.5), "mij": 0, "maj": 1,}),
                Lexicon.TYPE: (EnumConvertType._member_names_[:6], {
                    "default": EnumConvertType.FLOAT.name,
                    "tooltip": "Output type desired from resultant operation"}),
                Lexicon.DEFAULT_A: ("VEC4", {
                    "default": (0, 0, 0, 0)}),
                Lexicon.DEFAULT_B: ("VEC4", {
                    "default": (1, 1, 1, 1)})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[Any, Any]:
        A = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, 0)
        B = parse_param(kw, Lexicon.IN_B, EnumConvertType.ANY, 0)
        alpha = parse_param(kw, Lexicon.ALPHA, EnumConvertType.VEC4, (0.5, 0.5, 0.5, 0.5))
        typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.FLOAT.name)
        a_xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0))
        b_xyzw = parse_param(kw, Lexicon.DEFAULT_B, EnumConvertType.VEC4, (1, 1, 1, 1))
        values = []
        params =
list(zip_longest_fill(A, B, alpha, typ, a_xyzw, b_xyzw)) pbar = ProgressBar(len(params)) for idx, (A, B, alpha, typ, a_xyzw, b_xyzw) in enumerate(params): size = int(typ.value / 10) if A is None: A = a_xyzw[:size] if B is None: B = b_xyzw[:size] val_a = parse_value(A, EnumConvertType.VEC4, a_xyzw) val_b = parse_value(B, EnumConvertType.VEC4, b_xyzw) alpha = parse_value(alpha, EnumConvertType.VEC4, alpha) if size > 1: val_a = val_a[:size + 1] val_b = val_b[:size + 1] else: val_a = [val_a[0]] val_b = [val_b[0]] val = [val_b[x] * alpha[x] + val_a[x] * (1 - alpha[x]) for x in range(size)] convert = int if "INT" in typ.name else float ret = [] for v in val: try: ret.append(convert(v)) except OverflowError: ret.append(0) except Exception as e: ret.append(0) val = ret[0] if size == 1 else ret[:size+1] values.append(val) pbar.update_absolute(idx) return [values] class OPUnaryNode(CozyBaseNode): NAME = "OP UNARY (JOV) 🎲" CATEGORY = JOV_CATEGORY RETURN_TYPES = (COZY_TYPE_ANY,) RETURN_NAMES = ("❔",) OUTPUT_IS_LIST = (True,) OUTPUT_TOOLTIPS = ( "Output type will match the input type", ) DESCRIPTION = """ Perform single function operations like absolute value, mean, median, mode, magnitude, normalization, maximum, or minimum on input values. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() typ = EnumConvertType._member_names_[:6] d = deep_merge(d, { "optional": { Lexicon.IN_A: (COZY_TYPE_FULL, { "default": 0}), Lexicon.FUNCTION: (EnumUnaryOperation._member_names_, { "default": EnumUnaryOperation.ABS.name}), Lexicon.TYPE: (typ, { "default": EnumConvertType.FLOAT.name,}), Lexicon.DEFAULT_A: ("VEC4", { "default": (0,0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max, "precision": 2, "label": ["X", "Y", "Z", "W"]}) } }) return Lexicon._parse(d) def run(self, **kw) -> tuple[bool]: results = [] A = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, 0) op = parse_param(kw, Lexicon.FUNCTION, EnumUnaryOperation, EnumUnaryOperation.ABS.name) out = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.FLOAT.name) a_xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0)) params = list(zip_longest_fill(A, op, out, a_xyzw)) pbar = ProgressBar(len(params)) for idx, (A, op, out, a_xyzw) in enumerate(params): if not isinstance(A, (list, tuple,)): A = [A] best_type = [EnumConvertType.FLOAT, EnumConvertType.VEC2, EnumConvertType.VEC3, EnumConvertType.VEC4][len(A)-1] val = parse_value(A, best_type, a_xyzw) val = parse_value(val, EnumConvertType.VEC4, a_xyzw) match op: case EnumUnaryOperation.MEAN: val = [sum(val) / len(val)] case EnumUnaryOperation.MEDIAN: val = [sorted(val)[len(val) // 2]] case EnumUnaryOperation.MODE: counts = Counter(val) val = [max(counts, key=counts.get)] case EnumUnaryOperation.MAGNITUDE: val = [math.sqrt(sum(x ** 2 for x in val))] case EnumUnaryOperation.NORMALIZE: if len(val) == 1: val = [1] else: m = math.sqrt(sum(x ** 2 for x in val)) if m > 0: val = [v / m for v in val] else: val = [0] * len(val) case EnumUnaryOperation.MAXIMUM: val = [max(val)] case EnumUnaryOperation.MINIMUM: val = [min(val)] case _: # Apply unary operation to each item in the list ret = [] for v in val: try: v = OP_UNARY[op](v) except Exception as e: 
logger.error(f"{e} :: {op}") v = 0 ret.append(v) val = ret val = parse_value(val, out, 0) results.append(val) pbar.update_absolute(idx) return (results,) class OPBinaryNode(CozyBaseNode): NAME = "OP BINARY (JOV) 🌟" CATEGORY = JOV_CATEGORY RETURN_TYPES = (COZY_TYPE_ANY,) RETURN_NAMES = ("❔",) OUTPUT_IS_LIST = (True,) OUTPUT_TOOLTIPS = ( "Output type will match the input type", ) DESCRIPTION = """ Execute binary operations like addition, subtraction, multiplication, division, and bitwise operations on input values, supporting various data types and vector sizes. """ @classmethod def INPUT_TYPES(cls) -> InputType: names_convert = EnumConvertType._member_names_[:6] d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IN_A: (COZY_TYPE_FULL, { "default": None}), Lexicon.IN_B: (COZY_TYPE_FULL, { "default": None}), Lexicon.FUNCTION: (EnumBinaryOperation._member_names_, { "default": EnumBinaryOperation.ADD.name,}), Lexicon.TYPE: (names_convert, { "default": names_convert[2], "tooltip":"Output type desired from resultant operation"}), Lexicon.SWAP: ("BOOLEAN", { "default": False}), Lexicon.DEFAULT_A: ("VEC4", { "default": (0,0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max, "label": ["X", "Y", "Z", "W"]}), Lexicon.DEFAULT_B: ("VEC4", { "default": (0,0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max, "label": ["X", "Y", "Z", "W"]}) } }) return Lexicon._parse(d) def run(self, **kw) -> tuple[bool]: results = [] A = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, None) B = parse_param(kw, Lexicon.IN_B, EnumConvertType.ANY, None) op = parse_param(kw, Lexicon.FUNCTION, EnumBinaryOperation, EnumBinaryOperation.ADD.name) typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.FLOAT.name) swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False) a_xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0)) b_xyzw = parse_param(kw, Lexicon.DEFAULT_B, EnumConvertType.VEC4, (0, 0, 0, 0)) params = 
list(zip_longest_fill(A, B, a_xyzw, b_xyzw, op, typ, swap)) pbar = ProgressBar(len(params)) for idx, (A, B, a_xyzw, b_xyzw, op, typ, swap) in enumerate(params): if not isinstance(A, (list, tuple,)): A = [A] if not isinstance(B, (list, tuple,)): B = [B] size = min(3, max(len(A)-1, len(B)-1)) best_type = [EnumConvertType.FLOAT, EnumConvertType.VEC2, EnumConvertType.VEC3, EnumConvertType.VEC4][size] val_a = parse_value(A, best_type, a_xyzw) val_a = parse_value(val_a, EnumConvertType.VEC4, a_xyzw) val_b = parse_value(B, best_type, b_xyzw) val_b = parse_value(val_b, EnumConvertType.VEC4, b_xyzw) if swap: val_a, val_b = val_b, val_a size = max(1, int(typ.value / 10)) val_a = val_a[:size+1] val_b = val_b[:size+1] match op: # VECTOR case EnumBinaryOperation.DOT_PRODUCT: val = [sum(a * b for a, b in zip(val_a, val_b))] case EnumBinaryOperation.CROSS_PRODUCT: val = [0, 0, 0] if len(val_a) < 3 or len(val_b) < 3: logger.warning("Cross product only defined for 3D vectors") else: val = [ val_a[1] * val_b[2] - val_a[2] * val_b[1], val_a[2] * val_b[0] - val_a[0] * val_b[2], val_a[0] * val_b[1] - val_a[1] * val_b[0] ] # ARITHMETIC case EnumBinaryOperation.ADD: val = [sum(pair) for pair in zip(val_a, val_b)] case EnumBinaryOperation.SUBTRACT: val = [a - b for a, b in zip(val_a, val_b)] case EnumBinaryOperation.MULTIPLY: val = [a * b for a, b in zip(val_a, val_b)] case EnumBinaryOperation.DIVIDE: val = [a / b if b != 0 else 0 for a, b in zip(val_a, val_b)] case EnumBinaryOperation.DIVIDE_FLOOR: val = [a // b if b != 0 else 0 for a, b in zip(val_a, val_b)] case EnumBinaryOperation.MODULUS: val = [a % b if b != 0 else 0 for a, b in zip(val_a, val_b)] case EnumBinaryOperation.POWER: val = [a ** b if b >= 0 else 0 for a, b in zip(val_a, val_b)] case EnumBinaryOperation.MAXIMUM: val = [max(a, val_b[i]) for i, a in enumerate(val_a)] case EnumBinaryOperation.MINIMUM: # val = min(val_a, val_b) val = [min(a, val_b[i]) for i, a in enumerate(val_a)] # BITS # case EnumBinaryOperation.BIT_NOT: 
case EnumBinaryOperation.BIT_AND: val = [int(a) & int(b) for a, b in zip(val_a, val_b)] case EnumBinaryOperation.BIT_NAND: val = [not(int(a) & int(b)) for a, b in zip(val_a, val_b)] case EnumBinaryOperation.BIT_OR: val = [int(a) | int(b) for a, b in zip(val_a, val_b)] case EnumBinaryOperation.BIT_NOR: val = [not(int(a) | int(b)) for a, b in zip(val_a, val_b)] case EnumBinaryOperation.BIT_XOR: val = [int(a) ^ int(b) for a, b in zip(val_a, val_b)] case EnumBinaryOperation.BIT_XNOR: val = [not(int(a) ^ int(b)) for a, b in zip(val_a, val_b)] case EnumBinaryOperation.BIT_LSHIFT: val = [int(a) << int(b) if b >= 0 else 0 for a, b in zip(val_a, val_b)] case EnumBinaryOperation.BIT_RSHIFT: val = [int(a) >> int(b) if b >= 0 else 0 for a, b in zip(val_a, val_b)] # GROUP case EnumBinaryOperation.UNION: val = list(set(val_a) | set(val_b)) case EnumBinaryOperation.INTERSECTION: val = list(set(val_a) & set(val_b)) case EnumBinaryOperation.DIFFERENCE: val = list(set(val_a) - set(val_b)) # WEIRD case EnumBinaryOperation.BASE: val = list(set(val_a) - set(val_b)) # cast into correct type.... 
default = val if len(val) == 0: default = [0] val = parse_value(val, typ, default) results.append(val) pbar.update_absolute(idx) return (results,) class StringerNode(CozyBaseNode): NAME = "STRINGER (JOV) 🪀" CATEGORY = JOV_CATEGORY RETURN_TYPES = ("STRING", "INT",) RETURN_NAMES = ("STRING", "COUNT",) OUTPUT_IS_LIST = (True, False,) DESCRIPTION = """ Manipulate strings: split on a delimiter, join, find, replace, or slice. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { # split, join, replace, trim/lift Lexicon.FUNCTION: (EnumConvertString._member_names_, { "default": EnumConvertString.SPLIT.name}), Lexicon.KEY: ("STRING", { "default":"", "dynamicPrompt":False, "tooltip": "Delimiter (SPLIT/JOIN) or string to use as search string (FIND/REPLACE)."}), Lexicon.REPLACE: ("STRING", { "default":"", "dynamicPrompt":False}), Lexicon.RANGE: ("VEC3", { "default":(0, -1, 1), "int": True, "tooltip": "Start, End and Step. Values will clip to the actual list size(s)."}), } }) return Lexicon._parse(d) def run(self, **kw) -> tuple[list[str], int]: # coerce all dynamic inputs into a single flat list data_list = parse_dynamic(kw, Lexicon.STRING, EnumConvertType.ANY, "") if data_list is None: logger.warning("no data for list") return ([], 0) op = parse_param(kw, Lexicon.FUNCTION, EnumConvertString, EnumConvertString.SPLIT.name)[0] key = parse_param(kw, Lexicon.KEY, EnumConvertType.STRING, "")[0] replace = parse_param(kw, Lexicon.REPLACE, EnumConvertType.STRING, "")[0] stenst = parse_param(kw, Lexicon.RANGE, EnumConvertType.VEC3INT, (0, -1, 1))[0] results = [] match op: case EnumConvertString.SPLIT: results = data_list if key != "": results = [] for d in data_list: d = [key if len(r) == 0 else r for r in d.split(key)] results.extend(d) case EnumConvertString.JOIN: results = [key.join(data_list)] case EnumConvertString.FIND: results = [r for r in data_list if r.find(key) > -1] case EnumConvertString.REPLACE: results = data_list if key != "": results = [r.replace(key, replace) for r in
data_list] case EnumConvertString.SLICE: start, end, step = stenst for x in data_list: start = len(x) if start < 0 else min(max(0, start), len(x)) end = len(x) if end < 0 else min(max(0, end), len(x)) if step != 0: results.append(x[start:end:step]) else: results.append(x) return (results, len(results),) class SwizzleNode(CozyBaseNode): NAME = "SWIZZLE (JOV) 😵" CATEGORY = JOV_CATEGORY RETURN_TYPES = (COZY_TYPE_ANY,) RETURN_NAMES = ("❔",) OUTPUT_IS_LIST = (True,) DESCRIPTION = """ Swap components between two vectors based on specified swizzle patterns and values. It provides flexibility in rearranging vector elements dynamically. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() names_convert = EnumConvertType._member_names_[3:6] d = deep_merge(d, { "optional": { Lexicon.IN_A: (COZY_TYPE_NUMERICAL, {}), Lexicon.IN_B: (COZY_TYPE_NUMERICAL, {}), Lexicon.TYPE: (names_convert, { "default": names_convert[0]}), Lexicon.SWAP_X: (EnumSwizzle._member_names_, { "default": EnumSwizzle.A_X.name,}), Lexicon.SWAP_Y: (EnumSwizzle._member_names_, { "default": EnumSwizzle.A_Y.name,}), Lexicon.SWAP_Z: (EnumSwizzle._member_names_, { "default": EnumSwizzle.A_Z.name,}), Lexicon.SWAP_W: (EnumSwizzle._member_names_, { "default": EnumSwizzle.A_W.name,}), Lexicon.DEFAULT: ("VEC4", { "default": (0,0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max}) } }) return Lexicon._parse(d) def run(self, **kw) -> tuple[float, ...]: pA = parse_param(kw, Lexicon.IN_A, EnumConvertType.LIST, None) pB = parse_param(kw, Lexicon.IN_B, EnumConvertType.LIST, None) typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.VEC2.name) swap_x = parse_param(kw, Lexicon.SWAP_X, EnumSwizzle, EnumSwizzle.A_X.name) swap_y = parse_param(kw, Lexicon.SWAP_Y, EnumSwizzle, EnumSwizzle.A_Y.name) swap_z = parse_param(kw, Lexicon.SWAP_Z, EnumSwizzle, EnumSwizzle.A_Z.name) swap_w = parse_param(kw, Lexicon.SWAP_W, EnumSwizzle, EnumSwizzle.A_W.name) default = parse_param(kw,
Lexicon.DEFAULT, EnumConvertType.VEC4, (0, 0, 0, 0)) params = list(zip_longest_fill(pA, pB, typ, swap_x, swap_y, swap_z, swap_w, default)) results = [] pbar = ProgressBar(len(params)) for idx, (pA, pB, typ, swap_x, swap_y, swap_z, swap_w, default) in enumerate(params): default = list(default) pA = pA + default[len(pA):] pB = pB + default[len(pB):] val = vector_swap(pA, pB, swap_x, swap_y, swap_z, swap_w, default) val = parse_value(val, typ, val) results.append(val) pbar.update_absolute(idx) return (results,) ================================================ FILE: core/color.py ================================================ """ Jovimetrix - Color """ from enum import Enum import cv2 import torch from comfy.utils import ProgressBar from cozy_comfyui import \ IMAGE_SIZE_MIN, \ InputType, RGBAMaskType, EnumConvertType, TensorType, \ deep_merge, parse_param, zip_longest_fill from cozy_comfyui.lexicon import \ Lexicon from cozy_comfyui.node import \ COZY_TYPE_IMAGE, \ CozyBaseNode, CozyImageNode from cozy_comfyui.image.adjust import \ image_invert from cozy_comfyui.image.color import \ EnumCBDeficiency, EnumCBSimulator, EnumColorMap, EnumColorTheory, \ color_lut_full, color_lut_match, color_lut_palette, \ color_lut_tonal, color_lut_visualize, color_match_reinhard, \ color_theory, color_blind, color_top_used, image_gradient_expand, \ image_gradient_map from cozy_comfyui.image.channel import \ channel_solid from cozy_comfyui.image.compose import \ EnumScaleMode, EnumInterpolation, \ image_scalefit from cozy_comfyui.image.convert import \ tensor_to_cv, cv_to_tensor, cv_to_tensor_full, image_mask, image_mask_add from cozy_comfyui.image.misc import \ image_stack # ============================================================================== # === GLOBAL === # ============================================================================== JOV_CATEGORY = "COLOR" # ============================================================================== # === ENUMERATION === # 
============================================================================== class EnumColorMatchMode(Enum): REINHARD = 30 LUT = 10 # HISTOGRAM = 20 class EnumColorMatchMap(Enum): USER_MAP = 0 PRESET_MAP = 10 # ============================================================================== # === CLASS === # ============================================================================== class ColorBlindNode(CozyImageNode): NAME = "COLOR BLIND (JOV) 👁‍🗨" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Simulate color blindness effects on images. You can select various types of color deficiencies, adjust the severity of the effect, and apply the simulation using different simulators. This node is ideal for accessibility testing and design adjustments, ensuring inclusivity in your visual content. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.DEFICIENCY: (EnumCBDeficiency._member_names_, { "default": EnumCBDeficiency.PROTAN.name,}), Lexicon.SOLVER: (EnumCBSimulator._member_names_, { "default": EnumCBSimulator.AUTOSELECT.name,}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) deficiency = parse_param(kw, Lexicon.DEFICIENCY, EnumCBDeficiency, EnumCBDeficiency.PROTAN.name) simulator = parse_param(kw, Lexicon.SOLVER, EnumCBSimulator, EnumCBSimulator.AUTOSELECT.name) severity = parse_param(kw, Lexicon.VALUE, EnumConvertType.FLOAT, 1) params = list(zip_longest_fill(pA, deficiency, simulator, severity)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, deficiency, simulator, severity) in enumerate(params): pA = channel_solid() if pA is None else tensor_to_cv(pA) pA = color_blind(pA, deficiency, simulator, severity) images.append(cv_to_tensor_full(pA)) pbar.update_absolute(idx) return image_stack(images) class ColorMatchNode(CozyImageNode): NAME = "COLOR MATCH (JOV) 💞" CATEGORY = 
JOV_CATEGORY DESCRIPTION = """ Adjust the color scheme of one image to match another with the Color Match Node. Choose from various color matching LUTs or Reinhard matching. You can specify a custom user color map, the number of colors, and whether to flip or invert the images. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE_SOURCE: (COZY_TYPE_IMAGE, {}), Lexicon.IMAGE_TARGET: (COZY_TYPE_IMAGE, {}), Lexicon.MODE: (EnumColorMatchMode._member_names_, { "default": EnumColorMatchMode.REINHARD.name, "tooltip": "Match colors from an image or built-in (LUT), Histogram lookups or Reinhard method"}), Lexicon.MAP: (EnumColorMatchMap._member_names_, { "default": EnumColorMatchMap.USER_MAP.name, }), Lexicon.COLORMAP: (EnumColorMap._member_names_, { "default": EnumColorMap.HSV.name,}), Lexicon.VALUE: ("INT", { "default": 255, "min": 0, "max": 255, "tooltip":"The number of colors to use from the LUT during the remap. Will quantize the LUT range."}), Lexicon.SWAP: ("BOOLEAN", { "default": False,}), Lexicon.INVERT: ("BOOLEAN", { "default": False,}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}), } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE_SOURCE, EnumConvertType.IMAGE, None) pB = parse_param(kw, Lexicon.IMAGE_TARGET, EnumConvertType.IMAGE, None) mode = parse_param(kw, Lexicon.MODE, EnumColorMatchMode, EnumColorMatchMode.REINHARD.name) cmap = parse_param(kw, Lexicon.MAP, EnumColorMatchMap, EnumColorMatchMap.USER_MAP.name) colormap = parse_param(kw, Lexicon.COLORMAP, EnumColorMap, EnumColorMap.HSV.name) num_colors = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 255) swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False) invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False) matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4, (0, 0, 0, 255), 0, 255) params =
list(zip_longest_fill(pA, pB, mode, cmap, colormap, num_colors, swap, invert, matte)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, pB, mode, cmap, colormap, num_colors, swap, invert, matte) in enumerate(params): if swap == True: pA, pB = pB, pA mask = None if pA is None: pA = channel_solid() else: pA = tensor_to_cv(pA) if pA.ndim == 3 and pA.shape[2] == 4: mask = image_mask(pA) # h, w = pA.shape[:2] if pB is None: pB = channel_solid() else: pB = tensor_to_cv(pB) match mode: case EnumColorMatchMode.LUT: if cmap == EnumColorMatchMap.PRESET_MAP: pB = None pA = color_lut_match(pA, colormap.value, pB, num_colors) case EnumColorMatchMode.REINHARD: pA = color_match_reinhard(pA, pB) if invert == True: pA = image_invert(pA, 1) if mask is not None: pA = image_mask_add(pA, mask) images.append(cv_to_tensor_full(pA, matte)) pbar.update_absolute(idx) return image_stack(images) class ColorKMeansNode(CozyBaseNode): NAME = "COLOR MEANS (JOV) 〰️" CATEGORY = JOV_CATEGORY RETURN_TYPES = ("IMAGE", "IMAGE", "IMAGE", "JLUT", "IMAGE",) RETURN_NAMES = ("IMAGE", "PALETTE", "GRADIENT", "LUT", "RGB", ) OUTPUT_TOOLTIPS = ( "Sequence of top-K colors. Count depends on value in `VAL`.", "Simple Tone palette based on result top-K colors. Width is taken from input.", "Gradient of top-K colors.", "Full 3D LUT of the image mapped to the resultant top-K colors chosen.", "Visualization of full 3D .cube LUT in JLUT output" ) DESCRIPTION = """ The top-k colors ordered from most->least used as a strip, tonal palette and 3D LUT. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.VALUE: ("INT", { "default": 12, "min": 1, "max": 255, "tooltip": "The top K colors to select"}), Lexicon.SIZE: ("INT", { "default": 32, "min": 1, "max": 256, "tooltip": "Height of the tones in the strip. 
Width is based on input"}), Lexicon.COUNT: ("INT", { "default": 33, "min": 1, "max": 255, "tooltip": "Number of nodes to use in interpolation of full LUT (256 is every pixel)"}), Lexicon.WH: ("VEC2", { "default": (256, 256), "mij":IMAGE_SIZE_MIN, "int": True, "label": ["W", "H"] }), } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) kcolors = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 12, 1, 255) lut_height = parse_param(kw, Lexicon.SIZE, EnumConvertType.INT, 32, 1, 256) nodes = parse_param(kw, Lexicon.COUNT, EnumConvertType.INT, 33, 1, 255) wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (256, 256), IMAGE_SIZE_MIN) params = list(zip_longest_fill(pA, kcolors, nodes, lut_height, wihi)) top_colors = [] lut_tonal = [] lut_full = [] lut_visualized = [] gradients = [] pbar = ProgressBar(len(params) * sum(kcolors)) for idx, (pA, kcolors, nodes, lut_height, wihi) in enumerate(params): if pA is None: pA = channel_solid() pA = tensor_to_cv(pA) colors = color_top_used(pA, kcolors) # size down to 1px strip then expand to 256 for full gradient top_colors.extend([cv_to_tensor(channel_solid(*wihi, color=c)) for c in colors]) lut = color_lut_tonal(colors, width=pA.shape[1], height=lut_height) lut_tonal.append(cv_to_tensor(lut)) full = color_lut_full(colors, nodes) lut_full.append(torch.from_numpy(full)) lut = color_lut_visualize(full, wihi[1]) lut_visualized.append(cv_to_tensor(lut)) palette = color_lut_palette(colors, 1) gradient = image_gradient_expand(palette) gradient = cv2.resize(gradient, wihi) gradients.append(cv_to_tensor(gradient)) pbar.update_absolute(idx) return torch.stack(top_colors), torch.stack(lut_tonal), torch.stack(gradients), lut_full, torch.stack(lut_visualized), class ColorTheoryNode(CozyBaseNode): NAME = "COLOR THEORY (JOV) 🛞" CATEGORY = JOV_CATEGORY RETURN_TYPES = ("IMAGE", "IMAGE", "IMAGE", "IMAGE", "IMAGE") RETURN_NAMES = ("C1", "C2", "C3", "C4", "C5") 
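ColorKMeansNode delegates the selection of the top-K colors to `color_top_used`. As a rough mental model, an exact-frequency stand-in (the function name and approach here are illustrative; the library helper may quantize or cluster instead):

```python
import numpy as np

def top_used_colors(image: np.ndarray, k: int) -> list[tuple[int, ...]]:
    # Exact top-k colors by pixel frequency, most used first.
    pixels = image.reshape(-1, image.shape[-1])
    colors, counts = np.unique(pixels, axis=0, return_counts=True)
    order = np.argsort(counts)[::-1][:k]
    return [tuple(int(c) for c in colors[i]) for i in order]
```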
DESCRIPTION = """ Generate a color harmony based on the selected scheme. Supported schemes include complimentary, analogous, triadic, tetradic, and more. Users can customize the angle of separation for color calculations, offering flexibility in color manipulation and exploration of different color palettes. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.SCHEME: (EnumColorTheory._member_names_, { "default": EnumColorTheory.COMPLIMENTARY.name}), Lexicon.VALUE: ("INT", { "default": 45, "min": -90, "max": 90, "tooltip": "Custom angle of separation to use when calculating colors"}), Lexicon.INVERT: ("BOOLEAN", { "default": False}) } }) return Lexicon._parse(d) def run(self, **kw) -> tuple[list[TensorType], list[TensorType]]: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) scheme = parse_param(kw, Lexicon.SCHEME, EnumColorTheory, EnumColorTheory.COMPLIMENTARY.name) value = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 45, -90, 90) invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False) params = list(zip_longest_fill(pA, scheme, value, invert)) images = [] pbar = ProgressBar(len(params)) for idx, (img, scheme, value, invert) in enumerate(params): img = channel_solid() if img is None else tensor_to_cv(img) img = color_theory(img, value, scheme) if invert: img = (image_invert(s, 1) for s in img) images.append([cv_to_tensor(a) for a in img]) pbar.update_absolute(idx) return image_stack(images) class GradientMapNode(CozyImageNode): NAME = "GRADIENT MAP (JOV) 🇲🇺" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Remaps an input image using a gradient lookup table (LUT). The gradient image will be translated into a single row lookup table. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, { "tooltip": "Image to remap with gradient input"}), Lexicon.GRADIENT: (COZY_TYPE_IMAGE, { "tooltip": f"Look up table (LUT) to remap the input image in `{"IMAGE"}`"}), Lexicon.REVERSE: ("BOOLEAN", { "default": False, "tooltip": "Reverse the gradient from left-to-right"}), Lexicon.MODE: (EnumScaleMode._member_names_, { "default": EnumScaleMode.MATTE.name,}), Lexicon.WH: ("VEC2", { "default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True, "label": ["W", "H"] }), Lexicon.SAMPLE: (EnumInterpolation._member_names_, { "default": EnumInterpolation.LANCZOS4.name,}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) gradient = parse_param(kw, Lexicon.GRADIENT, EnumConvertType.IMAGE, None) reverse = parse_param(kw, Lexicon.REVERSE, EnumConvertType.BOOLEAN, False) mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name) wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN) sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name) matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255) images = [] params = list(zip_longest_fill(pA, gradient, reverse, mode, sample, wihi, matte)) pbar = ProgressBar(len(params)) for idx, (pA, gradient, reverse, mode, sample, wihi, matte) in enumerate(params): pA = channel_solid() if pA is None else tensor_to_cv(pA) mask = None if pA.ndim == 3 and pA.shape[2] == 4: mask = image_mask(pA) gradient = channel_solid() if gradient is None else tensor_to_cv(gradient) pA = image_gradient_map(pA, gradient) if mode != EnumScaleMode.MATTE: w, h = wihi pA = image_scalefit(pA, w, h, mode, sample) if mask is not None: pA = image_mask_add(pA, mask) 
images.append(cv_to_tensor_full(pA, matte)) pbar.update_absolute(idx) return image_stack(images) ================================================ FILE: core/compose.py ================================================ """ Jovimetrix - Composition """ import numpy as np from comfy.utils import ProgressBar from cozy_comfyui import \ IMAGE_SIZE_MIN, \ InputType, RGBAMaskType, EnumConvertType, \ deep_merge, parse_param, zip_longest_fill from cozy_comfyui.lexicon import \ Lexicon from cozy_comfyui.node import \ COZY_TYPE_IMAGE, \ CozyBaseNode, CozyImageNode from cozy_comfyui.image import \ EnumImageType from cozy_comfyui.image.adjust import \ EnumThreshold, EnumThresholdAdapt, \ image_histogram2, image_invert, image_filter, image_threshold from cozy_comfyui.image.channel import \ EnumPixelSwizzle, \ channel_merge, channel_solid, channel_swap from cozy_comfyui.image.compose import \ EnumBlendType, EnumScaleMode, EnumScaleInputMode, EnumInterpolation, \ image_resize, \ image_scalefit, image_split, image_blend, image_matte from cozy_comfyui.image.convert import \ image_mask, image_convert, tensor_to_cv, cv_to_tensor, cv_to_tensor_full from cozy_comfyui.image.misc import \ image_by_size, image_minmax, image_stack # ============================================================================== # === GLOBAL === # ============================================================================== JOV_CATEGORY = "COMPOSE" # ============================================================================== # === CLASS === # ============================================================================== class BlendNode(CozyImageNode): NAME = "BLEND (JOV) ⚗️" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Combine two input images using various blending modes, such as normal, screen, multiply, overlay, etc. It also supports alpha blending and masking to achieve complex compositing effects. This node is essential for creating layered compositions and adding visual richness to images. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE_BACK: (COZY_TYPE_IMAGE, {}), Lexicon.IMAGE_FORE: (COZY_TYPE_IMAGE, {}), Lexicon.MASK: (COZY_TYPE_IMAGE, { "tooltip": "Optional Mask for Alpha Blending. If empty, it will use the ALPHA of the FOREGROUND"}), Lexicon.FUNCTION: (EnumBlendType._member_names_, { "default": EnumBlendType.NORMAL.name,}), Lexicon.ALPHA: ("FLOAT", { "default": 1, "min": 0, "max": 1, "step": 0.01,}), Lexicon.SWAP: ("BOOLEAN", { "default": False}), Lexicon.INVERT: ("BOOLEAN", { "default": False, "tooltip": "Invert the mask input"}), Lexicon.MODE: (EnumScaleMode._member_names_, { "default": EnumScaleMode.MATTE.name,}), Lexicon.WH: ("VEC2", { "default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True, "label": ["W", "H"]}), Lexicon.SAMPLE: (EnumInterpolation._member_names_, { "default": EnumInterpolation.LANCZOS4.name,}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}), Lexicon.INPUT: (EnumScaleInputMode._member_names_, { "default": EnumScaleInputMode.NONE.name,}), } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: back = parse_param(kw, Lexicon.IMAGE_BACK, EnumConvertType.IMAGE, None) fore = parse_param(kw, Lexicon.IMAGE_FORE, EnumConvertType.IMAGE, None) mask = parse_param(kw, Lexicon.MASK, EnumConvertType.MASK, None) func = parse_param(kw, Lexicon.FUNCTION, EnumBlendType, EnumBlendType.NORMAL.name) alpha = parse_param(kw, Lexicon.ALPHA, EnumConvertType.FLOAT, 1) swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False) invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False) mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name) wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN) sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name) matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 
0, 255) inputMode = parse_param(kw, Lexicon.INPUT, EnumScaleInputMode, EnumScaleInputMode.NONE.name) params = list(zip_longest_fill(back, fore, mask, func, alpha, swap, invert, mode, wihi, sample, matte, inputMode)) images = [] pbar = ProgressBar(len(params)) for idx, (back, fore, mask, func, alpha, swap, invert, mode, wihi, sample, matte, inputMode) in enumerate(params): if swap: back, fore = fore, back width, height = IMAGE_SIZE_MIN, IMAGE_SIZE_MIN if back is None: if fore is None: if mask is None: if mode != EnumScaleMode.MATTE: width, height = wihi else: height, width = mask.shape[:2] else: height, width = fore.shape[:2] else: height, width = back.shape[:2] if back is None: back = channel_solid(width, height, matte) else: back = tensor_to_cv(back) #matted = pixel_eval(matte) #back = image_matte(back, matted) if fore is None: clear = list(matte[:3]) + [0] fore = channel_solid(width, height, clear) else: fore = tensor_to_cv(fore) if mask is None: mask = image_mask(fore, 255) else: mask = tensor_to_cv(mask, 1) if invert: mask = 255 - mask if inputMode != EnumScaleInputMode.NONE: # get the min/max of back, fore; and mask? 
imgs = [back, fore] _, w, h = image_by_size(imgs) back = image_scalefit(back, w, h, inputMode, sample, matte) fore = image_scalefit(fore, w, h, inputMode, sample, matte) mask = image_scalefit(mask, w, h, inputMode, sample) back = image_scalefit(back, w, h, EnumScaleMode.RESIZE_MATTE, sample, matte) fore = image_scalefit(fore, w, h, EnumScaleMode.RESIZE_MATTE, sample, (0,0,0,255)) mask = image_scalefit(mask, w, h, EnumScaleMode.RESIZE_MATTE, sample, (255,255,255,255)) img = image_blend(back, fore, mask, func, alpha) mask = image_mask(img) if mode != EnumScaleMode.MATTE: width, height = wihi img = image_scalefit(img, width, height, mode, sample, matte) img = cv_to_tensor_full(img, matte) #img = [cv_to_tensor(back), cv_to_tensor(fore), cv_to_tensor(mask, True)] images.append(img) pbar.update_absolute(idx) return image_stack(images) class FilterMaskNode(CozyImageNode): NAME = "FILTER MASK (JOV) 🤿" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Create masks based on specific color ranges within an image. Specify the color range using start and end values and an optional fuzziness factor to adjust the range. This node allows for precise color-based mask creation, ideal for tasks like object isolation, background removal, or targeted color adjustments. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.START: ("VEC3", { "default": (128, 128, 128), "rgb": True}), Lexicon.RANGE: ("BOOLEAN", { "default": False, "tooltip": "Use an end point (start->end) when calculating the filter range"}), Lexicon.END: ("VEC3", { "default": (128, 128, 128), "rgb": True}), Lexicon.FUZZ: ("VEC3", { "default": (0.5,0.5,0.5), "mij":0, "maj":1,}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}), } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) start = parse_param(kw, Lexicon.START, EnumConvertType.VEC3INT, (128,128,128), 0, 255) use_range = parse_param(kw, Lexicon.RANGE, EnumConvertType.BOOLEAN, False) end = parse_param(kw, Lexicon.END, EnumConvertType.VEC3INT, (128,128,128), 0, 255) fuzz = parse_param(kw, Lexicon.FUZZ, EnumConvertType.VEC3, (0.5,0.5,0.5), 0, 1) matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255) params = list(zip_longest_fill(pA, start, use_range, end, fuzz, matte)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, start, use_range, end, fuzz, matte) in enumerate(params): img = np.zeros((IMAGE_SIZE_MIN, IMAGE_SIZE_MIN, 3), dtype=np.uint8) if pA is None else tensor_to_cv(pA) img, mask = image_filter(img, start, end, fuzz, use_range) if img.shape[2] == 3: alpha_channel = np.zeros((img.shape[0], img.shape[1], 1), dtype=img.dtype) img = np.concatenate((img, alpha_channel), axis=2) img[..., 3] = mask[:,:] images.append(cv_to_tensor_full(img, matte)) pbar.update_absolute(idx) return image_stack(images) class HistogramNode(CozyImageNode): NAME = "HISTOGRAM (JOV)" CATEGORY = JOV_CATEGORY DESCRIPTION = """ The Histogram Node generates a histogram representation of the input image, showing the distribution of pixel intensity values across different bins. 
This visualization is useful for understanding the overall brightness and contrast characteristics of an image. Additionally, the node performs histogram normalization, which adjusts the pixel values to enhance the contrast of the image. Histogram normalization can be helpful for improving the visual quality of images or preparing them for further image processing tasks. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, { "tooltip": "Pixel Data (RGBA, RGB or Grayscale)"}), Lexicon.WH: ("VEC2", { "default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True, "label": ["W", "H"]}), } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN) params = list(zip_longest_fill(pA, wihi)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, wihi) in enumerate(params): pA = tensor_to_cv(pA) if pA is not None else channel_solid() hist_img = image_histogram2(pA, bins=256) width, height = wihi hist_img = image_resize(hist_img, width, height, EnumInterpolation.NEAREST) images.append(cv_to_tensor_full(hist_img)) pbar.update_absolute(idx) return image_stack(images) class PixelMergeNode(CozyImageNode): NAME = "PIXEL MERGE (JOV) 🫂" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Combines individual color channels (red, green, blue) along with an optional mask channel to create a composite image. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.CHAN_RED: (COZY_TYPE_IMAGE, {}), Lexicon.CHAN_GREEN: (COZY_TYPE_IMAGE, {}), Lexicon.CHAN_BLUE: (COZY_TYPE_IMAGE, {}), Lexicon.CHAN_ALPHA: (COZY_TYPE_IMAGE, {}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}), Lexicon.FLIP: ("VEC4", { "default": (0,0,0,0), "mij":0, "maj":1, "step": 0.01, "tooltip": "Invert specific input prior to merging. R, G, B, A."}), Lexicon.INVERT: ("BOOLEAN", { "default": False,}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: rgba = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) R = parse_param(kw, Lexicon.CHAN_RED, EnumConvertType.MASK, None) G = parse_param(kw, Lexicon.CHAN_GREEN, EnumConvertType.MASK, None) B = parse_param(kw, Lexicon.CHAN_BLUE, EnumConvertType.MASK, None) A = parse_param(kw, Lexicon.CHAN_ALPHA, EnumConvertType.MASK, None) matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255) flip = parse_param(kw, Lexicon.FLIP, EnumConvertType.VEC4, (0, 0, 0, 0), 0, 1) invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False) params = list(zip_longest_fill(rgba, R, G, B, A, matte, flip, invert)) images = [] pbar = ProgressBar(len(params)) for idx, (rgba, r, g, b, a, matte, flip, invert) in enumerate(params): replace = r, g, b, a if rgba is not None: rgba = image_split(tensor_to_cv(rgba, chan=4)) img = [tensor_to_cv(replace[i]) if replace[i] is not None else x for i, x in enumerate(rgba)] else: img = [tensor_to_cv(x) if x is not None else x for x in replace] _, _, w_max, h_max = image_minmax(img) for i, x in enumerate(img): if x is None: x = np.full((h_max, w_max, 1), matte[i], dtype=np.uint8) else: x = image_convert(x, 1) x = image_scalefit(x, w_max, h_max, EnumScaleMode.ASPECT) if flip[i] != 0: x = image_invert(x, flip[i]) img[i] = x img = channel_merge(img) #if invert == True: # 
img = image_invert(img, 1) images.append(cv_to_tensor_full(img, matte)) pbar.update_absolute(idx) return image_stack(images) class PixelSplitNode(CozyBaseNode): NAME = "PIXEL SPLIT (JOV) 💔" CATEGORY = JOV_CATEGORY RETURN_TYPES = ("MASK", "MASK", "MASK", "MASK", "IMAGE") RETURN_NAMES = ("❤️", "💚", "💙", "🤍", "RGB") OUTPUT_TOOLTIPS = ( "Single channel output of Red Channel.", "Single channel output of Green Channel", "Single channel output of Blue Channel", "Single channel output of Alpha Channel", "RGB pack of the input", ) DESCRIPTION = """ Split an input into individual color channels (red, green, blue, alpha). """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) images = [] pbar = ProgressBar(len(pA)) for idx, pA in enumerate(pA): pA = channel_solid(chan=EnumImageType.RGBA) if pA is None else tensor_to_cv(pA, chan=4) out = [cv_to_tensor(x, True) for x in image_split(pA)] + [cv_to_tensor(image_convert(pA, 3))] images.append(out) pbar.update_absolute(idx) return image_stack(images) class PixelSwapNode(CozyImageNode): NAME = "PIXEL SWAP (JOV) 🔃" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Swap pixel values between two input images based on specified channel swizzle operations. Options include pixel inputs, swap operations for red, green, blue, and alpha channels, and constant values for each channel. The swap operations allow for flexible pixel manipulation by determining the source of each channel in the output image, whether it be from the first image, the second image, or a constant value. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE_SOURCE: (COZY_TYPE_IMAGE, {}), Lexicon.IMAGE_TARGET: (COZY_TYPE_IMAGE, {}), Lexicon.SWAP_R: (EnumPixelSwizzle._member_names_, { "default": EnumPixelSwizzle.RED_A.name,}), Lexicon.SWAP_G: (EnumPixelSwizzle._member_names_, { "default": EnumPixelSwizzle.GREEN_A.name,}), Lexicon.SWAP_B: (EnumPixelSwizzle._member_names_, { "default": EnumPixelSwizzle.BLUE_A.name,}), Lexicon.SWAP_A: (EnumPixelSwizzle._member_names_, { "default": EnumPixelSwizzle.ALPHA_A.name,}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE_SOURCE, EnumConvertType.IMAGE, None) pB = parse_param(kw, Lexicon.IMAGE_TARGET, EnumConvertType.IMAGE, None) swap_r = parse_param(kw, Lexicon.SWAP_R, EnumPixelSwizzle, EnumPixelSwizzle.RED_A.name) swap_g = parse_param(kw, Lexicon.SWAP_G, EnumPixelSwizzle, EnumPixelSwizzle.GREEN_A.name) swap_b = parse_param(kw, Lexicon.SWAP_B, EnumPixelSwizzle, EnumPixelSwizzle.BLUE_A.name) swap_a = parse_param(kw, Lexicon.SWAP_A, EnumPixelSwizzle, EnumPixelSwizzle.ALPHA_A.name) matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255) params = list(zip_longest_fill(pA, pB, swap_r, swap_g, swap_b, swap_a, matte)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, pB, swap_r, swap_g, swap_b, swap_a, matte) in enumerate(params): if pA is None: if pB is None: out = channel_solid() images.append(cv_to_tensor_full(out)) pbar.update_absolute(idx) continue h, w = pB.shape[:2] pA = channel_solid(w, h) else: h, w = pA.shape[:2] pA = tensor_to_cv(pA) pA = image_convert(pA, 4) pB = tensor_to_cv(pB) if pB is not None else channel_solid(w, h) pB = image_convert(pB, 4) pB = image_matte(pB, (0,0,0,0), w, h) pB = image_scalefit(pB, w, h, EnumScaleMode.CROP) out = channel_swap(pA, pB, (swap_r, swap_g, swap_b, swap_a), 
matte) images.append(cv_to_tensor_full(out)) pbar.update_absolute(idx) return image_stack(images) class ThresholdNode(CozyImageNode): NAME = "THRESHOLD (JOV) 📉" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Define a range and apply it to an image for segmentation and feature extraction. Choose from various threshold modes, such as binary and adaptive, and adjust the threshold value and block size to suit your needs. You can also invert the resulting mask if necessary. This node is versatile for a variety of image processing tasks. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.ADAPT: ( EnumThresholdAdapt._member_names_, { "default": EnumThresholdAdapt.ADAPT_NONE.name,}), Lexicon.FUNCTION: ( EnumThreshold._member_names_, { "default": EnumThreshold.BINARY.name}), Lexicon.THRESHOLD: ("FLOAT", { "default": 0.5, "min": 0, "max": 1, "step": 0.005}), Lexicon.SIZE: ("INT", { "default": 3, "min": 3, "max": 103}), Lexicon.INVERT: ("BOOLEAN", { "default": False, "tooltip": "Invert the mask input"}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) mode = parse_param(kw, Lexicon.FUNCTION, EnumThreshold, EnumThreshold.BINARY.name) adapt = parse_param(kw, Lexicon.ADAPT, EnumThresholdAdapt, EnumThresholdAdapt.ADAPT_NONE.name) threshold = parse_param(kw, Lexicon.THRESHOLD, EnumConvertType.FLOAT, 0.5, 0, 1) block = parse_param(kw, Lexicon.SIZE, EnumConvertType.INT, 3, 3, 103) invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False) params = list(zip_longest_fill(pA, mode, adapt, threshold, block, invert)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, mode, adapt, th, block, invert) in enumerate(params): pA = tensor_to_cv(pA) if pA is not None else channel_solid() pA = image_threshold(pA, th, mode, adapt, block) if invert: pA = image_invert(pA, 1)
images.append(cv_to_tensor_full(pA)) pbar.update_absolute(idx) return image_stack(images) ================================================ FILE: core/create.py ================================================ """ Jovimetrix - Creation """ import numpy as np from PIL import ImageFont from skimage.filters import gaussian from comfy.utils import ProgressBar from cozy_comfyui import \ IMAGE_SIZE_MIN, \ InputType, EnumConvertType, RGBAMaskType, \ deep_merge, parse_param, zip_longest_fill from cozy_comfyui.lexicon import \ Lexicon from cozy_comfyui.node import \ COZY_TYPE_IMAGE, \ CozyImageNode from cozy_comfyui.image import \ EnumImageType from cozy_comfyui.image.adjust import \ image_invert from cozy_comfyui.image.channel import \ channel_solid from cozy_comfyui.image.compose import \ EnumEdge, EnumScaleMode, EnumInterpolation, \ image_rotate, image_scalefit, image_transform, image_translate, image_blend from cozy_comfyui.image.convert import \ image_convert, pil_to_cv, cv_to_tensor, cv_to_tensor_full, tensor_to_cv, \ image_mask, image_mask_add, image_mask_binary from cozy_comfyui.image.misc import \ image_stack from cozy_comfyui.image.shape import \ EnumShapes, \ shape_ellipse, shape_polygon, shape_quad from cozy_comfyui.image.text import \ EnumAlignment, EnumJustify, \ font_names, text_autosize, text_draw # ============================================================================== # === GLOBAL === # ============================================================================== JOV_CATEGORY = "CREATE" # ============================================================================== # === CLASS === # ============================================================================== class ConstantNode(CozyImageNode): NAME = "CONSTANT (JOV) 🟪" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Generate a constant image or mask of a specified size and color. It can be used to create solid color backgrounds or matte images for compositing with other visual elements. 
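At its core, the constant output is simply an array filled with a single RGBA value. A minimal NumPy sketch of the idea (the node itself delegates to the cozy_comfyui `channel_solid` helper; `constant_image` is a hypothetical stand-in named only for illustration):

```python
import numpy as np

def constant_image(width: int, height: int, color=(0, 0, 0, 255)) -> np.ndarray:
    """Return a solid RGBA image of the given size, filled with `color`."""
    # np.full broadcasts the 4-component color across every pixel.
    return np.full((height, width, 4), color, dtype=np.uint8)

img = constant_image(512, 512, (255, 0, 0, 255))
print(img.shape, img[0, 0].tolist())  # (512, 512, 4) [255, 0, 0, 255]
```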
The node allows you to define the desired width and height of the output and specify the RGBA color value for the constant output. Additionally, you can input an optional image to use as a matte with the selected color. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, { "tooltip":"Optional Image to Matte with Selected Color"}), Lexicon.MASK: (COZY_TYPE_IMAGE, { "tooltip":"Override Image mask"}), Lexicon.COLOR: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True, "tooltip": "Constant Color to Output"}), Lexicon.MODE: (EnumScaleMode._member_names_, { "default": EnumScaleMode.MATTE.name,}), Lexicon.WH: ("VEC2", { "default": (512, 512), "mij": 1, "int": True, "label": ["W", "H"],}), Lexicon.SAMPLE: (EnumInterpolation._member_names_, { "default": EnumInterpolation.LANCZOS4.name,}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) mask = parse_param(kw, Lexicon.MASK, EnumConvertType.MASK, None) matte = parse_param(kw, Lexicon.COLOR, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255) mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name) wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), 1) sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name) images = [] params = list(zip_longest_fill(pA, mask, matte, mode, wihi, sample)) pbar = ProgressBar(len(params)) for idx, (pA, mask, matte, mode, wihi, sample) in enumerate(params): width, height = wihi w, h = width, height if pA is None: pA = channel_solid(width, height, (0,0,0,255)) else: pA = tensor_to_cv(pA) pA = image_convert(pA, 4) h, w = pA.shape[:2] if mask is None: mask = image_mask(pA, 0) else: mask = tensor_to_cv(mask, invert=1, chan=1) mask = image_scalefit(mask, w, h, matte=(0,0,0,255), mode=EnumScaleMode.FIT) pB = channel_solid(w, h, matte) pA = image_blend(pB, pA, 
mask) #mask = image_invert(mask, 1) pA = image_mask_add(pA, mask) if mode != EnumScaleMode.MATTE: pA = image_scalefit(pA, width, height, mode, sample, matte) images.append(cv_to_tensor_full(pA, matte)) pbar.update_absolute(idx) return image_stack(images) class ShapeNode(CozyImageNode): NAME = "SHAPE GEN (JOV) ✨" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Create n-sided polygons. These shapes can be customized by adjusting parameters such as size, color, position, rotation angle, and edge blur. The node provides options to specify the shape type, the number of sides for polygons, the RGBA color value for the main shape, and the RGBA color value for the background. Additionally, you can control the width and height of the output images, the position offset, and the amount of edge blur applied to the shapes. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.SHAPE: (EnumShapes._member_names_, { "default": EnumShapes.CIRCLE.name}), Lexicon.SIDES: ("INT", { "default": 3, "min": 3, "max": 100}), Lexicon.COLOR: ("VEC4", { "default": (255, 255, 255, 255), "rgb": True, "tooltip": "Main Shape Color"}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}), Lexicon.WH: ("VEC2", { "default": (256, 256), "mij":IMAGE_SIZE_MIN, "int": True, "label": ["W", "H"],}), Lexicon.XY: ("VEC2", { "default": (0, 0,), "mij": -1, "maj": 1, "label": ["X", "Y"]}), Lexicon.ANGLE: ("FLOAT", { "default": 0, "min": -180, "max": 180, "step": 0.01,}), Lexicon.SIZE: ("VEC2", { "default": (1, 1), "mij": 0, "maj": 1, "label": ["X", "Y"]}), Lexicon.EDGE: (EnumEdge._member_names_, { "default": EnumEdge.CLIP.name}), Lexicon.BLUR: ("FLOAT", { "default": 0, "min": 0, "step": 0.01,}), } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: shape = parse_param(kw, Lexicon.SHAPE, EnumShapes, EnumShapes.CIRCLE.name) sides = parse_param(kw, Lexicon.SIDES, EnumConvertType.INT, 3, 3) color = parse_param(kw, Lexicon.COLOR, 
EnumConvertType.VEC4INT, (255, 255, 255, 255), 0, 255) matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255) wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (256, 256), IMAGE_SIZE_MIN) offset = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0), -1, 1) angle = parse_param(kw, Lexicon.ANGLE, EnumConvertType.FLOAT, 0, -180, 180) size = parse_param(kw, Lexicon.SIZE, EnumConvertType.VEC2, (1, 1), 0, 1, zero=0.001) edge = parse_param(kw, Lexicon.EDGE, EnumEdge, EnumEdge.CLIP.name) blur = parse_param(kw, Lexicon.BLUR, EnumConvertType.FLOAT, 0, 0) params = list(zip_longest_fill(shape, sides, color, matte, wihi, offset, angle, size, edge, blur)) images = [] pbar = ProgressBar(len(params)) for idx, (shape, sides, color, matte, wihi, offset, angle, size, edge, blur) in enumerate(params): width, height = wihi sizeX, sizeY = size fill = color[:3][::-1] match shape: case EnumShapes.SQUARE: rgb = shape_quad(width, height, sizeX, sizeY, fill) case EnumShapes.CIRCLE: rgb = shape_ellipse(width, height, sizeX, sizeY, fill) case EnumShapes.POLYGON: rgb = shape_polygon(width, height, sizeX, sides, fill) rgb = pil_to_cv(rgb) rgb = image_transform(rgb, offset, angle, edge=edge) mask = image_mask_binary(rgb) if blur > 0: # @TODO: Do blur on larger canvas to remove wrap bleed. 
rgb = (gaussian(rgb, sigma=blur, channel_axis=2) * 255).astype(np.uint8) mask = (gaussian(mask, sigma=blur, channel_axis=2) * 255).astype(np.uint8) mask = (mask * (color[3] / 255.)).astype(np.uint8) back = list(matte[:3]) + [255] canvas = np.full((height, width, 4), back, dtype=rgb.dtype) rgba = image_blend(canvas, rgb, mask) rgba = image_mask_add(rgba, mask) rgb = image_convert(rgba, 3) images.append([cv_to_tensor(rgba), cv_to_tensor(rgb), cv_to_tensor(mask, True)]) pbar.update_absolute(idx) return image_stack(images) class TextNode(CozyImageNode): NAME = "TEXT GEN (JOV) 📝" CATEGORY = JOV_CATEGORY FONTS = font_names() FONT_NAMES = sorted(FONTS.keys()) DESCRIPTION = """ Generates images containing text based on parameters such as font, size, alignment, color, and position. Users can input custom text messages, select fonts from a list of available options, adjust font size, and specify the alignment and justification of the text. Additionally, the node provides options for auto-sizing text to fit within specified dimensions, controlling letter-by-letter rendering, and applying edge effects such as clipping and inversion. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.STRING: ("STRING", { "default": "jovimetrix", "multiline": True, "dynamicPrompts": False, "tooltip": "Your Message"}), Lexicon.FONT: (cls.FONT_NAMES, { "default": cls.FONT_NAMES[0]}), Lexicon.LETTER: ("BOOLEAN", { "default": False,}), Lexicon.AUTOSIZE: ("BOOLEAN", { "default": False, "tooltip": "Scale based on Width & Height"}), Lexicon.COLOR: ("VEC4", { "default": (255, 255, 255, 255), "rgb": True, "tooltip": "Color of the letters"}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}), Lexicon.COLUMNS: ("INT", { "default": 0, "min": 0}), # if auto on, hide these... 
Lexicon.SIZE: ("INT", { "default": 16, "min": 8}), Lexicon.ALIGN: (EnumAlignment._member_names_, { "default": EnumAlignment.CENTER.name,}), Lexicon.JUSTIFY: (EnumJustify._member_names_, { "default": EnumJustify.CENTER.name,}), Lexicon.MARGIN: ("INT", { "default": 0, "min": -1024, "max": 1024,}), Lexicon.SPACING: ("INT", { "default": 0, "min": -1024, "max": 1024}), Lexicon.WH: ("VEC2", { "default": (256, 256), "mij":IMAGE_SIZE_MIN, "int": True, "label": ["W", "H"],}), Lexicon.XY: ("VEC2", { "default": (0, 0,), "mij": -1, "maj": 1, "label": ["X", "Y"], "tooltip":"Offset the position"}), Lexicon.ANGLE: ("FLOAT", { "default": 0, "step": 0.01,}), Lexicon.EDGE: (EnumEdge._member_names_, { "default": EnumEdge.CLIP.name}), Lexicon.INVERT: ("BOOLEAN", { "default": False, "tooltip": "Invert the mask input"}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: full_text = parse_param(kw, Lexicon.STRING, EnumConvertType.STRING, "jovimetrix") font_idx = parse_param(kw, Lexicon.FONT, EnumConvertType.STRING, self.FONT_NAMES[0]) autosize = parse_param(kw, Lexicon.AUTOSIZE, EnumConvertType.BOOLEAN, False) letter = parse_param(kw, Lexicon.LETTER, EnumConvertType.BOOLEAN, False) color = parse_param(kw, Lexicon.COLOR, EnumConvertType.VEC4INT, (255,255,255,255)) matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0,0,0,255)) columns = parse_param(kw, Lexicon.COLUMNS, EnumConvertType.INT, 0) font_size = parse_param(kw, Lexicon.SIZE, EnumConvertType.INT, 1) align = parse_param(kw, Lexicon.ALIGN, EnumAlignment, EnumAlignment.CENTER.name) justify = parse_param(kw, Lexicon.JUSTIFY, EnumJustify, EnumJustify.CENTER.name) margin = parse_param(kw, Lexicon.MARGIN, EnumConvertType.INT, 0) line_spacing = parse_param(kw, Lexicon.SPACING, EnumConvertType.INT, 0) wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN) pos = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0)) angle = parse_param(kw, Lexicon.ANGLE, EnumConvertType.INT, 
0) edge = parse_param(kw, Lexicon.EDGE, EnumEdge, EnumEdge.CLIP.name) invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False) images = [] params = list(zip_longest_fill(full_text, font_idx, autosize, letter, color, matte, columns, font_size, align, justify, margin, line_spacing, wihi, pos, angle, edge, invert)) pbar = ProgressBar(len(params)) for idx, (full_text, font_idx, autosize, letter, color, matte, columns, font_size, align, justify, margin, line_spacing, wihi, pos, angle, edge, invert) in enumerate(params): width, height = wihi font_name = self.FONTS[font_idx] full_text = str(full_text) if letter: full_text = full_text.replace('\n', '') if autosize: _, font_size = text_autosize(full_text[0].upper(), font_name, width, height)[:2] margin = 0 line_spacing = 0 else: if autosize: wm = width - margin * 2 hm = height - margin * 2 - line_spacing columns = 0 if columns == 0 else columns * 2 + 2 full_text, font_size = text_autosize(full_text, font_name, wm, hm, columns)[:2] full_text = [full_text] font_size *= 2.5 font = ImageFont.truetype(font_name, font_size) for ch in full_text: img = text_draw(ch, font, width, height, align, justify, margin, line_spacing, color) img = image_rotate(img, angle, edge=edge) img = image_translate(img, pos, edge=edge) if invert: img = image_invert(img, 1) images.append(cv_to_tensor_full(img, matte)) pbar.update_absolute(idx) return image_stack(images) ================================================ FILE: core/trans.py ================================================ """ Jovimetrix - Transform """ import sys from enum import Enum from comfy.utils import ProgressBar from cozy_comfyui import \ logger, \ IMAGE_SIZE_MIN, \ InputType, RGBAMaskType, EnumConvertType, \ deep_merge, parse_param, parse_dynamic, zip_longest_fill from cozy_comfyui.lexicon import \ Lexicon from cozy_comfyui.node import \ COZY_TYPE_IMAGE, \ CozyImageNode, CozyBaseNode from cozy_comfyui.image.channel import \ channel_solid from 
cozy_comfyui.image.convert import \ tensor_to_cv, cv_to_tensor_full, cv_to_tensor, image_mask, image_mask_add from cozy_comfyui.image.compose import \ EnumOrientation, EnumEdge, EnumMirrorMode, EnumScaleMode, EnumInterpolation, \ image_edge_wrap, image_mirror, image_scalefit, image_transform, \ image_crop, image_crop_center, image_crop_polygonal, image_stacker, \ image_flatten from cozy_comfyui.image.misc import \ image_stack from cozy_comfyui.image.mapping import \ EnumProjection, \ remap_fisheye, remap_perspective, remap_polar, remap_sphere # ============================================================================== # === GLOBAL === # ============================================================================== JOV_CATEGORY = "TRANSFORM" # ============================================================================== # === ENUMERATION === # ============================================================================== class EnumCropMode(Enum): CENTER = 20 XY = 0 FREE = 10 # ============================================================================== # === CLASS === # ============================================================================== class CropNode(CozyImageNode): NAME = "CROP (JOV) ✂️" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Extract a portion of an input image or resize it. It supports various cropping modes, including center cropping, custom XY cropping, and free-form polygonal cropping. This node is useful for preparing image data for specific tasks or extracting regions of interest. 
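The default center mode can be sketched with plain NumPy slicing. This is a simplified stand-in, under the assumption that the cozy_comfyui `image_crop_center` helper behaves along these lines; it is not the node's actual implementation:

```python
import numpy as np

def crop_center(img: np.ndarray, width: int, height: int) -> np.ndarray:
    """Extract a width x height region centered in an (H, W, C) image."""
    h, w = img.shape[:2]
    width, height = min(width, w), min(height, h)  # clamp to the source size
    x = (w - width) // 2
    y = (h - height) // 2
    return img[y:y + height, x:x + width]

img = np.zeros((480, 640, 3), dtype=np.uint8)
print(crop_center(img, 256, 256).shape)  # (256, 256, 3)
```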
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.FUNCTION: (EnumCropMode._member_names_, { "default": EnumCropMode.CENTER.name}), Lexicon.XY: ("VEC2", { "default": (0, 0), "mij": 0, "maj": 1, "label": ["X", "Y"]}), Lexicon.WH: ("VEC2", { "default": (512, 512), "mij": IMAGE_SIZE_MIN, "int": True, "label": ["W", "H"]}), Lexicon.TLTR: ("VEC4", { "default": (0, 0, 0, 1), "mij": 0, "maj": 1, "label": ["TOP", "LEFT", "TOP", "RIGHT"],}), Lexicon.BLBR: ("VEC4", { "default": (1, 0, 1, 1), "mij": 0, "maj": 1, "label": ["BOTTOM", "LEFT", "BOTTOM", "RIGHT"],}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) func = parse_param(kw, Lexicon.FUNCTION, EnumCropMode, EnumCropMode.CENTER.name) # if less than 1 then use as scalar, over 1 = int(size) xy = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0,)) wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN) tltr = parse_param(kw, Lexicon.TLTR, EnumConvertType.VEC4, (0, 0, 0, 1,)) blbr = parse_param(kw, Lexicon.BLBR, EnumConvertType.VEC4, (1, 0, 1, 1,)) matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255) params = list(zip_longest_fill(pA, func, xy, wihi, tltr, blbr, matte)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, func, xy, wihi, tltr, blbr, matte) in enumerate(params): width, height = wihi pA = tensor_to_cv(pA) if pA is not None else channel_solid(width, height) alpha = None if pA.ndim == 3 and pA.shape[2] == 4: alpha = image_mask(pA) if func == EnumCropMode.FREE: x1, y1, x2, y2 = tltr x4, y4, x3, y3 = blbr points = (x1 * width, y1 * height), (x2 * width, y2 * height), \ (x3 * width, y3 * height), (x4 * width, y4 * height) pA = image_crop_polygonal(pA, points) if alpha is not None: 
alpha = image_crop_polygonal(alpha, points) pA[..., 3] = alpha[..., 0][:,:] elif func == EnumCropMode.XY: pA = image_crop(pA, width, height, xy) else: pA = image_crop_center(pA, width, height) images.append(cv_to_tensor_full(pA, matte)) pbar.update_absolute(idx) return image_stack(images) class FlattenNode(CozyImageNode): NAME = "FLATTEN (JOV) ⬇️" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Combine multiple input images into a single image by summing their pixel values. This operation is useful for merging multiple layers or images into one composite image, such as combining different elements of a design or merging masks. Users can specify the blending mode and interpolation method to control how the images are combined. Additionally, a matte can be applied to adjust the transparency of the final composite image. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.MODE: (EnumScaleMode._member_names_, { "default": EnumScaleMode.MATTE.name,}), Lexicon.WH: ("VEC2", { "default": (512, 512), "mij":1, "int": True, "label": ["W", "H"]}), Lexicon.SAMPLE: (EnumInterpolation._member_names_, { "default": EnumInterpolation.LANCZOS4.name,}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}), Lexicon.OFFSET: ("VEC2", { "default": (0, 0), "mij":0, "int": True, "label": ["X", "Y"]}), } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: imgs = parse_dynamic(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) if imgs is None: logger.warning("no images to flatten") return () # be less dumb when merging pA = [tensor_to_cv(i) for i in imgs] mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)[0] wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), 1)[0] sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)[0] matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0] offset = 
parse_param(kw, Lexicon.OFFSET, EnumConvertType.VEC2INT, (0, 0), 0)[0] w, h = wihi x, y = offset pA = image_flatten(pA, x, y, w, h, mode=mode, sample=sample) pA = [cv_to_tensor_full(pA, matte)] return image_stack(pA) class SplitNode(CozyBaseNode): NAME = "SPLIT (JOV) 🎭" CATEGORY = JOV_CATEGORY RETURN_TYPES = ("IMAGE", "IMAGE",) RETURN_NAMES = ("IMAGEA", "IMAGEB",) OUTPUT_TOOLTIPS = ( "Left/Top image", "Right/Bottom image" ) DESCRIPTION = """ Split an image into two or four images based on the percentages for width and height. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.VALUE: ("FLOAT", { "default": 0.5, "min": 0, "max": 1, "step": 0.001 }), Lexicon.FLIP: ("BOOLEAN", { "default": False, "tooltip": "Horizontal split (False) or Vertical split (True)" }), Lexicon.MODE: (EnumScaleMode._member_names_, { "default": EnumScaleMode.MATTE.name,}), Lexicon.WH: ("VEC2", { "default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True, "label": ["W", "H"]}), Lexicon.SAMPLE: (EnumInterpolation._member_names_, { "default": EnumInterpolation.LANCZOS4.name,}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) percent = parse_param(kw, Lexicon.VALUE, EnumConvertType.FLOAT, 0.5, 0, 1) flip = parse_param(kw, Lexicon.FLIP, EnumConvertType.BOOLEAN, False) mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name) wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN) sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name) matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255) params = list(zip_longest_fill(pA, percent, flip, mode, wihi, sample, matte)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, 
percent, flip, mode, wihi, sample, matte) in enumerate(params): w, h = wihi pA = channel_solid(w, h, matte) if pA is None else tensor_to_cv(pA) if flip: size = pA.shape[1] percent = max(1, min(size-1, int(size * percent))) image_a = pA[:, :percent] image_b = pA[:, percent:] else: size = pA.shape[0] percent = max(1, min(size-1, int(size * percent))) image_a = pA[:percent, :] image_b = pA[percent:, :] if mode != EnumScaleMode.MATTE: image_a = image_scalefit(image_a, w, h, mode, sample) image_b = image_scalefit(image_b, w, h, mode, sample) images.append([cv_to_tensor(img) for img in [image_a, image_b]]) pbar.update_absolute(idx) return image_stack(images) class StackNode(CozyImageNode): NAME = "STACK (JOV) ➕" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Merge multiple input images into a single composite image by stacking them along a specified axis. Options include axis, stride, scaling mode, width and height, interpolation method, and matte color. The axis parameter allows for horizontal, vertical, or grid stacking of images, while stride controls the spacing between them. 
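For the grid axis, the stride determines how many tiles fill a row before wrapping. A toy NumPy sketch of that layout (the node itself uses the `image_stacker` helper and also handles mismatched sizes; `stack_grid` here assumes same-sized images and is named only for illustration):

```python
import numpy as np

def stack_grid(images: list, stride: int) -> np.ndarray:
    """Lay out same-sized (H, W, C) images in rows of `stride` tiles each."""
    h, w, c = images[0].shape
    blank = np.zeros((h, w, c), dtype=images[0].dtype)
    # Break the flat list into rows, padding the final row with blank tiles.
    rows = [images[i:i + stride] for i in range(0, len(images), stride)]
    rows[-1] = rows[-1] + [blank] * (stride - len(rows[-1]))
    return np.vstack([np.hstack(row) for row in rows])

tiles = [np.full((8, 8, 3), i, dtype=np.uint8) for i in range(5)]
print(stack_grid(tiles, 3).shape)  # (16, 24, 3): 2 rows of 3 tiles each
```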
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.AXIS: (EnumOrientation._member_names_, { "default": EnumOrientation.GRID.name,}), Lexicon.STEP: ("INT", { "default": 1, "min": 0, "tooltip":"How many images are placed before a new row starts (stride)"}), Lexicon.MODE: (EnumScaleMode._member_names_, { "default": EnumScaleMode.MATTE.name,}), Lexicon.WH: ("VEC2", { "default": (512, 512), "mij": IMAGE_SIZE_MIN, "int": True, "label": ["W", "H"]}), Lexicon.SAMPLE: (EnumInterpolation._member_names_, { "default": EnumInterpolation.LANCZOS4.name,}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: images = parse_dynamic(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) if len(images) == 0: logger.warning("no images to stack") return images = [tensor_to_cv(i) for i in images] axis = parse_param(kw, Lexicon.AXIS, EnumOrientation, EnumOrientation.GRID.name)[0] stride = parse_param(kw, Lexicon.STEP, EnumConvertType.INT, 1, 0)[0] mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)[0] wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)[0] sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)[0] matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0] img = image_stacker(images, axis, stride) #, matte) if mode != EnumScaleMode.MATTE: w, h = wihi img = image_scalefit(img, w, h, mode, sample) rgba, rgb, mask = cv_to_tensor_full(img, matte) return rgba.unsqueeze(0), rgb.unsqueeze(0), mask.unsqueeze(0) class TransformNode(CozyImageNode): NAME = "TRANSFORM (JOV) 🏝️" CATEGORY = JOV_CATEGORY DESCRIPTION = """ Apply various geometric transformations to images, including translation, rotation, scaling, mirroring, tiling and perspective projection. 
It offers extensive control over image manipulation to achieve desired visual effects. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES(prompt=True, dynprompt=True) d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.MASK: (COZY_TYPE_IMAGE, { "tooltip": "Override Image mask"}), Lexicon.XY: ("VEC2", { "default": (0, 0,), "mij": -1, "maj": 1, "label": ["X", "Y"]}), Lexicon.ANGLE: ("FLOAT", { "default": 0, "min": -sys.float_info.max, "max": sys.float_info.max, "step": 0.1,}), Lexicon.SIZE: ("VEC2", { "default": (1, 1), "mij": 0.001, "label": ["X", "Y"]}), Lexicon.TILE: ("VEC2", { "default": (1, 1), "mij": 1, "label": ["X", "Y"]}), Lexicon.EDGE: (EnumEdge._member_names_, { "default": EnumEdge.CLIP.name}), Lexicon.MIRROR: (EnumMirrorMode._member_names_, { "default": EnumMirrorMode.NONE.name}), Lexicon.PIVOT: ("VEC2", { "default": (0.5, 0.5), "mij": 0, "maj": 1, "step": 0.01, "label": ["X", "Y"]}), Lexicon.PROJECTION: (EnumProjection._member_names_, { "default": EnumProjection.NORMAL.name}), Lexicon.TLTR: ("VEC4", { "default": (0, 0, 1, 0), "mij": 0, "maj": 1, "step": 0.005, "label": ["TOP", "LEFT", "TOP", "RIGHT"],}), Lexicon.BLBR: ("VEC4", { "default": (0, 1, 1, 1), "mij": 0, "maj": 1, "step": 0.005, "label": ["BOTTOM", "LEFT", "BOTTOM", "RIGHT"],}), Lexicon.STRENGTH: ("FLOAT", { "default": 1, "min": 0, "max": 1, "step": 0.005}), Lexicon.MODE: (EnumScaleMode._member_names_, { "default": EnumScaleMode.MATTE.name,}), Lexicon.WH: ("VEC2", { "default": (512, 512), "mij": IMAGE_SIZE_MIN, "int": True, "label": ["W", "H"]}), Lexicon.SAMPLE: (EnumInterpolation._member_names_, { "default": EnumInterpolation.LANCZOS4.name,}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}) } }) return Lexicon._parse(d) def run(self, **kw) -> RGBAMaskType: pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) mask = parse_param(kw, Lexicon.MASK, EnumConvertType.IMAGE, None) offset = parse_param(kw, Lexicon.XY, 
EnumConvertType.VEC2, (0, 0), -1, 1) angle = parse_param(kw, Lexicon.ANGLE, EnumConvertType.FLOAT, 0) size = parse_param(kw, Lexicon.SIZE, EnumConvertType.VEC2, (1, 1), 0.001) edge = parse_param(kw, Lexicon.EDGE, EnumEdge, EnumEdge.CLIP.name) mirror = parse_param(kw, Lexicon.MIRROR, EnumMirrorMode, EnumMirrorMode.NONE.name) mirror_pivot = parse_param(kw, Lexicon.PIVOT, EnumConvertType.VEC2, (0.5, 0.5), 0, 1) tile_xy = parse_param(kw, Lexicon.TILE, EnumConvertType.VEC2, (1, 1), 1) proj = parse_param(kw, Lexicon.PROJECTION, EnumProjection, EnumProjection.NORMAL.name) tltr = parse_param(kw, Lexicon.TLTR, EnumConvertType.VEC4, (0, 0, 1, 0), 0, 1) blbr = parse_param(kw, Lexicon.BLBR, EnumConvertType.VEC4, (0, 1, 1, 1), 0, 1) strength = parse_param(kw, Lexicon.STRENGTH, EnumConvertType.FLOAT, 1, 0, 1) mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name) wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN) sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name) matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255) params = list(zip_longest_fill(pA, mask, offset, angle, size, edge, tile_xy, mirror, mirror_pivot, proj, strength, tltr, blbr, mode, wihi, sample, matte)) images = [] pbar = ProgressBar(len(params)) for idx, (pA, mask, offset, angle, size, edge, tile_xy, mirror, mirror_pivot, proj, strength, tltr, blbr, mode, wihi, sample, matte) in enumerate(params): pA = tensor_to_cv(pA) if pA is not None else channel_solid() if mask is None: mask = image_mask(pA, 255) else: mask = tensor_to_cv(mask) pA = image_mask_add(pA, mask) h, w = pA.shape[:2] pA = image_transform(pA, offset, angle, size, sample, edge) pA = image_crop_center(pA, w, h) if mirror != EnumMirrorMode.NONE: mpx, mpy = mirror_pivot pA = image_mirror(pA, mirror, mpx, mpy) pA = image_scalefit(pA, w, h, EnumScaleMode.FIT, sample) tx, ty = tile_xy if tx != 1. 
or ty != 1.: pA = image_edge_wrap(pA, tx / 2 - 0.5, ty / 2 - 0.5) pA = image_scalefit(pA, w, h, EnumScaleMode.FIT, sample) match proj: case EnumProjection.PERSPECTIVE: x1, y1, x2, y2 = tltr x4, y4, x3, y3 = blbr sh, sw = pA.shape[:2] x1, x2, x3, x4 = map(lambda x: x * sw, [x1, x2, x3, x4]) y1, y2, y3, y4 = map(lambda y: y * sh, [y1, y2, y3, y4]) pA = remap_perspective(pA, [[x1, y1], [x2, y2], [x3, y3], [x4, y4]]) case EnumProjection.SPHERICAL: pA = remap_sphere(pA, strength) case EnumProjection.FISHEYE: pA = remap_fisheye(pA, strength) case EnumProjection.POLAR: pA = remap_polar(pA) if proj != EnumProjection.NORMAL: pA = image_scalefit(pA, w, h, EnumScaleMode.FIT, sample) if mode != EnumScaleMode.MATTE: w, h = wihi pA = image_scalefit(pA, w, h, mode, sample) images.append(cv_to_tensor_full(pA, matte)) pbar.update_absolute(idx) return image_stack(images) ================================================ FILE: core/utility/__init__.py ================================================ ================================================ FILE: core/utility/batch.py ================================================ """ Jovimetrix - Utility """ import os import sys import json import glob import random from enum import Enum from pathlib import Path from itertools import zip_longest from typing import Any import torch import numpy as np from comfy.utils import ProgressBar from nodes import interrupt_processing from cozy_comfyui import \ logger, \ IMAGE_SIZE_MIN, \ InputType, EnumConvertType, TensorType, \ deep_merge, parse_dynamic, parse_param from cozy_comfyui.lexicon import \ Lexicon from cozy_comfyui.node import \ COZY_TYPE_ANY, \ CozyBaseNode from cozy_comfyui.image import \ IMAGE_FORMATS from cozy_comfyui.image.compose import \ EnumScaleMode, EnumInterpolation, \ image_matte, image_scalefit from cozy_comfyui.image.convert import \ image_convert, cv_to_tensor, cv_to_tensor_full, tensor_to_cv from cozy_comfyui.image.misc import \ image_by_size from cozy_comfyui.image.io 
import \
    image_load

from cozy_comfyui.api import \
    parse_reset, comfy_api_post

from ... import \
    ROOT

JOV_CATEGORY = "UTILITY/BATCH"

# ==============================================================================
# === ENUMERATION ===
# ==============================================================================

class EnumBatchMode(Enum):
    MERGE = 30
    PICK = 10
    SLICE = 15
    INDEX_LIST = 20
    RANDOM = 5

# ==============================================================================
# === CLASS ===
# ==============================================================================

class ArrayNode(CozyBaseNode):
    NAME = "ARRAY (JOV) 📚"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY, "INT",)
    RETURN_NAMES = ("ARRAY", "LENGTH",)
    OUTPUT_IS_LIST = (True, True,)
    # keep tooltips in sync with RETURN_TYPES/RETURN_NAMES: two outputs
    OUTPUT_TOOLTIPS = (
        "Output list from selected operation",
        "Length of output list",
    )
    DESCRIPTION = """
Processes a batch of data based on the selected mode. Merge, pick, slice, randomly select, or index items. Can also reverse the order of items.
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.MODE: (EnumBatchMode._member_names_, { "default": EnumBatchMode.MERGE.name, "tooltip": "Select a single index, specific range, custom index list or randomized"}), Lexicon.RANGE: ("VEC3", { "default": (0, 0, 1), "mij": 0, "int": True, "tooltip": "The start, end and step for the range"}), Lexicon.INDEX: ("STRING", { "default": "", "tooltip": "Comma separated list of indicies to export"}), Lexicon.COUNT: ("INT", { "default": 0, "min": 0, "max": sys.maxsize, "tooltip": "How many items to return"}), Lexicon.REVERSE: ("BOOLEAN", { "default": False, "tooltip": "Reverse the calculated output list"}), Lexicon.SEED: ("INT", { "default": 0, "min": 0, "max": sys.maxsize}), } }) return Lexicon._parse(d) @classmethod def batched(cls, iterable, chunk_size, expand:bool=False, fill:Any=None) -> list[Any]: if expand: iterator = iter(iterable) return zip_longest(*[iterator] * chunk_size, fillvalue=fill) return [iterable[i: i + chunk_size] for i in range(0, len(iterable), chunk_size)] def run(self, **kw) -> tuple[int, list]: data_list = parse_dynamic(kw, Lexicon.DYNAMIC, EnumConvertType.ANY, None) mode = parse_param(kw, Lexicon.MODE, EnumBatchMode, EnumBatchMode.MERGE.name)[0] slice_range = parse_param(kw, Lexicon.RANGE, EnumConvertType.VEC3INT, (0, 0, 1))[0] index = parse_param(kw, Lexicon.INDEX, EnumConvertType.STRING, "")[0] count = parse_param(kw, Lexicon.COUNT, EnumConvertType.INT, 0, 0)[0] reverse = parse_param(kw, Lexicon.REVERSE, EnumConvertType.BOOLEAN, False)[0] seed = parse_param(kw, Lexicon.SEED, EnumConvertType.INT, 0, 0)[0] data = [] # track latents since they need to be added back to Dict['samples'] output_type = None for b in data_list: if isinstance(b, dict) and "samples" in b: # latents are batched in the x.samples key if output_type and output_type != EnumConvertType.LATENT: raise Exception(f"Cannot mix input types {output_type} vs 
{EnumConvertType.LATENT}")
                data.extend(b["samples"])
                output_type = EnumConvertType.LATENT
            elif isinstance(b, TensorType):
                if output_type and output_type not in (EnumConvertType.IMAGE, EnumConvertType.MASK):
                    raise Exception(f"Cannot mix input types {output_type} vs {EnumConvertType.IMAGE}")
                if b.ndim == 4:
                    b = [i for i in b]
                else:
                    b = [b]
                for x in b:
                    if x.ndim == 2:
                        x = x.unsqueeze(-1)
                    data.append(x)
                output_type = EnumConvertType.IMAGE
            elif b is not None:
                idx_type = type(b)
                if output_type and output_type != idx_type:
                    raise Exception(f"Cannot mix input types {output_type} vs {idx_type}")
                data.append(b)

        if len(data) == 0:
            logger.warning("no data for list")
            # the node declares two outputs (ARRAY, LENGTH), so return two values
            return ([], [0])

        if mode == EnumBatchMode.PICK:
            start, end, step = slice_range
            start = start if start < len(data) else -1
            data = [data[start]]
        elif mode == EnumBatchMode.SLICE:
            start, end, step = slice_range
            start = abs(start)
            end = len(data) if end == 0 else abs(end + 1)
            if step == 0:
                step = 1
            elif step < 0:
                data = data[::-1]
                step = abs(step)
            data = data[start:end:step]
        elif mode == EnumBatchMode.RANDOM:
            random.seed(seed)
            if count == 0:
                count = len(data)
            else:
                count = max(1, min(len(data), count))
            data = random.sample(data, k=count)
        elif mode == EnumBatchMode.INDEX_LIST:
            junk = []
            for x in index.split(','):
                if '-' in x:
                    x = x.split('-')
                    for idx, v in enumerate(x):
                        try:
                            x[idx] = max(0, min(len(data) - 1, int(v)))
                        except ValueError as e:
                            logger.error(e)
                            x[idx] = 0
                    if x[0] > x[1]:
                        tmp = list(range(x[0], x[1] - 1, -1))
                    else:
                        tmp = list(range(x[0], x[1] + 1))
                    junk.extend(tmp)
                else:
                    idx = max(0, min(len(data) - 1, int(x)))
                    junk.append(idx)
            if len(junk) > 0:
                data = [data[i] for i in junk]

        if len(data) == 0:
            logger.warning("no data for list")
            return ([], [0])

        # reverse before?
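The SLICE branch above normalizes its range before slicing: start and end are taken as absolute values, an end of 0 means "through the last item", a step of 0 is coerced to 1, and a negative step reverses the list first. A minimal standalone sketch of that normalization (the helper name `slice_batch` is hypothetical, not part of Jovimetrix):

```python
def slice_batch(data: list, start: int, end: int, step: int) -> list:
    """Mirror of the ArrayNode SLICE-mode normalization: abs() the bounds,
    treat end == 0 as 'through the last item', and fold a negative step
    into an up-front reversal."""
    start = abs(start)
    end = len(data) if end == 0 else abs(end + 1)
    if step == 0:
        step = 1
    elif step < 0:
        data = data[::-1]
        step = abs(step)
    return data[start:end:step]

items = list("abcdef")
print(slice_batch(items, 0, 0, 1))   # every item
print(slice_batch(items, 1, 3, 1))   # items 1 through 3, inclusive
print(slice_batch(items, 0, 0, -2))  # reversed, every other item
```

Because end is made inclusive via `abs(end + 1)`, a (1, 3, 1) range yields indices 1, 2 and 3, unlike raw Python slicing.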
if reverse: data.reverse() # cut the list down first if count > 0: data = data[0:count] size = len(data) if output_type == EnumConvertType.IMAGE: _, w, h = image_by_size(data) result = [] for d in data: w2, h2, cc = d.shape if w != w2 or h != h2 or cc != 4: d = tensor_to_cv(d) d = image_convert(d, 4) d = image_matte(d, (0,0,0,0), w, h) d = cv_to_tensor(d) d = d.unsqueeze(0) result.append(d) size = len(result) data = torch.stack(result) else: data = [data] return (data, [size],) class BatchToList(CozyBaseNode): NAME = "BATCH TO LIST (JOV)" NAME_PRETTY = "BATCH TO LIST (JOV)" CATEGORY = JOV_CATEGORY RETURN_TYPES = (COZY_TYPE_ANY, ) RETURN_NAMES = ("LIST", ) DESCRIPTION = """ Convert a batch of values into a pure python list of values. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() return deep_merge(d, { "optional": { Lexicon.BATCH: (COZY_TYPE_ANY, {}), } }) def run(self, **kw) -> tuple[list[Any]]: batch = parse_param(kw, Lexicon.BATCH, EnumConvertType.LIST, []) batch = [f[0] for f in batch] return (batch,) class QueueBaseNode(CozyBaseNode): CATEGORY = JOV_CATEGORY RETURN_TYPES = (COZY_TYPE_ANY, COZY_TYPE_ANY, "STRING", "INT", "INT", "BOOLEAN") RETURN_NAMES = ("❔", "QUEUE", "CURRENT", "INDEX", "TOTAL", "TRIGGER", ) #OUTPUT_IS_LIST = (True, True, True, True, True, True,) VIDEO_FORMATS = ['.wav', '.mp3', '.webm', '.mp4', '.avi', '.wmv', '.mkv', '.mov', '.mxf'] @classmethod def IS_CHANGED(cls, **kw) -> float: return float('nan') @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.QUEUE: ("STRING", { "default": "./res/img/test-a.png", "multiline": True, "tooltip": "Current items to process during Queue iteration"}), Lexicon.RECURSE: ("BOOLEAN", { "default": False, "tooltip": "Recurse through all subdirectories found"}), Lexicon.BATCH: ("BOOLEAN", { "default": False, "tooltip": "Load all items, if they are loadable items, i.e. 
batch load images from the Queue's list"}), Lexicon.SELECT: ("INT", { "default": 0, "min": 0, "tooltip": "The index to use for the current queue item. 0 will move to the next item each queue run"}), Lexicon.HOLD: ("BOOLEAN", { "default": False, "tooltip": "Hold the item at the current queue index"}), Lexicon.STOP: ("BOOLEAN", { "default": False, "tooltip": "When the Queue is out of items, send a `HALT` to ComfyUI"}), Lexicon.LOOP: ("BOOLEAN", { "default": True, "tooltip": "If the queue should loop. If `False` and if there are more iterations, will send the previous image"}), Lexicon.RESET: ("BOOLEAN", { "default": False, "tooltip": "Reset the queue back to index 1"}), } }) return Lexicon._parse(d) def __init__(self) -> None: self.__index = 0 self.__q = None self.__index_last = None self.__len = 0 self.__current = None self.__previous = None self.__ident = None self.__last_q_value = {} # consume the list into iterable items to load/process def __parseQ(self, data: Any, recurse: bool=False) -> list[str]: entries = [] for line in data.strip().split('\n'): if len(line) == 0: continue data = [line] if not line.lower().startswith("http"): # ;*.png;*.gif;*.jpg base_path_str, tail = os.path.split(line) filters = [p.strip() for p in tail.split(';')] base_path = Path(base_path_str) if base_path.is_absolute(): search_dir = base_path if base_path.is_dir() else base_path.parent else: search_dir = (ROOT / base_path).resolve() # Check if the base directory exists if search_dir.exists(): if search_dir.is_dir(): new_data = [] filters = filters if len(filters) > 0 and isinstance(filters[0], str) else IMAGE_FORMATS for pattern in filters: found = glob.glob(str(search_dir / pattern), recursive=recurse) new_data.extend([str(Path(f).resolve()) for f in found if Path(f).is_file()]) if len(new_data): data = new_data elif search_dir.is_file(): path = str(search_dir.resolve()) if path.lower().endswith('.txt'): with open(path, 'r', encoding='utf-8') as f: data = f.read().split('\n') else: 
data = [path] elif len(results := glob.glob(str(search_dir))) > 0: data = [x.replace('\\', '/') for x in results] if len(data): ret = [] for x in data: try: ret.append(float(x)) except: ret.append(x) entries.extend(ret) return entries # turn Q element into actual hard type def process(self, q_data: Any) -> TensorType | str | dict: # single Q cache to skip loading single entries over and over # @TODO: MRU cache strategy if (val := self.__last_q_value.get(q_data, None)) is not None: return val if isinstance(q_data, (str,)): _, ext = os.path.splitext(q_data) if ext in IMAGE_FORMATS: data = image_load(q_data)[0] self.__last_q_value[q_data] = data #elif ext in self.VIDEO_FORMATS: # data = load_file(q_data) # self.__last_q_value[q_data] = data elif ext == '.json': with open(q_data, 'r', encoding='utf-8') as f: self.__last_q_value[q_data] = json.load(f) return self.__last_q_value.get(q_data, q_data) def run(self, ident, **kw) -> tuple[Any, list[str], str, int, int]: self.__ident = ident # should work headless as well if (new_val := parse_param(kw, Lexicon.SELECT, EnumConvertType.INT, 0)[0]) > 0: self.__index = new_val - 1 reset = parse_reset(ident) > 0 if reset or parse_param(kw, Lexicon.RESET, EnumConvertType.BOOLEAN, False)[0]: self.__q = None self.__index = 0 mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)[0] sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)[0] wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)[0] w, h = wihi matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0] if self.__q is None: # process Q into ... # check if folder first, file, then string. 
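Each queue line that survives the path and glob expansion in `__parseQ` above is finally coerced: anything that parses as a float becomes a number, everything else stays a string. A sketch of that coercion step (the helper name `coerce_entry` is illustrative; the original uses a bare `except`, here narrowed to `ValueError`, the case that actually occurs):

```python
def coerce_entry(x: str):
    """Queue entries that look numeric become floats; the rest pass
    through unchanged, mirroring the try/float tail of __parseQ."""
    try:
        return float(x)
    except ValueError:
        return x

queue = ["./res/img/test-a.png", "0.5", "60"]
parsed = [coerce_entry(q) for q in queue]
# the file path stays a string; the numeric strings become floats
```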
# entry is: data, , recurse = parse_param(kw, Lexicon.RECURSE, EnumConvertType.BOOLEAN, False)[0] q = parse_param(kw, Lexicon.QUEUE, EnumConvertType.STRING, "")[0] self.__q = self.__parseQ(q, recurse) self.__len = len(self.__q) self.__index_last = 0 self.__previous = self.__q[0] if len(self.__q) else None if self.__previous: self.__previous = self.process(self.__previous) # make sure we have more to process if are a single fire queue stop = parse_param(kw, Lexicon.STOP, EnumConvertType.BOOLEAN, False)[0] if stop and self.__index >= self.__len: comfy_api_post("jovi-queue-done", ident, self.status) interrupt_processing() return self.__previous, self.__q, self.__current, self.__index_last+1, self.__len if (wait := parse_param(kw, Lexicon.HOLD, EnumConvertType.BOOLEAN, False))[0] == True: self.__index = self.__index_last # otherwise loop around the end loop = parse_param(kw, Lexicon.LOOP, EnumConvertType.BOOLEAN, False)[0] if loop == True: self.__index %= self.__len else: self.__index = min(self.__index, self.__len-1) self.__current = self.__q[self.__index] data = self.__previous self.__index_last = self.__index info = f"QUEUE #{ident} [{self.__current}] ({self.__index})" batched = False if (batched := parse_param(kw, Lexicon.BATCH, EnumConvertType.BOOLEAN, False)[0]) == True: data = [] mw, mh, mc = 0, 0, 0 for idx in range(self.__len): ret = self.process(self.__q[idx]) if isinstance(ret, (np.ndarray,)): h2, w2, c = ret.shape mw, mh, mc = max(mw, w2), max(mh, h2), max(mc, c) data.append(ret) if mw != 0 or mh != 0 or mc != 0: ret = [] # matte = [matte[0], matte[1], matte[2], 0] pbar = ProgressBar(self.__len) for idx, d in enumerate(data): d = image_convert(d, mc) if mode != EnumScaleMode.MATTE: d = image_scalefit(d, w, h, mode, sample, matte) d = image_scalefit(d, w, h, EnumScaleMode.RESIZE_MATTE, sample, matte) else: d = image_matte(d, matte, mw, mh) ret.append(cv_to_tensor(d)) pbar.update_absolute(idx) data = torch.stack(ret) elif wait == True: info += f" PAUSED" 
else: data = self.process(self.__q[self.__index]) if isinstance(data, (np.ndarray,)): if mode != EnumScaleMode.MATTE: data = image_scalefit(data, w, h, mode, sample) data = cv_to_tensor(data).unsqueeze(0) self.__index += 1 self.__previous = data comfy_api_post("jovi-queue-ping", ident, self.status) if stop and batched: interrupt_processing() return data, self.__q, self.__current, self.__index, self.__len, self.__index == self.__len or batched @property def status(self) -> dict[str, Any]: return { "id": self.__ident, "c": self.__current, "i": self.__index_last, "s": self.__len, "l": self.__q } class QueueNode(QueueBaseNode): NAME = "QUEUE (JOV) 🗃" OUTPUT_TOOLTIPS = ( "Current item selected from the Queue list", "The entire Queue list", "Current item selected from the Queue list as a string", "Current index for the selected item in the Queue list", "Total items in the current Queue List", "Send a True signal when the queue end index is reached" ) DESCRIPTION = """ Manage a queue of items, such as file paths or data. Supports various formats including images, videos, text files, and JSON files. You can specify the current index for the queue item, enable pausing the queue, or reset it back to the first index. The node outputs the current item in the queue, the entire queue, the current index, and the total number of items in the queue. """ class QueueTooNode(QueueBaseNode): NAME = "QUEUE TOO (JOV) 🗃" RETURN_TYPES = ("IMAGE", "IMAGE", "MASK", "STRING", "INT", "INT", "BOOLEAN") RETURN_NAMES = ("RGBA", "RGB", "MASK", "CURRENT", "INDEX", "TOTAL", "TRIGGER", ) #OUTPUT_IS_LIST = (False, False, False, True, True, True, True,) OUTPUT_TOOLTIPS = ( "Full channel [RGBA] image. If there is an alpha, the image will be masked out with it when using this output", "Three channel [RGB] image. 
There will be no alpha", "Single channel mask output", "Current item selected from the Queue list as a string", "Current index for the selected item in the Queue list", "Total items in the current Queue List", "Send a True signal when the queue end index is reached" ) DESCRIPTION = """ Manage a queue of specific items: media files. Supports various image and video formats. You can specify the current index for the queue item, enable pausing the queue, or reset it back to the first index. The node outputs the current item in the queue, the entire queue, the current index, and the total number of items in the queue. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.MODE: (EnumScaleMode._member_names_, { "default": EnumScaleMode.MATTE.name}), Lexicon.WH: ("VEC2", { "default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True, "label": ["W", "H"],}), Lexicon.SAMPLE: (EnumInterpolation._member_names_, { "default": EnumInterpolation.LANCZOS4.name,}), Lexicon.MATTE: ("VEC4", { "default": (0, 0, 0, 255), "rgb": True,}), }, "hidden": d.get("hidden", {}) }) return Lexicon._parse(d) def run(self, ident, **kw) -> tuple[TensorType, TensorType, TensorType, str, int, int, bool]: data, _, current, index, total, trigger = super().run(ident, **kw) if not isinstance(data, (TensorType, )): data = [None, None, None] else: matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0] data = [tensor_to_cv(d) for d in data] data = [cv_to_tensor_full(d, matte) for d in data] data = [torch.stack(d) for d in zip(*data)] return *data, current, index, total, trigger ================================================ FILE: core/utility/info.py ================================================ """ Jovimetrix - Utility """ import io import json from typing import Any import torch import numpy as np from PIL import Image import matplotlib.pyplot as plt from cozy_comfyui import \ IMAGE_SIZE_MIN, \ InputType, 
EnumConvertType, TensorType, \ deep_merge, parse_dynamic, parse_param from cozy_comfyui.lexicon import \ Lexicon from cozy_comfyui.node import \ COZY_TYPE_IMAGE, \ CozyBaseNode from cozy_comfyui.image.convert import \ pil_to_tensor from cozy_comfyui.api import \ parse_reset JOV_CATEGORY = "UTILITY/INFO" # ============================================================================== # === SUPPORT === # ============================================================================== def decode_tensor(tensor: TensorType) -> str: if tensor.ndim > 3: b, h, w, cc = tensor.shape elif tensor.ndim > 2: cc = 1 b, h, w = tensor.shape else: b = 1 cc = 1 h, w = tensor.shape return f"{b}x{w}x{h}x{cc}" # ============================================================================== # === CLASS === # ============================================================================== class AkashicData: def __init__(self, **kw) -> None: for k, v in kw.items(): setattr(self, k, v) class AkashicNode(CozyBaseNode): NAME = "AKASHIC (JOV) 📓" CATEGORY = JOV_CATEGORY RETURN_NAMES = () OUTPUT_NODE = True DESCRIPTION = """ Visualize data. It accepts various types of data, including images, text, and other types. If no input is provided, it returns an empty result. The output consists of a dictionary containing UI-related information, such as base64-encoded images and text representations of the input data. """ def run(self, **kw) -> tuple[Any, Any]: kw.pop('ident', None) o = kw.values() output = {"ui": {"b64_images": [], "text": []}} if o is None or len(o) == 0: output["ui"]["result"] = (None, None, ) return output def __parse(val) -> str: ret = '' typ = ''.join(repr(type(val)).split("'")[1:2]) if isinstance(val, dict): # mixlab layer? 
if (image := val.get('image', None)) is not None: ret = image if (mask := val.get('mask', None)) is not None: while len(mask.shape) < len(image.shape): mask = mask.unsqueeze(-1) ret = torch.cat((image, mask), dim=-1) if ret.ndim < 4: ret = ret.unsqueeze(-1) ret = decode_tensor(ret) typ = "Mixlab Layer" # vector patch.... elif 'xyzw' in val: val = val["xyzw"] typ = "VECTOR" # latents.... elif 'samples' in val: ret = decode_tensor(val['samples'][0]) typ = "LATENT" # empty bugger elif len(val) == 0: ret = "" else: try: ret = json.dumps(val, indent=3, separators=(',', ': ')) except Exception as e: ret = str(e) elif isinstance(val, (tuple, set, list,)): if (size := len(val)) > 0: if isinstance(val, (np.ndarray,)): ret = str(val) typ = "NUMPY ARRAY" elif isinstance(val[0], (TensorType,)): ret = decode_tensor(val[0]) typ = type(val[0]) elif size == 1 and isinstance(val[0], (list,)) and isinstance(val[0][0], (TensorType,)): ret = decode_tensor(val[0][0]) typ = "CONDITIONING" elif all(isinstance(i, (tuple, set, list)) for i in val): ret = "[\n" + ",\n".join(f" {row}" for row in val) + "\n]" # ret = json.dumps(val, indent=4) elif all(isinstance(i, (bool, int, float)) for i in val): ret = ','.join([str(x) for x in val]) else: ret = str(val) elif isinstance(val, bool): ret = "True" if val else "False" elif isinstance(val, TensorType): ret = decode_tensor(val) else: ret = str(val) return json.dumps({typ: ret}, separators=(',', ': ')) for x in o: data = "" if len(x) > 1: data += "::\n" for p in x: data += __parse(p) + "\n" output["ui"]["text"].append(data) return output class GraphNode(CozyBaseNode): NAME = "GRAPH (JOV) 📈" CATEGORY = JOV_CATEGORY OUTPUT_NODE = True RETURN_TYPES = ("IMAGE", ) RETURN_NAMES = ("IMAGE",) OUTPUT_TOOLTIPS = ( "The graphed image", ) DESCRIPTION = """ Visualize a series of data points over time. It accepts a dynamic number of values to graph and display, with options to reset the graph or specify the number of values. 
The output is an image displaying the graph, allowing users to analyze trends and patterns. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.RESET: ("BOOLEAN", { "default": False, "tooltip":"Clear the graph history"}), Lexicon.VALUE: ("INT", { "default": 60, "min": 0, "tooltip":"Number of values to graph and display"}), Lexicon.WH: ("VEC2", { "default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True, "label": ["W", "H"]}), } }) return Lexicon._parse(d) @classmethod def IS_CHANGED(cls, **kw) -> float: return float('nan') def __init__(self, *arg, **kw) -> None: super().__init__(*arg, **kw) self.__history = [] self.__fig, self.__ax = plt.subplots(figsize=(5.12, 5.12)) def run(self, ident, **kw) -> tuple[TensorType]: slice = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 60)[0] wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)[0] if parse_reset(ident) > 0 or parse_param(kw, Lexicon.RESET, EnumConvertType.BOOLEAN, False)[0]: self.__history = [] longest_edge = 0 dynamic = parse_dynamic(kw, Lexicon.DYNAMIC, EnumConvertType.FLOAT, 0, extend=False) self.__ax.clear() for idx, val in enumerate(dynamic): if isinstance(val, (set, tuple,)): val = list(val) if not isinstance(val, (list, )): val = [val] while len(self.__history) <= idx: self.__history.append([]) self.__history[idx].extend(val) if slice > 0: stride = max(0, -slice + len(self.__history[idx]) + 1) longest_edge = max(longest_edge, stride) self.__history[idx] = self.__history[idx][stride:] self.__ax.plot(self.__history[idx], color="rgbcymk"[idx]) self.__history = self.__history[:slice+1] width, height = wihi width, height = (width / 100., height / 100.) 
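`GraphNode.run` above trims each input's history to a rolling window via `stride = max(0, -slice + len(history) + 1)`. A standalone sketch of that trim (hypothetical helper name; note the `+ 1` in the original stride means the window holds `slice - 1` entries once the history grows past `slice`, where `history[-slice:]` would keep exactly `slice`):

```python
def trim_history(history: list, window: int) -> list:
    """Rolling-window trim as in GraphNode.run: drop all but the most
    recent entries. The original's stride keeps window - 1 items once
    the history is longer than the window."""
    stride = max(0, len(history) - window + 1)
    return history[stride:]

h = list(range(10))
print(trim_history(h, 5))  # keeps only the most recent entries
```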
self.__fig.set_figwidth(width) self.__fig.set_figheight(height) self.__fig.canvas.draw_idle() buffer = io.BytesIO() self.__fig.savefig(buffer, format="png") buffer.seek(0) image = Image.open(buffer) return (pil_to_tensor(image),) class ImageInfoNode(CozyBaseNode): NAME = "IMAGE INFO (JOV) 📚" CATEGORY = JOV_CATEGORY RETURN_TYPES = ("INT", "INT", "INT", "INT", "VEC2", "VEC3") RETURN_NAMES = ("COUNT", "W", "H", "C", "WH", "WHC") OUTPUT_TOOLTIPS = ( "Batch count", "Width", "Height", "Channels", "Width & Height as a VEC2", "Width, Height and Channels as a VEC3" ) DESCRIPTION = """ Exports and Displays immediate information about images. """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}) } }) return Lexicon._parse(d) def run(self, **kw) -> tuple[int, list]: image = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) height, width, cc = image[0].shape return (len(image), width, height, cc, (width, height), (width, height, cc)) ================================================ FILE: core/utility/io.py ================================================ """ Jovimetrix - Utility """ import os import json from uuid import uuid4 from pathlib import Path from typing import Any import torch import numpy as np from PIL import Image from PIL.PngImagePlugin import PngInfo from comfy.utils import ProgressBar from folder_paths import get_output_directory from nodes import interrupt_processing from cozy_comfyui import \ logger, \ InputType, EnumConvertType, \ deep_merge, parse_param, parse_param_list, zip_longest_fill from cozy_comfyui.lexicon import \ Lexicon from cozy_comfyui.node import \ COZY_TYPE_IMAGE, COZY_TYPE_ANY, \ CozyBaseNode from cozy_comfyui.image.convert import \ tensor_to_pil, tensor_to_cv from cozy_comfyui.api import \ TimedOutException, ComfyAPIMessage, \ comfy_api_post # ============================================================================== # === GLOBAL === 
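The `path_next` helper defined below in this file finds the first unused name in a %-numbered file sequence using an exponential probe followed by a binary search, so it needs only O(log n) existence checks instead of scanning every index. A pure-function sketch of the same idea, with the filesystem check swapped for an injected `exists` predicate so it can be exercised without touching disk (function and parameter names here are illustrative, not part of the module):

```python
def next_free(pattern: str, exists) -> str:
    """Exponential probe: double i until pattern % i is free, then
    binary-search the half-open gap (a, b] for the first free index."""
    i = 1
    while exists(pattern % i):
        i *= 2
    a, b = i // 2, i
    while a + 1 < b:
        c = (a + b) // 2
        a, b = (c, b) if exists(pattern % c) else (a, c)
    return pattern % b

# pretend jovi_1.png .. jovi_7.png already exist on disk
taken = {f"jovi_{n}.png" for n in range(1, 8)}
print(next_free("jovi_%s.png", taken.__contains__))
```

Injecting the predicate also makes the doubling-then-bisect behavior easy to unit test against `os.path.exists` replacements.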
# ==============================================================================

JOV_CATEGORY = "UTILITY/IO"

# min amount of time before showing the cancel dialog
JOV_DELAY_MIN = 5
try:
    JOV_DELAY_MIN = int(os.getenv("JOV_DELAY_MIN", JOV_DELAY_MIN))
except:
    pass
JOV_DELAY_MIN = max(1, JOV_DELAY_MIN)

# max 115 days
JOV_DELAY_MAX = 10000000
try:
    JOV_DELAY_MAX = int(os.getenv("JOV_DELAY_MAX", JOV_DELAY_MAX))
except:
    pass

FORMATS = ["gif", "png", "jpg"]
if (JOV_GIFSKI := os.getenv("JOV_GIFSKI", None)) is not None:
    if not os.path.isfile(JOV_GIFSKI):
        logger.error(f"gifski missing [{JOV_GIFSKI}]")
        JOV_GIFSKI = None
    else:
        FORMATS = ["gifski"] + FORMATS
        logger.info("gifski support")
else:
    logger.warning("no gifski support")

# ==============================================================================
# === SUPPORT ===
# ==============================================================================

def path_next(pattern: str) -> str:
    """Find the next free path in a sequentially named list of files."""
    i = 1
    while os.path.exists(pattern % i):
        i = i * 2
    a, b = (i // 2, i)
    while a + 1 < b:
        c = (a + b) // 2
        a, b = (c, b) if os.path.exists(pattern % c) else (a, c)
    return pattern % b

# ==============================================================================
# === CLASS ===
# ==============================================================================

class DelayNode(CozyBaseNode):
    NAME = "DELAY (JOV) ✋🏽"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY,)
    RETURN_NAMES = ("OUT",)
    OUTPUT_TOOLTIPS = (
        "Pass through data when the delay ends",
    )
    DESCRIPTION = """
Introduce pauses in the workflow that accept an optional input to pass through and a timer parameter to specify the duration of the delay. If no timer is provided, it defaults to the maximum delay. During the delay, it periodically checks for messages to interrupt the delay. Once the delay is completed, it returns the input passed to it.
You can disable the screensaver with the `ENABLE` option """ @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.PASS_IN: (COZY_TYPE_ANY, { "default": None, "tooltip":"The data that should be held until the timer completes."}), Lexicon.TIMER: ("INT", { "default" : 0, "min": -1, "tooltip":"How long to delay if enabled. 0 means no delay."}), Lexicon.ENABLE: ("BOOLEAN", { "default": True, "tooltip":"Enable or disable the screensaver."}) } }) return Lexicon._parse(d) @classmethod def IS_CHANGED(cls, **kw) -> float: return float('nan') def run(self, ident, **kw) -> tuple[Any]: delay = parse_param(kw, Lexicon.TIMER, EnumConvertType.INT, -1, -1, JOV_DELAY_MAX)[0] if delay < 0: delay = JOV_DELAY_MAX if delay > JOV_DELAY_MIN: comfy_api_post("jovi-delay-user", ident, {"id": ident, "timeout": delay}) # enable = parse_param(kw, Lexicon.ENABLE, EnumConvertType.BOOLEAN, True)[0] step = 1 pbar = ProgressBar(delay) while step <= delay: try: data = ComfyAPIMessage.poll(ident, timeout=1) if data.get('id', None) == ident: if data.get('cmd', False) == False: interrupt_processing(True) logger.warning(f"delay [cancelled] ({step}): {ident}") break except TimedOutException as _: if step % 10 == 0: logger.info(f"delay [continue] ({step}): {ident}") pbar.update_absolute(step) step += 1 return kw[Lexicon.PASS_IN] class ExportNode(CozyBaseNode): NAME = "EXPORT (JOV) 📽" CATEGORY = JOV_CATEGORY NOT_IDEMPOTENT = True OUTPUT_NODE = True RETURN_TYPES = () DESCRIPTION = """ Responsible for saving images or animations to disk. It supports various output formats such as GIF and GIFSKI. Users can specify the output directory, filename prefix, image quality, frame rate, and other parameters. Additionally, it allows overwriting existing files or generating unique filenames to avoid conflicts. The node outputs the saved images or animation as a tensor. 
""" @classmethod def IS_CHANGED(cls, **kw) -> float: return float('nan') @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}), Lexicon.PATH: ("STRING", { "default": get_output_directory(), "default_top": "",}), Lexicon.FORMAT: (FORMATS, { "default": FORMATS[0],}), Lexicon.PREFIX: ("STRING", { "default": "jovi",}), Lexicon.OVERWRITE: ("BOOLEAN", { "default": False,}), # GIF ONLY Lexicon.OPTIMIZE: ("BOOLEAN", { "default": False,}), # GIFSKI ONLY Lexicon.QUALITY: ("INT", { "default": 90, "min": 1, "max": 100,}), Lexicon.QUALITY_M: ("INT", { "default": 100, "min": 1, "max": 100,}), # GIF OR GIFSKI Lexicon.FPS: ("INT", { "default": 24, "min": 1, "max": 60,}), # GIF OR GIFSKI Lexicon.LOOP: ("INT", { "default": 0, "min": 0,}), } }) return Lexicon._parse(d) def run(self, **kw) -> None: images = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) suffix = parse_param(kw, Lexicon.PREFIX, EnumConvertType.STRING, uuid4().hex[:16])[0] output_dir = parse_param(kw, Lexicon.PATH, EnumConvertType.STRING, "")[0] format = parse_param(kw, Lexicon.FORMAT, EnumConvertType.STRING, "gif")[0] overwrite = parse_param(kw, Lexicon.OVERWRITE, EnumConvertType.BOOLEAN, False)[0] optimize = parse_param(kw, Lexicon.OPTIMIZE, EnumConvertType.BOOLEAN, False)[0] quality = parse_param(kw, Lexicon.QUALITY, EnumConvertType.INT, 90, 0, 100)[0] motion = parse_param(kw, Lexicon.QUALITY_M, EnumConvertType.INT, 100, 0, 100)[0] fps = parse_param(kw, Lexicon.FPS, EnumConvertType.INT, 24, 1, 60)[0] loop = parse_param(kw, Lexicon.LOOP, EnumConvertType.INT, 0, 0)[0] output_dir = Path(output_dir) output_dir.mkdir(parents=True, exist_ok=True) def output(extension) -> Path: path = output_dir / f"{suffix}.{extension}" if not overwrite and os.path.isfile(path): path = str(output_dir / f"{suffix}_%s.{extension}") path = path_next(path) return path images = [tensor_to_pil(i) for i in images] if format == "gifski": root = 
output_dir / f"{suffix}_{uuid4().hex[:16]}" # logger.debug(root) try: root.mkdir(parents=True, exist_ok=True) for idx, i in enumerate(images): fname = str(root / f"{suffix}_{idx}.png") i.save(fname) except Exception as e: logger.warning(output_dir) logger.error(str(e)) return else: out = output('gif') fps = f"--fps {fps}" if fps > 0 else "" q = f"--quality {quality}" mq = f"--motion-quality {motion}" cmd = f"{JOV_GIFSKI} -o {out} {q} {mq} {fps} {str(root)}/{suffix}_*.png" logger.info(cmd) try: os.system(cmd) except Exception as e: logger.warning(cmd) logger.error(str(e)) # shutil.rmtree(root) elif format == "gif": images[0].save( output('gif'), append_images=images[1:], disposal=2, duration=1 / fps * 1000 if fps else 0, loop=loop, optimize=optimize, save_all=True, ) else: for img in images: img.save(output(format), optimize=optimize) return () class RouteNode(CozyBaseNode): NAME = "ROUTE (JOV) 🚌" CATEGORY = JOV_CATEGORY RETURN_TYPES = ("BUS",) + (COZY_TYPE_ANY,) * 10 RETURN_NAMES = ("ROUTE",) OUTPUT_TOOLTIPS = ( "Pass through for Route node", ) DESCRIPTION = """ Routes the input data from the optional input ports to the output port, preserving the order of inputs. The `PASS_IN` optional input is directly passed through to the output, while other optional inputs are collected and returned as tuples, preserving the order of insertion. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.ROUTE: ("BUS", { "default": None,}), } }) return Lexicon._parse(d) def run(self, **kw) -> tuple[Any, ...]: inout = parse_param(kw, Lexicon.ROUTE, EnumConvertType.ANY, None) vars = kw.copy() vars.pop(Lexicon.ROUTE, None) vars.pop('ident', None) parsed = [] values = list(vars.values()) for x in values: p = parse_param_list(x, EnumConvertType.ANY, None) parsed.append(p) return inout, *parsed, class SaveOutputNode(CozyBaseNode): NAME = "SAVE OUTPUT (JOV) 💾" CATEGORY = JOV_CATEGORY NOT_IDEMPOTENT = True OUTPUT_NODE = True RETURN_TYPES = () DESCRIPTION = """ Save images with metadata to any specified path. Can save user metadata and prompt information. """ @classmethod def IS_CHANGED(cls, **kw) -> float: return float('nan') @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES(True, True) d = deep_merge(d, { "optional": { Lexicon.IMAGE: ("IMAGE", {}), Lexicon.PATH: ("STRING", { "default": "", "dynamicPrompts":False}), Lexicon.NAME: ("STRING", { "default": "output", "dynamicPrompts":False,}), Lexicon.META: ("JSON", { "default": None,}), Lexicon.USER: ("STRING", { "default": "", "multiline": True, "dynamicPrompts":False,}), } }) return Lexicon._parse(d) def run(self, **kw) -> dict[str, Any]: image = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None) path = parse_param(kw, Lexicon.PATH, EnumConvertType.STRING, "") fname = parse_param(kw, Lexicon.NAME, EnumConvertType.STRING, "output") metadata = parse_param(kw, Lexicon.META, EnumConvertType.DICT, {}) usermeta = parse_param(kw, Lexicon.USER, EnumConvertType.DICT, {}) prompt = parse_param(kw, 'prompt', EnumConvertType.STRING, "") pnginfo = parse_param(kw, 'extra_pnginfo', EnumConvertType.DICT, {}) params = list(zip_longest_fill(image, path, fname, metadata, usermeta, prompt, pnginfo)) pbar = ProgressBar(len(params)) for idx, (image, path, fname, metadata, usermeta, prompt, pnginfo) 
in enumerate(params): if image is None: logger.warning("no image") image = torch.zeros((32, 32, 4), dtype=torch.uint8, device="cpu") try: if not isinstance(usermeta, (dict,)): usermeta = json.loads(usermeta) metadata.update(usermeta) except json.decoder.JSONDecodeError: pass except Exception as e: logger.error(e) logger.error(usermeta) metadata["prompt"] = prompt metadata["workflow"] = json.dumps(pnginfo) image = tensor_to_cv(image) image = Image.fromarray(np.clip(image, 0, 255).astype(np.uint8)) meta_png = PngInfo() for x in metadata: try: data = json.dumps(metadata[x]) meta_png.add_text(x, data) except Exception as e: logger.error(e) logger.error(x) if path == "" or path is None: path = get_output_directory() root = Path(path) if not root.exists(): root = Path(get_output_directory()) root.mkdir(parents=True, exist_ok=True) outname = fname if len(params) > 1: outname += f"_{idx}" outname = (root / outname).with_suffix(".png") logger.info(f"wrote file: {outname}") image.save(outname, pnginfo=meta_png) pbar.update_absolute(idx) return () ================================================ FILE: core/vars.py ================================================ """ Jovimetrix - Variables """ import sys import random from typing import Any from comfy.utils import ProgressBar from cozy_comfyui import \ InputType, EnumConvertType, \ deep_merge, parse_param, parse_value, zip_longest_fill from cozy_comfyui.lexicon import \ Lexicon from cozy_comfyui.node import \ COZY_TYPE_ANY, COZY_TYPE_NUMERICAL, \ CozyBaseNode # ============================================================================== # === GLOBAL === # ============================================================================== JOV_CATEGORY = "VARIABLE" # ============================================================================== # === CLASS === # ============================================================================== class ValueNode(CozyBaseNode): NAME = "VALUE (JOV) 🧬" CATEGORY = JOV_CATEGORY RETURN_TYPES = 
        (COZY_TYPE_ANY, COZY_TYPE_ANY, COZY_TYPE_ANY, COZY_TYPE_ANY, COZY_TYPE_ANY,)
    RETURN_NAMES = ("❔", Lexicon.X, Lexicon.Y, Lexicon.Z, Lexicon.W,)
    OUTPUT_IS_LIST = (True, True, True, True, True,)
    DESCRIPTION = """
Supplies raw or default values for various data types, supporting vector input with components for X, Y, Z, and W. It also provides a string input option.
"""
    UPDATE = False

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        typ = EnumConvertType._member_names_[:6]
        d = deep_merge(d, {
            "optional": {
                Lexicon.IN_A: (COZY_TYPE_ANY, {"default": None,}),
                Lexicon.X: (COZY_TYPE_NUMERICAL, {"default": 0, "mij": -sys.float_info.max, "maj": sys.float_info.max, "forceInput": True}),
                Lexicon.Y: (COZY_TYPE_NUMERICAL, {"default": 0, "mij": -sys.float_info.max, "maj": sys.float_info.max, "forceInput": True}),
                Lexicon.Z: (COZY_TYPE_NUMERICAL, {"default": 0, "mij": -sys.float_info.max, "maj": sys.float_info.max, "forceInput": True}),
                Lexicon.W: (COZY_TYPE_NUMERICAL, {"default": 0, "mij": -sys.float_info.max, "maj": sys.float_info.max, "forceInput": True}),
                Lexicon.TYPE: (typ, {"default": EnumConvertType.BOOLEAN.name}),
                Lexicon.DEFAULT_A: ("VEC4", {"default": (0, 0, 0, 0), "mij": -sys.float_info.max, "maj": sys.float_info.max, "label": [Lexicon.X, Lexicon.Y, Lexicon.Z, Lexicon.W]}),
                Lexicon.DEFAULT_B: ("VEC4", {"default": (1, 1, 1, 1), "mij": -sys.float_info.max, "maj": sys.float_info.max, "label": [Lexicon.X, Lexicon.Y, Lexicon.Z, Lexicon.W]}),
                Lexicon.SEED: ("INT", {"default": 0, "min": 0, "max": sys.maxsize}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[tuple[Any, ...]]:
        raw = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, 0)
        r_x = parse_param(kw, Lexicon.X, EnumConvertType.FLOAT, None)
        r_y = parse_param(kw, Lexicon.Y, EnumConvertType.FLOAT, None)
        r_z = parse_param(kw, Lexicon.Z, EnumConvertType.FLOAT, None)
        r_w = parse_param(kw, Lexicon.W, EnumConvertType.FLOAT, None)
        typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.BOOLEAN.name)
        xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0))
        yyzw = parse_param(kw, Lexicon.DEFAULT_B, EnumConvertType.VEC4, (1, 1, 1, 1))
        seed = parse_param(kw, Lexicon.SEED, EnumConvertType.INT, 0)
        params = list(zip_longest_fill(raw, r_x, r_y, r_z, r_w, typ, xyzw, yyzw, seed))
        results = []
        pbar = ProgressBar(len(params))
        old_seed = -1
        for idx, (raw, r_x, r_y, r_z, r_w, typ, xyzw, yyzw, seed) in enumerate(params):
            # default = [x_str]
            default2 = None
            a, b, c, d = xyzw
            a2, b2, c2, d2 = yyzw
            default = (a if r_x is None else r_x,
                       b if r_y is None else r_y,
                       c if r_z is None else r_z,
                       d if r_w is None else r_w)
            default2 = (a2, b2, c2, d2)
            val = parse_value(raw, typ, default)
            val2 = parse_value(default2, typ, default2)

            # check if set to randomize....
            self.UPDATE = False
            if seed != 0:
                self.UPDATE = True
                val = list(val) if isinstance(val, (tuple, list,)) else [val]
                val2 = list(val2) if isinstance(val2, (tuple, list,)) else [val2]
                for i in range(len(val)):
                    mx = max(val[i], val2[i])
                    mn = min(val[i], val2[i])
                    if mn == mx:
                        val[i] = mn
                    else:
                        if old_seed != seed:
                            random.seed(seed)
                            old_seed = seed
                        if typ in [EnumConvertType.INT, EnumConvertType.BOOLEAN]:
                            val[i] = random.randint(mn, mx)
                        else:
                            val[i] = mn + random.random() * (mx - mn)

            out = parse_value(val, typ, val)
            items = [out, 0, 0, 0] if not isinstance(out, (tuple, list,)) else out
            results.append([out, *items])
            pbar.update_absolute(idx)
        return *list(zip(*results)),

class Vector2Node(CozyBaseNode):
    NAME = "VECTOR2 (JOV)"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = ("VEC2",)
    RETURN_NAMES = ("VEC2",)
    OUTPUT_IS_LIST = (True,)
    OUTPUT_TOOLTIPS = (
        "Vector2 with float values",
    )
    DESCRIPTION = """
Outputs a VECTOR2.
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.X: (COZY_TYPE_NUMERICAL, { "min": -sys.float_info.max, "max": sys.float_info.max, "tooltip": "X channel value"}), Lexicon.Y: (COZY_TYPE_NUMERICAL, { "min": -sys.float_info.max, "max": sys.float_info.max, "tooltip": "Y channel value"}), Lexicon.DEFAULT: ("VEC2", { "default": (0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max, "tooltip": "Default vector value"}), } }) return Lexicon._parse(d) def run(self, **kw) -> tuple[tuple[float, ...]]: x = parse_param(kw, Lexicon.X, EnumConvertType.FLOAT, None) y = parse_param(kw, Lexicon.Y, EnumConvertType.FLOAT, None) default = parse_param(kw, Lexicon.DEFAULT, EnumConvertType.VEC2, (0,0)) result = [] params = list(zip_longest_fill(x, y, default)) pbar = ProgressBar(len(params)) for idx, (x, y, default) in enumerate(params): x = round(default[0], 9) if x is None else round(x, 9) y = round(default[1], 9) if y is None else round(y, 9) result.append((x, y,)) pbar.update_absolute(idx) return result, class Vector3Node(CozyBaseNode): NAME = "VECTOR3 (JOV)" CATEGORY = JOV_CATEGORY RETURN_TYPES = ("VEC3",) RETURN_NAMES = ("VEC3",) OUTPUT_IS_LIST = (True,) OUTPUT_TOOLTIPS = ( "Vector3 with float values", ) DESCRIPTION = """ Outputs a VECTOR3. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.X: (COZY_TYPE_NUMERICAL, { "min": -sys.float_info.max, "max": sys.float_info.max, "tooltip": "X channel value"}), Lexicon.Y: (COZY_TYPE_NUMERICAL, { "min": -sys.float_info.max, "max": sys.float_info.max, "tooltip": "Y channel value"}), Lexicon.Z: (COZY_TYPE_NUMERICAL, { "min": -sys.float_info.max, "max": sys.float_info.max, "tooltip": "Z channel value"}), Lexicon.DEFAULT: ("VEC3", { "default": (0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max, "tooltip": "Default vector value"}), } }) return Lexicon._parse(d) def run(self, **kw) -> tuple[tuple[float, ...]]: x = parse_param(kw, Lexicon.X, EnumConvertType.FLOAT, None) y = parse_param(kw, Lexicon.Y, EnumConvertType.FLOAT, None) z = parse_param(kw, Lexicon.Z, EnumConvertType.FLOAT, None) default = parse_param(kw, Lexicon.DEFAULT, EnumConvertType.VEC3, (0,0,0)) result = [] params = list(zip_longest_fill(x, y, z, default)) pbar = ProgressBar(len(params)) for idx, (x, y, z, default) in enumerate(params): x = round(default[0], 9) if x is None else round(x, 9) y = round(default[1], 9) if y is None else round(y, 9) z = round(default[2], 9) if z is None else round(z, 9) result.append((x, y, z,)) pbar.update_absolute(idx) return result, class Vector4Node(CozyBaseNode): NAME = "VECTOR4 (JOV)" CATEGORY = JOV_CATEGORY RETURN_TYPES = ("VEC4",) RETURN_NAMES = ("VEC4",) OUTPUT_IS_LIST = (True,) OUTPUT_TOOLTIPS = ( "Vector4 with float values", ) DESCRIPTION = """ Outputs a VECTOR4. 
""" @classmethod def INPUT_TYPES(cls) -> InputType: d = super().INPUT_TYPES() d = deep_merge(d, { "optional": { Lexicon.X: (COZY_TYPE_NUMERICAL, { "min": -sys.float_info.max, "max": sys.float_info.max, "tooltip": "X channel value"}), Lexicon.Y: (COZY_TYPE_NUMERICAL, { "min": -sys.float_info.max, "max": sys.float_info.max, "tooltip": "Y channel value"}), Lexicon.Z: (COZY_TYPE_NUMERICAL, { "min": -sys.float_info.max, "max": sys.float_info.max, "tooltip": "Z channel value"}), Lexicon.W: (COZY_TYPE_NUMERICAL, { "min": -sys.float_info.max, "max": sys.float_info.max, "tooltip": "W channel value"}), Lexicon.DEFAULT: ("VEC4", { "default": (0,0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max, "tooltip": "Default vector value"}), } }) return Lexicon._parse(d) def run(self, **kw) -> tuple[tuple[float, ...]]: x = parse_param(kw, Lexicon.X, EnumConvertType.FLOAT, None) y = parse_param(kw, Lexicon.Y, EnumConvertType.FLOAT, None) z = parse_param(kw, Lexicon.Z, EnumConvertType.FLOAT, None) w = parse_param(kw, Lexicon.W, EnumConvertType.FLOAT, None) default = parse_param(kw, Lexicon.DEFAULT, EnumConvertType.VEC4, (0,0,0,0)) result = [] params = list(zip_longest_fill(x, y, z, w, default)) pbar = ProgressBar(len(params)) for idx, (x, y, z, w, default) in enumerate(params): x = round(default[0], 9) if x is None else round(x, 9) y = round(default[1], 9) if y is None else round(y, 9) z = round(default[2], 9) if z is None else round(z, 9) w = round(default[3], 9) if w is None else round(w, 9) result.append((x, y, z, w,)) pbar.update_absolute(idx) return result, ================================================ FILE: node_list.json ================================================ { "ADJUST: BLUR (JOV)": "Enhance and modify images with various blur effects", "ADJUST: COLOR (JOV)": "Enhance and modify images with various blur effects", "ADJUST: EDGE (JOV)": "Enhanced edge detection", "ADJUST: EMBOSS (JOV)": "Emboss boss mode", "ADJUST: LEVELS (JOV)": "Manual or automatic adjust 
image levels so that the darkest pixel becomes black\nand the brightest pixel becomes white, enhancing overall contrast", "ADJUST: LIGHT (JOV)": "Tonal adjustments", "ADJUST: MORPHOLOGY (JOV)": "Operations based on the image shape", "ADJUST: PIXEL (JOV)": "Pixel-level transformations", "ADJUST: SHARPEN (JOV)": "Sharpen the pixels of an image", "AKASHIC (JOV) \ud83d\udcd3": "Visualize data", "ARRAY (JOV) \ud83d\udcda": "Processes a batch of data based on the selected mode", "BATCH TO LIST (JOV)": "Convert a batch of values into a pure python list of values", "BIT SPLIT (JOV) \u2b44": "Split an input into separate bits", "BLEND (JOV) \u2697\ufe0f": "Combine two input images using various blending modes, such as normal, screen, multiply, overlay, etc", "COLOR BLIND (JOV) \ud83d\udc41\u200d\ud83d\udde8": "Simulate color blindness effects on images", "COLOR MATCH (JOV) \ud83d\udc9e": "Adjust the color scheme of one image to match another with the Color Match Node", "COLOR MEANS (JOV) \u3030\ufe0f": "The top-k colors ordered from most->least used as a strip, tonal palette and 3D LUT", "COLOR THEORY (JOV) \ud83d\udede": "Generate a color harmony based on the selected scheme", "COMPARISON (JOV) \ud83d\udd75\ud83c\udffd": "Evaluates two inputs (A and B) with a specified comparison operators and optional values for successful and failed comparisons", "CONSTANT (JOV) \ud83d\udfea": "Generate a constant image or mask of a specified size and color", "CROP (JOV) \u2702\ufe0f": "Extract a portion of an input image or resize it", "DELAY (JOV) \u270b\ud83c\udffd": "Introduce pauses in the workflow that accept an optional input to pass through and a timer parameter to specify the duration of the delay", "EXPORT (JOV) \ud83d\udcfd": "Responsible for saving images or animations to disk", "FILTER MASK (JOV) \ud83e\udd3f": "Create masks based on specific color ranges within an image", "FLATTEN (JOV) \u2b07\ufe0f": "Combine multiple input images into a single image by summing their pixel 
values", "GRADIENT MAP (JOV) \ud83c\uddf2\ud83c\uddfa": "Remaps an input image using a gradient lookup table (LUT)", "GRAPH (JOV) \ud83d\udcc8": "Visualize a series of data points over time", "HISTOGRAM (JOV)": "The Histogram Node generates a histogram representation of the input image, showing the distribution of pixel intensity values across different bins", "IMAGE INFO (JOV) \ud83d\udcda": "Exports and Displays immediate information about images", "LERP (JOV) \ud83d\udd30": "Calculate linear interpolation between two values or vectors based on a blending factor (alpha)", "OP BINARY (JOV) \ud83c\udf1f": "Execute binary operations like addition, subtraction, multiplication, division, and bitwise operations on input values, supporting various data types and vector sizes", "OP UNARY (JOV) \ud83c\udfb2": "Perform single function operations like absolute value, mean, median, mode, magnitude, normalization, maximum, or minimum on input values", "PIXEL MERGE (JOV) \ud83e\udec2": "Combines individual color channels (red, green, blue) along with an optional mask channel to create a composite image", "PIXEL SPLIT (JOV) \ud83d\udc94": "Split an input into individual color channels (red, green, blue, alpha)", "PIXEL SWAP (JOV) \ud83d\udd03": "Swap pixel values between two input images based on specified channel swizzle operations", "QUEUE (JOV) \ud83d\uddc3": "Manage a queue of items, such as file paths or data", "QUEUE TOO (JOV) \ud83d\uddc3": "Manage a queue of specific items: media files", "ROUTE (JOV) \ud83d\ude8c": "Routes the input data from the optional input ports to the output port, preserving the order of inputs", "SAVE OUTPUT (JOV) \ud83d\udcbe": "Save images with metadata to any specified path", "SHAPE GEN (JOV) \u2728": "Create n-sided polygons", "SPLIT (JOV) \ud83c\udfad": "Split an image into two or four images based on the percentages for width and height", "STACK (JOV) \u2795": "Merge multiple input images into a single composite image by stacking them along 
a specified axis", "STRINGER (JOV) \ud83e\ude80": "Manipulate strings through filtering", "SWIZZLE (JOV) \ud83d\ude35": "Swap components between two vectors based on specified swizzle patterns and values", "TEXT GEN (JOV) \ud83d\udcdd": "Generates images containing text based on parameters such as font, size, alignment, color, and position", "THRESHOLD (JOV) \ud83d\udcc9": "Define a range and apply it to an image for segmentation and feature extraction", "TICK (JOV) \u23f1": "Value generator with normalized values based on based on time interval", "TRANSFORM (JOV) \ud83c\udfdd\ufe0f": "Apply various geometric transformations to images, including translation, rotation, scaling, mirroring, tiling and perspective projection", "VALUE (JOV) \ud83e\uddec": "Supplies raw or default values for various data types, supporting vector input with components for X, Y, Z, and W", "VECTOR2 (JOV)": "Outputs a VECTOR2", "VECTOR3 (JOV)": "Outputs a VECTOR3", "VECTOR4 (JOV)": "Outputs a VECTOR4", "WAVE GEN (JOV) \ud83c\udf0a": "Produce waveforms like sine, square, or sawtooth with adjustable frequency, amplitude, phase, and offset" } ================================================ FILE: pyproject.toml ================================================ [project] name = "jovimetrix" description = "Animation via tick. Parameter manipulation with wave generator. Unary and Binary math support. Value convert int/float/bool, VectorN and Image, Mask types. Shape mask generator. Stack images, do channel ops, split, merge and randomize arrays and batches. Load images & video from anywhere. Dynamic bus routing. Save output anywhere! Flatten, crop, transform; check colorblindness or linear interpolate values." version = "2.1.25" license = { file = "LICENSE" } readme = "README.md" authors = [{ name = "Alexander G. 
Morano", email = "amorano@gmail.com" }] classifiers = [ "License :: OSI Approved :: MIT License", "Operating System :: OS Independent", "Programming Language :: Python", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.11", "Programming Language :: Python :: 3.12", "Intended Audience :: Developers", ] requires-python = ">=3.10" dependencies = [ "aenum", "git+https://github.com/cozy-comfyui/cozy_comfyui@main#egg=cozy_comfyui", "git+https://github.com/cozy-comfyui/cozy_comfy@main#egg=cozy_comfy", "matplotlib", "numpy>=1.25.0", "opencv-contrib-python", "Pillow" ] [project.urls] Homepage = "https://github.com/Amorano/Jovimetrix" Documentation = "https://github.com/Amorano/Jovimetrix/wiki" Repository = "https://github.com/Amorano/Jovimetrix" Issues = "https://github.com/Amorano/Jovimetrix/issues" [tool.comfy] PublisherId = "amorano" DisplayName = "Jovimetrix" Icon = "https://raw.githubusercontent.com/Amorano/Jovimetrix-examples/refs/heads/master/res/logo-jvmx.png" ================================================ FILE: requirements.txt ================================================ aenum git+https://github.com/cozy-comfyui/cozy_comfyui@main#egg=cozy_comfyui git+https://github.com/cozy-comfyui/cozy_comfy@main#egg=cozy_comfy matplotlib numpy>=1.25.0 opencv-contrib-python Pillow ================================================ FILE: web/core.js ================================================ /** ASYNC init setup registerCustomNodes nodeCreated beforeRegisterNodeDef getCustomWidgets afterConfigureGraph refreshComboInNodes NON-ASYNC onNodeOutputsUpdated beforeRegisterVueAppNodeDefs loadedGraphNode */ import { app } from "../../scripts/app.js" app.registerExtension({ name: "jovimetrix", async init() { const styleTagId = 'jovimetrix-stylesheet'; let styleTag = document.getElementById(styleTagId); if (styleTag) { return; } document.head.appendChild(Object.assign(document.createElement('script'), { src: 
"https://cdn.jsdelivr.net/npm/@jaames/iro@5" })); document.head.appendChild(Object.assign(document.createElement('link'), { id: styleTagId, rel: 'stylesheet', type: 'text/css', href: 'extensions/jovimetrix/jovi_metrix.css' })); } }); ================================================ FILE: web/fun.js ================================================ /**/ import { app } from "../../scripts/app.js"; export const bewm = function(ex, ey) { //- adapted from "Anchor Click Canvas Animation" by Nick Sheffield //- https://codepen.io/nicksheffield/pen/NNEoLg/ const colors = [ '#ffc000', '#ff3b3b', '#ff8400' ]; const bubbles = 25; const explode = () => { let particles = []; const ctx = app.canvas; const canvas = ctx.getContext('2d'); ctx.style.pointerEvents = 'none'; for(var i = 0; i < bubbles; i++) { particles.push({ x: canvas.width / 2, y: canvas.height / 2, radius: r(20, 30), color: colors[Math.floor(Math.random() * colors.length)], rotation: r(0, 360, true), speed: r(12, 16), friction: 0.9, opacity: r(0, 0.5, true), yVel: 0, gravity: 0.15 }); } render(particles, ctx); } const render = (particles, ctx) => { requestAnimationFrame(() => render(particles, ctx)); particles.forEach((p) => { p.x += p.speed * Math.cos(p.rotation * Math.PI / 180); p.y += p.speed * Math.sin(p.rotation * Math.PI / 180); p.opacity -= 0.01; p.speed *= p.friction; p.radius *= p.friction; p.yVel += p.gravity; p.y += p.yVel; if(p.opacity < 0 || p.radius < 0) return; ctx.beginPath(); ctx.globalAlpha = p.opacity; ctx.fillStyle = p.color; ctx.arc(p.x, p.y, p.radius, 0, 2 * Math.PI, false); ctx.fill(); }); } const r = (a, b, c) => parseFloat((Math.random() * ((a ? a : 1) - (b ? b : 0)) + (b ? b : 0)).toFixed(c ? 
c : 0)); explode(ex, ey); } export const bubbles = function() { const canvas = document.getElementById("graph-canvas"); const context = canvas.getContext("2d"); window.bubbles_alive = true; let mouseX; let mouseY; const particleArray = []; class Particle { constructor() { this.x = Math.random() * canvas.width * 0.85; this.y = canvas.height * 0.85; this.radius = Math.random() * 30; this.dx = Math.random() - 0.5 this.dx = Math.sign(this.dx) * Math.random() * 1.27; this.dy = 3 + Math.random() * 3; this.hue = 25 + Math.random() * 250; this.sat = 85 + Math.random() * 15; this.val = 35 + Math.random() * 20; } draw() { context.beginPath(); context.arc(this.x, this.y, this.radius, 0, 2 * Math.PI); context.strokeStyle = `hsl(${this.hue} ${this.sat}% ${this.val}%)`; context.stroke(); const gradient = context.createRadialGradient( this.x, this.y, 1, this.x + 0.5, this.y + 0.5, this.radius ); gradient.addColorStop(0.3, "rgba(255, 255, 255, 0.3)"); gradient.addColorStop(0.95, "#E7FEFF7F"); context.fillStyle = gradient; context.fill(); } move() { this.x = this.x + this.dx + (Math.random() - 0.5) * 0.5; this.y = this.y - this.dy + (Math.random() - 0.5) * 1.5; // Check if the particle is outside the canvas boundaries if ( this.x < -this.radius || this.x > canvas.width + this.radius || this.y < -this.radius || this.y > canvas.height + this.radius ) { // Remove the particle from the array particleArray.splice(particleArray.indexOf(this), 1); } } } const animate = () => { //context.clearRect(0, 0, canvas.width, canvas.height); app.canvas.setDirty(true); particleArray.forEach((particle) => { particle?.move(); particle?.draw(); }); if (window.bubbles_alive) { requestAnimationFrame(animate); if (Math.random() > 0.5) { const particle = new Particle(mouseX, mouseY); particleArray.push(particle); } } else { canvas.removeEventListener("mousemove", handleMouseMove); particleArray.length = 0; // Clear the particleArray return; } }; const handleMouseMove = (event) => { mouseX = event.clientX; 
mouseY = event.clientY; }; canvas.addEventListener("mousemove", handleMouseMove); animate(); } // flash status for each element const flashStatusMap = new Map(); export async function flashBackgroundColor(element, duration, flashCount, color="red") { if (flashStatusMap.get(element)) { return; } flashStatusMap.set(element, true); const originalColor = element.style.backgroundColor; for (let i = 0; i < flashCount; i++) { element.style.backgroundColor = color; await new Promise(resolve => setTimeout(resolve, duration / 2)); element.style.backgroundColor = originalColor; await new Promise(resolve => setTimeout(resolve, duration / 2)); } flashStatusMap.set(element, false); } ================================================ FILE: web/jovi_metrix.css ================================================ /**/ .jov-modal-content { max-width: 100%; max-height: 100%; overflow: visible; position: relative; margin: 3px; padding: 3px; table-layout: fixed; text-align: center; background-color: var(--bg-color); border-style: solid; border-color: rgb(95, 85, 75); } .jov-delay-header { font-weight: bold; font-size: 1.5em; text-align: center; } ================================================ FILE: web/nodes/akashic.js ================================================ /**/ import { app } from "../../../scripts/app.js" import { ComfyWidgets } from '../../../scripts/widgets.js'; import { nodeAddDynamic } from "../util.js" const _prefix = '📥' const _id = "AKASHIC (JOV) 📓" app.registerExtension({ name: 'jovimetrix.node.' 
 + _id,
    async beforeRegisterNodeDef(nodeType, nodeData, app) {
        if (nodeData.name !== _id) {
            return
        }
        await nodeAddDynamic(nodeType, _prefix);
        const onExecuted = nodeType.prototype.onExecuted;
        nodeType.prototype.onExecuted = async function (message) {
            const me = onExecuted?.apply(this, arguments)
            if (this.widgets) {
                // release any resources held by the old widgets, then clear the list;
                // the original `this.widgets.splice(i, 0)` here was a no-op (deleteCount 0)
                for (let i = 0; i < this.widgets.length; i++) {
                    this.widgets[i].onRemove?.();
                }
                this.widgets.length = 0;
            }
            if (this.inputs.length > 1) {
                for (let i = 0; i < this.inputs.length - 1; i++) {
                    let textWidget = ComfyWidgets["STRING"](this, this.inputs[i].name, ["STRING", { multiline: true }], app).widget;
                    textWidget.inputEl.readOnly = true;
                    textWidget.inputEl.style.margin = "1px";
                    textWidget.inputEl.style.padding = "1px";
                    textWidget.inputEl.style.border = "1px";
                    textWidget.inputEl.style.backgroundColor = "#222";
                    textWidget.value = this.inputs[i].name + " ";
                    let raw = message["text"][i]
                        .replace(/\\n/g, '\n')
                        .replace(/"/g, '');
                    try {
                        raw = JSON.parse('"' + raw.replace(/"/g, '\\"') + '"');
                    } catch (e) { }
                    textWidget.value += raw;
                }
            }
            return me;
        }
    }
})

================================================
FILE: web/nodes/array.js
================================================
/**
 * File: array.js
 * Project: Jovimetrix
 *
 */
import { app } from "../../../scripts/app.js"
import { nodeAddDynamic } from "../util.js"

const _id = "ARRAY (JOV) 📚"
const _prefix = '❔'

app.registerExtension({
    name: 'jovimetrix.node.'
+ _id, async beforeRegisterNodeDef(nodeType, nodeData) { if (nodeData.name !== _id) { return; } await nodeAddDynamic(nodeType, _prefix); } }) ================================================ FILE: web/nodes/delay.js ================================================ /**/ import { api } from "../../../scripts/api.js"; import { app } from "../../../scripts/app.js"; import { apiJovimetrix } from "../util.js" import { bubbles } from '../fun.js' const _id = "DELAY (JOV) ✋🏽" const EVENT_JOVI_DELAY = "jovi-delay-user"; const EVENT_JOVI_UPDATE = "jovi-delay-update"; function domShowModal(innerHTML, eventCallback, timeout=null) { return new Promise((resolve, reject) => { const modal = document.createElement("div"); modal.className = "modal"; modal.innerHTML = innerHTML; document.body.appendChild(modal); // center const modalContent = modal.querySelector(".jov-modal-content"); modalContent.style.position = "absolute"; modalContent.style.left = "50%"; modalContent.style.top = "50%"; modalContent.style.transform = "translate(-50%, -50%)"; let timeoutId; const handleEvent = (event) => { const targetId = event.target.id; const result = eventCallback(targetId); if (result != null) { if (timeoutId) { clearTimeout(timeoutId); timeoutId = null; } modal.remove(); resolve(result); } }; modalContent.addEventListener("click", handleEvent); modalContent.addEventListener("dblclick", handleEvent); if (timeout) { timeout *= 1000; timeoutId = setTimeout(() => { modal.remove(); reject(new Error("TIMEOUT")); }, timeout); } //setTimeout(() => { // modal.dispatchEvent(new Event('tick')); //}, 1000); }); } app.registerExtension({ name: 'jovimetrix.node.' 
+ _id, async beforeRegisterNodeDef(nodeType, nodeData) { if (nodeData.name !== _id) { return } const onNodeCreated = nodeType.prototype.onNodeCreated; nodeType.prototype.onNodeCreated = async function () { const me = await onNodeCreated?.apply(this, arguments); const widget_time = this.widgets.find(w => w.name == 'timer'); const widget_enable = this.widgets.find(w => w.name == 'enable'); this.total_timeout = 0; let showing = false; let delay_modal; const self = this; async function python_delay_user(event) { if (showing || event.detail.id != self.id) { return; } const time = event.detail.timeout; if (time > 4 && widget_enable.value == true) { bubbles(); console.info(time, widget_enable.value); } showing = true; delay_modal = domShowModal(`

DELAY NODE #${event.detail?.title || event.detail.id}

CANCEL OR CONTINUE RENDER?

`, (button) => { return (button != "jov-submit-cancel"); }, time); let value = false; try { value = await delay_modal; } catch (e) { if (e.message != "TIMEOUT") { console.error(e); } } apiJovimetrix(event.detail.id, value); showing = false; window.bubbles_alive = false; } async function python_delay_update() { } api.addEventListener(EVENT_JOVI_DELAY, python_delay_user); api.addEventListener(EVENT_JOVI_UPDATE, python_delay_update); this.onDestroy = () => { api.removeEventListener(EVENT_JOVI_DELAY, python_delay_user); api.removeEventListener(EVENT_JOVI_UPDATE, python_delay_update); }; return me; } const onExecutionStart = nodeType.prototype.onExecutionStart nodeType.prototype.onExecutionStart = function() { onExecutionStart?.apply(this, arguments); self.total_timeout = 0; } } }) ================================================ FILE: web/nodes/flatten.js ================================================ /**/ import { app } from "../../../scripts/app.js" import { nodeAddDynamic } from "../util.js" const _id = "FLATTEN (JOV) ⬇️" const _prefix = 'image' app.registerExtension({ name: 'jovimetrix.node.' + _id, async beforeRegisterNodeDef(nodeType, nodeData) { if (nodeData.name !== _id) { return; } await nodeAddDynamic(nodeType, _prefix, "IMAGE,MASK"); } }) ================================================ FILE: web/nodes/graph.js ================================================ /**/ import { app } from "../../../scripts/app.js" import { apiJovimetrix, nodeAddDynamic } from "../util.js" const _id = "GRAPH (JOV) 📈" const _prefix = '❔' app.registerExtension({ name: 'jovimetrix.node.' 
+ _id, async init() { LGraphCanvas.link_type_colors['JOV_VG_0'] = "#A00"; LGraphCanvas.link_type_colors['JOV_VG_1'] = "#0A0"; LGraphCanvas.link_type_colors['JOV_VG_2'] = "#00A"; LGraphCanvas.link_type_colors['JOV_VG_3'] = "#0AA"; LGraphCanvas.link_type_colors['JOV_VG_4'] = "#AA0"; LGraphCanvas.link_type_colors['JOV_VG_5'] = "#A0A"; LGraphCanvas.link_type_colors['JOV_VG_6'] = "#000"; }, async beforeRegisterNodeDef(nodeType, nodeData, app) { if (nodeData.name !== _id) { return; } await nodeAddDynamic(nodeType, _prefix); const onNodeCreated = nodeType.prototype.onNodeCreated; nodeType.prototype.onNodeCreated = async function () { const me = await onNodeCreated?.apply(this, arguments); const self = this; const widget_reset = this.widgets.find(w => w.name == 'reset'); widget_reset.callback = async() => { widget_reset.value = false; apiJovimetrix(self.id, "reset"); } return me; } const onConnectionsChange = nodeType.prototype.onConnectionsChange nodeType.prototype.onConnectionsChange = function (slotType, slot, event, link_info) { const me = onConnectionsChange?.apply(this, arguments); if (!link_info || slot == this.inputs.length) { return; } let count = 0; for (let i = 0; i < this.inputs.length; i++) { const link_id = this.inputs[i].link; const link = app.graph.links[link_id]; const nameParts = this.inputs[i].name.split('_'); const isInteger = nameParts.length > 1 && !isNaN(nameParts[0]) && Number.isInteger(parseFloat(nameParts[0])); if (link && isInteger && nameParts[1].substring(0, _prefix.length) == _prefix) { //if(link && this.inputs[i].name.substring(0, _prefix.length) == _prefix) { link.type = `JOV_VG_${count}`; this.inputs[i].color_on = LGraphCanvas.link_type_colors[link.type]; count += 1; } } app.graph.setDirtyCanvas(true, true); return me; } } }) ================================================ FILE: web/nodes/lerp.js ================================================ /**/ import { app } from "../../../scripts/app.js" import { widgetHookControl } from 
"../util.js" const _id = "LERP (JOV) 🔰" app.registerExtension({ name: 'jovimetrix.node.' + _id, async beforeRegisterNodeDef(nodeType, nodeData) { if (nodeData.name !== _id) { return; } const onNodeCreated = nodeType.prototype.onNodeCreated nodeType.prototype.onNodeCreated = async function () { const me = await onNodeCreated?.apply(this, arguments); await widgetHookControl(this, 'type', 'alpha', true); await widgetHookControl(this, 'type', 'aa'); await widgetHookControl(this, 'type', 'bb'); return me; } return nodeType; } }) ================================================ FILE: web/nodes/op_binary.js ================================================ /**/ import { app } from "../../../scripts/app.js" import { widgetHookControl } from "../util.js" const _id = "OP BINARY (JOV) 🌟" app.registerExtension({ name: 'jovimetrix.node.' + _id, async beforeRegisterNodeDef(nodeType, nodeData) { if (nodeData.name !== _id) { return; } const onNodeCreated = nodeType.prototype.onNodeCreated nodeType.prototype.onNodeCreated = async function () { const me = await onNodeCreated?.apply(this, arguments); await widgetHookControl(this, 'type', 'aa'); await widgetHookControl(this, 'type', 'bb'); return me; } return nodeType; } }) ================================================ FILE: web/nodes/op_unary.js ================================================ /**/ import { app } from "../../../scripts/app.js" import { widgetHookControl } from "../util.js" const _id = "OP UNARY (JOV) 🎲" app.registerExtension({ name: 'jovimetrix.node.' 
+ _id, async beforeRegisterNodeDef(nodeType, nodeData) { if (nodeData.name !== _id) { return; } const onNodeCreated = nodeType.prototype.onNodeCreated nodeType.prototype.onNodeCreated = async function () { const me = await onNodeCreated?.apply(this, arguments); await widgetHookControl(this, 'type', 'aa'); return me; } return nodeType; } }) ================================================ FILE: web/nodes/queue.js ================================================ /**/ import { api } from "../../../scripts/api.js"; import { app } from "../../../scripts/app.js"; import { ComfyWidgets } from '../../../scripts/widgets.js'; import { apiJovimetrix, TypeSlotEvent, TypeSlot } from "../util.js" import { flashBackgroundColor } from '../fun.js' const _id1 = "QUEUE (JOV) 🗃"; const _id2 = "QUEUE TOO (JOV) 🗃"; const _prefix = '❔'; const EVENT_JOVI_PING = "jovi-queue-ping"; const EVENT_JOVI_DONE = "jovi-queue-done"; app.registerExtension({ name: 'jovimetrix.node.' + _id1, async beforeRegisterNodeDef(nodeType, nodeData, app) { if (nodeData.name != _id1 && nodeData.name != _id2) { return; } function update_report(self) { self.widget_report.value = `[${self.data_index+1} / ${self.data_all.length}]\n${self.data_current}`; app.canvas.setDirty(true); } function update_list(self, value) { self.data_all = value; self.data_index = 0; self.data_current = ""; update_report(self); apiJovimetrix(self.id, "reset"); } const onNodeCreated = nodeType.prototype.onNodeCreated; nodeType.prototype.onNodeCreated = async function () { const me = await onNodeCreated?.apply(this, arguments); const self = this; this.data_index = 0; this.data_current = ""; this.data_all = []; this.widget_report = ComfyWidgets.STRING(this, 'QUEUE IS EMPTY 🔜', [ 'STRING', { multiline: true, }, ], app).widget; this.widget_report.inputEl.readOnly = true; this.widget_report.serializeValue = async () => { }; const widget_queue = this.widgets.find(w => w.name == 'queue'); const widget_batch = this.widgets.find(w => w.name
== 'batch'); const widget_hold = this.widgets.find(w => w.name == 'hold'); const widget_reset = this.widgets.find(w => w.name == 'reset'); widget_queue.inputEl.addEventListener('input', function () { const value = widget_queue.value.split('\n'); update_list(self, value); }); widget_reset.callback = () => { widget_reset.value = false; apiJovimetrix(self.id, "reset"); } async function python_queue_ping(event) { if (event.detail.id != self.id) { return; } self.data_index = event.detail.i; self.data_all = event.detail.l; self.data_current = event.detail.c; update_report(self); } // TODO: collapsible list control showing the queued names, plus a counter for overall progress async function python_queue_done(event) { if (event.detail.id != self.id) { return; } await flashBackgroundColor(widget_queue.inputEl, 650, 4, "#995242CC"); } api.addEventListener(EVENT_JOVI_PING, python_queue_ping); api.addEventListener(EVENT_JOVI_DONE, python_queue_done); this.onDestroy = () => { api.removeEventListener(EVENT_JOVI_PING, python_queue_ping); api.removeEventListener(EVENT_JOVI_DONE, python_queue_done); }; setTimeout(() => { widget_hold.callback(); }, 5); setTimeout(() => { widget_batch.callback(); }, 5); return me; } const onConnectOutput = nodeType.prototype.onConnectOutput; nodeType.prototype.onConnectOutput = function(outputIndex, inputType, inputSlot, inputNode) { if (outputIndex == 0 && inputType == "COMBO") { // can link the "same" list -- user breaks it past that, their problem atm const widget_queue = this.widgets.find(w => w.name == 'queue'); const widget = inputNode.widgets.find(w => w.name == inputSlot.name); widget_queue.value = widget.options.values.join('\n'); } return onConnectOutput?.apply(this, arguments); } const onConnectionsChange = nodeType.prototype.onConnectionsChange; nodeType.prototype.onConnectionsChange = function (slotType, slot, event, link_info) //side, slot, connected, link_info { if (slotType == TypeSlot.Output && slot == 0 && link_info && event ==
TypeSlotEvent.Connect) { const node = app.graph.getNodeById(link_info.target_id); if (node === undefined || node.inputs === undefined) { return; } const target = node.inputs[link_info.target_slot]; if (target === undefined) { return; } const widget = node.widgets?.find(w => w.name == target.name); if (widget === undefined) { return; } this.outputs[0].name = widget.name; if (widget?.origType == "combo" || widget.type == "COMBO") { const values = widget.options.values; const widget_queue = this.widgets.find(w => w.name == 'queue'); // remove all connections that don't match the list? widget_queue.value = values.join('\n'); update_list(this, values); } this.outputs[0].name = _prefix; } return onConnectionsChange?.apply(this, arguments); }; } }) ================================================ FILE: web/nodes/route.js ================================================ /**/ import { app } from "../../../scripts/app.js" import { TypeSlot, TypeSlotEvent, nodeFitHeight, nodeVirtualLinkRoot, nodeInputsClear, nodeOutputsClear } from "../util.js" const _id = "ROUTE (JOV) 🚌"; const _prefix = '🔮'; const _dynamic_type = "*"; app.registerExtension({ name: 'jovimetrix.node.' 
+ _id, async beforeRegisterNodeDef(nodeType, nodeData) { if (nodeData.name !== _id) { return; } const onNodeCreated = nodeType.prototype.onNodeCreated nodeType.prototype.onNodeCreated = async function () { const me = await onNodeCreated?.apply(this, arguments); this.addInput(_prefix, _dynamic_type); nodeOutputsClear(this, 1); return me; } const onConnectionsChange = nodeType.prototype.onConnectionsChange nodeType.prototype.onConnectionsChange = function (slotType, slot_idx, event, link_info, node_slot) { const me = onConnectionsChange?.apply(this, arguments); let bus_connected = false; if (event == TypeSlotEvent.Connect && link_info) { let fromNode = this.graph._nodes.find( (otherNode) => otherNode.id == link_info.origin_id ); if (slotType == TypeSlot.Input) { if (slot_idx == 0) { fromNode = nodeVirtualLinkRoot(fromNode); if (fromNode?.outputs && fromNode.outputs[0].type == node_slot.type) { // bus connection bus_connected = true; nodeInputsClear(this, 1); nodeOutputsClear(this, 1); } } else { // normal connection const parent_link = fromNode?.outputs[link_info.origin_slot]; if (parent_link) { node_slot.type = parent_link.type; node_slot.name = parent_link.name ; //`${fromNode.id}_${parent_link.name}`; // make sure there is a matching output... 
while(this.outputs.length < slot_idx + 1) { this.addOutput(_prefix, _dynamic_type); } this.outputs[slot_idx].name = node_slot.name; this.outputs[slot_idx].type = node_slot.type; } } } } else if (event == TypeSlotEvent.Disconnect) { bus_connected = false; if (slot_idx == 0) { nodeInputsClear(this, 1); nodeOutputsClear(this, 1); } else { this.removeInput(slot_idx); this.removeOutput(slot_idx); } } // add extra input if we are not in BUS connection mode if (!bus_connected) { const last = this.inputs[this.inputs.length-1]; if (last.name != _prefix || last.type != _dynamic_type) { this.addInput(_prefix, _dynamic_type); } } nodeFitHeight(this); return me; } return nodeType; } }) ================================================ FILE: web/nodes/stack.js ================================================ /**/ import { app } from "../../../scripts/app.js" import { nodeAddDynamic} from "../util.js" const _id = "STACK (JOV) ➕" const _prefix = 'image' app.registerExtension({ name: 'jovimetrix.node.' + _id, async beforeRegisterNodeDef(nodeType, nodeData) { if (nodeData.name !== _id) { return; } await nodeAddDynamic(nodeType, _prefix); } }) ================================================ FILE: web/nodes/stringer.js ================================================ /**/ import { app } from "../../../scripts/app.js" import { nodeAddDynamic } from "../util.js" const _id = "STRINGER (JOV) 🪀" const _prefix = 'string' app.registerExtension({ name: 'jovimetrix.node.' + _id, async beforeRegisterNodeDef(nodeType, nodeData) { if (nodeData.name !== _id) { return; } await nodeAddDynamic(nodeType, _prefix); } }) ================================================ FILE: web/nodes/value.js ================================================ /**/ import { app } from "../../../scripts/app.js" import { widgetHookControl, nodeFitHeight} from "../util.js" const _id = "VALUE (JOV) 🧬" app.registerExtension({ name: 'jovimetrix.node.' 
+ _id, async beforeRegisterNodeDef(nodeType, nodeData) { if (nodeData.name !== _id) { return; } const onNodeCreated = nodeType.prototype.onNodeCreated nodeType.prototype.onNodeCreated = async function () { const me = await onNodeCreated?.apply(this, arguments); this.outputs[1].type = "*"; this.outputs[2].type = "*"; this.outputs[3].type = "*"; this.outputs[4].type = "*"; const ab_data = await widgetHookControl(this, 'type', 'aa'); await widgetHookControl(this, 'type', 'bb'); const oldCallback = ab_data.callback; ab_data.callback = (...args) => { oldCallback?.apply(this, args); this.outputs[0].name = ab_data.value; this.outputs[0].type = ab_data.value; let type = "FLOAT"; if (ab_data.value == "INT") { type = "INT"; } else if (ab_data.value == "BOOLEAN") { type = "BOOLEAN"; } this.outputs[1].type = type; this.outputs[2].type = type; this.outputs[3].type = type; this.outputs[4].type = type; nodeFitHeight(this); } return me; } } }) ================================================ FILE: web/util.js ================================================ /**/ import { app } from "../../scripts/app.js" import { api } from "../../scripts/api.js" export const TypeSlot = { Input: 1, Output: 2, }; export const TypeSlotEvent = { Connect: true, Disconnect: false, }; export async function apiJovimetrix(id, cmd, data=null, route="message") { try { const response = await api.fetchApi(`/cozy_comfyui/${route}`, { method: "POST", headers: { "Content-Type": "application/json", }, body: JSON.stringify({ id: id, cmd: cmd, data: data }), }); if (!response.ok) { throw new Error(`Error: ${response.status} - ${response.statusText}`); } return response; } catch (error) { console.error("API call to Jovimetrix failed:", error); throw error; } } /* * widgetHookControl keeps a child widget's type and element count in sync with a control (combo) widget */ export async function widgetHookControl(node, control_key, child_key) { const control = node.widgets.find(w => w.name == control_key); const target =
node.widgets.find(w => w.name == child_key); const target_input = node.inputs.find(w => w.name == child_key); if (!control || !target || !target_input) { throw new Error("Required widgets not found"); } const track_xyzw = { 0: target.options?.default?.[0] || 0, 1: target.options?.default?.[1] || 0, 2: target.options?.default?.[2] || 0, 3: target.options?.default?.[3] || 0, }; const track_options = {}; Object.assign(track_options, target.options); const controlCallback = control.callback; control.callback = async (...args) => { const me = await controlCallback?.apply(this, args); Object.assign(target.options, track_options); if (["VEC2", "VEC3", "VEC4", "FLOAT", "INT", "BOOLEAN"].includes(control.value)) { target_input.type = control.value; if (["INT", "FLOAT", "BOOLEAN"].includes(control.value)) { target.type = "VEC1"; } else { target.type = control.value; } target.options.type = target.type; let size = 1; if (["VEC2", "VEC3", "VEC4"].includes(target.type)) { const match = /\d/.exec(target.type); size = parseInt(match[0], 10); } target.value = {}; if (["VEC2", "VEC3", "VEC4", "FLOAT"].includes(control.value)) { for (let i = 0; i < size; i++) { target.value[i] = parseFloat(track_xyzw[i]).toFixed(target.options.precision); } } else if (control.value == "INT") { target.options.step = 1; target.options.round = 0; target.options.precision = 0; target.options.int = true; target.value[0] = Number(track_xyzw[0]); } else if (control.value == "BOOLEAN") { target.options.step = 1; target.options.precision = 0; target.options.mij = 0; target.options.maj = 1; target.options.int = true; target.value[0] = track_xyzw[0] != 0 ? 1 : 0; } } nodeFitHeight(node); return me; } const targetCallback = target.callback; target.callback = async (...args) => { const me = await targetCallback?.apply(this, args); if (target.type == "toggle") { track_xyzw[0] = target.value != 0 ?
1 : 0; } else if (["INT", "FLOAT"].includes(target.type)) { track_xyzw[0] = target.value; } else { Object.keys(target.value).forEach((key) => { track_xyzw[key] = target.value[key]; }); } return me; }; await control.callback(); return control; } export function nodeFitHeight(node) { const size_old = node.size; node.computeSize(); node.setSize([Math.max(size_old[0], node.size[0]), Math.min(size_old[1], node.size[1])]); node.setDirtyCanvas(true, false); app.graph.setDirtyCanvas(true, false); } /** * Manage the slots on a node to allow a dynamic number of inputs */ export async function nodeAddDynamic(nodeType, prefix, dynamic_type='*') { /* Keep the "prefix" slot as the last, empty entry. Take care not to collide with existing key names in the input list, and preserve any non-dynamic ports. */ const onNodeCreated = nodeType.prototype.onNodeCreated nodeType.prototype.onNodeCreated = async function () { const me = await onNodeCreated?.apply(this, arguments); if (this.inputs.length == 0 || this.inputs[this.inputs.length-1].name != prefix) { this.addInput(prefix, dynamic_type); } return me; } function slot_name(slot) { return slot.name.split('_'); } const onConnectionsChange = nodeType.prototype.onConnectionsChange nodeType.prototype.onConnectionsChange = async function (slotType, slot_idx, event, link_info, node_slot) { const me = onConnectionsChange?.apply(this, arguments); const slot_parts = slot_name(node_slot); if ((node_slot.type === dynamic_type || slot_parts.length > 1) && slotType === TypeSlot.Input && link_info !== null) { const fromNode = this.graph._nodes.find( (otherNode) => otherNode.id == link_info.origin_id ) const parent_slot = fromNode?.outputs[link_info.origin_slot]; if (event === TypeSlotEvent.Connect && parent_slot) { node_slot.type = parent_slot.type; node_slot.name = `0_${parent_slot.name}`; } else if (event === TypeSlotEvent.Disconnect) { this.removeInput(slot_idx); node_slot.type = dynamic_type; node_slot.name = prefix; node_slot.link = null; } // clean off missing slot connects let idx = 0; let offset = 0; while (idx < this.inputs.length) { const parts = slot_name(this.inputs[idx]); if (parts.length > 1) { const name = parts.slice(1).join('_'); this.inputs[idx].name = `${offset}_${name}`; offset += 1; } idx += 1; } } // ensure the last slot is a dynamic entry let last = this.inputs[this.inputs.length-1]; if (last.type != dynamic_type || last.name != prefix) { this.addInput(prefix, dynamic_type); } nodeFitHeight(this); return me; } } /** * Trace to the root node that is not a virtual node. * * @param {Object} node - The starting node to trace from. * @returns {Object} - The first physical (non-virtual) node encountered, or the last node if no physical node is found. */ export function nodeVirtualLinkRoot(node) { while (node) { const { isVirtualNode, findSetter } = node; if (!isVirtualNode || !findSetter) break; const nextNode = findSetter(node.graph); if (!nextNode) break; node = nextNode; } return node; } /** * Trace through outputs until a physical (non-virtual) node is found. * * @param {Object} node - The starting node to trace from. * @returns {Object} - The first physical node encountered, or the last node if no physical node is found. */ function nodeVirtualLinkChild(node) { while (node) { const { isVirtualNode, findGetter } = node; if (!isVirtualNode || !findGetter) break; const nextNode = findGetter(node.graph); if (!nextNode) break; node = nextNode; } return node; } /** * Remove inputs from a node until the stop condition is met. * * @param {Object} node - The node whose inputs will be trimmed. * @param {number} stop - The minimum number of inputs to retain. Default is 0. */ export function nodeInputsClear(node, stop = 0) { while (node.inputs?.length > stop) { node.removeInput(node.inputs.length - 1); } } /** * Remove outputs from a node until the stop condition is met. * * @param {Object} node - The node whose outputs will be trimmed.
* @param {number} stop - The minimum number of outputs to retain. Default is 0. */ export function nodeOutputsClear(node, stop = 0) { while (node.outputs?.length > stop) { node.removeOutput(node.outputs.length - 1); } } ================================================ FILE: web/widget_vector.js ================================================ /**/ import { app } from "../../scripts/app.js" import { $el } from "../../scripts/ui.js" /** @import { IWidget, LGraphCanvas } from '../../types/litegraph/litegraph.d.ts' */ function arrayToObject(values, length, parseFn) { const result = {}; for (let i = 0; i < length; i++) { result[i] = parseFn(values[i]); } return result; } function domInnerValueChange(node, pos, widget, value, event=undefined) { //const numtype = widget.type.includes("INT") ? Number : parseFloat widget.value = arrayToObject(value, Object.keys(value).length, widget.convert); if ( widget.options && widget.options.property && node.properties[widget.options.property] !== undefined ) { node.setProperty(widget.options.property, widget.value) } if (widget.callback) { widget.callback(widget.value, app.canvas, node, pos, event) } } function colorHex2RGB(hex) { hex = hex.replace(/^#/, ''); const bigint = parseInt(hex, 16); const r = (bigint >> 16) & 255; const g = (bigint >> 8) & 255; const b = bigint & 255; return [r, g, b]; } function colorRGB2Hex(input) { const rgbArray = typeof input == 'string' ? input.match(/\d+/g) : input; if (rgbArray.length < 3) { throw new Error('input not 3 or 4 values'); } const hexValues = rgbArray.map((value, index) => { if (index == 3 && !value) return 'ff'; const hex = parseInt(value).toString(16); return hex.length == 1 ? 
'0' + hex : hex; }); return '#' + hexValues.slice(0, 3).join('') + (hexValues[3] || ''); } const VectorWidget = (app, inputName, options, initial) => { const values = options[1]?.default || initial; /** @type {IWidget} */ const widget = { name: inputName, type: options[0], y: 0, value: values, options: options[1] } widget.convert = parseFloat; widget.options.precision = widget.options?.precision || 2; widget.options.step = widget.options?.step || 0.1; widget.options.round = 1 / 10 ** widget.options.precision; if (widget.options?.rgb || widget.options?.int) { widget.options.step = 1; widget.options.round = 1; widget.options.precision = 0; widget.convert = Number; } if (widget.options?.rgb) { widget.options.maj = 255; widget.options.mij = 0; widget.options.label = ['🟥', '🟩', '🟦', 'ALPHA']; } const offset_y = 4; const widget_padding_left = 13; const widget_padding = 30; const label_full = 72; const label_center = label_full / 2; /** @type {HTMLInputElement} */ let picker; widget.draw = function(ctx, node, width, Y, height) { if ((app.canvas.ds.scale < 0.50) || (!this.type.startsWith("VEC"))) return; ctx.save() ctx.beginPath() ctx.lineWidth = 1 ctx.fillStyle = LiteGraph.WIDGET_OUTLINE_COLOR ctx.roundRect(widget_padding_left+2, Y, width - widget_padding, height, 15) ctx.stroke() ctx.lineWidth = 1 ctx.fillStyle = LiteGraph.WIDGET_BGCOLOR ctx.roundRect(widget_padding_left+2, Y, width - widget_padding, height, 15) ctx.fill() // label ctx.fillStyle = LiteGraph.WIDGET_SECONDARY_TEXT_COLOR ctx.fillText(inputName, label_center - (inputName.length * 1.5), Y + height / 2 + offset_y) let x = label_full + 1 const fields = Object.keys(this?.value || []); let count = fields.length; if (widget.options?.rgb) { count += 0.23; } const element_width = (width - label_full - widget_padding) / count; const element_width2 = element_width / 2; let converted = []; for (const
idx of fields) { ctx.save() ctx.beginPath() ctx.fillStyle = LiteGraph.WIDGET_OUTLINE_COLOR // separation bar if (idx != fields.length || (idx == fields.length && !this.options?.rgb)) { ctx.moveTo(x, Y) ctx.lineTo(x, Y+height) ctx.stroke(); } // value ctx.fillStyle = LiteGraph.WIDGET_TEXT_COLOR const it = this.value[idx.toString()]; let value = (widget.options.precision == 0) ? Number(it) : parseFloat(it).toFixed(widget.options.precision); converted.push(value); const text = value.toString(); ctx.fillText(text, x + element_width2 - text.length * 3.3, Y + height/2 + offset_y); ctx.restore(); x += element_width; } if (this.options?.rgb && converted.length > 2) { try { ctx.fillStyle = colorRGB2Hex(converted); } catch (e) { console.error(converted, e); ctx.fillStyle = "#FFF"; } ctx.roundRect(width-1.17 * widget_padding, Y+1, 19, height-2, 16); ctx.fill() } ctx.restore() } function clamp(widget, v, idx) { v = Math.min(v, widget.options?.maj !== undefined ? widget.options.maj : v); v = Math.max(v, widget.options?.mij !== undefined ? widget.options.mij : v); widget.value[idx] = (widget.options.precision == 0) ? Number(v) : parseFloat(v).toFixed(widget.options.precision); } /** * @todo ▶️, 🖱️, 😀 * @this IWidget */ widget.onPointerDown = function (pointer, node, canvas) { const e = pointer.eDown const x = e.canvasX - node.pos[0] - label_full; const size = Object.keys(this.value).length; const element_width = (node.size[0] - label_full - widget_padding * 1.25) / size; const index = Math.floor(x / element_width); pointer.onClick = (eUp) => { /* if click on header, reset to defaults */ if (index == -1 && eUp.shiftKey) { widget.value = Object.assign({}, widget.options.default); return; } else if (index >= 0 && index < size) { const pos = [eUp.canvasX - node.pos[0], eUp.canvasY - node.pos[1]] const old_value = { ...this.value }; const label = this.options?.label ? 
this.name + '➖' + this.options.label?.[index] : this.name; LGraphCanvas.active_canvas.prompt(label, this.value[index], function(v) { if (/^[0-9+\-*/().\s]+$/.test(v)) { try { v = eval(v); } catch { v = old_value[index]; } } else { v = old_value[index]; } if (this.value[index] != v) { setTimeout( function () { clamp(this, v, index); domInnerValueChange(node, pos, this, this.value, eUp); }.bind(this), 5) } }.bind(this), eUp); return; } if (!this.options?.rgb) return; const rgba = Object.values(this?.value || []); const color = colorRGB2Hex(rgba.slice(0, 3)); if (index != size && (x < 0 && rgba.length > 2)) { const target = Object.values(rgba.map((item) => 255 - item)).slice(0, 3); this.value = Object.values(this.value); this.value.splice(0, 3, ...target); return } if (!picker) { // firefox? //position: "absolute", // Use absolute positioning for consistency //left: `${eUp.pageX}px`, // Use pageX for more consistent placement //top: `${eUp.pageY}px`, picker = $el("input", { type: "color", parent: document.body, style: { position: "fixed", left: `${eUp.clientX}px`, top: `${eUp.clientY}px`, height: "0px", width: "0px", padding: "0px", opacity: 0, }, }); picker.addEventListener('blur', () => picker.style.display = 'none') picker.addEventListener('input', () => { if (!picker.value) return; widget.value = colorHex2RGB(picker.value); if (rgba.length > 3) { widget.value.push(rgba[3]); } canvas.setDirty(true) }) } else { picker.style.display = 'revert' picker.style.left = `${eUp.clientX}px` picker.style.top = `${eUp.clientY}px` } picker.value = color; requestAnimationFrame(() => { picker.showPicker() picker.focus() }) } pointer.onDrag = (eMove) => { if (!eMove.deltaX || !(index > -1)) return; if (index >= size) return; let v = parseFloat(this.value[index]); v += this.options.step * Math.sign(eMove.deltaX); clamp(this, v, index); if (widget.callback) { widget.callback(widget.value, app.canvas, node) } } } widget.serializeValue = async (node, index) => { const rawValues
= Array.isArray(widget.value) ? widget.value : Object.values(widget.value); const funct = widget.options?.int ? Number : parseFloat; return rawValues.map(v => funct(v)); }; return widget; } app.registerExtension({ name: "jovi.widget.spinner", async getCustomWidgets(app) { return { VEC2: (node, inputName, inputData, app) => ({ widget: node.addCustomWidget(VectorWidget(app, inputName, inputData, [0, 0])), }), VEC3: (node, inputName, inputData, app) => ({ widget: node.addCustomWidget(VectorWidget(app, inputName, inputData, [0, 0, 0])), }), VEC4: (node, inputName, inputData, app) => ({ widget: node.addCustomWidget(VectorWidget(app, inputName, inputData, [0, 0, 0, 0])), }) } } })
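The RGB swatch and color-picker paths in widget_vector.js rely on the two hex/RGB conversion helpers round-tripping cleanly. A minimal standalone sketch (the function bodies mirror `colorHex2RGB` and `colorRGB2Hex` above; only the demo call at the bottom is added):

```javascript
// colorHex2RGB unpacks "#RRGGBB" into [r, g, b] channel values;
// colorRGB2Hex packs 3 (or 4) channel values back into a hex string,
// zero-padding each byte and defaulting a missing 4th (alpha) byte to 'ff'.
function colorHex2RGB(hex) {
    hex = hex.replace(/^#/, '');
    const bigint = parseInt(hex, 16);
    return [(bigint >> 16) & 255, (bigint >> 8) & 255, bigint & 255];
}

function colorRGB2Hex(input) {
    // accepts either an array of numbers or a string like "rgb(1, 2, 3)"
    const rgbArray = typeof input == 'string' ? input.match(/\d+/g) : input;
    if (rgbArray.length < 3) {
        throw new Error('expected 3 or 4 channel values');
    }
    const hexValues = rgbArray.map((value, index) => {
        if (index == 3 && !value) return 'ff';
        const hex = parseInt(value).toString(16);
        return hex.length == 1 ? '0' + hex : hex;
    });
    return '#' + hexValues.slice(0, 3).join('') + (hexValues[3] || '');
}

// Round-trip: a hex color survives hex -> RGB -> hex unchanged.
console.log(colorRGB2Hex(colorHex2RGB('#a0b1c2'))); // "#a0b1c2"
```

This round-trip property is what lets the widget hand `picker.value` (a hex string) to `colorHex2RGB` and later re-derive the swatch color with `colorRGB2Hex` without drift.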