Full Code of Amorano/Jovimetrix for AI

main a28214a01507 cached
41 files
267.6 KB
69.5k tokens
236 symbols
1 requests
Download .txt
Showing preview only (281K chars total). Download the full file or copy to clipboard to get everything.
Repository: Amorano/Jovimetrix
Branch: main
Commit: a28214a01507
Files: 41
Total size: 267.6 KB

Directory structure:
gitextract_d5cdnh8o/

├── .gitattributes
├── .github/
│   └── workflows/
│       └── publish_action.yml
├── .gitignore
├── LICENSE
├── NOTICE
├── README.md
├── __init__.py
├── core/
│   ├── __init__.py
│   ├── adjust.py
│   ├── anim.py
│   ├── calc.py
│   ├── color.py
│   ├── compose.py
│   ├── create.py
│   ├── trans.py
│   ├── utility/
│   │   ├── __init__.py
│   │   ├── batch.py
│   │   ├── info.py
│   │   └── io.py
│   └── vars.py
├── node_list.json
├── pyproject.toml
├── requirements.txt
└── web/
    ├── core.js
    ├── fun.js
    ├── jovi_metrix.css
    ├── nodes/
    │   ├── akashic.js
    │   ├── array.js
    │   ├── delay.js
    │   ├── flatten.js
    │   ├── graph.js
    │   ├── lerp.js
    │   ├── op_binary.js
    │   ├── op_unary.js
    │   ├── queue.js
    │   ├── route.js
    │   ├── stack.js
    │   ├── stringer.js
    │   └── value.js
    ├── util.js
    └── widget_vector.js

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitattributes
================================================
# Auto detect text files and perform LF normalization
* text=auto


================================================
FILE: .github/workflows/publish_action.yml
================================================
name: Publish to Comfy registry
on:
  workflow_dispatch:
  push:
    branches:
      - main
    paths:
      - "pyproject.toml"

permissions:
  issues: write

jobs:
  publish-node:
    name: Publish Custom Node to registry
    runs-on: ubuntu-latest
    if: ${{ github.repository_owner == 'Amorano' }}
    steps:
      - name: Check out code
        uses: actions/checkout@v4
      - name: Publish Custom Node
        uses: Comfy-Org/publish-node-action@v1
        with:
          personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }}


================================================
FILE: .gitignore
================================================
__pycache__
*.py[cod]
*$py.class
_*/
glsl/*
*.code-workspace
.vscode
config.json
ignore.txt
.env
.venv
.DS_Store
*.egg-info
*.bak
checkpoints
results
backup
node_modules
*-lock.json
*.config.mjs
package.json
_TODO*.*

================================================
FILE: LICENSE
================================================
MIT License

Copyright (c) 2023 Alexander G. Morano

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

GO NUTS; JUST TRY NOT TO DO IT IN YOUR HEAD.


================================================
FILE: NOTICE
================================================
This project includes code concepts from the MTB Nodes project (MIT)
https://github.com/melMass/comfy_mtb

This project includes code concepts from the ComfyUI-Custom-Scripts project (MIT)
https://github.com/pythongosssss/ComfyUI-Custom-Scripts

This project includes code concepts from the KJNodes for ComfyUI project (GPL 3.0)
https://github.com/kijai/ComfyUI-KJNodes

This project includes code concepts from the UE Nodes project (Apache 2.0)
https://github.com/chrisgoringe/cg-use-everywhere

This project includes code concepts from the WAS Node Suite project (MIT)
https://github.com/WASasquatch/was-node-suite-comfyui

This project includes code concepts from the rgthree-comfy project (MIT)
https://github.com/rgthree/rgthree-comfy

This project includes code concepts from the FizzNodes project (MIT)
https://github.com/FizzleDorf/ComfyUI_FizzNodes

================================================
FILE: README.md
================================================
<picture>
  <source media="(prefers-color-scheme: dark)" srcset="https://github.com/Amorano/Jovimetrix-examples/blob/master/res/logo-jovimetrix.png">
  <source media="(prefers-color-scheme: light)" srcset="https://github.com/Amorano/Jovimetrix-examples/blob/master/res/logo-jovimetrix-light.png">
  <img alt="ComfyUI Nodes for procedural masking, live composition and video manipulation">
</picture>

<h2><div align="center">
<a href="https://github.com/comfyanonymous/ComfyUI">COMFYUI</a> Nodes for procedural masking, live composition and video manipulation
</div></h2>

<h3><div align="center">
JOVIMETRIX IS ONLY GUARANTEED TO SUPPORT <a href="https://github.com/comfyanonymous/ComfyUI">COMFYUI 0.1.3+</a> and <a href="https://github.com/Comfy-Org/ComfyUI_frontend">FRONTEND 1.2.40+</a><br>
IF YOU NEED AN OLDER VERSION, PLEASE DO NOT UPDATE.
</div></h3>

<h2><div align="center">

![KNIVES!](https://badgen.net/github/open-issues/amorano/jovimetrix)
![FORKS!](https://badgen.net/github/forks/amorano/jovimetrix)

</div></h2>

<!---------------------------------------------------------------------------->

# SPONSORSHIP

Please consider sponsoring me if you enjoy the results of my work, code or documentation or otherwise. A good way to keep code development open and free is through sponsorship.

<div align="center">

&nbsp;|&nbsp;|&nbsp;|&nbsp;
-|-|-|-
[![BE A GITHUB SPONSOR ❤️](https://img.shields.io/badge/sponsor-30363D?style=for-the-badge&logo=GitHub-Sponsors&logoColor=#EA4AAA)](https://github.com/sponsors/Amorano) | [![DIRECTLY SUPPORT ME VIA PAYPAL](https://img.shields.io/badge/PayPal-00457C?style=for-the-badge&logo=paypal&logoColor=white)](https://www.paypal.com/paypalme/onarom) | [![PATREON SUPPORTER](https://img.shields.io/badge/Patreon-F96854?style=for-the-badge&logo=patreon&logoColor=white)](https://www.patreon.com/joviex) | [![SUPPORT ME ON KO-FI!](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/alexandermorano)
</div>

## HIGHLIGHTS

* 30 function `BLEND` node -- subtract, multiply and overlay like the best
* Vector support for 2, 3, 4 size tuples of integer or float type
* Specific RGB/RGBA color vector support that provides a color picker
* All Image inputs support RGBA, RGB or pure MASK input
* Full Text generation support using installed system fonts
* Basic parametric shape (Circle, Square, Polygon) generator~~
* `COLOR BLIND` check support
* `COLOR MATCH` against existing images or create a custom LUT
* Generate `COLOR THEORY` spreads from an existing image
* `COLOR MEANS` to generate palettes for existing images to keep other images in the same tonal ranges
* `PIXEL SPLIT` separate the channels of an image to manipulate and `PIXEL MERGE` them back together
* `STACK` a series of images into a new single image vertically, horizontally or in a grid
* Or `FLATTEN` a batch of images into a single image with each image subsequently added on top (slap comp)
* `VALUE` Node has conversion support for all ComfyUI types and some 3rd party types (2DCoords, Mixlab Layers)
* `LERP` node to linear interpolate all ComfyUI and Jovimetrix value types
* Automatic conversion of Mixlab Layer types into Image types
* Generic `ARRAY` that can Merge, Split, Select, Slice or Randomize a list of ANY type
* `STRINGER` node to perform specific string manipulation operations: Split, Join, Replace, Slice.
* A `QUEUE` Node that supports recursing directories, filtering multiple file types and batch loading
* Use the `OP UNARY` and `OP BINARY` nodes to perform single and double type functions across all ComfyUI and Jovimetrix value types
* Manipulate vectors with the `SWIZZLE` node to swap their XYZW positions
* `DELAY` execution at certain parts in a workflow, with or without a timeout
* Generate curve data with the `TICK` and `WAVE GEN` nodes

<br>

<h1>AS OF VERSION 2.0.0, THESE NODES HAVE MIGRATED TO OTHER, SMALLER PACKAGES</h1>

Migrated to [Jovi_GLSL](https://github.com/Amorano/Jovi_GLSL)

~~* GLSL shader support~~
~~* * `GLSL Node`  provides raw access to Vertex and Fragment shaders~~
~~* * `Dynamic GLSL` dynamically convert existing GLSL scripts file into ComfyUI nodes at runtime~~
~~* * Over 20+ Hand written GLSL nodes to speed up specific tasks better done on the GPU (10x speedup in most cases)~~

Migrated to [Jovi_Capture](https://github.com/Amorano/Jovi_Capture)

~~* `STREAM READER` node to capture monitor, webcam or url media~~
~~* `STREAM WRITER` node to export media to a HTTP/HTTPS server for OBS or other 3rd party streaming software~~

Migrated to [Jovi_Spout](https://github.com/Amorano/Jovi_Spout)

~~* `SPOUT` streaming support *WINDOWS ONLY*~~

Migrated to [Jovi_MIDI](https://github.com/Amorano/Jovi_MIDI)

~~* `MIDI READER` Captures MIDI messages from an external MIDI device or controller~~
~~* `MIDI MESSAGE` Processes MIDI messages received from an external MIDI controller or device~~
~~* `MIDI FILTER` (advanced filter) to select messages from MIDI streams and devices~~
~~* `MIDI FILTER EZ` simpler interface to filter single messages from MIDI streams and devices~~

Migrated to [Jovi_Help](https://github.com/Amorano/Jovi_Help)

~~* Help System for *ALL NODES* that will auto-parse unknown knows for their type data and descriptions~~

Migrated to [Jovi_Colorizer](https://github.com/Amorano/Jovi_Colorizer)

~~* Colorization for *ALL NODES* using their own node settings, their node group or via regex pattern matching~~

## UPDATES

<h2>DO NOT UPDATE JOVIMETRIX PAST VERSION 1.7.48 IF YOU DONT WANT TO LOSE A BUNCH OF NODES</h2>

Nodes that have been removed are in various other packages now. You can install those specific packages to get the functionality back, but I have no way to migrate the actual connections -- you will need to do that manually. **

Nodes that have been migrated:

* ALL MIDI NODES:
* * MIDIMessageNode
* * MIDIReaderNode
* * MIDIFilterNode
* * MIDIFilterEZNode

[Migrated to Jovi_MIDI](https://github.com/Amorano/Jovi_MIDI)

* ALL STREAMING NODES:
* * StreamReaderNode
* * StreamWriterNode

[Migrated to Jovi_Capture](https://github.com/Amorano/Jovi_Capture)

* * SpoutWriterNode

[Migrated to Jovi_Spout](https://github.com/Amorano/Jovi_Spout)

* ALL GLSL NODES:
* * GLSL
* * GLSL BLEND LINEAR
* * GLSL COLOR CONVERSION
* * GLSL COLOR PALETTE
* * GLSL CONICAL GRADIENT
* * GLSL DIRECTIONAL WARP
* * GLSL FILTER RANGE
* * GLSL GRAYSCALE
* * GLSL HSV ADJUST
* * GLSL INVERT
* * GLSL NORMAL
* * GLSL NORMAL BLEND
* * GLSL POSTERIZE
* * GLSL TRANSFORM

[Migrated to Jovi_GLSL](https://github.com/Amorano/Jovi_GLSL)

**2025/09/04** @2.1.25:
* Auto-level for `LEVEL` node
* `HISTOGRAM` node
* new support for cozy_comfy (v3+ comfy node spec)

**2025/08/15** @2.1.23:
* fixed regression in `FLATTEN` node

**2025/08/12** @2.1.22:
* tick allows for float/int start

**2025/08/03** @2.1.21:
* fixed css for `DELAY` node
* delay node timer extended to 150+ days
* all tooltips checked to be TUPLE entries

**2025/07/31** @2.1.20:
* support for tensors in `OP UNARY` or `OP BINARY`

**2025/07/27** @2.1.19:
* added `BATCH TO LIST` node
* `VECTOR` node(s) default step changed to 0.1

**2025/07/13** @2.1.18:
* allow numpy>=1.25.0

**2025/07/07** @2.1.17:
* updated to cozy_comfyui 0.0.39

**2025/07/04** @2.1.16:
* Type hint updates

**2025/06/28** @2.1.15:
* `GRAPH NODE` updated to use new mechanism in cozy_comfyui 0.0.37 for list of list parse on dynamics

**2025/06/18** @2.1.14:
* fixed resize_matte mode to use full mask/alpha

**2025/06/18** @2.1.13:
* allow hex codes for vectors
* updated to cozy_comfyui 0.0.36

**2025/06/07** @2.1.11:
* cleaned up image_convert for grayscale/mask
* updated to cozy_comfyui 0.0.35

**2025/06/06** @2.1.10:
* updated to comfy_cozy 0.0.34
* default width and height to 1
* removed old debug string
* akashic try to parse unicode emoji strings

**2025/06/02** @2.1.9:
* fixed dynamic nodes that already start with inputs (dynamic input wouldnt show up)
* patched Queue node to work with new `COMBO` style of inputs

**2025/05/29** @2.1.8:
* updated to comfy_cozy 0.0.32

**2025/05/27** @2.1.7:
* re-ranged all FLOAT to their maximum representations
* clerical cleanup for JS callbacks
* added `SPLIT` node to break images into vertical or horizontal slices

**2025/05/25** @2.1.6:
* loosened restriction for python 3.11+ to allow for 3.10+
* * I make zero guarantee that will actually let 3.10 work and I will not support 3.10

**2025/05/16** @2.1.5:
* Full compatibility with [ComfyMath Vector](https://github.com/evanspearman/ComfyMath) nodes
* Masks can be inverted at inputs
* `EnumScaleInputMode` for `BLEND` node to adjust inputs prior to operation
* Allow images or mask inputs in `CONSTANT` node to fall through
* `VALUE` nodes return all items as list, not just >1
* Added explicit MASK option for `PIXEL SPLIT` node
* Split `ADJUST` node into `BLUR`, `EDGE`, `LIGHT`, `PIXEL`,
* Migrated most of image lib to cozy_comfyui
* widget_vector tweaked to disallow non-numerics
* widgetHookControl streamlined

**2025/05/08** @2.1.4:
* Support for NUMERICAL (bool, int, float, vecN) inputs on value inputs

**2025/05/08** @2.1.3:
* fixed for VEC* types using MIN/MAX

**2025/05/07** @2.1.2:
* `TICK` with normalization and new series generator

**2025/05/06** @2.1.1:
* fixed IS_CHANGED in graphnode
* updated `TICK SIMPLE` in situ of `TICK` to be inclusive of the end range
* migrated ease, normalization and wave functions to cozy_comfyui
* first pass preserving values in multi-type fields

**2025/05/05** @2.1.0:
* Cleaned up all node defaults
* Vector nodes aligned for list outputs
* Cleaned all emoji from input/output
* Clear all EnumConvertTypes and align with new comfy_cozy
* Lexicon defines come from Comfy_Cozy module
* `OP UNARY` fixed factorial
* Added fill array mode for `OP UNARY`
* removed `STEREOGRAM` and `STEROSCOPIC` -- they were designed poorly

**2025/05/01** @2.0.11:
* unified widget_vector.js
* new comfy_cozy support
* auto-convert all VEC*INT -> VEC* float types
* readability for node definitions

**2025/04/24** @2.0.10:
* `SHAPE NODE` fixed for transparency blends when using blurred masks

**2025/04/24** @2.0.9:
* removed inversion in pixel splitter

**2025/04/23** @2.0.8:
* categories aligned to new comfy-cozy support

**2025/04/19** @2.0.7:
* all JS messages fixed

**2025/04/19** @2.0.6:
* fixed reset message from JS

**2025/04/19** @2.0.5:
* patched new frontend input mechanism for dynamic inputs
* reduced requirements
* removed old vector conversions waiting for new frontend mechanism

**2025/04/17** @2.0.4:
* fixed bug in resize_matte `MODE` that would fail when the matte was smaller than the input image
* migrated to image_crop functions to cozy_comfyui

**2025/04/12** @2.0.0:
* REMOVED ALL STREAMING, MIDI and GLSL nodes for new packages, HELP System and Node Colorization system:

   [Jovi_Capture - Web camera, Monitor Capture, Window Capture](https://github.com/Amorano/Jovi_Capture)

   [Jovi_MIDI - MIDI capture and MIDI message parsing](https://github.com/Amorano/Jovi_MIDI)

   [Jovi_GLSL - GLSL Shaders](https://github.com/Amorano/Jovi_GLSL)

   [Jovi_Spout - SPOUT Streaming support](https://github.com/Amorano/Jovi_Spout)

   [Jovi_Colorizer - Node Colorization](https://github.com/Amorano/Jovi_Colorizer)

   [Jovi_Help - Node Help](https://github.com/Amorano/Jovi_Help)

* all nodes will accept `LIST` or `BATCH` and process as if all elements are in a list.
* patched constant node to work with `MATTE_RESIZE`
* patched import loader to work with old/new comfyui
* missing array web node partial
* removed array and no one even noticed.
* all inputs should be treated as a list even single elements []

<div align="center">
<img src="https://github.com/user-attachments/assets/8ed13e6a-218c-468a-a480-53ab55b04d21" alt="explicit vector node supports" width="640"/>
<img src="https://github.com/user-attachments/assets/4459855c-c4e6-4739-811e-a6c90aa5a90c" alt="TICK Node Batch Support Output" width="384"/>
</div>

# INSTALLATION

[Please see the wiki for advanced use of the environment variables used during startup](https://github.com/Amorano/Jovimetrix/wiki/B.-ASICS)

## COMFYUI MANAGER

If you have [ComfyUI Manager](https://github.com/ltdrdata/ComfyUI-Manager) installed, simply search for Jovimetrix and install from the manager's database.

## MANUAL INSTALL
Clone the repository into your ComfyUI custom_nodes directory. You can clone the repository with the command:
```
git clone https://github.com/Amorano/Jovimetrix.git
```
You can then install the requirements by using the command:
```
.\python_embed\python.exe -s -m pip install -r requirements.txt
```
If you are using a <code>virtual environment</code> (<code><i>venv</i></code>), make sure it is activated before installation. Then install the requirements with the command:
```
pip install -r requirements.txt
```
# WHERE TO FIND ME

You can find me on [![DISCORD](https://dcbadge.vercel.app/api/server/62TJaZ3Z5r?style=flat-square)](https://discord.gg/62TJaZ3Z5r).


================================================
FILE: __init__.py
================================================
"""
     ██  ██████  ██    ██ ██ ███    ███ ███████ ████████ ██████  ██ ██   ██ 
     ██ ██    ██ ██    ██ ██ ████  ████ ██         ██    ██   ██ ██  ██ ██  
     ██ ██    ██ ██    ██ ██ ██ ████ ██ █████      ██    ██████  ██   ███  
██   ██ ██    ██  ██  ██  ██ ██  ██  ██ ██         ██    ██   ██ ██  ██ ██ 
 █████   ██████    ████   ██ ██      ██ ███████    ██    ██   ██ ██ ██   ██ 

              Animation, Image Compositing & Procedural Creation

@title: Jovimetrix
@author: Alexander G. Morano
@category: Compositing
@reference: https://github.com/Amorano/Jovimetrix
@tags: adjust, animate, compose, compositing, composition, device, flow, video,
mask, shape, animation, logic
@description: Animation via tick. Parameter manipulation with wave generator.
Unary and Binary math support. Value convert int/float/bool, VectorN and Image,
Mask types. Shape mask generator. Stack images, do channel ops, split, merge
and randomize arrays and batches. Load images & video from anywhere. Dynamic
bus routing. Save output anywhere! Flatten, crop, transform; check
colorblindness or linear interpolate values.
@node list:
    TickNode, TickSimpleNode, WaveGeneratorNode
    BitSplitNode, ComparisonNode, LerpNode, OPUnaryNode, OPBinaryNode, StringerNode, SwizzleNode,
    ColorBlindNode, ColorMatchNode, ColorKMeansNode, ColorTheoryNode, GradientMapNode,
    AdjustNode, BlendNode, FilterMaskNode, PixelMergeNode, PixelSplitNode, PixelSwapNode, ThresholdNode,
    ConstantNode, ShapeNode, TextNode,
    CropNode, FlattenNode, StackNode, TransformNode,

    ArrayNode, QueueNode, QueueTooNode,
    AkashicNode, GraphNode, ImageInfoNode,
    DelayNode, ExportNode, RouteNode, SaveOutputNode

    ValueNode, Vector2Node, Vector3Node, Vector4Node,
"""

__author__ = "Alexander G. Morano"
__email__ = "amorano@gmail.com"

from pathlib import Path

from cozy_comfyui import \
    logger

from cozy_comfyui.node import \
    loader

JOV_DOCKERENV = False
try:
    with open('/proc/1/cgroup', 'rt') as f:
        content = f.read()
        JOV_DOCKERENV = any(x in content for x in ['docker', 'kubepods', 'containerd'])
except FileNotFoundError:
    pass

if JOV_DOCKERENV:
    logger.info("RUNNING IN A DOCKER")

# ==============================================================================
# === GLOBAL ===
# ==============================================================================

PACKAGE = "JOVIMETRIX"
WEB_DIRECTORY = "./web"
ROOT = Path(__file__).resolve().parent
NODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS = loader(ROOT,
                                                         PACKAGE,
                                                         "core",
                                                         f"{PACKAGE} 🔺🟩🔵",
                                                         False)


================================================
FILE: core/__init__.py
================================================

from enum import Enum

class EnumFillOperation(Enum):
    DEFAULT = 0
    FILL_ZERO = 20
    FILL_ALL = 10


================================================
FILE: core/adjust.py
================================================
""" Jovimetrix - Adjust """

import sys
from enum import Enum
from typing import Any
from typing_extensions import override

import comfy.model_management
from comfy_api.latest import ComfyExtension, io
from comfy.utils import ProgressBar

from cozy_comfyui import \
    InputType, RGBAMaskType, EnumConvertType, \
    deep_merge, parse_param, zip_longest_fill

from cozy_comfyui.lexicon import \
    Lexicon

from cozy_comfy.node import \
    COZY_TYPE_IMAGE as COZY_TYPE_IMAGEv3, \
    CozyImageNode as CozyImageNodev3

from cozy_comfyui.node import \
    COZY_TYPE_IMAGE, \
    CozyImageNode

from cozy_comfyui.image.adjust import \
    EnumAdjustBlur, EnumAdjustColor, EnumAdjustEdge, EnumAdjustMorpho, \
    image_contrast, image_brightness, image_equalize, image_gamma, \
    image_exposure, image_pixelate, image_pixelscale, \
    image_posterize, image_quantize, image_sharpen, image_morphology, \
    image_emboss, image_blur, image_edge, image_color, \
    image_autolevel, image_autolevel_histogram

from cozy_comfyui.image.channel import \
    channel_solid

from cozy_comfyui.image.compose import \
    image_levels

from cozy_comfyui.image.convert import \
    tensor_to_cv, cv_to_tensor_full, image_mask, image_mask_add

from cozy_comfyui.image.misc import \
    image_stack

# ==============================================================================
# === GLOBAL ===
# ==============================================================================

JOV_CATEGORY = "ADJUST"

# ==============================================================================
# === ENUMERATION ===
# ==============================================================================

class EnumAutoLevel(Enum):
    MANUAL = 10
    AUTO = 20
    HISTOGRAM = 30

class EnumAdjustLight(Enum):
    EXPOSURE = 10
    GAMMA = 20
    BRIGHTNESS = 30
    CONTRAST = 40
    EQUALIZE = 50

class EnumAdjustPixel(Enum):
    PIXELATE = 10
    PIXELSCALE = 20
    QUANTIZE = 30
    POSTERIZE = 40

# ==============================================================================
# === CLASS ===
# ==============================================================================

class AdjustBlurNode(CozyImageNode):
    NAME = "ADJUST: BLUR (JOV)"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Enhance and modify images with various blur effects.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.FUNCTION: (EnumAdjustBlur._member_names_, {
                    "default": EnumAdjustBlur.BLUR.name,}),
                Lexicon.RADIUS: ("INT", {
                    "default": 3, "min": 3}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustBlur, EnumAdjustBlur.BLUR.name)
        radius = parse_param(kw, Lexicon.RADIUS, EnumConvertType.INT, 3)
        params = list(zip_longest_fill(pA, op, radius))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, op, radius) in enumerate(params):
            pA = channel_solid() if pA is None else tensor_to_cv(pA)
            # height, width = pA.shape[:2]
            pA = image_blur(pA, op, radius)
            #pA = image_blend(pA, img_new, mask)
            images.append(cv_to_tensor_full(pA))
            pbar.update_absolute(idx)
        return image_stack(images)

class AdjustColorNode(CozyImageNode):
    NAME = "ADJUST: COLOR (JOV)"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Enhance and modify images with various blur effects.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.FUNCTION: (EnumAdjustColor._member_names_, {
                    "default": EnumAdjustColor.RGB.name,}),
                Lexicon.VEC: ("VEC3", {
                    "default": (0,0,0), "mij": -1, "maj": 1, "step": 0.025})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustColor, EnumAdjustColor.RGB.name)
        vec = parse_param(kw, Lexicon.VEC, EnumConvertType.VEC3, (0,0,0))
        params = list(zip_longest_fill(pA, op, vec))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, op, vec) in enumerate(params):
            pA = channel_solid() if pA is None else tensor_to_cv(pA)
            pA = image_color(pA, op, vec[0], vec[1], vec[2])
            images.append(cv_to_tensor_full(pA))
            pbar.update_absolute(idx)
        return image_stack(images)

class AdjustEdgeNode(CozyImageNode):
    NAME = "ADJUST: EDGE (JOV)"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Enhanced edge detection.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.FUNCTION: (EnumAdjustEdge._member_names_, {
                    "default": EnumAdjustEdge.CANNY.name,}),
                Lexicon.RADIUS: ("INT", {
                    "default": 1, "min": 1}),
                Lexicon.ITERATION: ("INT", {
                    "default": 1, "min": 1, "max": 1000}),
                Lexicon.LOHI: ("VEC2", {
                    "default": (0, 1), "mij": 0, "maj": 1, "step": 0.01})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustEdge, EnumAdjustEdge.CANNY.name)
        radius = parse_param(kw, Lexicon.RADIUS, EnumConvertType.INT, 1)
        count = parse_param(kw, Lexicon.ITERATION, EnumConvertType.INT, 1)
        lohi = parse_param(kw, Lexicon.LOHI, EnumConvertType.VEC2, (0,1))
        params = list(zip_longest_fill(pA, op, radius, count, lohi))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, op, radius, count, lohi) in enumerate(params):
            pA = channel_solid() if pA is None else tensor_to_cv(pA)
            alpha = image_mask(pA)
            pA = image_edge(pA, op, radius, count, lohi[0], lohi[1])
            pA = image_mask_add(pA, alpha)
            images.append(cv_to_tensor_full(pA))
            pbar.update_absolute(idx)
        return image_stack(images)

class AdjustEmbossNode(CozyImageNode):
    NAME = "ADJUST: EMBOSS (JOV)"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Emboss boss mode.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.HEADING: ("FLOAT", {
                    "default": -45, "min": -sys.float_info.max, "max": sys.float_info.max, "step": 0.1}),
                Lexicon.ELEVATION: ("FLOAT", {
                    "default": 45, "min": -sys.float_info.max, "max": sys.float_info.max, "step": 0.1}),
                Lexicon.DEPTH: ("FLOAT", {
                    "default": 10, "min": 0, "max": sys.float_info.max, "step": 0.1,
                    "tooltip": "Depth perceived from the light angles above"}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        heading = parse_param(kw, Lexicon.HEADING, EnumConvertType.FLOAT, -45)
        elevation = parse_param(kw, Lexicon.ELEVATION, EnumConvertType.FLOAT, 45)
        depth = parse_param(kw, Lexicon.DEPTH, EnumConvertType.FLOAT, 10)
        params = list(zip_longest_fill(pA, heading, elevation, depth))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, heading, elevation, depth) in enumerate(params):
            pA = channel_solid() if pA is None else tensor_to_cv(pA)
            alpha = image_mask(pA)
            pA = image_emboss(pA, heading, elevation, depth)
            pA = image_mask_add(pA, alpha)
            images.append(cv_to_tensor_full(pA))
            pbar.update_absolute(idx)
        return image_stack(images)

class AdjustLevelNode(CozyImageNode):
    NAME = "ADJUST: LEVELS (JOV)"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Manual or automatic adjust image levels so that the darkest pixel becomes black
and the brightest pixel becomes white, enhancing overall contrast.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.LMH: ("VEC3", {
                    "default": (0,0.5,1), "mij": 0, "maj": 1, "step": 0.01,
                    "label": ["LOW", "MID", "HIGH"]}),
                Lexicon.RANGE: ("VEC2", {
                    "default": (0, 1), "mij": 0, "maj": 1, "step": 0.01,
                    "label": ["IN", "OUT"]}),
                Lexicon.MODE: (EnumAutoLevel._member_names_, {
                    "default": EnumAutoLevel.MANUAL.name,
                    "tooltip": "Autolevel linearly or with Histogram bin values, per channel"
                }),
                "clip": ("FLOAT", {
                    "default": 0.5, "min": 0, "max": 1.0, "step": 0.01
                })
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        LMH = parse_param(kw, Lexicon.LMH, EnumConvertType.VEC3, (0,0.5,1))
        inout = parse_param(kw, Lexicon.RANGE, EnumConvertType.VEC2, (0,1))
        mode = parse_param(kw, Lexicon.MODE, EnumAutoLevel, EnumAutoLevel.AUTO.name)
        clip = parse_param(kw, "clip", EnumConvertType.FLOAT, 0.5, 0, 1)
        params = list(zip_longest_fill(pA, LMH, inout, mode, clip))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, LMH, inout, mode, clip) in enumerate(params):
            pA = channel_solid() if pA is None else tensor_to_cv(pA)
            '''
            h, s, v = hsv
            img_new = image_hsv(img_new, h, s, v)
            '''
            match mode:
                case EnumAutoLevel.MANUAL:
                    low, mid, high = LMH
                    start, end = inout
                    pA = image_levels(pA, low, mid, high, start, end)

                case EnumAutoLevel.AUTO:
                    pA = image_autolevel(pA)

                case EnumAutoLevel.HISTOGRAM:
                    pA = image_autolevel_histogram(pA, clip)

            images.append(cv_to_tensor_full(pA))
            pbar.update_absolute(idx)
        return image_stack(images)

class AdjustLightNode(CozyImageNode):
    NAME = "ADJUST: LIGHT (JOV)"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Tonal adjustments. They can be applied individually or all at the same time in order: brightness, contrast, histogram equalization, exposure, and gamma correction.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.BRIGHTNESS: ("FLOAT", {
                    "default": 0.5, "min": 0, "max": 1, "step": 0.01}),
                Lexicon.CONTRAST: ("FLOAT", {
                    "default": 0, "min": -1, "max": 1, "step": 0.01}),
                Lexicon.EQUALIZE: ("BOOLEAN", {
                    "default": False}),
                Lexicon.EXPOSURE: ("FLOAT", {
                    "default": 1, "min": -8, "max": 8, "step": 0.01}),
                Lexicon.GAMMA: ("FLOAT", {
                    "default": 1, "min": 0, "max": 8, "step": 0.01}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        brightness = parse_param(kw, Lexicon.BRIGHTNESS, EnumConvertType.FLOAT, 0.5)
        contrast = parse_param(kw, Lexicon.CONTRAST, EnumConvertType.FLOAT, 0)
        equalize = parse_param(kw, Lexicon.EQUALIZE, EnumConvertType.FLOAT, 0)
        exposure = parse_param(kw, Lexicon.EXPOSURE, EnumConvertType.FLOAT, 0)
        gamma = parse_param(kw, Lexicon.GAMMA, EnumConvertType.FLOAT, 0)
        params = list(zip_longest_fill(pA, brightness, contrast, equalize, exposure, gamma))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, brightness, contrast, equalize, exposure, gamma) in enumerate(params):
            pA = channel_solid() if pA is None else tensor_to_cv(pA)
            alpha = image_mask(pA)

            brightness = 2. * (brightness - 0.5)
            if brightness != 0:
                pA = image_brightness(pA, brightness)

            if contrast != 0:
                pA = image_contrast(pA, contrast)

            if equalize:
                pA = image_equalize(pA)

            if exposure != 1:
                pA = image_exposure(pA, exposure)

            if gamma != 1:
                pA = image_gamma(pA, gamma)

            '''
            h, s, v = hsv
            img_new = image_hsv(img_new, h, s, v)

            l, m, h = level
            img_new = image_levels(img_new, l, h, m, gamma)
            '''
            pA = image_mask_add(pA, alpha)
            images.append(cv_to_tensor_full(pA))
            pbar.update_absolute(idx)
        return image_stack(images)

class AdjustMorphNode(CozyImageNode):
    NAME = "ADJUST: MORPHOLOGY (JOV)"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Operations based on the image shape.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.FUNCTION: (EnumAdjustMorpho._member_names_, {
                    "default": EnumAdjustMorpho.DILATE.name,}),
                Lexicon.RADIUS: ("INT", {
                    "default": 1, "min": 1}),
                Lexicon.ITERATION: ("INT", {
                    "default": 1, "min": 1, "max": 1000}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustMorpho, EnumAdjustMorpho.DILATE.name)
        kernel = parse_param(kw, Lexicon.RADIUS, EnumConvertType.INT, 1)
        count = parse_param(kw, Lexicon.ITERATION, EnumConvertType.INT, 1)
        params = list(zip_longest_fill(pA, op, kernel, count))
        images: list[Any] = []
        pbar = ProgressBar(len(params))
        for idx, (pA, op, kernel, count) in enumerate(params):
            pA = channel_solid() if pA is None else tensor_to_cv(pA)
            alpha = image_mask(pA)
            pA = image_morphology(pA, op, kernel, count)
            pA = image_mask_add(pA, alpha)
            images.append(cv_to_tensor_full(pA))
            pbar.update_absolute(idx)
        return image_stack(images)

class AdjustPixelNode(CozyImageNode):
    NAME = "ADJUST: PIXEL (JOV)"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Pixel-level transformations. The val parameter controls the intensity or resolution of the effect, depending on the operation.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.FUNCTION: (EnumAdjustPixel._member_names_, {
                    "default": EnumAdjustPixel.PIXELATE.name,}),
                Lexicon.VALUE: ("FLOAT", {
                    "default": 0, "min": 0, "max": 1, "step": 0.01})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustPixel, EnumAdjustPixel.PIXELATE.name)
        val = parse_param(kw, Lexicon.VALUE, EnumConvertType.FLOAT, 0)
        params = list(zip_longest_fill(pA, op, val))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, op, val) in enumerate(params):
            pA = channel_solid() if pA is None else tensor_to_cv(pA, chan=4)
            alpha = image_mask(pA)

            match op:
                case EnumAdjustPixel.PIXELATE:
                    pA = image_pixelate(pA, val / 2.)

                case EnumAdjustPixel.PIXELSCALE:
                    pA = image_pixelscale(pA, val)

                case EnumAdjustPixel.QUANTIZE:
                    pA = image_quantize(pA, val)

                case EnumAdjustPixel.POSTERIZE:
                    pA = image_posterize(pA, val)

            pA = image_mask_add(pA, alpha)
            images.append(cv_to_tensor_full(pA))
            pbar.update_absolute(idx)
        return image_stack(images)

class AdjustSharpenNode(CozyImageNode):
    NAME = "ADJUST: SHARPEN (JOV)"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Sharpen the pixels of an image.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.AMOUNT: ("FLOAT", {
                    "default": 0, "min": 0, "max": 1, "step": 0.01}),
                Lexicon.THRESHOLD: ("FLOAT", {
                    "default": 0, "min": 0, "max": 1, "step": 0.01})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        amount = parse_param(kw, Lexicon.AMOUNT, EnumConvertType.FLOAT, 0)
        threshold = parse_param(kw, Lexicon.THRESHOLD, EnumConvertType.FLOAT, 0)
        params = list(zip_longest_fill(pA, amount, threshold))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, amount, threshold) in enumerate(params):
            pA = channel_solid() if pA is None else tensor_to_cv(pA)
            pA = image_sharpen(pA, amount / 2., threshold=threshold / 25.5)
            images.append(cv_to_tensor_full(pA))
            pbar.update_absolute(idx)
        return image_stack(images)

class AdjustSharpenNodev3(CozyImageNodev3):
    @classmethod
    def define_schema(cls, **kwarg) -> io.Schema:
        schema = super(**kwarg).define_schema()
        schema.display_name = "ADJUST: SHARPEN (JOV)"
        schema.category = JOV_CATEGORY
        schema.description = "Sharpen the pixels of an image."

        schema.inputs.extend([
            io.MultiType.Input(
                id=Lexicon.IMAGE[0],
                types=COZY_TYPE_IMAGEv3,
                display_name=Lexicon.IMAGE[0],
                optional=True,
                tooltip=Lexicon.IMAGE[1]
            ),
            io.Float.Input(
                id=Lexicon.AMOUNT[0],
                display_name=Lexicon.AMOUNT[0],
                optional=True,
                default= 0,
                min=0,
                max=1,
                step=0.01,
                tooltip=Lexicon.AMOUNT[1]
            ),
            io.Float.Input(
                id=Lexicon.THRESHOLD[0],
                display_name=Lexicon.THRESHOLD[0],
                optional=True,
                default= 0,
                min=0,
                max=1,
                step=0.01,
                tooltip=Lexicon.THRESHOLD[1]
            )

        ])
        return schema

    @classmethod
    def execute(self, *arg, **kw) -> io.NodeOutput:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        amount = parse_param(kw, Lexicon.AMOUNT, EnumConvertType.FLOAT, 0)
        threshold = parse_param(kw, Lexicon.THRESHOLD, EnumConvertType.FLOAT, 0)
        params = list(zip_longest_fill(pA, amount, threshold))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, amount, threshold) in enumerate(params):
            pA = channel_solid() if pA is None else tensor_to_cv(pA)
            pA = image_sharpen(pA, amount / 2., threshold=threshold / 25.5)
            images.append(cv_to_tensor_full(pA))
            pbar.update_absolute(idx)
        return io.NodeOutput(image_stack(images))

class AdjustExtension(ComfyExtension):
    @override
    async def get_node_list(self) -> list[type[io.ComfyNode]]:
        return [
            AdjustSharpenNodev3
        ]

async def comfy_entrypoint() -> AdjustExtension:
    return AdjustExtension()

================================================
FILE: core/anim.py
================================================
""" Jovimetrix - Animation """

import sys

import numpy as np

from comfy.utils import ProgressBar

from cozy_comfyui import \
    InputType, EnumConvertType, \
    deep_merge, parse_param, zip_longest_fill

from cozy_comfyui.lexicon import \
    Lexicon

from cozy_comfyui.node import \
    CozyBaseNode

from cozy_comfyui.maths.ease import \
    EnumEase, \
    ease_op

from cozy_comfyui.maths.norm import \
    EnumNormalize, \
    norm_op

from cozy_comfyui.maths.wave import \
    EnumWave, \
    wave_op

from cozy_comfyui.maths.series import \
    seriesLinear

# ==============================================================================
# === GLOBAL ===
# ==============================================================================

JOV_CATEGORY = "ANIMATION"

# ==============================================================================
# === CLASS ===
# ==============================================================================

class ResultObject(object):
    def __init__(self, *arg, **kw) -> None:
        self.frame = []
        self.lin = []
        self.fixed = []
        self.trigger = []
        self.batch = []

class TickNode(CozyBaseNode):
    NAME = "TICK (JOV) ⏱"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = ("FLOAT", "FLOAT", "FLOAT", "FLOAT", "FLOAT")
    RETURN_NAMES = ("VALUE", "LINEAR", "EASED", "SCALAR_LIN", "SCALAR_EASE")
    OUTPUT_IS_LIST = (True, True, True, True, True,)
    OUTPUT_TOOLTIPS = (
        "List of values",
        "Normalized values",
        "Eased values",
        "Scalar normalized values",
        "Scalar eased values",
    )
    DESCRIPTION = """
Value generator with normalized values based on based on time interval.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                # forces a MOD on CYCLE
                Lexicon.START: ("FLOAT", {
                    "default": 0, "min": -sys.maxsize, "max": sys.maxsize
                }),
                # interval between frames
                Lexicon.STEP: ("FLOAT", {
                    "default": 0, "min": -sys.float_info.max, "max": sys.float_info.max, "precision": 3,
                    "tooltip": "Amount to add to each frame per tick"
                }),
                # how many frames to dump....
                Lexicon.COUNT: ("INT", {
                    "default": 1, "min": 1, "max": 1500
                }),
                Lexicon.LOOP: ("INT", {
                    "default": 0, "min": 0, "max": sys.maxsize,
                    "tooltip": "What value before looping starts. 0 means linear playback (no loop point)"
                }),
                Lexicon.PINGPONG: ("BOOLEAN", {
                    "default": False
                }),
                Lexicon.EASE: (EnumEase._member_names_, {
                    "default": EnumEase.LINEAR.name}),
                Lexicon.NORMALIZE: (EnumNormalize._member_names_, {
                    "default": EnumNormalize.MINMAX2.name}),
                Lexicon.SCALAR: ("FLOAT", {
                    "default": 1, "min": 0, "max": sys.float_info.max
                })

            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[float, ...]:
        """
        Generates a series of numbers with various options including:
        - Custom start value (supporting floating point and negative numbers)
        - Custom step value (supporting floating point and negative numbers)
        - Fixed number of frames
        - Custom loop point (series restarts after reaching this many steps)
        - Ping-pong option (reverses direction at end points)
        - Support for easing functions
        - Normalized output 0..1, -1..1, L2 or ZScore
        """

        start = parse_param(kw, Lexicon.START, EnumConvertType.FLOAT, 0)[0]
        step = parse_param(kw, Lexicon.STEP, EnumConvertType.FLOAT, 0)[0]
        count = parse_param(kw, Lexicon.COUNT, EnumConvertType.INT, 1, 1, 1500)[0]
        loop = parse_param(kw, Lexicon.LOOP, EnumConvertType.INT, 0, 0)[0]
        pingpong = parse_param(kw, Lexicon.PINGPONG, EnumConvertType.BOOLEAN, False)[0]
        ease = parse_param(kw, Lexicon.EASE, EnumEase, EnumEase.LINEAR.name)[0]
        normalize = parse_param(kw, Lexicon.NORMALIZE, EnumNormalize, EnumNormalize.MINMAX1.name)[0]
        scalar = parse_param(kw, Lexicon.SCALAR, EnumConvertType.FLOAT, 1, 0)[0]

        if step == 0:
            step = 1

        cycle = seriesLinear(start, step, count, loop, pingpong)
        linear = norm_op(normalize, np.array(cycle))
        eased = ease_op(ease, linear, len(linear))
        scalar_linear = linear * scalar
        scalar_eased = eased * scalar

        return (
            cycle,
            linear.tolist(),
            eased.tolist(),
            scalar_linear.tolist(),
            scalar_eased.tolist(),
        )

class WaveGeneratorNode(CozyBaseNode):
    NAME = "WAVE GEN (JOV) 🌊"
    NAME_PRETTY = "WAVE GEN (JOV) 🌊"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = ("FLOAT", "INT", )
    RETURN_NAMES = ("FLOAT", "INT", )
    DESCRIPTION = """
Produce waveforms like sine, square, or sawtooth with adjustable frequency, amplitude, phase, and offset. It's handy for creating oscillating patterns or controlling animation dynamics. This node emits both continuous floating-point values and integer representations of the generated waves.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.WAVE: (EnumWave._member_names_, {
                    "default": EnumWave.SIN.name}),
                Lexicon.FREQ: ("FLOAT", {
                    "default": 1, "min": 0, "max": sys.float_info.max, "step": 0.01,}),
                Lexicon.AMP: ("FLOAT", {
                    "default": 1, "min": 0, "max": sys.float_info.max, "step": 0.01,}),
                Lexicon.PHASE: ("FLOAT", {
                    "default": 0, "min": 0, "max": 1, "step": 0.01}),
                Lexicon.OFFSET: ("FLOAT", {
                    "default": 0, "min": 0, "max": 1, "step": 0.001}),
                Lexicon.TIME: ("FLOAT", {
                    "default": 0, "min": 0, "max": sys.float_info.max, "step": 0.0001}),
                Lexicon.INVERT: ("BOOLEAN", {
                    "default": False}),
                Lexicon.ABSOLUTE: ("BOOLEAN", {
                    "default": False,}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[float, int]:
        op = parse_param(kw, Lexicon.WAVE, EnumWave, EnumWave.SIN.name)
        freq = parse_param(kw, Lexicon.FREQ, EnumConvertType.FLOAT, 1, 0)
        amp = parse_param(kw, Lexicon.AMP, EnumConvertType.FLOAT, 1, 0)
        phase = parse_param(kw, Lexicon.PHASE, EnumConvertType.FLOAT, 0, 0)
        shift = parse_param(kw, Lexicon.OFFSET, EnumConvertType.FLOAT, 0, 0)
        delta_time = parse_param(kw, Lexicon.TIME, EnumConvertType.FLOAT, 0, 0)
        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
        absolute = parse_param(kw, Lexicon.ABSOLUTE, EnumConvertType.BOOLEAN, False)
        results = []
        params = list(zip_longest_fill(op, freq, amp, phase, shift, delta_time, invert, absolute))
        pbar = ProgressBar(len(params))
        for idx, (op, freq, amp, phase, shift, delta_time, invert, absolute) in enumerate(params):
            # freq = 1. / freq
            if invert:
                amp = 1. / val
            val = wave_op(op, phase, freq, amp, shift, delta_time)
            if absolute:
                val = np.abs(val)
            val = max(-sys.float_info.max, min(val, sys.float_info.max))
            results.append([val, int(val)])
            pbar.update_absolute(idx)
        return *list(zip(*results)),

'''
class TickOldNode(CozyBaseNode):
    NAME = "TICK OLD (JOV) ⏱"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = ("INT", "FLOAT", "FLOAT", COZY_TYPE_ANY, COZY_TYPE_ANY,)
    RETURN_NAMES = ("VAL", "LINEAR", "FPS", "TRIGGER", "BATCH",)
    OUTPUT_IS_LIST = (True, False, False, False, False,)
    OUTPUT_TOOLTIPS = (
        "Current value for the configured tick as ComfyUI List",
        "Normalized tick value (0..1) based on BPM and Loop",
        "Current 'frame' in the tick based on FPS setting",
        "Based on the BPM settings, on beat hit, output the input at '⚡'",
        "Current batch of values for the configured tick as standard list which works in other Jovimetrix nodes",
    )
    DESCRIPTION = """
A timer and frame counter, emitting pulses or signals based on time intervals. It allows precise synchronization and control over animation sequences, with options to adjust FPS, BPM, and loop points. This node is useful for generating time-based events or driving animations with rhythmic precision.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                # data to pass on a pulse of the loop
                Lexicon.TRIGGER: (COZY_TYPE_ANY, {
                    "default": None,
                    "tooltip": "Output to send when beat (BPM setting) is hit"
                }),
                # forces a MOD on CYCLE
                Lexicon.START: ("INT", {
                    "default": 0, "min": 0, "max": sys.maxsize,
                }),
                Lexicon.LOOP: ("INT", {
                    "default": 0, "min": 0, "max": sys.maxsize,
                    "tooltip": "Number of frames before looping starts. 0 means continuous playback (no loop point)"
                }),
                Lexicon.FPS: ("INT", {
                    "default": 24, "min": 1
                }),
                Lexicon.BPM: ("INT", {
                    "default": 120, "min": 1, "max": 60000,
                    "tooltip": "BPM trigger rate to send the input. If input is empty, TRUE is sent on trigger"
                }),
                Lexicon.NOTE: ("INT", {
                    "default": 4, "min": 1, "max": 256,
                    "tooltip": "Number of beats per measure. Quarter note is 4, Eighth is 8, 16 is 16, etc."}),
                # how many frames to dump....
                Lexicon.BATCH: ("INT", {
                    "default": 1, "min": 1, "max": 32767,
                    "tooltip": "Number of frames wanted"
                }),
                Lexicon.STEP: ("INT", {
                    "default": 0, "min": 0, "max": sys.maxsize
                }),
            }
        })
        return Lexicon._parse(d)

    def run(self, ident, **kw) -> tuple[int, float, float, Any]:
        passthru = parse_param(kw, Lexicon.TRIGGER, EnumConvertType.ANY, None)[0]
        stride = parse_param(kw, Lexicon.STEP, EnumConvertType.INT, 0)[0]
        loop = parse_param(kw, Lexicon.LOOP, EnumConvertType.INT, 0)[0]
        start = parse_param(kw, Lexicon.START, EnumConvertType.INT, self.__frame)[0]
        if loop != 0:
            self.__frame %= loop
        fps = parse_param(kw, Lexicon.FPS, EnumConvertType.INT, 24, 1)[0]
        bpm = parse_param(kw, Lexicon.BPM, EnumConvertType.INT, 120, 1)[0]
        divisor = parse_param(kw, Lexicon.NOTE, EnumConvertType.INT, 4, 1)[0]
        beat = 60. / max(1., bpm) / divisor
        batch = parse_param(kw, Lexicon.BATCH, EnumConvertType.INT, 1, 1)[0]
        step_fps = 1. / max(1., float(fps))

        trigger = None
        results = ResultObject()
        pbar = ProgressBar(batch)
        step = stride if stride != 0 else max(1, loop / batch)
        for idx in range(batch):
            trigger = False
            lin = start if loop == 0 else start / loop
            fixed_step = math.fmod(start * step_fps, fps)
            if (math.fmod(fixed_step, beat) == 0):
                trigger = [passthru]
            if loop != 0:
                start %= loop
            results.frame.append(start)
            results.lin.append(float(lin))
            results.fixed.append(float(fixed_step))
            results.trigger.append(trigger)
            results.batch.append(start)
            start += step
            pbar.update_absolute(idx)

        return (results.frame, results.lin, results.fixed, results.trigger, results.batch,)

'''

================================================
FILE: core/calc.py
================================================
""" Jovimetrix - Calculation """

import sys
import math
import struct
from enum import Enum
from typing import Any
from collections import Counter

import torch
from scipy.special import gamma

from comfy.utils import ProgressBar

from cozy_comfyui import \
    logger, \
    TensorType, InputType, EnumConvertType, \
    deep_merge, parse_dynamic, parse_param, parse_value, zip_longest_fill

from cozy_comfyui.lexicon import \
    Lexicon

from cozy_comfyui.node import \
    COZY_TYPE_ANY, COZY_TYPE_NUMERICAL, COZY_TYPE_FULL, \
    CozyBaseNode

# ==============================================================================
# === GLOBAL ===
# ==============================================================================

JOV_CATEGORY = "CALC"

# ==============================================================================
# === ENUMERATION ===
# ==============================================================================

class EnumBinaryOperation(Enum):
    ADD = 0
    SUBTRACT = 1
    MULTIPLY   = 2
    DIVIDE = 3
    DIVIDE_FLOOR = 4
    MODULUS = 5
    POWER = 6
    # TERNARY WITHOUT THE NEED
    MAXIMUM = 20
    MINIMUM = 21
    # VECTOR
    DOT_PRODUCT = 30
    CROSS_PRODUCT = 31
    # MATRIX

    # BITS
    # BIT_NOT = 39
    BIT_AND = 60
    BIT_NAND = 61
    BIT_OR = 62
    BIT_NOR = 63
    BIT_XOR = 64
    BIT_XNOR = 65
    BIT_LSHIFT = 66
    BIT_RSHIFT = 67
    # GROUP
    UNION = 80
    INTERSECTION = 81
    DIFFERENCE = 82
    # WEIRD ONES
    BASE = 90

class EnumComparison(Enum):
    EQUAL = 0
    NOT_EQUAL = 1
    LESS_THAN = 2
    LESS_THAN_EQUAL = 3
    GREATER_THAN = 4
    GREATER_THAN_EQUAL = 5
    # LOGIC
    # NOT = 10
    AND = 20
    NAND = 21
    OR = 22
    NOR = 23
    XOR = 24
    XNOR = 25
    # TYPE
    IS = 80
    IS_NOT = 81
    # GROUPS
    IN = 82
    NOT_IN = 83

class EnumConvertString(Enum):
    SPLIT = 10
    JOIN = 30
    FIND = 40
    REPLACE = 50
    SLICE = 70  # start - end - step  = -1, -1, 1

class EnumSwizzle(Enum):
    A_X = 0
    A_Y = 10
    A_Z = 20
    A_W = 30
    B_X = 9
    B_Y = 11
    B_Z = 21
    B_W = 31
    CONSTANT = 40

class EnumUnaryOperation(Enum):
    ABS = 0
    FLOOR = 1
    CEIL = 2
    SQRT = 3
    SQUARE = 4
    LOG = 5
    LOG10 = 6
    SIN = 7
    COS = 8
    TAN = 9
    NEGATE = 10
    RECIPROCAL = 12
    FACTORIAL = 14
    EXP = 16
    # COMPOUND
    MINIMUM = 20
    MAXIMUM = 21
    MEAN = 22
    MEDIAN = 24
    MODE = 26
    MAGNITUDE = 30
    NORMALIZE = 32
    # LOGICAL
    NOT = 40
    # BITWISE
    BIT_NOT = 45
    COS_H = 60
    SIN_H = 62
    TAN_H = 64
    RADIANS = 70
    DEGREES = 72
    GAMMA = 80
    # IS_EVEN
    IS_EVEN = 90
    IS_ODD = 91

# Dictionary to map each operation to its corresponding function
OP_UNARY = {
    EnumUnaryOperation.ABS: lambda x: math.fabs(x),
    EnumUnaryOperation.FLOOR: lambda x: math.floor(x),
    EnumUnaryOperation.CEIL: lambda x: math.ceil(x),
    EnumUnaryOperation.SQRT: lambda x: math.sqrt(x),
    EnumUnaryOperation.SQUARE: lambda x: math.pow(x, 2),
    EnumUnaryOperation.LOG: lambda x: math.log(x) if x != 0 else -math.inf,
    EnumUnaryOperation.LOG10: lambda x: math.log10(x) if x != 0 else -math.inf,
    EnumUnaryOperation.SIN: lambda x: math.sin(x),
    EnumUnaryOperation.COS: lambda x: math.cos(x),
    EnumUnaryOperation.TAN: lambda x: math.tan(x),
    EnumUnaryOperation.NEGATE: lambda x: -x,
    EnumUnaryOperation.RECIPROCAL: lambda x: 1 / x if x != 0 else 0,
    EnumUnaryOperation.FACTORIAL: lambda x: math.factorial(abs(int(x))),
    EnumUnaryOperation.EXP: lambda x: math.exp(x),
    EnumUnaryOperation.NOT: lambda x: not x,
    EnumUnaryOperation.BIT_NOT: lambda x: ~int(x),
    EnumUnaryOperation.IS_EVEN: lambda x: x % 2 == 0,
    EnumUnaryOperation.IS_ODD: lambda x: x % 2 == 1,
    EnumUnaryOperation.COS_H: lambda x: math.cosh(x),
    EnumUnaryOperation.SIN_H: lambda x: math.sinh(x),
    EnumUnaryOperation.TAN_H: lambda x: math.tanh(x),
    EnumUnaryOperation.RADIANS: lambda x: math.radians(x),
    EnumUnaryOperation.DEGREES: lambda x: math.degrees(x),
    EnumUnaryOperation.GAMMA: lambda x: gamma(x) if x > 0 else 0,
}

# ==============================================================================
# === SUPPORT ===
# ==============================================================================

def to_bits(value: Any):
    if isinstance(value, int):
        return bin(value)[2:]
    elif isinstance(value, float):
        packed = struct.pack('>d', value)
        return ''.join(f'{byte:08b}' for byte in packed)
    elif isinstance(value, str):
        return ''.join(f'{ord(c):08b}' for c in value)
    else:
        raise TypeError(f"Unsupported type: {type(value)}")

def vector_swap(pA: Any, pB: Any, swap_x: EnumSwizzle, swap_y:EnumSwizzle,
                swap_z:EnumSwizzle, swap_w:EnumSwizzle, default:list[float]) -> list[float]:
    """Swap out a vector's values with another vector's values, or a constant fill."""

    def parse(target, targetB, swap, val) -> float:
        if swap == EnumSwizzle.CONSTANT:
            return val
        if swap in [EnumSwizzle.B_X, EnumSwizzle.B_Y, EnumSwizzle.B_Z, EnumSwizzle.B_W]:
            target = targetB
        swap = int(swap.value / 10)
        return target[swap]

    while len(pA) < 4:
        pA.append(0)

    while len(pB) < 4:
        pB.append(0)

    while len(default) < 4:
        default.append(0)

    return [
        parse(pA, pB, swap_x, default[0]),
        parse(pA, pB, swap_y, default[1]),
        parse(pA, pB, swap_z, default[2]),
        parse(pA, pB, swap_w, default[3])
    ]

# ==============================================================================
# === CLASS ===
# ==============================================================================

class BitSplitNode(CozyBaseNode):
    NAME = "BIT SPLIT (JOV) ⭄"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY, "BOOLEAN",)
    RETURN_NAMES = ("BIT", "BOOL",)
    OUTPUT_IS_LIST = (True, True,)
    OUTPUT_TOOLTIPS = (
        "Bits as Numerical output (0 or 1)",
        "Bits as Boolean output (True or False)"
    )
    DESCRIPTION = """
Split an input into separate bits.
BOOL, INT and FLOAT use their numbers,
STRING is treated as a list of CHARACTER.
IMAGE and MASK will return a TRUE bit for any non-black pixel, as a stream of bits for all pixels in the image.
"""
    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.VALUE: (COZY_TYPE_NUMERICAL, {
                    "default": None,
                    "tooltip": "Value to convert into bits"}),
                Lexicon.BITS: ("INT", {
                    "default": 8, "min": 0, "max": 64,
                    "tooltip": "Number of output bits requested"})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[list[int], list[bool]]:
        value = parse_param(kw, Lexicon.VALUE, EnumConvertType.LIST, 0)
        bits = parse_param(kw, Lexicon.BITS, EnumConvertType.INT, 8)
        params = list(zip_longest_fill(value, bits))
        pbar = ProgressBar(len(params))
        results = []
        for idx, (value, bits) in enumerate(params):
            bit_repr = to_bits(value[0])[::-1]
            if bits > 0:
                if len(bit_repr) > bits:
                    bit_repr = bit_repr[0:bits]
                else:
                    bit_repr = bit_repr.ljust(bits, '0')

            int_bits = []
            bool_bits = []
            for b in bit_repr:
                bit = int(b)
                int_bits.append(bit)
                bool_bits.append(bool(bit))
            results.append([int_bits, bool_bits])
            pbar.update_absolute(idx)
        return *list(zip(*results)),

class ComparisonNode(CozyBaseNode):
    NAME = "COMPARISON (JOV) 🕵🏽"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY, COZY_TYPE_ANY,)
    RETURN_NAMES = ("OUT", "VAL",)
    OUTPUT_IS_LIST = (True, True,)
    OUTPUT_TOOLTIPS = (
        "Outputs the input at PASS or FAIL depending the evaluation",
        "The comparison result value"
    )
    DESCRIPTION = """
Evaluates two inputs (A and B) with a specified comparison operators and optional values for successful and failed comparisons. The node performs the specified operation element-wise between corresponding elements of A and B. If the comparison is successful for all elements, it returns the success value; otherwise, it returns the failure value. The node supports various comparison operators such as EQUAL, GREATER_THAN, LESS_THAN, AND, OR, IS, IN, etc.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IN_A: (COZY_TYPE_NUMERICAL, {
                    "default": 0,
                    "tooltip":"First value to compare"}),
                Lexicon.IN_B: (COZY_TYPE_NUMERICAL, {
                    "default": 0,
                    "tooltip":"Second value to compare"}),
                Lexicon.SUCCESS: (COZY_TYPE_ANY, {
                    "default": 0,
                    "tooltip": "Sent to OUT on a successful condition"}),
                Lexicon.FAIL: (COZY_TYPE_ANY, {
                    "default": 0,
                    "tooltip": "Sent to OUT on a failure condition"}),
                Lexicon.FUNCTION: (EnumComparison._member_names_, {
                    "default": EnumComparison.EQUAL.name,
                    "tooltip": "Comparison function. Sends the data in PASS on successful comparison to OUT, otherwise sends the value in FAIL"}),
                Lexicon.SWAP: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "Reverse the A and B inputs"}),
                Lexicon.INVERT: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "Reverse the PASS and FAIL inputs"}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[Any, Any]:
        in_a = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, 0)
        in_b = parse_param(kw, Lexicon.IN_B, EnumConvertType.ANY, 0)
        size = max(len(in_a), len(in_b))
        good = parse_param(kw, Lexicon.SUCCESS, EnumConvertType.ANY, 0)[:size]
        fail = parse_param(kw, Lexicon.FAIL, EnumConvertType.ANY, 0)[:size]
        op = parse_param(kw, Lexicon.FUNCTION, EnumComparison, EnumComparison.EQUAL.name)[:size]
        swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False)[:size]
        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)[:size]
        params = list(zip_longest_fill(in_a, in_b, good, fail, op, swap, invert))
        pbar = ProgressBar(len(params))
        vals = []
        results = []
        for idx, (A, B, good, fail, op, swap, invert) in enumerate(params):
            if not isinstance(A, (tuple, list,)):
                A = [A]
            if not isinstance(B, (tuple, list,)):
                B = [B]

            size = min(4, max(len(A), len(B))) - 1
            typ = [EnumConvertType.FLOAT, EnumConvertType.VEC2, EnumConvertType.VEC3, EnumConvertType.VEC4][size]

            val_a = parse_value(A, typ, [A[-1]] * size)
            if not isinstance(val_a, (list,)):
                val_a = [val_a]

            val_b = parse_value(B, typ, [B[-1]] * size)
            if not isinstance(val_b, (list,)):
                val_b = [val_b]

            if swap:
                val_a, val_b = val_b, val_a

            match op:
                case EnumComparison.EQUAL:
                    val = [a == b for a, b in zip(val_a, val_b)]
                case EnumComparison.GREATER_THAN:
                    val = [a > b for a, b in zip(val_a, val_b)]
                case EnumComparison.GREATER_THAN_EQUAL:
                    val = [a >= b for a, b in zip(val_a, val_b)]
                case EnumComparison.LESS_THAN:
                    val = [a < b for a, b in zip(val_a, val_b)]
                case EnumComparison.LESS_THAN_EQUAL:
                    val = [a <= b for a, b in zip(val_a, val_b)]
                case EnumComparison.NOT_EQUAL:
                    val = [a != b for a, b in zip(val_a, val_b)]
                # LOGIC
                # case EnumBinaryOperation.NOT = 10
                case EnumComparison.AND:
                    val = [a and b for a, b in zip(val_a, val_b)]
                case EnumComparison.NAND:
                    val = [not(a and b) for a, b in zip(val_a, val_b)]
                case EnumComparison.OR:
                    val = [a or b for a, b in zip(val_a, val_b)]
                case EnumComparison.NOR:
                    val = [not(a or b) for a, b in zip(val_a, val_b)]
                case EnumComparison.XOR:
                    val = [(a and not b) or (not a and b) for a, b in zip(val_a, val_b)]
                case EnumComparison.XNOR:
                    val = [not((a and not b) or (not a and b)) for a, b in zip(val_a, val_b)]
                # IDENTITY
                case EnumComparison.IS:
                    val = [a is b for a, b in zip(val_a, val_b)]
                case EnumComparison.IS_NOT:
                    val = [a is not b for a, b in zip(val_a, val_b)]
                # GROUP
                case EnumComparison.IN:
                    val = [a in val_b for a in val_a]
                case EnumComparison.NOT_IN:
                    val = [a not in val_b for a in val_a]

            output = all([bool(v) for v in val])
            if invert:
                output = not output

            output = good if output == True else fail
            results.append([output, val])
            pbar.update_absolute(idx)

        outs, vals = zip(*results)
        if isinstance(outs[0], (TensorType,)):
            if len(outs) > 1:
                outs = torch.stack(outs)
            else:
                outs = outs[0].unsqueeze(0)
            outs = [outs]
        else:
            outs = list(outs)
        return outs, *vals,

class LerpNode(CozyBaseNode):
    NAME = "LERP (JOV) 🔰"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY,)
    RETURN_NAMES = ("❔",)
    OUTPUT_IS_LIST = (True,)
    OUTPUT_TOOLTIPS = (
        f"Output can vary depending on the type chosen in the {"TYPE"} parameter",
    )
    DESCRIPTION = """
Calculate linear interpolation between two values or vectors based on a blending factor (alpha).

The node accepts optional start (IN_A) and end (IN_B) points, a blending factor (FLOAT), and various input types for both start and end points, such as single values (X, Y), 2-value vectors (IN_A2, IN_B2), 3-value vectors (IN_A3, IN_B3), and 4-value vectors (IN_A4, IN_B4).

Additionally, you can specify the easing function (EASE) and the desired output type (TYPE). It supports various easing functions for smoother transitions.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IN_A: (COZY_TYPE_NUMERICAL, {
                    "tooltip": "Custom Start Point"}),
                Lexicon.IN_B: (COZY_TYPE_NUMERICAL, {
                    "tooltip": "Custom End Point"}),
                Lexicon.ALPHA: ("VEC4", {
                    "default": (0.5, 0.5, 0.5, 0.5), "mij": 0, "maj": 1,}),
                Lexicon.TYPE: (EnumConvertType._member_names_[:6], {
                    "default": EnumConvertType.FLOAT.name,
                    "tooltip": "Output type desired from resultant operation"}),
                Lexicon.DEFAULT_A: ("VEC4", {
                    "default": (0, 0, 0, 0)}),
                Lexicon.DEFAULT_B: ("VEC4", {
                    "default": (1,1,1,1)})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[Any, Any]:
        A = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, 0)
        B = parse_param(kw, Lexicon.IN_B, EnumConvertType.ANY, 0)
        alpha = parse_param(kw, Lexicon.ALPHA,EnumConvertType.VEC4, (0.5,0.5,0.5,0.5))
        typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.FLOAT.name)
        a_xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0))
        b_xyzw = parse_param(kw, Lexicon.DEFAULT_B, EnumConvertType.VEC4, (1, 1, 1, 1))
        values = []
        params = list(zip_longest_fill(A, B, alpha, typ, a_xyzw, b_xyzw))
        pbar = ProgressBar(len(params))
        for idx, (A, B, alpha, typ, a_xyzw, b_xyzw) in enumerate(params):
            size = int(typ.value / 10)

            if A is None:
                A = a_xyzw[:size]
            if B is None:
                B = b_xyzw[:size]

            val_a = parse_value(A, EnumConvertType.VEC4, a_xyzw)
            val_b = parse_value(B, EnumConvertType.VEC4, b_xyzw)
            alpha = parse_value(alpha, EnumConvertType.VEC4, alpha)

            if size > 1:
                val_a = val_a[:size + 1]
                val_b = val_b[:size + 1]
            else:
                val_a = [val_a[0]]
                val_b = [val_b[0]]

            val = [val_b[x] * alpha[x] + val_a[x] * (1 - alpha[x]) for x in range(size)]
            convert = int if "INT" in typ.name else float
            ret = []
            for v in val:
                try:
                    ret.append(convert(v))
                except OverflowError:
                    ret.append(0)
                except Exception as e:
                    ret.append(0)
            val = ret[0] if size == 1 else ret[:size+1]
            values.append(val)
            pbar.update_absolute(idx)
        return [values]

class OPUnaryNode(CozyBaseNode):
    NAME = "OP UNARY (JOV) 🎲"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY,)
    RETURN_NAMES = ("❔",)
    OUTPUT_IS_LIST = (True,)
    OUTPUT_TOOLTIPS = (
        "Output type will match the input type",
    )
    DESCRIPTION = """
Perform single function operations like absolute value, mean, median, mode, magnitude, normalization, maximum, or minimum on input values.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        typ = EnumConvertType._member_names_[:6]
        d = deep_merge(d, {
            "optional": {
                Lexicon.IN_A: (COZY_TYPE_FULL, {
                    "default": 0}),
                Lexicon.FUNCTION: (EnumUnaryOperation._member_names_, {
                    "default": EnumUnaryOperation.ABS.name}),
                Lexicon.TYPE: (typ, {
                    "default": EnumConvertType.FLOAT.name,}),
                Lexicon.DEFAULT_A: ("VEC4", {
                    "default": (0,0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max,
                    "precision": 2,
                    "label": ["X", "Y", "Z", "W"]})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[bool]:
        results = []
        A = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, 0)
        op = parse_param(kw, Lexicon.FUNCTION, EnumUnaryOperation, EnumUnaryOperation.ABS.name)
        out = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.FLOAT.name)
        a_xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0))
        params = list(zip_longest_fill(A, op, out, a_xyzw))
        pbar = ProgressBar(len(params))
        for idx, (A, op, out, a_xyzw) in enumerate(params):
            if not isinstance(A, (list, tuple,)):
                A = [A]
            best_type = [EnumConvertType.FLOAT, EnumConvertType.VEC2, EnumConvertType.VEC3, EnumConvertType.VEC4][len(A)-1]
            val = parse_value(A, best_type, a_xyzw)
            val = parse_value(val, EnumConvertType.VEC4, a_xyzw)
            match op:
                case EnumUnaryOperation.MEAN:
                    val = [sum(val) / len(val)]
                case EnumUnaryOperation.MEDIAN:
                    val = [sorted(val)[len(val) // 2]]
                case EnumUnaryOperation.MODE:
                    counts = Counter(val)
                    val = [max(counts, key=counts.get)]
                case EnumUnaryOperation.MAGNITUDE:
                    val = [math.sqrt(sum(x ** 2 for x in val))]
                case EnumUnaryOperation.NORMALIZE:
                    if len(val) == 1:
                        val = [1]
                    else:
                        m = math.sqrt(sum(x ** 2 for x in val))
                        if m > 0:
                            val = [v / m for v in val]
                        else:
                            val = [0] * len(val)
                case EnumUnaryOperation.MAXIMUM:
                    val = [max(val)]
                case EnumUnaryOperation.MINIMUM:
                    val = [min(val)]
                case _:
                    # Apply unary operation to each item in the list
                    ret = []
                    for v in val:
                        try:
                            v = OP_UNARY[op](v)
                        except Exception as e:
                            logger.error(f"{e} :: {op}")
                            v = 0
                        ret.append(v)
                    val = ret

            val = parse_value(val, out, 0)
            results.append(val)
            pbar.update_absolute(idx)
        return (results,)

class OPBinaryNode(CozyBaseNode):
    NAME = "OP BINARY (JOV) 🌟"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY,)
    RETURN_NAMES = ("❔",)
    OUTPUT_IS_LIST = (True,)
    OUTPUT_TOOLTIPS = (
        "Output type will match the input type",
    )
    DESCRIPTION = """
Execute binary operations like addition, subtraction, multiplication, division, and bitwise operations on input values, supporting various data types and vector sizes.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        names_convert = EnumConvertType._member_names_[:6]
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IN_A: (COZY_TYPE_FULL, {
                    "default": None}),
                Lexicon.IN_B: (COZY_TYPE_FULL, {
                    "default": None}),
                Lexicon.FUNCTION: (EnumBinaryOperation._member_names_, {
                    "default": EnumBinaryOperation.ADD.name,}),
                Lexicon.TYPE: (names_convert, {
                    "default": names_convert[2],
                    "tooltip":"Output type desired from resultant operation"}),
                Lexicon.SWAP: ("BOOLEAN", {
                    "default": False}),
                Lexicon.DEFAULT_A: ("VEC4", {
                    "default": (0,0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max,
                    "label": ["X", "Y", "Z", "W"]}),
                Lexicon.DEFAULT_B: ("VEC4", {
                    "default": (0,0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max,
                    "label": ["X", "Y", "Z", "W"]})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[bool]:
        results = []
        A = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, None)
        B = parse_param(kw, Lexicon.IN_B, EnumConvertType.ANY, None)
        op = parse_param(kw, Lexicon.FUNCTION, EnumBinaryOperation, EnumBinaryOperation.ADD.name)
        typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.FLOAT.name)
        swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False)
        a_xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0))
        b_xyzw = parse_param(kw, Lexicon.DEFAULT_B, EnumConvertType.VEC4, (0, 0, 0, 0))
        params = list(zip_longest_fill(A, B, a_xyzw, b_xyzw, op, typ, swap))
        pbar = ProgressBar(len(params))
        for idx, (A, B, a_xyzw, b_xyzw, op, typ, swap) in enumerate(params):
            if not isinstance(A, (list, tuple,)):
                A = [A]
            if not isinstance(B, (list, tuple,)):
                B = [B]
            size = min(3, max(len(A)-1, len(B)-1))
            best_type = [EnumConvertType.FLOAT, EnumConvertType.VEC2, EnumConvertType.VEC3, EnumConvertType.VEC4][size]
            val_a = parse_value(A, best_type, a_xyzw)
            val_a = parse_value(val_a, EnumConvertType.VEC4, a_xyzw)
            val_b = parse_value(B, best_type, b_xyzw)
            val_b = parse_value(val_b, EnumConvertType.VEC4, b_xyzw)

            if swap:
                val_a, val_b = val_b, val_a

            size = max(1, int(typ.value / 10))
            val_a = val_a[:size+1]
            val_b = val_b[:size+1]

            match op:
                # VECTOR
                case EnumBinaryOperation.DOT_PRODUCT:
                    val = [sum(a * b for a, b in zip(val_a, val_b))]
                case EnumBinaryOperation.CROSS_PRODUCT:
                    val = [0, 0, 0]
                    if len(val_a) < 3 or len(val_b) < 3:
                        logger.warning("Cross product only defined for 3D vectors")
                    else:
                        val = [
                            val_a[1] * val_b[2] - val_a[2] * val_b[1],
                            val_a[2] * val_b[0] - val_a[0] * val_b[2],
                            val_a[0] * val_b[1] - val_a[1] * val_b[0]
                        ]

                # ARITHMETIC
                case EnumBinaryOperation.ADD:
                    val = [sum(pair) for pair in zip(val_a, val_b)]
                case EnumBinaryOperation.SUBTRACT:
                    val = [a - b for a, b in zip(val_a, val_b)]
                case EnumBinaryOperation.MULTIPLY:
                    val = [a * b for a, b in zip(val_a, val_b)]
                case EnumBinaryOperation.DIVIDE:
                    val = [a / b if b != 0 else 0 for a, b in zip(val_a, val_b)]
                case EnumBinaryOperation.DIVIDE_FLOOR:
                    val = [a // b if b != 0 else 0 for a, b in zip(val_a, val_b)]
                case EnumBinaryOperation.MODULUS:
                    val = [a % b if b != 0 else 0 for a, b in zip(val_a, val_b)]
                case EnumBinaryOperation.POWER:
                    val = [a ** b if b >= 0 else 0 for a, b in zip(val_a, val_b)]
                case EnumBinaryOperation.MAXIMUM:
                    val = [max(a, val_b[i]) for i, a in enumerate(val_a)]
                case EnumBinaryOperation.MINIMUM:
                    # val = min(val_a, val_b)
                    val = [min(a, val_b[i]) for i, a in enumerate(val_a)]

                # BITS
                # case EnumBinaryOperation.BIT_NOT:
                case EnumBinaryOperation.BIT_AND:
                    val = [int(a) & int(b) for a, b in zip(val_a, val_b)]
                case EnumBinaryOperation.BIT_NAND:
                    val = [not(int(a) & int(b)) for a, b in zip(val_a, val_b)]
                case EnumBinaryOperation.BIT_OR:
                    val = [int(a) | int(b) for a, b in zip(val_a, val_b)]
                case EnumBinaryOperation.BIT_NOR:
                    val = [not(int(a) | int(b)) for a, b in zip(val_a, val_b)]
                case EnumBinaryOperation.BIT_XOR:
                    val = [int(a) ^ int(b) for a, b in zip(val_a, val_b)]
                case EnumBinaryOperation.BIT_XNOR:
                    val = [not(int(a) ^ int(b)) for a, b in zip(val_a, val_b)]
                case EnumBinaryOperation.BIT_LSHIFT:
                    val = [int(a) << int(b) if b >= 0 else 0 for a, b in zip(val_a, val_b)]
                case EnumBinaryOperation.BIT_RSHIFT:
                    val = [int(a) >> int(b) if b >= 0 else 0 for a, b in zip(val_a, val_b)]

                # GROUP
                case EnumBinaryOperation.UNION:
                    val = list(set(val_a) | set(val_b))
                case EnumBinaryOperation.INTERSECTION:
                    val = list(set(val_a) & set(val_b))
                case EnumBinaryOperation.DIFFERENCE:
                    val = list(set(val_a) - set(val_b))

                # WEIRD
                case EnumBinaryOperation.BASE:
                    val = list(set(val_a) - set(val_b))

            # cast into correct type....
            default = val
            if len(val) == 0:
                default = [0]

            val = parse_value(val, typ, default)
            results.append(val)
            pbar.update_absolute(idx)
        return (results,)

class StringerNode(CozyBaseNode):
    NAME = "STRINGER (JOV) 🪀"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = ("STRING", "INT",)
    RETURN_NAMES = ("STRING", "COUNT",)
    OUTPUT_IS_LIST = (True, False,)
    DESCRIPTION = """
Manipulate strings through filtering
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                # split, join, replace, trim/lift
                Lexicon.FUNCTION: (EnumConvertString._member_names_, {
                    "default": EnumConvertString.SPLIT.name}),
                Lexicon.KEY: ("STRING", {
                    "default":"", "dynamicPrompt":False,
                    "tooltip": "Delimiter (SPLIT/JOIN) or string to use as search string (FIND/REPLACE)."}),
                Lexicon.REPLACE: ("STRING", {
                    "default":"", "dynamicPrompt":False}),
                Lexicon.RANGE: ("VEC3", {
                    "default":(0, -1, 1), "int": True,
                    "tooltip": "Start, End and Step. Values will clip to the actual list size(s)."}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[TensorType, ...]:
        # turn any all inputs into the
        data_list = parse_dynamic(kw, Lexicon.STRING, EnumConvertType.ANY, "")
        if data_list is None:
            logger.warn("no data for list")
            return ([], 0)

        op = parse_param(kw, Lexicon.FUNCTION, EnumConvertString, EnumConvertString.SPLIT.name)[0]
        key = parse_param(kw, Lexicon.KEY, EnumConvertType.STRING, "")[0]
        replace = parse_param(kw, Lexicon.REPLACE, EnumConvertType.STRING, "")[0]
        stenst = parse_param(kw, Lexicon.RANGE, EnumConvertType.VEC3INT, (0, -1, 1))[0]
        results = []
        match op:
            case EnumConvertString.SPLIT:
                results = data_list
                if key != "":
                    results = []
                    for d in data_list:
                        d = [key if len(r) == 0 else r for r in d.split(key)]
                        results.extend(d)
            case EnumConvertString.JOIN:
                results = [key.join(data_list)]
            case EnumConvertString.FIND:
                results = [r for r in data_list if r.find(key) > -1]
            case EnumConvertString.REPLACE:
                results = data_list
                if key != "":
                    results = [r.replace(key, replace) for r in data_list]
            case EnumConvertString.SLICE:
                start, end, step = stenst
                for x in data_list:
                    start = len(x) if start < 0 else min(max(0, start), len(x))
                    end = len(x) if end < 0 else min(max(0, end), len(x))
                    if step != 0:
                        results.append(x[start:end:step])
                    else:
                        results.append(x)
        return (results, len(results),)

class SwizzleNode(CozyBaseNode):
    NAME = "SWIZZLE (JOV) 😵"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY,)
    RETURN_NAMES = ("❔",)
    OUTPUT_IS_LIST = (True,)
    DESCRIPTION = """
Swap components between two vectors based on specified swizzle patterns and values. It provides flexibility in rearranging vector elements dynamically.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        names_convert = EnumConvertType._member_names_[3:6]
        d = deep_merge(d, {
            "optional": {
                Lexicon.IN_A: (COZY_TYPE_NUMERICAL, {}),
                Lexicon.IN_B: (COZY_TYPE_NUMERICAL, {}),
                Lexicon.TYPE: (names_convert, {
                    "default": names_convert[0]}),
                Lexicon.SWAP_X: (EnumSwizzle._member_names_, {
                    "default": EnumSwizzle.A_X.name,}),
                Lexicon.SWAP_Y: (EnumSwizzle._member_names_, {
                    "default": EnumSwizzle.A_Y.name,}),
                Lexicon.SWAP_Z: (EnumSwizzle._member_names_, {
                    "default": EnumSwizzle.A_Z.name,}),
                Lexicon.SWAP_W: (EnumSwizzle._member_names_, {
                    "default": EnumSwizzle.A_W.name,}),
                Lexicon.DEFAULT: ("VEC4", {
                    "default": (0,0,0,0), "mij": -sys.float_info.max, "maj": sys.float_info.max})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[float, ...]:
        pA = parse_param(kw, Lexicon.IN_A, EnumConvertType.LIST, None)
        pB = parse_param(kw, Lexicon.IN_B, EnumConvertType.LIST, None)
        typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.VEC2.name)
        swap_x = parse_param(kw, Lexicon.SWAP_X, EnumSwizzle, EnumSwizzle.A_X.name)
        swap_y = parse_param(kw, Lexicon.SWAP_Y, EnumSwizzle, EnumSwizzle.A_Y.name)
        swap_z = parse_param(kw, Lexicon.SWAP_Z, EnumSwizzle, EnumSwizzle.A_W.name)
        swap_w = parse_param(kw, Lexicon.SWAP_W, EnumSwizzle, EnumSwizzle.A_Z.name)
        default = parse_param(kw, Lexicon.DEFAULT, EnumConvertType.VEC4, (0, 0, 0, 0))
        params = list(zip_longest_fill(pA, pB, typ, swap_x, swap_y, swap_z, swap_w, default))
        results = []
        pbar = ProgressBar(len(params))
        for idx, (pA, pB, typ, swap_x, swap_y, swap_z, swap_w, default) in enumerate(params):
            default = list(default)
            pA = pA + default[len(pA):]
            pB = pB + default[len(pB):]
            val = vector_swap(pA, pB, swap_x, swap_y, swap_z, swap_w, default)
            val = parse_value(val, typ, val)
            results.append(val)
            pbar.update_absolute(idx)
        return (results,)


================================================
FILE: core/color.py
================================================
""" Jovimetrix - Color """

from enum import Enum

import cv2
import torch

from comfy.utils import ProgressBar

from cozy_comfyui import \
    IMAGE_SIZE_MIN, \
    InputType, RGBAMaskType, EnumConvertType, TensorType, \
    deep_merge, parse_param, zip_longest_fill

from cozy_comfyui.lexicon import \
    Lexicon

from cozy_comfyui.node import \
    COZY_TYPE_IMAGE, \
    CozyBaseNode, CozyImageNode

from cozy_comfyui.image.adjust import \
    image_invert

from cozy_comfyui.image.color import \
    EnumCBDeficiency, EnumCBSimulator, EnumColorMap, EnumColorTheory, \
    color_lut_full, color_lut_match, color_lut_palette, \
    color_lut_tonal, color_lut_visualize, color_match_reinhard, \
    color_theory, color_blind, color_top_used, image_gradient_expand, \
    image_gradient_map

from cozy_comfyui.image.channel import \
    channel_solid

from cozy_comfyui.image.compose import \
    EnumScaleMode, EnumInterpolation, \
    image_scalefit

from cozy_comfyui.image.convert import \
    tensor_to_cv, cv_to_tensor, cv_to_tensor_full, image_mask, image_mask_add

from cozy_comfyui.image.misc import \
    image_stack

# ==============================================================================
# === GLOBAL ===
# ==============================================================================

JOV_CATEGORY = "COLOR"

# ==============================================================================
# === ENUMERATION ===
# ==============================================================================

class EnumColorMatchMode(Enum):
    REINHARD = 30
    LUT = 10
    # HISTOGRAM = 20

class EnumColorMatchMap(Enum):
    USER_MAP = 0
    PRESET_MAP = 10

# ==============================================================================
# === CLASS ===
# ==============================================================================

class ColorBlindNode(CozyImageNode):
    NAME = "COLOR BLIND (JOV) 👁‍🗨"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Simulate color blindness effects on images. You can select various types of color deficiencies, adjust the severity of the effect, and apply the simulation using different simulators. This node is ideal for accessibility testing and design adjustments, ensuring inclusivity in your visual content.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.DEFICIENCY: (EnumCBDeficiency._member_names_, {
                    "default": EnumCBDeficiency.PROTAN.name,}),
                Lexicon.SOLVER: (EnumCBSimulator._member_names_, {
                    "default": EnumCBSimulator.AUTOSELECT.name,})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        deficiency = parse_param(kw, Lexicon.DEFICIENCY, EnumCBDeficiency, EnumCBDeficiency.PROTAN.name)
        simulator = parse_param(kw, Lexicon.SOLVER, EnumCBSimulator, EnumCBSimulator.AUTOSELECT.name)
        severity = parse_param(kw, Lexicon.VALUE, EnumConvertType.FLOAT, 1)
        params = list(zip_longest_fill(pA, deficiency, simulator, severity))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, deficiency, simulator, severity) in enumerate(params):
            pA = channel_solid() if pA is None else tensor_to_cv(pA)
            pA = color_blind(pA, deficiency, simulator, severity)
            images.append(cv_to_tensor_full(pA))
            pbar.update_absolute(idx)
        return image_stack(images)

class ColorMatchNode(CozyImageNode):
    NAME = "COLOR MATCH (JOV) 💞"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Adjust the color scheme of one image to match another with the Color Match Node. Choose from various color matching LUTs or Reinhard matching. You can specify a custom user color maps, the number of colors, and whether to flip or invert the images.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE_SOURCE: (COZY_TYPE_IMAGE, {}),
                Lexicon.IMAGE_TARGET: (COZY_TYPE_IMAGE, {}),
                Lexicon.MODE: (EnumColorMatchMode._member_names_, {
                    "default": EnumColorMatchMode.REINHARD.name,
                    "tooltip": "Match colors from an image or built-in (LUT), Histogram lookups or Reinhard method"}),
                Lexicon.MAP: (EnumColorMatchMap._member_names_, {
                    "default": EnumColorMatchMap.USER_MAP.name, }),
                Lexicon.COLORMAP: (EnumColorMap._member_names_, {
                    "default": EnumColorMap.HSV.name,}),
                Lexicon.VALUE: ("INT", {
                    "default": 255, "min": 0, "max": 255,
                    "tooltip":"The number of colors to use from the LUT during the remap. Will quantize the LUT range."}),
                Lexicon.SWAP: ("BOOLEAN", {
                    "default": False,}),
                Lexicon.INVERT: ("BOOLEAN", {
                    "default": False,}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE_SOURCE, EnumConvertType.IMAGE, None)
        pB = parse_param(kw, Lexicon.IMAGE_TARGET, EnumConvertType.IMAGE, None)
        mode = parse_param(kw, Lexicon.MODE, EnumColorMatchMode, EnumColorMatchMode.REINHARD.name)
        cmap = parse_param(kw, Lexicon.MAP, EnumColorMatchMap, EnumColorMatchMap.USER_MAP.name)
        colormap = parse_param(kw, Lexicon.COLORMAP, EnumColorMap, EnumColorMap.HSV.name)
        num_colors = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 255)
        swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False)
        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4, (0, 0, 0, 255), 0, 255)
        params = list(zip_longest_fill(pA, pB, mode, cmap, colormap, num_colors, swap, invert, matte))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, pB, mode, cmap, colormap, num_colors, swap, invert, matte) in enumerate(params):
            if swap == True:
                pA, pB = pB, pA

            mask = None
            if pA is None:
                pA = channel_solid()
            else:
                pA = tensor_to_cv(pA)
                if pA.ndim == 3 and pA.shape[2] == 4:
                    mask = image_mask(pA)

            # h, w = pA.shape[:2]
            if pB is None:
                pB = channel_solid()
            else:
                pB = tensor_to_cv(pB)

            match mode:
                case EnumColorMatchMode.LUT:
                    if cmap == EnumColorMatchMap.PRESET_MAP:
                        pB = None
                    pA = color_lut_match(pA, colormap.value, pB, num_colors)

                case EnumColorMatchMode.REINHARD:
                    pA = color_match_reinhard(pA, pB)

            if invert == True:
                pA = image_invert(pA, 1)

            if mask is not None:
                pA = image_mask_add(pA, mask)

            images.append(cv_to_tensor_full(pA, matte))
            pbar.update_absolute(idx)
        return image_stack(images)

class ColorKMeansNode(CozyBaseNode):
    NAME = "COLOR MEANS (JOV) 〰️"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = ("IMAGE", "IMAGE", "IMAGE", "JLUT", "IMAGE",)
    RETURN_NAMES = ("IMAGE", "PALETTE", "GRADIENT", "LUT", "RGB", )
    OUTPUT_TOOLTIPS = (
        "Sequence of top-K colors. Count depends on value in `VAL`.",
        "Simple Tone palette based on result top-K colors. Width is taken from input.",
        "Gradient of top-K colors.",
        "Full 3D LUT of the image mapped to the resultant top-K colors chosen.",
        "Visualization of full 3D .cube LUT in JLUT output"
    )
    DESCRIPTION = """
The top-k colors ordered from most->least used as a strip, tonal palette and 3D LUT.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.VALUE: ("INT", {
                    "default": 12, "min": 1, "max": 255,
                    "tooltip": "The top K colors to select"}),
                Lexicon.SIZE: ("INT", {
                    "default": 32, "min": 1, "max": 256,
                    "tooltip": "Height of the tones in the strip. Width is based on input"}),
                Lexicon.COUNT: ("INT", {
                    "default": 33, "min": 1, "max": 255,
                    "tooltip": "Number of nodes to use in interpolation of full LUT (256 is every pixel)"}),
                Lexicon.WH: ("VEC2", {
                    "default": (256, 256), "mij":IMAGE_SIZE_MIN, "int": True,
                    "label": ["W", "H"]
                }),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        kcolors = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 12, 1, 255)
        lut_height = parse_param(kw, Lexicon.SIZE, EnumConvertType.INT, 32, 1, 256)
        nodes = parse_param(kw, Lexicon.COUNT, EnumConvertType.INT, 33, 1, 255)
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (256, 256), IMAGE_SIZE_MIN)

        params = list(zip_longest_fill(pA, kcolors, nodes, lut_height, wihi))
        top_colors = []
        lut_tonal = []
        lut_full = []
        lut_visualized = []
        gradients = []
        pbar = ProgressBar(len(params) * sum(kcolors))
        for idx, (pA, kcolors, nodes, lut_height, wihi) in enumerate(params):
            if pA is None:
                pA = channel_solid()

            pA = tensor_to_cv(pA)
            colors = color_top_used(pA, kcolors)

            # size down to 1px strip then expand to 256 for full gradient
            top_colors.extend([cv_to_tensor(channel_solid(*wihi, color=c)) for c in colors])
            lut = color_lut_tonal(colors, width=pA.shape[1], height=lut_height)
            lut_tonal.append(cv_to_tensor(lut))
            full = color_lut_full(colors, nodes)
            lut_full.append(torch.from_numpy(full))
            lut = color_lut_visualize(full, wihi[1])
            lut_visualized.append(cv_to_tensor(lut))
            palette = color_lut_palette(colors, 1)
            gradient = image_gradient_expand(palette)
            gradient = cv2.resize(gradient, wihi)
            gradients.append(cv_to_tensor(gradient))
            pbar.update_absolute(idx)

        return torch.stack(top_colors), torch.stack(lut_tonal), torch.stack(gradients), lut_full, torch.stack(lut_visualized),

class ColorTheoryNode(CozyBaseNode):
    NAME = "COLOR THEORY (JOV) 🛞"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = ("IMAGE", "IMAGE", "IMAGE", "IMAGE", "IMAGE")
    RETURN_NAMES = ("C1", "C2", "C3", "C4", "C5")
    DESCRIPTION = """
Generate a color harmony based on the selected scheme.

Supported schemes include complimentary, analogous, triadic, tetradic, and more.

Users can customize the angle of separation for color calculations, offering flexibility in color manipulation and exploration of different color palettes.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.SCHEME: (EnumColorTheory._member_names_, {
                    "default": EnumColorTheory.COMPLIMENTARY.name}),
                Lexicon.VALUE: ("INT", {
                    "default": 45, "min": -90, "max": 90,
                    "tooltip": "Custom angle of separation to use when calculating colors"}),
                Lexicon.INVERT: ("BOOLEAN", {
                    "default": False})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[list[TensorType], list[TensorType]]:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        scheme = parse_param(kw, Lexicon.SCHEME, EnumColorTheory, EnumColorTheory.COMPLIMENTARY.name)
        value = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 45, -90, 90)
        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
        params = list(zip_longest_fill(pA, scheme, value, invert))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (img, scheme, value, invert) in enumerate(params):
            img = channel_solid() if img is None else tensor_to_cv(img)
            img = color_theory(img, value, scheme)
            if invert:
                img = (image_invert(s, 1) for s in img)
            images.append([cv_to_tensor(a) for a in img])
            pbar.update_absolute(idx)
        return image_stack(images)

class GradientMapNode(CozyImageNode):
    NAME = "GRADIENT MAP (JOV) 🇲🇺"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Remaps an input image using a gradient lookup table (LUT).

The gradient image will be translated into a single row lookup table.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {
                    "tooltip": "Image to remap with gradient input"}),
                Lexicon.GRADIENT: (COZY_TYPE_IMAGE, {
                    "tooltip": f"Look up table (LUT) to remap the input image in `{"IMAGE"}`"}),
                Lexicon.REVERSE: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "Reverse the gradient from left-to-right"}),
                Lexicon.MODE: (EnumScaleMode._member_names_, {
                    "default": EnumScaleMode.MATTE.name,}),
                Lexicon.WH: ("VEC2", {
                    "default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True,
                    "label": ["W", "H"] }),
                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
                    "default": EnumInterpolation.LANCZOS4.name,}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        gradient = parse_param(kw, Lexicon.GRADIENT, EnumConvertType.IMAGE, None)
        reverse = parse_param(kw, Lexicon.REVERSE, EnumConvertType.BOOLEAN, False)
        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)
        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
        images = []
        params = list(zip_longest_fill(pA, gradient, reverse, mode, sample, wihi, matte))
        pbar = ProgressBar(len(params))
        for idx, (pA, gradient, reverse, mode, sample, wihi, matte) in enumerate(params):
            pA = channel_solid() if pA is None else tensor_to_cv(pA)
            mask = None
            if pA.ndim == 3 and pA.shape[2] == 4:
                mask = image_mask(pA)

            gradient = channel_solid() if gradient is None else tensor_to_cv(gradient)
            pA = image_gradient_map(pA, gradient)
            if mode != EnumScaleMode.MATTE:
                w, h = wihi
                pA = image_scalefit(pA, w, h, mode, sample)

            if mask is not None:
                pA = image_mask_add(pA, mask)

            images.append(cv_to_tensor_full(pA, matte))
            pbar.update_absolute(idx)
        return image_stack(images)


================================================
FILE: core/compose.py
================================================
""" Jovimetrix - Composition """

import numpy as np

from comfy.utils import ProgressBar

from cozy_comfyui import \
    IMAGE_SIZE_MIN, \
    InputType, RGBAMaskType, EnumConvertType, \
    deep_merge, parse_param, zip_longest_fill

from cozy_comfyui.lexicon import \
    Lexicon

from cozy_comfyui.node import \
    COZY_TYPE_IMAGE, \
    CozyBaseNode, CozyImageNode

from cozy_comfyui.image import \
    EnumImageType

from cozy_comfyui.image.adjust import \
    EnumThreshold, EnumThresholdAdapt, \
    image_histogram2, image_invert, image_filter, image_threshold

from cozy_comfyui.image.channel import \
    EnumPixelSwizzle, \
    channel_merge, channel_solid, channel_swap

from cozy_comfyui.image.compose import \
    EnumBlendType, EnumScaleMode, EnumScaleInputMode, EnumInterpolation, \
    image_resize, \
    image_scalefit, image_split, image_blend, image_matte

from cozy_comfyui.image.convert import \
    image_mask, image_convert, tensor_to_cv, cv_to_tensor, cv_to_tensor_full

from cozy_comfyui.image.misc import \
    image_by_size, image_minmax, image_stack

# ==============================================================================
# === GLOBAL ===
# ==============================================================================

JOV_CATEGORY = "COMPOSE"

# ==============================================================================
# === CLASS ===
# ==============================================================================

class BlendNode(CozyImageNode):
    NAME = "BLEND (JOV) ⚗️"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Combine two input images using various blending modes, such as normal, screen, multiply, overlay, etc. It also supports alpha blending and masking to achieve complex compositing effects. This node is essential for creating layered compositions and adding visual richness to images.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE_BACK: (COZY_TYPE_IMAGE, {}),
                Lexicon.IMAGE_FORE: (COZY_TYPE_IMAGE, {}),
                Lexicon.MASK: (COZY_TYPE_IMAGE, {
                    "tooltip": "Optional Mask for Alpha Blending. If empty, it will use the ALPHA of the FOREGROUND"}),
                Lexicon.FUNCTION: (EnumBlendType._member_names_, {
                    "default": EnumBlendType.NORMAL.name,}),
                Lexicon.ALPHA: ("FLOAT", {
                    "default": 1, "min": 0, "max": 1, "step": 0.01,}),
                Lexicon.SWAP: ("BOOLEAN", {
                    "default": False}),
                Lexicon.INVERT: ("BOOLEAN", {
                    "default": False, "tooltip": "Invert the mask input"}),
                Lexicon.MODE: (EnumScaleMode._member_names_, {
                    "default": EnumScaleMode.MATTE.name,}),
                Lexicon.WH: ("VEC2", {
                    "default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True,
                    "label": ["W", "H"]}),
                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
                    "default": EnumInterpolation.LANCZOS4.name,}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,}),
                Lexicon.INPUT: (EnumScaleInputMode._member_names_, {
                    "default": EnumScaleInputMode.NONE.name,}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        back = parse_param(kw, Lexicon.IMAGE_BACK, EnumConvertType.IMAGE, None)
        fore = parse_param(kw, Lexicon.IMAGE_FORE, EnumConvertType.IMAGE, None)
        mask = parse_param(kw, Lexicon.MASK, EnumConvertType.MASK, None)
        func = parse_param(kw, Lexicon.FUNCTION, EnumBlendType, EnumBlendType.NORMAL.name)
        alpha = parse_param(kw, Lexicon.ALPHA, EnumConvertType.FLOAT, 1)
        swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False)
        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)
        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
        inputMode = parse_param(kw, Lexicon.INPUT, EnumScaleInputMode, EnumScaleInputMode.NONE.name)
        params = list(zip_longest_fill(back, fore, mask, func, alpha, swap, invert, mode, wihi, sample, matte, inputMode))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (back, fore, mask, func, alpha, swap, invert, mode, wihi, sample, matte, inputMode) in enumerate(params):
            if swap:
                back, fore = fore, back

            width, height = IMAGE_SIZE_MIN, IMAGE_SIZE_MIN
            if back is None:
                if fore is None:
                    if mask is None:
                        if mode != EnumScaleMode.MATTE:
                            width, height = wihi
                    else:
                        height, width = mask.shape[:2]
                else:
                    height, width = fore.shape[:2]
            else:
                height, width = back.shape[:2]

            if back is None:
                back = channel_solid(width, height, matte)
            else:
                back = tensor_to_cv(back)
                #matted = pixel_eval(matte)
                #back = image_matte(back, matted)

            if fore is None:
                clear = list(matte[:3]) + [0]
                fore = channel_solid(width, height, clear)
            else:
                fore = tensor_to_cv(fore)

            if mask is None:
                mask = image_mask(fore, 255)
            else:
                mask = tensor_to_cv(mask, 1)

            if invert:
                mask = 255 - mask

            if inputMode != EnumScaleInputMode.NONE:
                # get the min/max of back, fore; and mask?
                imgs = [back, fore]
                _, w, h = image_by_size(imgs)
                back = image_scalefit(back, w, h, inputMode, sample, matte)
                fore = image_scalefit(fore, w, h, inputMode, sample, matte)
                mask = image_scalefit(mask, w, h, inputMode, sample)

                back = image_scalefit(back, w, h, EnumScaleMode.RESIZE_MATTE, sample, matte)
                fore = image_scalefit(fore, w, h, EnumScaleMode.RESIZE_MATTE, sample, (0,0,0,255))
                mask = image_scalefit(mask, w, h, EnumScaleMode.RESIZE_MATTE, sample, (255,255,255,255))

            img = image_blend(back, fore, mask, func, alpha)
            mask = image_mask(img)

            if mode != EnumScaleMode.MATTE:
                width, height = wihi
                img = image_scalefit(img, width, height, mode, sample, matte)

            img = cv_to_tensor_full(img, matte)
            #img = [cv_to_tensor(back), cv_to_tensor(fore), cv_to_tensor(mask, True)]
            images.append(img)
            pbar.update_absolute(idx)

        return image_stack(images)

class FilterMaskNode(CozyImageNode):
    NAME = "FILTER MASK (JOV) 🤿"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Create masks based on specific color ranges within an image. Specify the color range using start and end values and an optional fuzziness factor to adjust the range. This node allows for precise color-based mask creation, ideal for tasks like object isolation, background removal, or targeted color adjustments.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.START: ("VEC3", {
                    "default": (128, 128, 128), "rgb": True}),
                Lexicon.RANGE: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "Use an end point (start->end) when calculating the filter range"}),
                Lexicon.END: ("VEC3", {
                    "default": (128, 128, 128), "rgb": True}),
                Lexicon.FUZZ: ("VEC3", {
                    "default": (0.5,0.5,0.5), "mij":0, "maj":1,}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        start = parse_param(kw, Lexicon.START, EnumConvertType.VEC3INT, (128,128,128), 0, 255)
        use_range = parse_param(kw, Lexicon.RANGE, EnumConvertType.BOOLEAN, False)
        end = parse_param(kw, Lexicon.END, EnumConvertType.VEC3INT, (128,128,128), 0, 255)
        fuzz = parse_param(kw, Lexicon.FUZZ, EnumConvertType.VEC3, (0.5,0.5,0.5), 0, 1)
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
        params = list(zip_longest_fill(pA, start, use_range, end, fuzz, matte))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, start, use_range, end, fuzz, matte) in enumerate(params):
            img = np.zeros((IMAGE_SIZE_MIN, IMAGE_SIZE_MIN, 3), dtype=np.uint8) if pA is None else tensor_to_cv(pA)

            img, mask = image_filter(img, start, end, fuzz, use_range)
            if img.shape[2] == 3:
                alpha_channel = np.zeros((img.shape[0], img.shape[1], 1), dtype=img.dtype)
                img = np.concatenate((img, alpha_channel), axis=2)
            img[..., 3] = mask[:,:]
            images.append(cv_to_tensor_full(img, matte))
            pbar.update_absolute(idx)
        return image_stack(images)

class HistogramNode(CozyImageNode):
    NAME = "HISTOGRAM (JOV)"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
The Histogram Node generates a histogram representation of the input image, showing the distribution of pixel intensity values across different bins. This visualization is useful for understanding the overall brightness and contrast characteristics of an image. Additionally, the node performs histogram normalization, which adjusts the pixel values to enhance the contrast of the image. Histogram normalization can be helpful for improving the visual quality of images or preparing them for further image processing tasks.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {
                    "tooltip": "Pixel Data (RGBA, RGB or Grayscale)"}),
                Lexicon.WH: ("VEC2", {
                    "default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True,
                    "label": ["W", "H"]}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)
        params = list(zip_longest_fill(pA, wihi))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, wihi) in enumerate(params):
            pA = tensor_to_cv(pA) if pA is not None else channel_solid()
            hist_img = image_histogram2(pA, bins=256)
            width, height = wihi
            hist_img = image_resize(hist_img, width, height, EnumInterpolation.NEAREST)
            images.append(cv_to_tensor_full(hist_img))
            pbar.update_absolute(idx)
        return image_stack(images)

class PixelMergeNode(CozyImageNode):
    NAME = "PIXEL MERGE (JOV) 🫂"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Combines individual color channels (red, green, blue) along with an optional mask channel to create a composite image.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.CHAN_RED: (COZY_TYPE_IMAGE, {}),
                Lexicon.CHAN_GREEN: (COZY_TYPE_IMAGE, {}),
                Lexicon.CHAN_BLUE: (COZY_TYPE_IMAGE, {}),
                Lexicon.CHAN_ALPHA: (COZY_TYPE_IMAGE, {}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,}),
                Lexicon.FLIP: ("VEC4", {
                    "default": (0,0,0,0), "mij":0, "maj":1, "step": 0.01,
                    "tooltip": "Invert specific input prior to merging. R, G, B, A."}),
                Lexicon.INVERT: ("BOOLEAN", {
                    "default": False,})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        rgba = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        R = parse_param(kw, Lexicon.CHAN_RED, EnumConvertType.MASK, None)
        G = parse_param(kw, Lexicon.CHAN_GREEN, EnumConvertType.MASK, None)
        B = parse_param(kw, Lexicon.CHAN_BLUE, EnumConvertType.MASK, None)
        A = parse_param(kw, Lexicon.CHAN_ALPHA, EnumConvertType.MASK, None)
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
        flip = parse_param(kw, Lexicon.FLIP, EnumConvertType.VEC4, (0, 0, 0, 0), 0, 1)
        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
        params = list(zip_longest_fill(rgba, R, G, B, A, matte, flip, invert))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (rgba, r, g, b, a, matte, flip, invert) in enumerate(params):
            replace = r, g, b, a
            if rgba is not None:
                rgba = image_split(tensor_to_cv(rgba, chan=4))
                img = [tensor_to_cv(replace[i]) if replace[i] is not None else x for i, x in enumerate(rgba)]
            else:
                img = [tensor_to_cv(x) if x is not None else x for x in replace]

            _, _, w_max, h_max = image_minmax(img)
            for i, x in enumerate(img):
                if x is None:
                    x = np.full((h_max, w_max, 1), matte[i], dtype=np.uint8)
                else:
                    x = image_convert(x, 1)
                    x = image_scalefit(x, w_max, h_max, EnumScaleMode.ASPECT)

                if flip[i] != 0:
                    x = image_invert(x, flip[i])
                img[i] = x

            img = channel_merge(img)

            #if invert == True:
            #    img = image_invert(img, 1)

            images.append(cv_to_tensor_full(img, matte))
            pbar.update_absolute(idx)
        return image_stack(images)

class PixelSplitNode(CozyBaseNode):
    NAME = "PIXEL SPLIT (JOV) 💔"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = ("MASK", "MASK", "MASK", "MASK", "IMAGE")
    RETURN_NAMES = ("❤️", "💚", "💙", "🤍", "RGB")
    OUTPUT_TOOLTIPS = (
        "Single channel output of Red Channel.",
        "Single channel output of Green Channel",
        "Single channel output of Blue Channel",
        "Single channel output of Alpha Channel",
        "RGB pack of the input",
    )
    DESCRIPTION = """
Split an input into individual color channels (red, green, blue, alpha).
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        images = []
        pbar = ProgressBar(len(pA))
        for idx, pA in enumerate(pA):
            pA = channel_solid(chan=EnumImageType.RGBA) if pA is None else tensor_to_cv(pA, chan=4)
            out = [cv_to_tensor(x, True) for x in image_split(pA)] + [cv_to_tensor(image_convert(pA, 3))]
            images.append(out)
            pbar.update_absolute(idx)
        return image_stack(images)

class PixelSwapNode(CozyImageNode):
    NAME = "PIXEL SWAP (JOV) 🔃"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Swap pixel values between two input images based on specified channel swizzle operations. Options include pixel inputs, swap operations for red, green, blue, and alpha channels, and constant values for each channel. The swap operations allow for flexible pixel manipulation by determining the source of each channel in the output image, whether it be from the first image, the second image, or a constant value.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE_SOURCE: (COZY_TYPE_IMAGE, {}),
                Lexicon.IMAGE_TARGET: (COZY_TYPE_IMAGE, {}),
                Lexicon.SWAP_R: (EnumPixelSwizzle._member_names_, {
                    "default": EnumPixelSwizzle.RED_A.name,}),
                Lexicon.SWAP_G: (EnumPixelSwizzle._member_names_, {
                    "default": EnumPixelSwizzle.GREEN_A.name,}),
                Lexicon.SWAP_B: (EnumPixelSwizzle._member_names_, {
                    "default": EnumPixelSwizzle.BLUE_A.name,}),
                Lexicon.SWAP_A: (EnumPixelSwizzle._member_names_, {
                    "default": EnumPixelSwizzle.ALPHA_A.name,}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE_SOURCE, EnumConvertType.IMAGE, None)
        pB = parse_param(kw, Lexicon.IMAGE_TARGET, EnumConvertType.IMAGE, None)
        swap_r = parse_param(kw, Lexicon.SWAP_R, EnumPixelSwizzle, EnumPixelSwizzle.RED_A.name)
        swap_g = parse_param(kw, Lexicon.SWAP_G, EnumPixelSwizzle, EnumPixelSwizzle.GREEN_A.name)
        swap_b = parse_param(kw, Lexicon.SWAP_B, EnumPixelSwizzle, EnumPixelSwizzle.BLUE_A.name)
        swap_a = parse_param(kw, Lexicon.SWAP_A, EnumPixelSwizzle, EnumPixelSwizzle.ALPHA_A.name)
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
        params = list(zip_longest_fill(pA, pB, swap_r, swap_g, swap_b, swap_a, matte))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, pB, swap_r, swap_g, swap_b, swap_a, matte) in enumerate(params):
            if pA is None:
                if pB is None:
                    out = channel_solid()
                    images.append(cv_to_tensor_full(out))
                    pbar.update_absolute(idx)
                    continue

                h, w = pB.shape[:2]
                pA = channel_solid(w, h)
            else:
                h, w = pA.shape[:2]
                pA = tensor_to_cv(pA)
                pA = image_convert(pA, 4)

            pB = tensor_to_cv(pB) if pB is not None else channel_solid(w, h)
            pB = image_convert(pB, 4)
            pB = image_matte(pB, (0,0,0,0), w, h)
            pB = image_scalefit(pB, w, h, EnumScaleMode.CROP)

            out = channel_swap(pA, pB, (swap_r, swap_g, swap_b, swap_a), matte)

            images.append(cv_to_tensor_full(out))
            pbar.update_absolute(idx)
        return image_stack(images)

class ThresholdNode(CozyImageNode):
    NAME = "THRESHOLD (JOV) 📉"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Define a range and apply it to an image for segmentation and feature extraction. Choose from various threshold modes, such as binary and adaptive, and adjust the threshold value and block size to suit your needs. You can also invert the resulting mask if necessary. This node is versatile for a variety of image processing tasks.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.ADAPT: ( EnumThresholdAdapt._member_names_, {
                    "default": EnumThresholdAdapt.ADAPT_NONE.name,}),
                Lexicon.FUNCTION: ( EnumThreshold._member_names_, {
                    "default": EnumThreshold.BINARY.name}),
                Lexicon.THRESHOLD: ("FLOAT", {
                    "default": 0.5, "min": 0, "max": 1, "step": 0.005}),
                Lexicon.SIZE: ("INT", {
                    "default": 3, "min": 3, "max": 103}),
                Lexicon.INVERT: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "Invert the mask input"})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        mode = parse_param(kw, Lexicon.FUNCTION, EnumThreshold, EnumThreshold.BINARY.name)
        adapt = parse_param(kw, Lexicon.ADAPT, EnumThresholdAdapt, EnumThresholdAdapt.ADAPT_NONE.name)
        threshold = parse_param(kw, Lexicon.THRESHOLD, EnumConvertType.FLOAT, 1, 0, 1)
        block = parse_param(kw, Lexicon.SIZE, EnumConvertType.INT, 3, 3, 103)
        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
        params = list(zip_longest_fill(pA, mode, adapt, threshold, block, invert))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, mode, adapt, th, block, invert) in enumerate(params):
            pA = tensor_to_cv(pA) if pA is not None else channel_solid()
            pA = image_threshold(pA, th, mode, adapt, block)
            if invert == True:
                pA = image_invert(pA, 1)
            images.append(cv_to_tensor_full(pA))
            pbar.update_absolute(idx)
        return image_stack(images)


================================================
FILE: core/create.py
================================================
""" Jovimetrix - Creation """

import numpy as np
from PIL import ImageFont
from skimage.filters import gaussian

from comfy.utils import ProgressBar

from cozy_comfyui import \
    IMAGE_SIZE_MIN, \
    InputType, EnumConvertType, RGBAMaskType, \
    deep_merge, parse_param, zip_longest_fill

from cozy_comfyui.lexicon import \
    Lexicon

from cozy_comfyui.node import \
    COZY_TYPE_IMAGE, \
    CozyImageNode

from cozy_comfyui.image import \
    EnumImageType

from cozy_comfyui.image.adjust import \
    image_invert

from cozy_comfyui.image.channel import \
    channel_solid

from cozy_comfyui.image.compose import \
    EnumEdge, EnumScaleMode, EnumInterpolation, \
    image_rotate, image_scalefit, image_transform, image_translate, image_blend

from cozy_comfyui.image.convert import \
    image_convert, pil_to_cv, cv_to_tensor, cv_to_tensor_full, tensor_to_cv, \
    image_mask, image_mask_add, image_mask_binary

from cozy_comfyui.image.misc import \
    image_stack

from cozy_comfyui.image.shape import \
    EnumShapes, \
    shape_ellipse, shape_polygon, shape_quad

from cozy_comfyui.image.text import \
    EnumAlignment, EnumJustify, \
    font_names, text_autosize, text_draw

# ==============================================================================
# === GLOBAL ===
# ==============================================================================

JOV_CATEGORY = "CREATE"

# ==============================================================================
# === CLASS ===
# ==============================================================================

class ConstantNode(CozyImageNode):
    NAME = "CONSTANT (JOV) 🟪"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Generate a constant image or mask of a specified size and color. It can be used to create solid color backgrounds or matte images for compositing with other visual elements. The node allows you to define the desired width and height of the output and specify the RGBA color value for the constant output. Additionally, you can input an optional image to use as a matte with the selected color.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {
                    "tooltip":"Optional Image to Matte with Selected Color"}),
                Lexicon.MASK: (COZY_TYPE_IMAGE, {
                    "tooltip":"Override Image mask"}),
                Lexicon.COLOR: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,
                    "tooltip": "Constant Color to Output"}),
                Lexicon.MODE: (EnumScaleMode._member_names_, {
                    "default": EnumScaleMode.MATTE.name,}),
                Lexicon.WH: ("VEC2", {
                    "default": (512, 512), "mij": 1, "int": True,
                    "label": ["W", "H"],}),
                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
                    "default": EnumInterpolation.LANCZOS4.name,})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        mask = parse_param(kw, Lexicon.MASK, EnumConvertType.MASK, None)
        matte = parse_param(kw, Lexicon.COLOR, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), 1)
        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)
        images = []
        params = list(zip_longest_fill(pA, mask, matte, mode, wihi, sample))
        pbar = ProgressBar(len(params))
        for idx, (pA, mask, matte, mode, wihi, sample) in enumerate(params):
            width, height = wihi
            w, h = width, height

            if pA is None:
                pA = channel_solid(width, height, (0,0,0,255))
            else:
                pA = tensor_to_cv(pA)
                pA = image_convert(pA, 4)
                h, w = pA.shape[:2]

            if mask is None:
                mask = image_mask(pA, 0)
            else:
                mask = tensor_to_cv(mask, invert=1, chan=1)
                mask = image_scalefit(mask, w, h, matte=(0,0,0,255), mode=EnumScaleMode.FIT)

            pB = channel_solid(w, h, matte)
            pA = image_blend(pB, pA, mask)
            #mask = image_invert(mask, 1)
            pA = image_mask_add(pA, mask)

            if mode != EnumScaleMode.MATTE:
                pA = image_scalefit(pA, width, height, mode, sample, matte)
            images.append(cv_to_tensor_full(pA, matte))
            pbar.update_absolute(idx)
        return image_stack(images)

class ShapeNode(CozyImageNode):
    NAME = "SHAPE GEN (JOV) ✨"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Create n-sided polygons. These shapes can be customized by adjusting parameters such as size, color, position, rotation angle, and edge blur. The node provides options to specify the shape type, the number of sides for polygons, the RGBA color value for the main shape, and the RGBA color value for the background. Additionally, you can control the width and height of the output images, the position offset, and the amount of edge blur applied to the shapes.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.SHAPE: (EnumShapes._member_names_, {
                    "default": EnumShapes.CIRCLE.name}),
                Lexicon.SIDES: ("INT", {
                    "default": 3, "min": 3, "max": 100}),
                Lexicon.COLOR: ("VEC4", {
                    "default": (255, 255, 255, 255), "rgb": True,
                    "tooltip": "Main Shape Color"}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,}),
                Lexicon.WH: ("VEC2", {
                    "default": (256, 256), "mij":IMAGE_SIZE_MIN, "int": True,
                    "label": ["W", "H"],}),
                Lexicon.XY: ("VEC2", {
                    "default": (0, 0,), "mij": -1, "maj": 1,
                    "label": ["X", "Y"]}),
                Lexicon.ANGLE: ("FLOAT", {
                    "default": 0, "min": -180, "max": 180, "step": 0.01,}),
                Lexicon.SIZE: ("VEC2", {
                    "default": (1, 1), "mij": 0, "maj": 1,
                    "label": ["X", "Y"]}),
                Lexicon.EDGE: (EnumEdge._member_names_, {
                    "default": EnumEdge.CLIP.name}),
                Lexicon.BLUR: ("FLOAT", {
                    "default": 0, "min": 0, "step": 0.01,}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        shape = parse_param(kw, Lexicon.SHAPE, EnumShapes, EnumShapes.CIRCLE.name)
        sides = parse_param(kw, Lexicon.SIDES, EnumConvertType.INT, 3, 3)
        color = parse_param(kw, Lexicon.COLOR, EnumConvertType.VEC4INT, (255, 255, 255, 255), 0, 255)
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (256, 256), IMAGE_SIZE_MIN)
        offset = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0), -1, 1)
        angle = parse_param(kw, Lexicon.ANGLE, EnumConvertType.FLOAT, 0, -180, 180)
        size = parse_param(kw, Lexicon.SIZE, EnumConvertType.VEC2, (1, 1), 0, 1, zero=0.001)
        edge = parse_param(kw, Lexicon.EDGE, EnumEdge, EnumEdge.CLIP.name)
        blur = parse_param(kw, Lexicon.BLUR, EnumConvertType.FLOAT, 0, 0)
        params = list(zip_longest_fill(shape, sides, color, matte, wihi, offset, angle, size, edge, blur))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (shape, sides, color, matte, wihi, offset, angle, size, edge, blur) in enumerate(params):
            width, height = wihi
            sizeX, sizeY = size
            fill = color[:3][::-1]

            match shape:
                case EnumShapes.SQUARE:
                    rgb = shape_quad(width, height, sizeX, sizeY, fill)

                case EnumShapes.CIRCLE:
                    rgb = shape_ellipse(width, height, sizeX, sizeY, fill)

                case EnumShapes.POLYGON:
                    rgb = shape_polygon(width, height, sizeX, sides, fill)

            rgb = pil_to_cv(rgb)
            rgb = image_transform(rgb, offset, angle, edge=edge)
            mask = image_mask_binary(rgb)

            if blur > 0:
                # @TODO: Do blur on larger canvas to remove wrap bleed.
                rgb = (gaussian(rgb, sigma=blur, channel_axis=2) * 255).astype(np.uint8)
                mask = (gaussian(mask, sigma=blur, channel_axis=2) * 255).astype(np.uint8)

            mask = (mask * (color[3] / 255.)).astype(np.uint8)
            back = list(matte[:3]) + [255]
            canvas = np.full((height, width, 4), back, dtype=rgb.dtype)
            rgba = image_blend(canvas, rgb, mask)
            rgba = image_mask_add(rgba, mask)
            rgb = image_convert(rgba, 3)

            images.append([cv_to_tensor(rgba), cv_to_tensor(rgb), cv_to_tensor(mask, True)])
            pbar.update_absolute(idx)
        return image_stack(images)

class TextNode(CozyImageNode):
    NAME = "TEXT GEN (JOV) 📝"
    CATEGORY = JOV_CATEGORY
    FONTS = font_names()
    FONT_NAMES = sorted(FONTS.keys())
    DESCRIPTION = """
Generates images containing text based on parameters such as font, size, alignment, color, and position. Users can input custom text messages, select fonts from a list of available options, adjust font size, and specify the alignment and justification of the text. Additionally, the node provides options for auto-sizing text to fit within specified dimensions, controlling letter-by-letter rendering, and applying edge effects such as clipping and inversion.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.STRING: ("STRING", {
                    "default": "jovimetrix", "multiline": True,
                    "dynamicPrompts": False,
                    "tooltip": "Your Message"}),
                Lexicon.FONT: (cls.FONT_NAMES, {
                    "default": cls.FONT_NAMES[0]}),
                Lexicon.LETTER: ("BOOLEAN", {
                    "default": False,}),
                Lexicon.AUTOSIZE: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "Scale based on Width & Height"}),
                Lexicon.COLOR: ("VEC4", {
                    "default": (255, 255, 255, 255), "rgb": True,
                    "tooltip": "Color of the letters"}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,}),
                Lexicon.COLUMNS: ("INT", {
                    "default": 0, "min": 0}),
                # if auto on, hide these...
                Lexicon.SIZE: ("INT", {
                    "default": 16, "min": 8}),
                Lexicon.ALIGN: (EnumAlignment._member_names_, {
                    "default": EnumAlignment.CENTER.name,}),
                Lexicon.JUSTIFY: (EnumJustify._member_names_, {
                    "default": EnumJustify.CENTER.name,}),
                Lexicon.MARGIN: ("INT", {
                    "default": 0, "min": -1024, "max": 1024,}),
                Lexicon.SPACING: ("INT", {
                    "default": 0, "min": -1024, "max": 1024}),
                Lexicon.WH: ("VEC2", {
                    "default": (256, 256), "mij":IMAGE_SIZE_MIN, "int": True,
                    "label": ["W", "H"],}),
                Lexicon.XY: ("VEC2", {
                    "default": (0, 0,), "mij": -1, "maj": 1,
                    "label": ["X", "Y"],
                    "tooltip":"Offset the position"}),
                Lexicon.ANGLE: ("FLOAT", {
                    "default": 0, "step": 0.01,}),
                Lexicon.EDGE: (EnumEdge._member_names_, {
                    "default": EnumEdge.CLIP.name}),
                Lexicon.INVERT: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "Invert the mask input"})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        full_text = parse_param(kw, Lexicon.STRING, EnumConvertType.STRING, "jovimetrix")
        font_idx = parse_param(kw, Lexicon.FONT, EnumConvertType.STRING, self.FONT_NAMES[0])
        autosize = parse_param(kw, Lexicon.AUTOSIZE, EnumConvertType.BOOLEAN, False)
        letter = parse_param(kw, Lexicon.LETTER, EnumConvertType.BOOLEAN, False)
        color = parse_param(kw, Lexicon.COLOR, EnumConvertType.VEC4INT, (255,255,255,255))
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0,0,0,255))
        columns = parse_param(kw, Lexicon.COLUMNS, EnumConvertType.INT, 0)
        font_size = parse_param(kw, Lexicon.SIZE, EnumConvertType.INT, 1)
        align = parse_param(kw, Lexicon.ALIGN, EnumAlignment, EnumAlignment.CENTER.name)
        justify = parse_param(kw, Lexicon.JUSTIFY, EnumJustify, EnumJustify.CENTER.name)
        margin = parse_param(kw, Lexicon.MARGIN, EnumConvertType.INT, 0)
        line_spacing = parse_param(kw, Lexicon.SPACING, EnumConvertType.INT, 0)
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)
        pos = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0))
        angle = parse_param(kw, Lexicon.ANGLE, EnumConvertType.INT, 0)
        edge = parse_param(kw, Lexicon.EDGE, EnumEdge, EnumEdge.CLIP.name)
        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)
        images = []
        params = list(zip_longest_fill(full_text, font_idx, autosize, letter, color,
                                matte, columns, font_size, align, justify, margin,
                                line_spacing, wihi, pos, angle, edge, invert))

        pbar = ProgressBar(len(params))
        for idx, (full_text, font_idx, autosize, letter, color, matte, columns,
                font_size, align, justify, margin, line_spacing, wihi, pos,
                angle, edge, invert) in enumerate(params):

            width, height = wihi
            font_name = self.FONTS[font_idx]
            full_text = str(full_text)

            if letter:
                full_text = full_text.replace('\n', '')
                if autosize:
                    _, font_size = text_autosize(full_text[0].upper(), font_name, width, height)[:2]
                    margin = 0
                    line_spacing = 0
            else:
                if autosize:
                    wm = width - margin * 2
                    hm = height - margin * 2 - line_spacing
                    columns = 0 if columns == 0 else columns * 2 + 2
                    full_text, font_size = text_autosize(full_text, font_name, wm, hm, columns)[:2]
                full_text = [full_text]
            font_size *= 2.5

            font = ImageFont.truetype(font_name, font_size)
            for ch in full_text:
                img = text_draw(ch, font, width, height, align, justify, margin, line_spacing, color)
                img = image_rotate(img, angle, edge=edge)
                img = image_translate(img, pos, edge=edge)
                if invert:
                    img = image_invert(img, 1)
                images.append(cv_to_tensor_full(img, matte))
            pbar.update_absolute(idx)
        return image_stack(images)


================================================
FILE: core/trans.py
================================================
""" Jovimetrix - Transform """

import sys
from enum import Enum

from comfy.utils import ProgressBar

from cozy_comfyui import \
    logger, \
    IMAGE_SIZE_MIN, \
    InputType, RGBAMaskType, EnumConvertType, \
    deep_merge, parse_param, parse_dynamic, zip_longest_fill

from cozy_comfyui.lexicon import \
    Lexicon

from cozy_comfyui.node import \
    COZY_TYPE_IMAGE, \
    CozyImageNode, CozyBaseNode

from cozy_comfyui.image.channel import \
    channel_solid

from cozy_comfyui.image.convert import \
    tensor_to_cv, cv_to_tensor_full, cv_to_tensor, image_mask, image_mask_add

from cozy_comfyui.image.compose import \
    EnumOrientation, EnumEdge, EnumMirrorMode, EnumScaleMode, EnumInterpolation, \
    image_edge_wrap, image_mirror, image_scalefit, image_transform, \
    image_crop, image_crop_center, image_crop_polygonal, image_stacker, \
    image_flatten

from cozy_comfyui.image.misc import \
    image_stack

from cozy_comfyui.image.mapping import \
    EnumProjection, \
    remap_fisheye, remap_perspective, remap_polar, remap_sphere

# ==============================================================================
# === GLOBAL ===
# ==============================================================================

JOV_CATEGORY = "TRANSFORM"

# ==============================================================================
# === ENUMERATION ===
# ==============================================================================

class EnumCropMode(Enum):
    CENTER = 20
    XY = 0
    FREE = 10

# ==============================================================================
# === CLASS ===
# ==============================================================================

class CropNode(CozyImageNode):
    NAME = "CROP (JOV) ✂️"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Extract a portion of an input image or resize it. It supports various cropping modes, including center cropping, custom XY cropping, and free-form polygonal cropping. This node is useful for preparing image data for specific tasks or extracting regions of interest.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.FUNCTION: (EnumCropMode._member_names_, {
                    "default": EnumCropMode.CENTER.name}),
                Lexicon.XY: ("VEC2", {
                    "default": (0, 0), "mij": 0, "maj": 1,
                    "label": ["X", "Y"]}),
                Lexicon.WH: ("VEC2", {
                    "default": (512, 512), "mij": IMAGE_SIZE_MIN, "int": True,
                    "label": ["W", "H"]}),
                Lexicon.TLTR: ("VEC4", {
                    "default": (0, 0, 0, 1), "mij": 0, "maj": 1,
                    "label": ["TOP", "LEFT", "TOP", "RIGHT"],}),
                Lexicon.BLBR: ("VEC4", {
                    "default": (1, 0, 1, 1), "mij": 0, "maj": 1,
                    "label": ["BOTTOM", "LEFT", "BOTTOM", "RIGHT"],}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        func = parse_param(kw, Lexicon.FUNCTION, EnumCropMode, EnumCropMode.CENTER.name)
        # if less than 1 then use as scalar, over 1 = int(size)
        xy = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0,))
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)
        tltr = parse_param(kw, Lexicon.TLTR, EnumConvertType.VEC4, (0, 0, 0, 1,))
        blbr = parse_param(kw, Lexicon.BLBR, EnumConvertType.VEC4, (1, 0, 1, 1,))
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
        params = list(zip_longest_fill(pA, func, xy, wihi, tltr, blbr, matte))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, func, xy, wihi, tltr, blbr, matte) in enumerate(params):
            width, height = wihi
            pA = tensor_to_cv(pA) if pA is not None else channel_solid(width, height)
            alpha = None
            if pA.ndim == 3 and pA.shape[2] == 4:
                alpha = image_mask(pA)

            if func == EnumCropMode.FREE:
                x1, y1, x2, y2 = tltr
                x4, y4, x3, y3 = blbr
                points = (x1 * width, y1 * height), (x2 * width, y2 * height), \
                    (x3 * width, y3 * height), (x4 * width, y4 * height)
                pA = image_crop_polygonal(pA, points)
                if alpha is not None:
                    alpha = image_crop_polygonal(alpha, points)
                    pA[..., 3] = alpha[..., 0][:,:]
            elif func == EnumCropMode.XY:
                pA = image_crop(pA, width, height, xy)
            else:
                pA = image_crop_center(pA, width, height)
            images.append(cv_to_tensor_full(pA, matte))
            pbar.update_absolute(idx)
        return image_stack(images)

class FlattenNode(CozyImageNode):
    NAME = "FLATTEN (JOV) ⬇️"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Combine multiple input images into a single image by summing their pixel values. This operation is useful for merging multiple layers or images into one composite image, such as combining different elements of a design or merging masks. Users can specify the blending mode and interpolation method to control how the images are combined. Additionally, a matte can be applied to adjust the transparency of the final composite image.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.MODE: (EnumScaleMode._member_names_, {
                    "default": EnumScaleMode.MATTE.name,}),
                Lexicon.WH: ("VEC2", {
                    "default": (512, 512), "mij":1, "int": True,
                    "label": ["W", "H"]}),
                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
                    "default": EnumInterpolation.LANCZOS4.name,}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,}),
                Lexicon.OFFSET: ("VEC2", {
                    "default": (0, 0), "mij":0, "int": True,
                    "label": ["X", "Y"]}),
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        imgs = parse_dynamic(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        if imgs is None:
            logger.warning("no images to flatten")
            return ()

        # be less dumb when merging
        pA = [tensor_to_cv(i) for i in imgs]
        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)[0]
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), 1)[0]
        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)[0]
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0]
        offset = parse_param(kw, Lexicon.OFFSET, EnumConvertType.VEC2INT, (0, 0), 0)[0]
        w, h = wihi
        x, y = offset
        pA = image_flatten(pA, x, y, w, h, mode=mode, sample=sample)
        pA = [cv_to_tensor_full(pA, matte)]
        return image_stack(pA)

class SplitNode(CozyBaseNode):
    NAME = "SPLIT (JOV) 🎭"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = ("IMAGE", "IMAGE",)
    RETURN_NAMES = ("IMAGEA", "IMAGEB",)
    OUTPUT_TOOLTIPS = (
        "Left/Top image",
        "Right/Bottom image"
    )
    DESCRIPTION = """
Split an image into two or four images based on the percentages for width and height.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.VALUE: ("FLOAT", {
                    "default": 0.5, "min": 0, "max": 1, "step": 0.001
                }),
                Lexicon.FLIP: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "Horizontal split (False) or Vertical split (True)"
                }),
                Lexicon.MODE: (EnumScaleMode._member_names_, {
                    "default": EnumScaleMode.MATTE.name,}),
                Lexicon.WH: ("VEC2", {
                    "default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True,
                    "label": ["W", "H"]}),
                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
                    "default": EnumInterpolation.LANCZOS4.name,}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        percent = parse_param(kw, Lexicon.VALUE, EnumConvertType.FLOAT, 0.5, 0, 1)
        flip = parse_param(kw, Lexicon.FLIP, EnumConvertType.BOOLEAN, False)
        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)
        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
        params = list(zip_longest_fill(pA, percent, flip, mode, wihi, sample, matte))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, percent, flip, mode, wihi, sample, matte) in enumerate(params):
            w, h = wihi
            pA = channel_solid(w, h, matte) if pA is None else tensor_to_cv(pA)

            if flip:
                size = pA.shape[1]
                percent = max(1, min(size-1, int(size * percent)))
                image_a = pA[:, :percent]
                image_b = pA[:, percent:]
            else:
                size = pA.shape[0]
                percent = max(1, min(size-1, int(size * percent)))
                image_a = pA[:percent, :]
                image_b = pA[percent:, :]

            if mode != EnumScaleMode.MATTE:
                image_a = image_scalefit(image_a, w, h, mode, sample)
                image_b = image_scalefit(image_b, w, h, mode, sample)

            images.append([cv_to_tensor(img) for img in [image_a, image_b]])
            pbar.update_absolute(idx)
        return image_stack(images)

class StackNode(CozyImageNode):
    NAME = "STACK (JOV) ➕"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Merge multiple input images into a single composite image by stacking them along a specified axis.

Options include axis, stride, scaling mode, width and height, interpolation method, and matte color.

The axis parameter allows for horizontal, vertical, or grid stacking of images, while stride controls the spacing between them.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.AXIS: (EnumOrientation._member_names_, {
                    "default": EnumOrientation.GRID.name,}),
                Lexicon.STEP: ("INT", {
                    "default": 1, "min": 0,
                    "tooltip":"How many images are placed before a new row starts (stride)"}),
                Lexicon.MODE: (EnumScaleMode._member_names_, {
                    "default": EnumScaleMode.MATTE.name,}),
                Lexicon.WH: ("VEC2", {
                    "default": (512, 512), "mij": IMAGE_SIZE_MIN, "int": True,
                    "label": ["W", "H"]}),
                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
                    "default": EnumInterpolation.LANCZOS4.name,}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        images = parse_dynamic(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        if len(images) == 0:
            logger.warning("no images to stack")
            return

        images = [tensor_to_cv(i) for i in images]
        axis = parse_param(kw, Lexicon.AXIS, EnumOrientation, EnumOrientation.GRID.name)[0]
        stride = parse_param(kw, Lexicon.STEP, EnumConvertType.INT, 1, 0)[0]
        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)[0]
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)[0]
        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)[0]
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0]
        img = image_stacker(images, axis, stride) #, matte)
        if mode != EnumScaleMode.MATTE:
            w, h = wihi
            img = image_scalefit(img, w, h, mode, sample)
        rgba, rgb, mask = cv_to_tensor_full(img, matte)
        return rgba.unsqueeze(0), rgb.unsqueeze(0), mask.unsqueeze(0)

class TransformNode(CozyImageNode):
    NAME = "TRANSFORM (JOV) 🏝️"
    CATEGORY = JOV_CATEGORY
    DESCRIPTION = """
Apply various geometric transformations to images, including translation, rotation, scaling, mirroring, tiling and perspective projection. It offers extensive control over image manipulation to achieve desired visual effects.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES(prompt=True, dynprompt=True)
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.MASK: (COZY_TYPE_IMAGE, {
                    "tooltip": "Override Image mask"}),
                Lexicon.XY: ("VEC2", {
                    "default": (0, 0,), "mij": -1, "maj": 1,
                    "label": ["X", "Y"]}),
                Lexicon.ANGLE: ("FLOAT", {
                    "default": 0, "min": -sys.float_info.max, "max": sys.float_info.max, "step": 0.1,}),
                Lexicon.SIZE: ("VEC2", {
                    "default": (1, 1), "mij": 0.001,
                    "label": ["X", "Y"]}),
                Lexicon.TILE: ("VEC2", {
                    "default": (1, 1), "mij": 1,
                    "label": ["X", "Y"]}),
                Lexicon.EDGE: (EnumEdge._member_names_, {
                    "default": EnumEdge.CLIP.name}),
                Lexicon.MIRROR: (EnumMirrorMode._member_names_, {
                    "default": EnumMirrorMode.NONE.name}),
                Lexicon.PIVOT: ("VEC2", {
                    "default": (0.5, 0.5), "mij": 0, "maj": 1, "step": 0.01,
                    "label": ["X", "Y"]}),
                Lexicon.PROJECTION: (EnumProjection._member_names_, {
                    "default": EnumProjection.NORMAL.name}),
                Lexicon.TLTR: ("VEC4", {
                    "default": (0, 0, 1, 0), "mij": 0, "maj": 1, "step": 0.005,
                    "label": ["TOP", "LEFT", "TOP", "RIGHT"],}),
                Lexicon.BLBR: ("VEC4", {
                    "default": (0, 1, 1, 1), "mij": 0, "maj": 1, "step": 0.005,
                    "label": ["BOTTOM", "LEFT", "BOTTOM", "RIGHT"],}),
                Lexicon.STRENGTH: ("FLOAT", {
                    "default": 1, "min": 0, "max": 1, "step": 0.005}),
                Lexicon.MODE: (EnumScaleMode._member_names_, {
                    "default": EnumScaleMode.MATTE.name,}),
                Lexicon.WH: ("VEC2", {
                    "default": (512, 512), "mij": IMAGE_SIZE_MIN, "int": True,
                    "label": ["W", "H"]}),
                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
                    "default": EnumInterpolation.LANCZOS4.name,}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> RGBAMaskType:
        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        mask = parse_param(kw, Lexicon.MASK, EnumConvertType.IMAGE, None)
        offset = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0), -1, 1)
        angle = parse_param(kw, Lexicon.ANGLE, EnumConvertType.FLOAT, 0)
        size = parse_param(kw, Lexicon.SIZE, EnumConvertType.VEC2, (1, 1), 0.001)
        edge = parse_param(kw, Lexicon.EDGE, EnumEdge, EnumEdge.CLIP.name)
        mirror = parse_param(kw, Lexicon.MIRROR, EnumMirrorMode, EnumMirrorMode.NONE.name)
        mirror_pivot = parse_param(kw, Lexicon.PIVOT, EnumConvertType.VEC2, (0.5, 0.5), 0, 1)
        tile_xy = parse_param(kw, Lexicon.TILE, EnumConvertType.VEC2, (1, 1), 1)
        proj = parse_param(kw, Lexicon.PROJECTION, EnumProjection, EnumProjection.NORMAL.name)
        tltr = parse_param(kw, Lexicon.TLTR, EnumConvertType.VEC4, (0, 0, 1, 0), 0, 1)
        blbr = parse_param(kw, Lexicon.BLBR, EnumConvertType.VEC4, (0, 1, 1, 1), 0, 1)
        strength = parse_param(kw, Lexicon.STRENGTH, EnumConvertType.FLOAT, 1, 0, 1)
        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)
        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)
        params = list(zip_longest_fill(pA, mask, offset, angle, size, edge, tile_xy, mirror, mirror_pivot, proj, strength, tltr, blbr, mode, wihi, sample, matte))
        images = []
        pbar = ProgressBar(len(params))
        for idx, (pA, mask, offset, angle, size, edge, tile_xy, mirror, mirror_pivot, proj, strength, tltr, blbr, mode, wihi, sample, matte) in enumerate(params):
            pA = tensor_to_cv(pA) if pA is not None else channel_solid()
            if mask is None:
                mask = image_mask(pA, 255)
            else:
                mask = tensor_to_cv(mask)
            pA = image_mask_add(pA, mask)

            h, w = pA.shape[:2]
            pA = image_transform(pA, offset, angle, size, sample, edge)
            pA = image_crop_center(pA, w, h)

            if mirror != EnumMirrorMode.NONE:
                mpx, mpy = mirror_pivot
                pA = image_mirror(pA, mirror, mpx, mpy)
                pA = image_scalefit(pA, w, h, EnumScaleMode.FIT, sample)

            tx, ty = tile_xy
            if tx != 1. or ty != 1.:
                pA = image_edge_wrap(pA, tx / 2 - 0.5, ty / 2 - 0.5)
                pA = image_scalefit(pA, w, h, EnumScaleMode.FIT, sample)

            match proj:
                case EnumProjection.PERSPECTIVE:
                    x1, y1, x2, y2 = tltr
                    x4, y4, x3, y3 = blbr
                    sh, sw = pA.shape[:2]
                    x1, x2, x3, x4 = map(lambda x: x * sw, [x1, x2, x3, x4])
                    y1, y2, y3, y4 = map(lambda y: y * sh, [y1, y2, y3, y4])
                    pA = remap_perspective(pA, [[x1, y1], [x2, y2], [x3, y3], [x4, y4]])
                case EnumProjection.SPHERICAL:
                    pA = remap_sphere(pA, strength)
                case EnumProjection.FISHEYE:
                    pA = remap_fisheye(pA, strength)
                case EnumProjection.POLAR:
                    pA = remap_polar(pA)

            if proj != EnumProjection.NORMAL:
                pA = image_scalefit(pA, w, h, EnumScaleMode.FIT, sample)

            if mode != EnumScaleMode.MATTE:
                w, h = wihi
                pA = image_scalefit(pA, w, h, mode, sample)

            images.append(cv_to_tensor_full(pA, matte))
            pbar.update_absolute(idx)
        return image_stack(images)


================================================
FILE: core/utility/__init__.py
================================================


================================================
FILE: core/utility/batch.py
================================================
""" Jovimetrix - Utility """

import os
import sys
import json
import glob
import random
from enum import Enum
from pathlib import Path
from itertools import zip_longest
from typing import Any

import torch
import numpy as np

from comfy.utils import ProgressBar
from nodes import interrupt_processing

from cozy_comfyui import \
    logger, \
    IMAGE_SIZE_MIN, \
    InputType, EnumConvertType, TensorType, \
    deep_merge, parse_dynamic, parse_param

from cozy_comfyui.lexicon import \
    Lexicon

from cozy_comfyui.node import \
    COZY_TYPE_ANY, \
    CozyBaseNode

from cozy_comfyui.image import \
    IMAGE_FORMATS

from cozy_comfyui.image.compose import \
    EnumScaleMode, EnumInterpolation, \
    image_matte, image_scalefit

from cozy_comfyui.image.convert import \
    image_convert, cv_to_tensor, cv_to_tensor_full, tensor_to_cv

from cozy_comfyui.image.misc import \
    image_by_size

from cozy_comfyui.image.io import \
    image_load

from cozy_comfyui.api import \
    parse_reset, comfy_api_post

from ... import \
    ROOT

JOV_CATEGORY = "UTILITY/BATCH"

# ==============================================================================
# === ENUMERATION ===
# ==============================================================================

class EnumBatchMode(Enum):
    MERGE = 30
    PICK = 10
    SLICE = 15
    INDEX_LIST = 20
    RANDOM = 5

# ==============================================================================
# === CLASS ===
# ==============================================================================

class ArrayNode(CozyBaseNode):
    NAME = "ARRAY (JOV) 📚"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY, "INT",)
    RETURN_NAMES = ("ARRAY", "LENGTH",)
    OUTPUT_IS_LIST = (True, True,)
    OUTPUT_TOOLTIPS = (
        "Output list from selected operation",
        "Length of output list",
        "Full input list",
        "Length of all input elements",
    )
    DESCRIPTION = """
Processes a batch of data based on the selected mode. Merge, pick, slice, random select, or index items. Can also reverse the order of items.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.MODE: (EnumBatchMode._member_names_, {
                    "default": EnumBatchMode.MERGE.name,
                    "tooltip": "Select a single index, specific range, custom index list or randomized"}),
                Lexicon.RANGE: ("VEC3", {
                    "default": (0, 0, 1), "mij": 0, "int": True,
                    "tooltip": "The start, end and step for the range"}),
                Lexicon.INDEX: ("STRING", {
                    "default": "",
                    "tooltip": "Comma separated list of indicies to export"}),
                Lexicon.COUNT: ("INT", {
                    "default": 0, "min": 0, "max": sys.maxsize,
                    "tooltip": "How many items to return"}),
                Lexicon.REVERSE: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "Reverse the calculated output list"}),
                Lexicon.SEED: ("INT", {
                    "default": 0, "min": 0, "max": sys.maxsize}),
            }
        })
        return Lexicon._parse(d)

    @classmethod
    def batched(cls, iterable, chunk_size, expand:bool=False, fill:Any=None) -> list[Any]:
        if expand:
            iterator = iter(iterable)
            return zip_longest(*[iterator] * chunk_size, fillvalue=fill)
        return [iterable[i: i + chunk_size] for i in range(0, len(iterable), chunk_size)]

    def run(self, **kw) -> tuple[int, list]:
        data_list = parse_dynamic(kw, Lexicon.DYNAMIC, EnumConvertType.ANY, None)
        mode = parse_param(kw, Lexicon.MODE, EnumBatchMode, EnumBatchMode.MERGE.name)[0]
        slice_range = parse_param(kw, Lexicon.RANGE, EnumConvertType.VEC3INT, (0, 0, 1))[0]
        index = parse_param(kw, Lexicon.INDEX, EnumConvertType.STRING, "")[0]
        count = parse_param(kw, Lexicon.COUNT, EnumConvertType.INT, 0, 0)[0]
        reverse = parse_param(kw, Lexicon.REVERSE, EnumConvertType.BOOLEAN, False)[0]
        seed = parse_param(kw, Lexicon.SEED, EnumConvertType.INT, 0, 0)[0]

        data = []
        # track latents since they need to be added back to Dict['samples']
        output_type = None
        for b in data_list:
            if isinstance(b, dict) and "samples" in b:
                # latents are batched in the x.samples key
                if output_type and output_type != EnumConvertType.LATENT:
                    raise Exception(f"Cannot mix input types {output_type} vs {EnumConvertType.LATENT}")
                data.extend(b["samples"])
                output_type = EnumConvertType.LATENT

            elif isinstance(b, TensorType):
                if output_type and output_type not in (EnumConvertType.IMAGE, EnumConvertType.MASK):
                    raise Exception(f"Cannot mix input types {output_type} vs {EnumConvertType.IMAGE}")

                if b.ndim == 4:
                    b = [i for i in b]
                else:
                    b = [b]

                for x in b:
                    if x.ndim == 2:
                        x = x.unsqueeze(-1)
                    data.append(x)

                output_type = EnumConvertType.IMAGE

            elif b is not None:
                idx_type = type(b)
                if output_type and output_type != idx_type:
                    raise Exception(f"Cannot mix input types {output_type} vs {idx_type}")
                data.append(b)

        if len(data) == 0:
            logger.warning("no data for list")
            return [], [0], [], [0]

        if mode == EnumBatchMode.PICK:
            start, end, step = slice_range
            start = start if start < len(data) else -1
            data = [data[start]]
        elif mode == EnumBatchMode.SLICE:
            start, end, step = slice_range
            start = abs(start)
            end = len(data) if end == 0 else abs(end+1)
            if step == 0:
                step = 1
            elif step < 0:
                data = data[::-1]
                step = abs(step)
            data = data[start:end:step]
        elif mode == EnumBatchMode.RANDOM:
            random.seed(seed)
            if count == 0:
                count = len(data)
            else:
                count = max(1, min(len(data), count))
            data = random.sample(data, k=count)
        elif mode == EnumBatchMode.INDEX_LIST:
            junk = []
            for x in index.split(','):
                if '-' in x:
                    x = x.split('-')
                    for idx, v in enumerate(x):
                        try:
                            x[idx] = max(0, min(len(data)-1, int(v)))
                        except ValueError as e:
                            logger.error(e)
                            x[idx] = 0

                    if x[0] > x[1]:
                        tmp = list(range(x[0], x[1]-1, -1))
                    else:
                        tmp = list(range(x[0], x[1]+1))
                    junk.extend(tmp)
                else:
                    idx = max(0, min(len(data)-1, int(x)))
                    junk.append(idx)
            if len(junk) > 0:
                data = [data[i] for i in junk]

        if len(data) == 0:
            logger.warning("no data for list")
            return [], [0], [], [0]

        # reverse before?
        if reverse:
            data.reverse()

        # cut the list down first
        if count > 0:
            data = data[0:count]

        size = len(data)
        if output_type == EnumConvertType.IMAGE:
            _, w, h = image_by_size(data)
            result = []
            for d in data:
                w2, h2, cc = d.shape
                if w != w2 or h != h2 or cc != 4:
                    d = tensor_to_cv(d)
                    d = image_convert(d, 4)
                    d = image_matte(d, (0,0,0,0), w, h)
                    d = cv_to_tensor(d)
                d = d.unsqueeze(0)
                result.append(d)

            size = len(result)
            data = torch.stack(result)
        else:
            data = [data]

        return (data, [size],)

class BatchToList(CozyBaseNode):
    NAME = "BATCH TO LIST (JOV)"
    NAME_PRETTY = "BATCH TO LIST (JOV)"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY, )
    RETURN_NAMES = ("LIST", )
    DESCRIPTION = """
Convert a batch of values into a pure python list of values.
"""
    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        return deep_merge(d, {
            "optional": {
                Lexicon.BATCH: (COZY_TYPE_ANY, {}),
            }
        })

    def run(self, **kw) -> tuple[list[Any]]:
        batch = parse_param(kw, Lexicon.BATCH, EnumConvertType.LIST, [])
        batch = [f[0] for f in batch]
        return (batch,)

class QueueBaseNode(CozyBaseNode):
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY, COZY_TYPE_ANY, "STRING", "INT", "INT", "BOOLEAN")
    RETURN_NAMES = ("❔", "QUEUE", "CURRENT", "INDEX", "TOTAL", "TRIGGER", )
    #OUTPUT_IS_LIST = (True, True, True, True, True, True,)
    VIDEO_FORMATS = ['.wav', '.mp3', '.webm', '.mp4', '.avi', '.wmv', '.mkv', '.mov', '.mxf']

    @classmethod
    def IS_CHANGED(cls, **kw) -> float:
        return float('nan')

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.QUEUE: ("STRING", {
                    "default": "./res/img/test-a.png", "multiline": True,
                    "tooltip": "Current items to process during Queue iteration"}),
                Lexicon.RECURSE: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "Recurse through all subdirectories found"}),
                Lexicon.BATCH: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "Load all items, if they are loadable items, i.e. batch load images from the Queue's list"}),
                Lexicon.SELECT: ("INT", {
                    "default": 0, "min": 0,
                    "tooltip": "The index to use for the current queue item. 0 will move to the next item each queue run"}),
                Lexicon.HOLD: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "Hold the item at the current queue index"}),
                Lexicon.STOP: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "When the Queue is out of items, send a `HALT` to ComfyUI"}),
                Lexicon.LOOP: ("BOOLEAN", {
                    "default": True,
                    "tooltip": "If the queue should loop. If `False` and if there are more iterations, will send the previous image"}),
                Lexicon.RESET: ("BOOLEAN", {
                    "default": False,
                    "tooltip": "Reset the queue back to index 1"}),
            }
        })
        return Lexicon._parse(d)

    def __init__(self) -> None:
        self.__index = 0
        self.__q = None
        self.__index_last = None
        self.__len = 0
        self.__current = None
        self.__previous = None
        self.__ident = None
        self.__last_q_value = {}

    # consume the list into iterable items to load/process
    def __parseQ(self, data: Any, recurse: bool=False) -> list[str]:
        entries = []
        for line in data.strip().split('\n'):
            if len(line) == 0:
                continue

            data = [line]
            if not line.lower().startswith("http"):
                # <directory>;*.png;*.gif;*.jpg
                base_path_str, tail = os.path.split(line)
                filters = [p.strip() for p in tail.split(';')]

                base_path = Path(base_path_str)
                if base_path.is_absolute():
                    search_dir = base_path if base_path.is_dir() else base_path.parent
                else:
                    search_dir = (ROOT / base_path).resolve()

                # Check if the base directory exists
                if search_dir.exists():
                    if search_dir.is_dir():
                        new_data = []
                        filters = filters if len(filters) > 0 and isinstance(filters[0], str) else IMAGE_FORMATS
                        for pattern in filters:
                            found = glob.glob(str(search_dir / pattern), recursive=recurse)
                            new_data.extend([str(Path(f).resolve()) for f in found if Path(f).is_file()])
                        if len(new_data):
                            data = new_data
                    elif search_dir.is_file():
                        path = str(search_dir.resolve())
                        if path.lower().endswith('.txt'):
                            with open(path, 'r', encoding='utf-8') as f:
                                data = f.read().split('\n')
                        else:
                            data = [path]
                elif len(results := glob.glob(str(search_dir))) > 0:
                    data = [x.replace('\\', '/') for x in results]

            if len(data):
                ret = []
                for x in data:
                    try: ret.append(float(x))
                    except: ret.append(x)
                entries.extend(ret)
        return entries

    # turn Q element into actual hard type
    def process(self, q_data: Any) -> TensorType | str | dict:
        # single Q cache to skip loading single entries over and over
        # @TODO: MRU cache strategy
        if (val := self.__last_q_value.get(q_data, None)) is not None:
            return val
        if isinstance(q_data, (str,)):
            _, ext = os.path.splitext(q_data)
            if ext in IMAGE_FORMATS:
                data = image_load(q_data)[0]
                self.__last_q_value[q_data] = data
            #elif ext in self.VIDEO_FORMATS:
            #    data = load_file(q_data)
            #    self.__last_q_value[q_data] = data
            elif ext == '.json':
                with open(q_data, 'r', encoding='utf-8') as f:
                    self.__last_q_value[q_data] = json.load(f)
        return self.__last_q_value.get(q_data, q_data)

    def run(self, ident, **kw) -> tuple[Any, list[str], str, int, int]:

        self.__ident = ident
        # should work headless as well

        if (new_val := parse_param(kw, Lexicon.SELECT, EnumConvertType.INT, 0)[0]) > 0:
            self.__index = new_val - 1

        reset = parse_reset(ident) > 0
        if reset or parse_param(kw, Lexicon.RESET, EnumConvertType.BOOLEAN, False)[0]:
            self.__q = None
            self.__index = 0

        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)[0]
        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)[0]
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)[0]
        w, h = wihi
        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0]

        if self.__q is None:
            # process Q into ...
            # check if folder first, file, then string.
            # entry is: data, <filter if folder:*.png,*.jpg>, <repeats:1+>
            recurse = parse_param(kw, Lexicon.RECURSE, EnumConvertType.BOOLEAN, False)[0]
            q = parse_param(kw, Lexicon.QUEUE, EnumConvertType.STRING, "")[0]
            self.__q = self.__parseQ(q, recurse)
            self.__len = len(self.__q)
            self.__index_last = 0
            self.__previous = self.__q[0] if len(self.__q) else None
            if self.__previous:
                self.__previous = self.process(self.__previous)

        # make sure we have more to process if are a single fire queue
        stop = parse_param(kw, Lexicon.STOP, EnumConvertType.BOOLEAN, False)[0]
        if stop and self.__index >= self.__len:
            comfy_api_post("jovi-queue-done", ident, self.status)
            interrupt_processing()
            return self.__previous, self.__q, self.__current, self.__index_last+1, self.__len

        if (wait := parse_param(kw, Lexicon.HOLD, EnumConvertType.BOOLEAN, False))[0] == True:
            self.__index = self.__index_last

        # otherwise loop around the end
        loop = parse_param(kw, Lexicon.LOOP, EnumConvertType.BOOLEAN, False)[0]
        if loop == True:
            self.__index %= self.__len
        else:
            self.__index = min(self.__index, self.__len-1)

        self.__current = self.__q[self.__index]
        data = self.__previous
        self.__index_last = self.__index
        info = f"QUEUE #{ident} [{self.__current}] ({self.__index})"
        batched = False
        if (batched := parse_param(kw, Lexicon.BATCH, EnumConvertType.BOOLEAN, False)[0]) == True:
            data = []
            mw, mh, mc = 0, 0, 0
            for idx in range(self.__len):
                ret = self.process(self.__q[idx])
                if isinstance(ret, (np.ndarray,)):
                    h2, w2, c = ret.shape
                    mw, mh, mc = max(mw, w2), max(mh, h2), max(mc, c)
                data.append(ret)

            if mw != 0 or mh != 0 or mc != 0:
                ret = []
                # matte = [matte[0], matte[1], matte[2], 0]
                pbar = ProgressBar(self.__len)
                for idx, d in enumerate(data):
                    d = image_convert(d, mc)
                    if mode != EnumScaleMode.MATTE:
                        d = image_scalefit(d, w, h, mode, sample, matte)
                        d = image_scalefit(d, w, h, EnumScaleMode.RESIZE_MATTE, sample, matte)
                    else:
                        d = image_matte(d, matte, mw, mh)
                    ret.append(cv_to_tensor(d))
                    pbar.update_absolute(idx)
                data = torch.stack(ret)
        elif wait == True:
            info += f" PAUSED"
        else:
            data = self.process(self.__q[self.__index])
            if isinstance(data, (np.ndarray,)):
                if mode != EnumScaleMode.MATTE:
                    data = image_scalefit(data, w, h, mode, sample)
                data = cv_to_tensor(data).unsqueeze(0)
            self.__index += 1

        self.__previous = data
        comfy_api_post("jovi-queue-ping", ident, self.status)
        if stop and batched:
            interrupt_processing()
        return data, self.__q, self.__current, self.__index, self.__len, self.__index == self.__len or batched

    @property
    def status(self) -> dict[str, Any]:
        return {
            "id": self.__ident,
            "c": self.__current,
            "i": self.__index_last,
            "s": self.__len,
            "l": self.__q
        }

class QueueNode(QueueBaseNode):
    NAME = "QUEUE (JOV) 🗃"
    OUTPUT_TOOLTIPS = (
        "Current item selected from the Queue list",
        "The entire Queue list",
        "Current item selected from the Queue list as a string",
        "Current index for the selected item in the Queue list",
        "Total items in the current Queue List",
        "Send a True signal when the queue end index is reached"
    )
    DESCRIPTION = """
Manage a queue of items, such as file paths or data. Supports various formats including images, videos, text files, and JSON files. You can specify the current index for the queue item, enable pausing the queue, or reset it back to the first index. The node outputs the current item in the queue, the entire queue, the current index, and the total number of items in the queue.
"""

class QueueTooNode(QueueBaseNode):
    NAME = "QUEUE TOO (JOV) 🗃"
    RETURN_TYPES = ("IMAGE", "IMAGE", "MASK", "STRING", "INT", "INT", "BOOLEAN")
    RETURN_NAMES = ("RGBA", "RGB", "MASK", "CURRENT", "INDEX", "TOTAL", "TRIGGER", )
    #OUTPUT_IS_LIST = (False, False, False, True, True, True, True,)
    OUTPUT_TOOLTIPS = (
        "Full channel [RGBA] image. If there is an alpha, the image will be masked out with it when using this output",
        "Three channel [RGB] image. There will be no alpha",
        "Single channel mask output",
        "Current item selected from the Queue list as a string",
        "Current index for the selected item in the Queue list",
        "Total items in the current Queue List",
        "Send a True signal when the queue end index is reached"
    )
    DESCRIPTION = """
Manage a queue of specific items: media files. Supports various image and video formats. You can specify the current index for the queue item, enable pausing the queue, or reset it back to the first index. The node outputs the current item in the queue, the entire queue, the current index, and the total number of items in the queue.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.MODE: (EnumScaleMode._member_names_, {
                    "default": EnumScaleMode.MATTE.name}),
                Lexicon.WH: ("VEC2", {
                    "default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True,
                    "label": ["W", "H"],}),
                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {
                    "default": EnumInterpolation.LANCZOS4.name,}),
                Lexicon.MATTE: ("VEC4", {
                    "default": (0, 0, 0, 255), "rgb": True,}),
            },
            "hidden": d.get("hidden", {})
        })
        return Lexicon._parse(d)

    def run(self, ident, **kw) -> tuple[TensorType, TensorType, TensorType, str, int, int, bool]:
        data, _, current, index, total, trigger = super().run(ident, **kw)
        if not isinstance(data, (TensorType, )):
            data = [None, None, None]
        else:
            matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0]
            data = [tensor_to_cv(d) for d in data]
            data = [cv_to_tensor_full(d, matte) for d in data]
            data = [torch.stack(d) for d in zip(*data)]
        return *data, current, index, total, trigger


================================================
FILE: core/utility/info.py
================================================
""" Jovimetrix - Utility """

import io
import json
from typing import Any

import torch
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

from cozy_comfyui import \
    IMAGE_SIZE_MIN, \
    InputType, EnumConvertType, TensorType, \
    deep_merge, parse_dynamic, parse_param

from cozy_comfyui.lexicon import \
    Lexicon

from cozy_comfyui.node import \
    COZY_TYPE_IMAGE, \
    CozyBaseNode

from cozy_comfyui.image.convert import \
    pil_to_tensor

from cozy_comfyui.api import \
    parse_reset

JOV_CATEGORY = "UTILITY/INFO"

# ==============================================================================
# === SUPPORT ===
# ==============================================================================

def decode_tensor(tensor: TensorType) -> str:
    if tensor.ndim > 3:
        b, h, w, cc = tensor.shape
    elif tensor.ndim > 2:
        cc = 1
        b, h, w = tensor.shape
    else:
        b = 1
        cc = 1
        h, w = tensor.shape
    return f"{b}x{w}x{h}x{cc}"

# ==============================================================================
# === CLASS ===
# ==============================================================================

class AkashicData:
    def __init__(self, **kw) -> None:
        for k, v in kw.items():
            setattr(self, k, v)

class AkashicNode(CozyBaseNode):
    NAME = "AKASHIC (JOV) 📓"
    CATEGORY = JOV_CATEGORY
    RETURN_NAMES = ()
    OUTPUT_NODE = True
    DESCRIPTION = """
Visualize data. It accepts various types of data, including images, text, and other types. If no input is provided, it returns an empty result. The output consists of a dictionary containing UI-related information, such as base64-encoded images and text representations of the input data.
"""

    def run(self, **kw) -> tuple[Any, Any]:
        kw.pop('ident', None)
        o = kw.values()
        output = {"ui": {"b64_images": [], "text": []}}
        if o is None or len(o) == 0:
            output["ui"]["result"] = (None, None, )
            return output

        def __parse(val) -> str:
            ret = ''
            typ = ''.join(repr(type(val)).split("'")[1:2])
            if isinstance(val, dict):
                # mixlab layer?
                if (image := val.get('image', None)) is not None:
                    ret = image
                    if (mask := val.get('mask', None)) is not None:
                        while len(mask.shape) < len(image.shape):
                            mask = mask.unsqueeze(-1)
                        ret = torch.cat((image, mask), dim=-1)
                    if ret.ndim < 4:
                        ret = ret.unsqueeze(-1)
                    ret = decode_tensor(ret)
                    typ = "Mixlab Layer"

                # vector patch....
                elif 'xyzw' in val:
                    val = val["xyzw"]
                    typ = "VECTOR"
                # latents....
                elif 'samples' in val:
                    ret = decode_tensor(val['samples'][0])
                    typ = "LATENT"
                # empty bugger
                elif len(val) == 0:
                    ret = ""
                else:
                    try:
                        ret = json.dumps(val, indent=3, separators=(',', ': '))
                    except Exception as e:
                        ret = str(e)
            elif isinstance(val, (tuple, set, list,)):
                if (size := len(val)) > 0:
                    if isinstance(val, (np.ndarray,)):
                        ret = str(val)
                        typ = "NUMPY ARRAY"
                    elif isinstance(val[0], (TensorType,)):
                        ret = decode_tensor(val[0])
                        typ = type(val[0])
                    elif size == 1 and isinstance(val[0], (list,)) and isinstance(val[0][0], (TensorType,)):
                        ret = decode_tensor(val[0][0])
                        typ = "CONDITIONING"
                    elif all(isinstance(i, (tuple, set, list)) for i in val):
                        ret = "[\n" + ",\n".join(f"  {row}" for row in val) + "\n]"
                        # ret = json.dumps(val, indent=4)
                    elif all(isinstance(i, (bool, int, float)) for i in val):
                        ret = ','.join([str(x) for x in val])
                    else:
                        ret = str(val)
            elif isinstance(val, bool):
                ret = "True" if val else "False"
            elif isinstance(val, TensorType):
                ret = decode_tensor(val)
            else:
                ret = str(val)
            return json.dumps({typ: ret}, separators=(',', ': '))

        for x in o:
            data = ""
            if len(x) > 1:
                data += "::\n"
            for p in x:
                data += __parse(p) + "\n"
            output["ui"]["text"].append(data)
        return output

class GraphNode(CozyBaseNode):
    NAME = "GRAPH (JOV) 📈"
    CATEGORY = JOV_CATEGORY
    OUTPUT_NODE = True
    RETURN_TYPES = ("IMAGE", )
    RETURN_NAMES = ("IMAGE",)
    OUTPUT_TOOLTIPS = (
        "The graphed image",
    )
    DESCRIPTION = """
Visualize a series of data points over time. It accepts a dynamic number of values to graph and display, with options to reset the graph or specify the number of values. The output is an image displaying the graph, allowing users to analyze trends and patterns.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.RESET: ("BOOLEAN", {
                    "default": False,
                    "tooltip":"Clear the graph history"}),
                Lexicon.VALUE: ("INT", {
                    "default": 60, "min": 0,
                    "tooltip":"Number of values to graph and display"}),
                Lexicon.WH: ("VEC2", {
                    "default": (512, 512), "mij":IMAGE_SIZE_MIN, "int": True,
                    "label": ["W", "H"]}),
            }
        })
        return Lexicon._parse(d)

    @classmethod
    def IS_CHANGED(cls, **kw) -> float:
        return float('nan')

    def __init__(self, *arg, **kw) -> None:
        super().__init__(*arg, **kw)
        self.__history = []
        self.__fig, self.__ax = plt.subplots(figsize=(5.12, 5.12))

    def run(self, ident, **kw) -> tuple[TensorType]:
        slice = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 60)[0]
        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)[0]
        if parse_reset(ident) > 0 or parse_param(kw, Lexicon.RESET, EnumConvertType.BOOLEAN, False)[0]:
            self.__history = []
        longest_edge = 0
        dynamic = parse_dynamic(kw, Lexicon.DYNAMIC, EnumConvertType.FLOAT, 0, extend=False)
        self.__ax.clear()
        for idx, val in enumerate(dynamic):
            if isinstance(val, (set, tuple,)):
                val = list(val)
            if not isinstance(val, (list, )):
                val = [val]
            while len(self.__history) <= idx:
                self.__history.append([])
            self.__history[idx].extend(val)
            if slice > 0:
                stride = max(0, -slice + len(self.__history[idx]) + 1)
                longest_edge = max(longest_edge, stride)
                self.__history[idx] = self.__history[idx][stride:]
            self.__ax.plot(self.__history[idx], color="rgbcymk"[idx])

        self.__history = self.__history[:slice+1]
        width, height = wihi
        width, height = (width / 100., height / 100.)
        self.__fig.set_figwidth(width)
        self.__fig.set_figheight(height)
        self.__fig.canvas.draw_idle()
        buffer = io.BytesIO()
        self.__fig.savefig(buffer, format="png")
        buffer.seek(0)
        image = Image.open(buffer)
        return (pil_to_tensor(image),)

class ImageInfoNode(CozyBaseNode):
    NAME = "IMAGE INFO (JOV) 📚"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = ("INT", "INT", "INT", "INT", "VEC2", "VEC3")
    RETURN_NAMES = ("COUNT", "W", "H", "C", "WH", "WHC")
    OUTPUT_TOOLTIPS = (
        "Batch count",
        "Width",
        "Height",
        "Channels",
        "Width & Height as a VEC2",
        "Width, Height and Channels as a VEC3"
    )
    DESCRIPTION = """
Exports and Displays immediate information about images.
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {})
            }
        })
        return Lexicon._parse(d)

    def run(self, **kw) -> tuple[int, list]:
        image = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)
        height, width, cc = image[0].shape
        return (len(image), width, height, cc, (width, height), (width, height, cc))


================================================
FILE: core/utility/io.py
================================================
""" Jovimetrix - Utility """

import os
import json
from uuid import uuid4
from pathlib import Path
from typing import Any

import torch
import numpy as np
from PIL import Image
from PIL.PngImagePlugin import PngInfo

from comfy.utils import ProgressBar
from folder_paths import get_output_directory
from nodes import interrupt_processing

from cozy_comfyui import \
    logger, \
    InputType, EnumConvertType, \
    deep_merge, parse_param, parse_param_list, zip_longest_fill

from cozy_comfyui.lexicon import \
    Lexicon

from cozy_comfyui.node import \
    COZY_TYPE_IMAGE, COZY_TYPE_ANY, \
    CozyBaseNode

from cozy_comfyui.image.convert import \
    tensor_to_pil, tensor_to_cv

from cozy_comfyui.api import \
    TimedOutException, ComfyAPIMessage, \
    comfy_api_post

# ==============================================================================
# === GLOBAL ===
# ==============================================================================

JOV_CATEGORY = "UTILITY/IO"

# min amount of time before showing the cancel dialog
JOV_DELAY_MIN = 5
try: JOV_DELAY_MIN = int(os.getenv("JOV_DELAY_MIN", JOV_DELAY_MIN))
except: pass
JOV_DELAY_MIN = max(1, JOV_DELAY_MIN)

# max 115 days
JOV_DELAY_MAX = 10000000
try: JOV_DELAY_MAX = int(os.getenv("JOV_DELAY_MAX", JOV_DELAY_MAX))
except: pass

FORMATS = ["gif", "png", "jpg"]
if (JOV_GIFSKI := os.getenv("JOV_GIFSKI", None)) is not None:
    if not os.path.isfile(JOV_GIFSKI):
        logger.error(f"gifski missing [{JOV_GIFSKI}]")
        JOV_GIFSKI = None
    else:
        FORMATS = ["gifski"] + FORMATS
        logger.info("gifski support")
else:
    logger.warning("no gifski support")

# ==============================================================================
# === SUPPORT ===
# ==============================================================================

def path_next(pattern: str) -> str:
    """
    Finds the next free path in an sequentially named list of files
    """
    i = 1
    while os.path.exists(pattern % i):
        i = i * 2

    a, b = (i // 2, i)
    while a + 1 < b:
        c = (a + b) // 2
        a, b = (c, b) if os.path.exists(pattern % c) else (a, c)
    return pattern % b

# ==============================================================================
# === CLASS ===
# ==============================================================================

class DelayNode(CozyBaseNode):
    NAME = "DELAY (JOV) ✋🏽"
    CATEGORY = JOV_CATEGORY
    RETURN_TYPES = (COZY_TYPE_ANY,)
    RETURN_NAMES = ("OUT",)
    OUTPUT_TOOLTIPS = (
        "Pass through data when the delay ends",
    )
    DESCRIPTION = """
Introduce pauses in the workflow that accept an optional input to pass through and a timer parameter to specify the duration of the delay. If no timer is provided, it defaults to a maximum delay. During the delay, it periodically checks for messages to interrupt the delay. Once the delay is completed, it returns the input passed to it. You can disable the screensaver with the `ENABLE` option
"""

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.PASS_IN: (COZY_TYPE_ANY, {
                    "default": None,
                    "tooltip":"The data that should be held until the timer completes."}),
                Lexicon.TIMER: ("INT", {
                    "default" : 0, "min": -1,
                    "tooltip":"How long to delay if enabled. 0 means no delay."}),
                Lexicon.ENABLE: ("BOOLEAN", {
                    "default": True,
                    "tooltip":"Enable or disable the screensaver."})
            }
        })
        return Lexicon._parse(d)

    @classmethod
    def IS_CHANGED(cls, **kw) -> float:
        return float('nan')

    def run(self, ident, **kw) -> tuple[Any]:
        delay = parse_param(kw, Lexicon.TIMER, EnumConvertType.INT, -1, -1, JOV_DELAY_MAX)[0]
        if delay < 0:
            delay = JOV_DELAY_MAX
        if delay > JOV_DELAY_MIN:
            comfy_api_post("jovi-delay-user", ident, {"id": ident, "timeout": delay})
        # enable = parse_param(kw, Lexicon.ENABLE, EnumConvertType.BOOLEAN, True)[0]

        step = 1
        pbar = ProgressBar(delay)
        while step <= delay:
            try:
                data = ComfyAPIMessage.poll(ident, timeout=1)
                if data.get('id', None) == ident:
                    if data.get('cmd', False) == False:
                        interrupt_processing(True)
                        logger.warning(f"delay [cancelled] ({step}): {ident}")
                    break
            except TimedOutException as _:
                if step % 10 == 0:
                    logger.info(f"delay [continue] ({step}): {ident}")
            pbar.update_absolute(step)
            step += 1

        return kw[Lexicon.PASS_IN]

class ExportNode(CozyBaseNode):
    NAME = "EXPORT (JOV) 📽"
    CATEGORY = JOV_CATEGORY
    NOT_IDEMPOTENT = True
    OUTPUT_NODE = True
    RETURN_TYPES = ()
    DESCRIPTION = """
Responsible for saving images or animations to disk. It supports various output formats such as GIF and GIFSKI. Users can specify the output directory, filename prefix, image quality, frame rate, and other parameters. Additionally, it allows overwriting existing files or generating unique filenames to avoid conflicts. The node outputs the saved images or animation as a tensor.
"""

    @classmethod
    def IS_CHANGED(cls, **kw) -> float:
        return float('nan')

    @classmethod
    def INPUT_TYPES(cls) -> InputType:
        d = super().INPUT_TYPES()
        d = deep_merge(d, {
            "optional": {
                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),
                Lexicon.PATH: ("STRING", {
                    "default": get_output_directory(),
                    "default_top": "<comfy output dir>",}),
                Lexicon.FORMAT: (FORMATS, {
                    "default": FORMATS[0],}),
                Lexicon.PREFIX: ("STRING", {
                    "default": "jovi",}),
                Lexicon.OVERWRITE: ("BOOLEAN", {
                    "default": False,}),
                # GIF ONLY
                Lexicon.OPTIMIZE: ("BOOLEAN", {
                    "default": False,}),
                # GIFSKI ONLY
                Lexicon.QUALITY: ("INT", {
                    "default": 90, "min": 1, "max": 100,}),
                Lexicon.QUALITY_M: ("INT", {
                    "default": 100, "min": 1, "max
Download .txt
gitextract_d5cdnh8o/

├── .gitattributes
├── .github/
│   └── workflows/
│       └── publish_action.yml
├── .gitignore
├── LICENSE
├── NOTICE
├── README.md
├── __init__.py
├── core/
│   ├── __init__.py
│   ├── adjust.py
│   ├── anim.py
│   ├── calc.py
│   ├── color.py
│   ├── compose.py
│   ├── create.py
│   ├── trans.py
│   ├── utility/
│   │   ├── __init__.py
│   │   ├── batch.py
│   │   ├── info.py
│   │   └── io.py
│   └── vars.py
├── node_list.json
├── pyproject.toml
├── requirements.txt
└── web/
    ├── core.js
    ├── fun.js
    ├── jovi_metrix.css
    ├── nodes/
    │   ├── akashic.js
    │   ├── array.js
    │   ├── delay.js
    │   ├── flatten.js
    │   ├── graph.js
    │   ├── lerp.js
    │   ├── op_binary.js
    │   ├── op_unary.js
    │   ├── queue.js
    │   ├── route.js
    │   ├── stack.js
    │   ├── stringer.js
    │   └── value.js
    ├── util.js
    └── widget_vector.js
Download .txt
SYMBOL INDEX (236 symbols across 29 files)

FILE: core/__init__.py
  class EnumFillOperation (line 4) | class EnumFillOperation(Enum):

FILE: core/adjust.py
  class EnumAutoLevel (line 57) | class EnumAutoLevel(Enum):
  class EnumAdjustLight (line 62) | class EnumAdjustLight(Enum):
  class EnumAdjustPixel (line 69) | class EnumAdjustPixel(Enum):
  class AdjustBlurNode (line 79) | class AdjustBlurNode(CozyImageNode):
    method INPUT_TYPES (line 87) | def INPUT_TYPES(cls) -> InputType:
    method run (line 100) | def run(self, **kw) -> RGBAMaskType:
  class AdjustColorNode (line 116) | class AdjustColorNode(CozyImageNode):
    method INPUT_TYPES (line 124) | def INPUT_TYPES(cls) -> InputType:
    method run (line 137) | def run(self, **kw) -> RGBAMaskType:
  class AdjustEdgeNode (line 151) | class AdjustEdgeNode(CozyImageNode):
    method INPUT_TYPES (line 159) | def INPUT_TYPES(cls) -> InputType:
    method run (line 176) | def run(self, **kw) -> RGBAMaskType:
  class AdjustEmbossNode (line 194) | class AdjustEmbossNode(CozyImageNode):
    method INPUT_TYPES (line 202) | def INPUT_TYPES(cls) -> InputType:
    method run (line 218) | def run(self, **kw) -> RGBAMaskType:
  class AdjustLevelNode (line 235) | class AdjustLevelNode(CozyImageNode):
    method INPUT_TYPES (line 244) | def INPUT_TYPES(cls) -> InputType:
    method run (line 266) | def run(self, **kw) -> RGBAMaskType:
  class AdjustLightNode (line 297) | class AdjustLightNode(CozyImageNode):
    method INPUT_TYPES (line 305) | def INPUT_TYPES(cls) -> InputType:
    method run (line 324) | def run(self, **kw) -> RGBAMaskType:
  class AdjustMorphNode (line 366) | class AdjustMorphNode(CozyImageNode):
    method INPUT_TYPES (line 374) | def INPUT_TYPES(cls) -> InputType:
    method run (line 389) | def run(self, **kw) -> RGBAMaskType:
  class AdjustPixelNode (line 406) | class AdjustPixelNode(CozyImageNode):
    method INPUT_TYPES (line 414) | def INPUT_TYPES(cls) -> InputType:
    method run (line 427) | def run(self, **kw) -> RGBAMaskType:
  class AdjustSharpenNode (line 456) | class AdjustSharpenNode(CozyImageNode):
    method INPUT_TYPES (line 464) | def INPUT_TYPES(cls) -> InputType:
    method run (line 477) | def run(self, **kw) -> RGBAMaskType:
  class AdjustSharpenNodev3 (line 491) | class AdjustSharpenNodev3(CozyImageNodev3):
    method define_schema (line 493) | def define_schema(cls, **kwarg) -> io.Schema:
    method execute (line 532) | def execute(self, *arg, **kw) -> io.NodeOutput:
  class AdjustExtension (line 546) | class AdjustExtension(ComfyExtension):
    method get_node_list (line 548) | async def get_node_list(self) -> list[type[io.ComfyNode]]:
  function comfy_entrypoint (line 553) | async def comfy_entrypoint() -> AdjustExtension:

FILE: core/anim.py
  class ResultObject (line 44) | class ResultObject(object):
    method __init__ (line 45) | def __init__(self, *arg, **kw) -> None:
  class TickNode (line 52) | class TickNode(CozyBaseNode):
    method INPUT_TYPES (line 70) | def INPUT_TYPES(cls) -> InputType:
    method run (line 106) | def run(self, **kw) -> tuple[float, ...]:
  class WaveGeneratorNode (line 144) | class WaveGeneratorNode(CozyBaseNode):
    method INPUT_TYPES (line 155) | def INPUT_TYPES(cls) -> InputType:
    method run (line 179) | def run(self, **kw) -> tuple[float, int]:

FILE: core/calc.py
  class EnumBinaryOperation (line 37) | class EnumBinaryOperation(Enum):
  class EnumComparison (line 70) | class EnumComparison(Enum):
  class EnumConvertString (line 92) | class EnumConvertString(Enum):
  class EnumSwizzle (line 99) | class EnumSwizzle(Enum):
  class EnumUnaryOperation (line 110) | class EnumUnaryOperation(Enum):
  function to_bits (line 179) | def to_bits(value: Any):
  function vector_swap (line 190) | def vector_swap(pA: Any, pB: Any, swap_x: EnumSwizzle, swap_y:EnumSwizzle,
  class BitSplitNode (line 222) | class BitSplitNode(CozyBaseNode):
    method INPUT_TYPES (line 239) | def INPUT_TYPES(cls) -> InputType:
    method run (line 253) | def run(self, **kw) -> tuple[list[int], list[bool]]:
  class ComparisonNode (line 277) | class ComparisonNode(CozyBaseNode):
    method INPUT_TYPES (line 292) | def INPUT_TYPES(cls) -> InputType:
    method run (line 321) | def run(self, **kw) -> tuple[Any, Any]:
  class LerpNode (line 411) | class LerpNode(CozyBaseNode):
    method INPUT_TYPES (line 429) | def INPUT_TYPES(cls) -> InputType:
    method run (line 450) | def run(self, **kw) -> tuple[Any, Any]:
  class OPUnaryNode (line 494) | class OPUnaryNode(CozyBaseNode):
    method INPUT_TYPES (line 508) | def INPUT_TYPES(cls) -> InputType:
    method run (line 527) | def run(self, **kw) -> tuple[bool]:
  class OPBinaryNode (line 581) | class OPBinaryNode(CozyBaseNode):
    method INPUT_TYPES (line 595) | def INPUT_TYPES(cls) -> InputType:
    method run (line 621) | def run(self, **kw) -> tuple[bool]:
  class StringerNode (line 728) | class StringerNode(CozyBaseNode):
    method INPUT_TYPES (line 739) | def INPUT_TYPES(cls) -> InputType:
    method run (line 758) | def run(self, **kw) -> tuple[TensorType, ...]:
  class SwizzleNode (line 797) | class SwizzleNode(CozyBaseNode):
    method INPUT_TYPES (line 808) | def INPUT_TYPES(cls) -> InputType:
    method run (line 831) | def run(self, **kw) -> tuple[float, ...]:

FILE: core/color.py
  class EnumColorMatchMode (line 55) | class EnumColorMatchMode(Enum):
  class EnumColorMatchMap (line 60) | class EnumColorMatchMap(Enum):
  class ColorBlindNode (line 68) | class ColorBlindNode(CozyImageNode):
    method INPUT_TYPES (line 76) | def INPUT_TYPES(cls) -> InputType:
    method run (line 89) | def run(self, **kw) -> RGBAMaskType:
  class ColorMatchNode (line 104) | class ColorMatchNode(CozyImageNode):
    method INPUT_TYPES (line 112) | def INPUT_TYPES(cls) -> InputType:
    method run (line 138) | def run(self, **kw) -> RGBAMaskType:
  class ColorKMeansNode (line 188) | class ColorKMeansNode(CozyBaseNode):
    method INPUT_TYPES (line 205) | def INPUT_TYPES(cls) -> InputType:
    method run (line 227) | def run(self, **kw) -> RGBAMaskType:
  class ColorTheoryNode (line 264) | class ColorTheoryNode(CozyBaseNode):
    method INPUT_TYPES (line 278) | def INPUT_TYPES(cls) -> InputType:
    method run (line 294) | def run(self, **kw) -> tuple[list[TensorType], list[TensorType]]:
  class GradientMapNode (line 311) | class GradientMapNode(CozyImageNode):
    method INPUT_TYPES (line 321) | def INPUT_TYPES(cls) -> InputType:
    method run (line 345) | def run(self, **kw) -> RGBAMaskType:

FILE: core/compose.py
  class BlendNode (line 51) | class BlendNode(CozyImageNode):
    method INPUT_TYPES (line 59) | def INPUT_TYPES(cls) -> InputType:
    method run (line 90) | def run(self, **kw) -> RGBAMaskType:
  class FilterMaskNode (line 170) | class FilterMaskNode(CozyImageNode):
    method INPUT_TYPES (line 178) | def INPUT_TYPES(cls) -> InputType:
    method run (line 198) | def run(self, **kw) -> RGBAMaskType:
  class HistogramNode (line 220) | class HistogramNode(CozyImageNode):
    method INPUT_TYPES (line 228) | def INPUT_TYPES(cls) -> InputType:
    method run (line 241) | def run(self, **kw) -> RGBAMaskType:
  class PixelMergeNode (line 256) | class PixelMergeNode(CozyImageNode):
    method INPUT_TYPES (line 264) | def INPUT_TYPES(cls) -> InputType:
    method run (line 284) | def run(self, **kw) -> RGBAMaskType:
  class PixelSplitNode (line 325) | class PixelSplitNode(CozyBaseNode):
    method INPUT_TYPES (line 342) | def INPUT_TYPES(cls) -> InputType:
    method run (line 351) | def run(self, **kw) -> RGBAMaskType:
  class PixelSwapNode (line 362) | class PixelSwapNode(CozyImageNode):
    method INPUT_TYPES (line 370) | def INPUT_TYPES(cls) -> InputType:
    method run (line 390) | def run(self, **kw) -> RGBAMaskType:
  class ThresholdNode (line 427) | class ThresholdNode(CozyImageNode):
    method INPUT_TYPES (line 435) | def INPUT_TYPES(cls) -> InputType:
    method run (line 455) | def run(self, **kw) -> RGBAMaskType:

FILE: core/create.py
  class ConstantNode (line 59) | class ConstantNode(CozyImageNode):
    method INPUT_TYPES (line 67) | def INPUT_TYPES(cls) -> InputType:
    method run (line 89) | def run(self, **kw) -> RGBAMaskType:
  class ShapeNode (line 127) | class ShapeNode(CozyImageNode):
    method INPUT_TYPES (line 135) | def INPUT_TYPES(cls) -> InputType:
    method run (line 167) | def run(self, **kw) -> RGBAMaskType:
  class TextNode (line 216) | class TextNode(CozyImageNode):
    method INPUT_TYPES (line 226) | def INPUT_TYPES(cls) -> InputType:
    method run (line 277) | def run(self, **kw) -> RGBAMaskType:

FILE: core/trans.py
  class EnumCropMode (line 50) | class EnumCropMode(Enum):
  class CropNode (line 59) | class CropNode(CozyImageNode):
    method INPUT_TYPES (line 67) | def INPUT_TYPES(cls) -> InputType:
    method run (line 92) | def run(self, **kw) -> RGBAMaskType:
  class FlattenNode (line 128) | class FlattenNode(CozyImageNode):
    method INPUT_TYPES (line 136) | def INPUT_TYPES(cls) -> InputType:
    method run (line 156) | def run(self, **kw) -> RGBAMaskType:
  class SplitNode (line 175) | class SplitNode(CozyBaseNode):
    method INPUT_TYPES (line 189) | def INPUT_TYPES(cls) -> InputType:
    method run (line 214) | def run(self, **kw) -> RGBAMaskType:
  class StackNode (line 248) | class StackNode(CozyImageNode):
    method INPUT_TYPES (line 260) | def INPUT_TYPES(cls) -> InputType:
    method run (line 282) | def run(self, **kw) -> RGBAMaskType:
  class TransformNode (line 302) | class TransformNode(CozyImageNode):
    method INPUT_TYPES (line 310) | def INPUT_TYPES(cls) -> InputType:
    method run (line 358) | def run(self, **kw) -> RGBAMaskType:

FILE: core/utility/batch.py
  class EnumBatchMode (line 60) | class EnumBatchMode(Enum):
  class ArrayNode (line 71) | class ArrayNode(CozyBaseNode):
    method INPUT_TYPES (line 88) | def INPUT_TYPES(cls) -> InputType:
    method batched (line 114) | def batched(cls, iterable, chunk_size, expand:bool=False, fill:Any=Non...
    method run (line 120) | def run(self, **kw) -> tuple[int, list]:
  class BatchToList (line 243) | class BatchToList(CozyBaseNode):
    method INPUT_TYPES (line 253) | def INPUT_TYPES(cls) -> InputType:
    method run (line 261) | def run(self, **kw) -> tuple[list[Any]]:
  class QueueBaseNode (line 266) | class QueueBaseNode(CozyBaseNode):
    method IS_CHANGED (line 274) | def IS_CHANGED(cls, **kw) -> float:
    method INPUT_TYPES (line 278) | def INPUT_TYPES(cls) -> InputType:
    method __init__ (line 310) | def __init__(self) -> None:
    method __parseQ (line 321) | def __parseQ(self, data: Any, recurse: bool=False) -> list[str]:
    method process (line 368) | def process(self, q_data: Any) -> TensorType | str | dict:
    method run (line 386) | def run(self, ident, **kw) -> tuple[Any, list[str], str, int, int]:
    method status (line 481) | def status(self) -> dict[str, Any]:
  class QueueNode (line 490) | class QueueNode(QueueBaseNode):
  class QueueTooNode (line 504) | class QueueTooNode(QueueBaseNode):
    method INPUT_TYPES (line 523) | def INPUT_TYPES(cls) -> InputType:
    method run (line 541) | def run(self, ident, **kw) -> tuple[TensorType, TensorType, TensorType...

FILE: core/utility/info.py
  function decode_tensor (line 36) | def decode_tensor(tensor: TensorType) -> str:
  class AkashicData (line 52) | class AkashicData:
    method __init__ (line 53) | def __init__(self, **kw) -> None:
  class AkashicNode (line 57) | class AkashicNode(CozyBaseNode):
    method run (line 66) | def run(self, **kw) -> tuple[Any, Any]:
  class GraphNode (line 141) | class GraphNode(CozyBaseNode):
    method INPUT_TYPES (line 155) | def INPUT_TYPES(cls) -> InputType:
    method IS_CHANGED (line 173) | def IS_CHANGED(cls, **kw) -> float:
    method __init__ (line 176) | def __init__(self, *arg, **kw) -> None:
    method run (line 181) | def run(self, ident, **kw) -> tuple[TensorType]:
  class ImageInfoNode (line 215) | class ImageInfoNode(CozyBaseNode):
    method INPUT_TYPES (line 233) | def INPUT_TYPES(cls) -> InputType:
    method run (line 242) | def run(self, **kw) -> tuple[int, list]:

FILE: core/utility/io.py
  function path_next (line 69) | def path_next(pattern: str) -> str:
  class DelayNode (line 87) | class DelayNode(CozyBaseNode):
    method INPUT_TYPES (line 100) | def INPUT_TYPES(cls) -> InputType:
    method IS_CHANGED (line 118) | def IS_CHANGED(cls, **kw) -> float:
    method run (line 121) | def run(self, ident, **kw) -> tuple[Any]:
  class ExportNode (line 147) | class ExportNode(CozyBaseNode):
    method IS_CHANGED (line 158) | def IS_CHANGED(cls, **kw) -> float:
    method INPUT_TYPES (line 162) | def INPUT_TYPES(cls) -> InputType:
    method run (line 194) | def run(self, **kw) -> None:
  class RouteNode (line 258) | class RouteNode(CozyBaseNode):
    method INPUT_TYPES (line 271) | def INPUT_TYPES(cls) -> InputType:
    method run (line 281) | def run(self, **kw) -> tuple[Any, ...]:
  class SaveOutputNode (line 294) | class SaveOutputNode(CozyBaseNode):
    method IS_CHANGED (line 305) | def IS_CHANGED(cls, **kw) -> float:
    method INPUT_TYPES (line 309) | def INPUT_TYPES(cls) -> InputType:
    method run (line 326) | def run(self, **kw) -> dict[str, Any]:

FILE: core/vars.py
  class ValueNode (line 30) | class ValueNode(CozyBaseNode):
    method INPUT_TYPES (line 42) | def INPUT_TYPES(cls) -> InputType:
    method run (line 75) | def run(self, **kw) -> tuple[tuple[Any, ...]]:
  class Vector2Node (line 131) | class Vector2Node(CozyBaseNode):
    method INPUT_TYPES (line 145) | def INPUT_TYPES(cls) -> InputType:
    method run (line 162) | def run(self, **kw) -> tuple[tuple[float, ...]]:
  class Vector3Node (line 176) | class Vector3Node(CozyBaseNode):
    method INPUT_TYPES (line 190) | def INPUT_TYPES(cls) -> InputType:
    method run (line 210) | def run(self, **kw) -> tuple[tuple[float, ...]]:
  class Vector4Node (line 226) | class Vector4Node(CozyBaseNode):
    method INPUT_TYPES (line 240) | def INPUT_TYPES(cls) -> InputType:
    method run (line 263) | def run(self, **kw) -> tuple[tuple[float, ...]]:

FILE: web/core.js
  method init (line 22) | async init() {

FILE: web/fun.js
  class Particle (line 69) | class Particle {
    method constructor (line 70) | constructor() {
    method draw (line 82) | draw() {
    method move (line 103) | move() {
  function flashBackgroundColor (line 154) | async function flashBackgroundColor(element, duration, flashCount, color...

FILE: web/nodes/akashic.js
  method beforeRegisterNodeDef (line 12) | async beforeRegisterNodeDef(nodeType, nodeData, app) {

FILE: web/nodes/array.js
  method beforeRegisterNodeDef (line 15) | async beforeRegisterNodeDef(nodeType, nodeData) {

FILE: web/nodes/delay.js
  constant EVENT_JOVI_DELAY (line 9) | const EVENT_JOVI_DELAY = "jovi-delay-user";
  constant EVENT_JOVI_UPDATE (line 10) | const EVENT_JOVI_UPDATE = "jovi-delay-update";
  function domShowModal (line 12) | function domShowModal(innerHTML, eventCallback, timeout=null) {
  method beforeRegisterNodeDef (line 60) | async beforeRegisterNodeDef(nodeType, nodeData) {

FILE: web/nodes/flatten.js
  method beforeRegisterNodeDef (line 11) | async beforeRegisterNodeDef(nodeType, nodeData) {

FILE: web/nodes/graph.js
  method init (line 11) | async init() {
  method beforeRegisterNodeDef (line 20) | async beforeRegisterNodeDef(nodeType, nodeData, app) {

FILE: web/nodes/lerp.js
  method beforeRegisterNodeDef (line 10) | async beforeRegisterNodeDef(nodeType, nodeData) {

FILE: web/nodes/op_binary.js
  method beforeRegisterNodeDef (line 10) | async beforeRegisterNodeDef(nodeType, nodeData) {

FILE: web/nodes/op_unary.js
  method beforeRegisterNodeDef (line 10) | async beforeRegisterNodeDef(nodeType, nodeData) {

FILE: web/nodes/queue.js
  constant EVENT_JOVI_PING (line 12) | const EVENT_JOVI_PING = "jovi-queue-ping";
  constant EVENT_JOVI_DONE (line 13) | const EVENT_JOVI_DONE = "jovi-queue-done";
  method beforeRegisterNodeDef (line 17) | async beforeRegisterNodeDef(nodeType, nodeData, app) {

FILE: web/nodes/route.js
  method beforeRegisterNodeDef (line 15) | async beforeRegisterNodeDef(nodeType, nodeData) {

FILE: web/nodes/stack.js
  method beforeRegisterNodeDef (line 11) | async beforeRegisterNodeDef(nodeType, nodeData) {

FILE: web/nodes/stringer.js
  method beforeRegisterNodeDef (line 11) | async beforeRegisterNodeDef(nodeType, nodeData) {

FILE: web/nodes/value.js
  method beforeRegisterNodeDef (line 10) | async beforeRegisterNodeDef(nodeType, nodeData) {

FILE: web/util.js
  function apiJovimetrix (line 16) | async function apiJovimetrix(id, cmd, data=null, route="message", ) {
  function widgetHookControl (line 44) | async function widgetHookControl(node, control_key, child_key) {
  function nodeFitHeight (line 128) | function nodeFitHeight(node) {
  function nodeAddDynamic (line 139) | async function nodeAddDynamic(nodeType, prefix, dynamic_type='*') {
  function nodeVirtualLinkRoot (line 210) | function nodeVirtualLinkRoot(node) {
  function nodeVirtualLinkChild (line 229) | function nodeVirtualLinkChild(node) {
  function nodeInputsClear (line 248) | function nodeInputsClear(node, stop = 0) {
  function nodeOutputsClear (line 260) | function nodeOutputsClear(node, stop = 0) {

FILE: web/widget_vector.js
  function arrayToObject (line 7) | function arrayToObject(values, length, parseFn) {
  function domInnerValueChange (line 15) | function domInnerValueChange(node, pos, widget, value, event=undefined) {
  function colorHex2RGB (line 30) | function colorHex2RGB(hex) {
  function colorRGB2Hex (line 39) | function colorRGB2Hex(input) {
  function clamp (line 153) | function clamp(widget, v, idx) {
  method getCustomWidgets (line 279) | async getCustomWidgets(app) {
Condensed preview — 41 files, each showing path, character count, and a content snippet. Download the .json file or copy for the full structured content (288K chars).
[
  {
    "path": ".gitattributes",
    "chars": 66,
    "preview": "# Auto detect text files and perform LF normalization\n* text=auto\n"
  },
  {
    "path": ".github/workflows/publish_action.yml",
    "chars": 541,
    "preview": "name: Publish to Comfy registry\non:\n  workflow_dispatch:\n  push:\n    branches:\n      - main\n    paths:\n      - \"pyprojec"
  },
  {
    "path": ".gitignore",
    "chars": 216,
    "preview": "__pycache__\n*.py[cod]\n*$py.class\n_*/\nglsl/*\n*.code-workspace\n.vscode\nconfig.json\nignore.txt\n.env\n.venv\n.DS_Store\n*.egg-i"
  },
  {
    "path": "LICENSE",
    "chars": 1122,
    "preview": "MIT License\n\nCopyright (c) 2023 Alexander G. Morano\n\nPermission is hereby granted, free of charge, to any person obtaini"
  },
  {
    "path": "NOTICE",
    "chars": 857,
    "preview": "This project includes code concepts from the MTB Nodes project (MIT)\nhttps://github.com/melMass/comfy_mtb\n\nThis project "
  },
  {
    "path": "README.md",
    "chars": 12970,
    "preview": "<picture>\n  <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://github.com/Amorano/Jovimetrix-examples/blob/mas"
  },
  {
    "path": "__init__.py",
    "chars": 2804,
    "preview": "\"\"\"\n     ██  ██████  ██    ██ ██ ███    ███ ███████ ████████ ██████  ██ ██   ██ \n     ██ ██    ██ ██    ██ ██ ████  ████"
  },
  {
    "path": "core/__init__.py",
    "chars": 108,
    "preview": "\nfrom enum import Enum\n\nclass EnumFillOperation(Enum):\n    DEFAULT = 0\n    FILL_ZERO = 20\n    FILL_ALL = 10\n"
  },
  {
    "path": "core/adjust.py",
    "chars": 21052,
    "preview": "\"\"\" Jovimetrix - Adjust \"\"\"\n\nimport sys\nfrom enum import Enum\nfrom typing import Any\nfrom typing_extensions import overr"
  },
  {
    "path": "core/anim.py",
    "chars": 12328,
    "preview": "\"\"\" Jovimetrix - Animation \"\"\"\n\nimport sys\n\nimport numpy as np\n\nfrom comfy.utils import ProgressBar\n\nfrom cozy_comfyui i"
  },
  {
    "path": "core/calc.py",
    "chars": 34148,
    "preview": "\"\"\" Jovimetrix - Calculation \"\"\"\n\nimport sys\nimport math\nimport struct\nfrom enum import Enum\nfrom typing import Any\nfrom"
  },
  {
    "path": "core/color.py",
    "chars": 16185,
    "preview": "\"\"\" Jovimetrix - Color \"\"\"\n\nfrom enum import Enum\n\nimport cv2\nimport torch\n\nfrom comfy.utils import ProgressBar\n\nfrom co"
  },
  {
    "path": "core/compose.py",
    "chars": 21877,
    "preview": "\"\"\" Jovimetrix - Composition \"\"\"\n\nimport numpy as np\n\nfrom comfy.utils import ProgressBar\n\nfrom cozy_comfyui import \\\n  "
  },
  {
    "path": "core/create.py",
    "chars": 15731,
    "preview": "\"\"\" Jovimetrix - Creation \"\"\"\n\nimport numpy as np\nfrom PIL import ImageFont\nfrom skimage.filters import gaussian\n\nfrom c"
  },
  {
    "path": "core/trans.py",
    "chars": 19986,
    "preview": "\"\"\" Jovimetrix - Transform \"\"\"\n\nimport sys\nfrom enum import Enum\n\nfrom comfy.utils import ProgressBar\n\nfrom cozy_comfyui"
  },
  {
    "path": "core/utility/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "core/utility/batch.py",
    "chars": 22298,
    "preview": "\"\"\" Jovimetrix - Utility \"\"\"\n\nimport os\nimport sys\nimport json\nimport glob\nimport random\nfrom enum import Enum\nfrom path"
  },
  {
    "path": "core/utility/info.py",
    "chars": 8904,
    "preview": "\"\"\" Jovimetrix - Utility \"\"\"\n\nimport io\nimport json\nfrom typing import Any\n\nimport torch\nimport numpy as np\nfrom PIL imp"
  },
  {
    "path": "core/utility/io.py",
    "chars": 14262,
    "preview": "\"\"\" Jovimetrix - Utility \"\"\"\n\nimport os\nimport json\nfrom uuid import uuid4\nfrom pathlib import Path\nfrom typing import A"
  },
  {
    "path": "core/vars.py",
    "chars": 11688,
    "preview": "\"\"\" Jovimetrix - Variables \"\"\"\n\nimport sys\nimport random\nfrom typing import Any\n\nfrom comfy.utils import ProgressBar\n\nfr"
  },
  {
    "path": "node_list.json",
    "chars": 5645,
    "preview": "{\n    \"ADJUST: BLUR (JOV)\": \"Enhance and modify images with various blur effects\",\n    \"ADJUST: COLOR (JOV)\": \"Enhance a"
  },
  {
    "path": "pyproject.toml",
    "chars": 1650,
    "preview": "[project]\nname = \"jovimetrix\"\ndescription = \"Animation via tick. Parameter manipulation with wave generator. Unary and B"
  },
  {
    "path": "requirements.txt",
    "chars": 197,
    "preview": "aenum\ngit+https://github.com/cozy-comfyui/cozy_comfyui@main#egg=cozy_comfyui\ngit+https://github.com/cozy-comfyui/cozy_co"
  },
  {
    "path": "web/core.js",
    "chars": 953,
    "preview": "/**\n    ASYNC\n    init\n    setup\n    registerCustomNodes\n    nodeCreated\n    beforeRegisterNodeDef\n    getCustomWidgets\n"
  },
  {
    "path": "web/fun.js",
    "chars": 5373,
    "preview": "/**/\n\nimport { app } from \"../../scripts/app.js\";\n\nexport const bewm = function(ex, ey) {\n    //- adapted from \"Anchor C"
  },
  {
    "path": "web/jovi_metrix.css",
    "chars": 366,
    "preview": "/**/\n\n.jov-modal-content {\n  max-width: 100%;\n  max-height: 100%;\n  overflow: visible;\n  position: relative;\n  margin: 3"
  },
  {
    "path": "web/nodes/akashic.js",
    "chars": 1925,
    "preview": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { ComfyWidgets } from '../../../scripts/widgets.js';\nimport {"
  },
  {
    "path": "web/nodes/array.js",
    "chars": 424,
    "preview": "/**\n * File: array.js\n * Project: Jovimetrix\n *\n */\n\nimport { app } from \"../../../scripts/app.js\"\nimport { nodeAddDynam"
  },
  {
    "path": "web/nodes/delay.js",
    "chars": 4578,
    "preview": "/**/\n\nimport { api } from \"../../../scripts/api.js\";\nimport { app } from \"../../../scripts/app.js\";\nimport { apiJovimetr"
  },
  {
    "path": "web/nodes/flatten.js",
    "chars": 398,
    "preview": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { nodeAddDynamic } from \"../util.js\"\n\nconst _id = \"FLATTEN (J"
  },
  {
    "path": "web/nodes/graph.js",
    "chars": 2533,
    "preview": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { apiJovimetrix, nodeAddDynamic } from \"../util.js\"\n\nconst _i"
  },
  {
    "path": "web/nodes/lerp.js",
    "chars": 742,
    "preview": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { widgetHookControl } from \"../util.js\"\n\nconst _id = \"LERP (J"
  },
  {
    "path": "web/nodes/op_binary.js",
    "chars": 681,
    "preview": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { widgetHookControl } from \"../util.js\"\n\nconst _id = \"OP BINA"
  },
  {
    "path": "web/nodes/op_unary.js",
    "chars": 623,
    "preview": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { widgetHookControl } from \"../util.js\"\n\nconst _id = \"OP UNAR"
  },
  {
    "path": "web/nodes/queue.js",
    "chars": 5721,
    "preview": "/**/\n\nimport { api } from \"../../../scripts/api.js\";\nimport { app } from \"../../../scripts/app.js\";\nimport { ComfyWidget"
  },
  {
    "path": "web/nodes/route.js",
    "chars": 3383,
    "preview": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport {\n    TypeSlot, TypeSlotEvent, nodeFitHeight,\n    nodeVirtual"
  },
  {
    "path": "web/nodes/stack.js",
    "chars": 381,
    "preview": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { nodeAddDynamic}  from \"../util.js\"\n\nconst _id = \"STACK (JOV"
  },
  {
    "path": "web/nodes/stringer.js",
    "chars": 384,
    "preview": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { nodeAddDynamic } from \"../util.js\"\n\nconst _id = \"STRINGER ("
  },
  {
    "path": "web/nodes/value.js",
    "chars": 1597,
    "preview": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { widgetHookControl, nodeFitHeight} from \"../util.js\"\n\nconst "
  },
  {
    "path": "web/util.js",
    "chars": 8646,
    "preview": "/**/\n\nimport { app } from \"../../scripts/app.js\"\nimport { api } from \"../../scripts/api.js\"\n\nexport const TypeSlot = {\n "
  },
  {
    "path": "web/widget_vector.js",
    "chars": 10685,
    "preview": "/**/\n\nimport { app } from \"../../scripts/app.js\"\nimport { $el } from \"../../scripts/ui.js\"\n/** @import { IWidget, LGraph"
  }
]

About this extraction

This page contains the full source code of the Amorano/Jovimetrix GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 41 files (267.6 KB), approximately 69.5k tokens, and a symbol index with 236 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.

Copied to clipboard!