[
  {
    "path": ".gitattributes",
    "content": "# Auto detect text files and perform LF normalization\n* text=auto\n"
  },
  {
    "path": ".github/workflows/publish_action.yml",
    "content": "name: Publish to Comfy registry\non:\n  workflow_dispatch:\n  push:\n    branches:\n      - main\n    paths:\n      - \"pyproject.toml\"\n\npermissions:\n  issues: write\n\njobs:\n  publish-node:\n    name: Publish Custom Node to registry\n    runs-on: ubuntu-latest\n    if: ${{ github.repository_owner == 'Amorano' }}\n    steps:\n      - name: Check out code\n        uses: actions/checkout@v4\n      - name: Publish Custom Node\n        uses: Comfy-Org/publish-node-action@v1\n        with:\n          personal_access_token: ${{ secrets.REGISTRY_ACCESS_TOKEN }}\n"
  },
  {
    "path": ".gitignore",
    "content": "__pycache__\n*.py[cod]\n*$py.class\n_*/\nglsl/*\n*.code-workspace\n.vscode\nconfig.json\nignore.txt\n.env\n.venv\n.DS_Store\n*.egg-info\n*.bak\ncheckpoints\nresults\nbackup\nnode_modules\n*-lock.json\n*.config.mjs\npackage.json\n_TODO*.*"
  },
  {
    "path": "LICENSE",
    "content": "MIT License\n\nCopyright (c) 2023 Alexander G. Morano\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n\nGO NUTS; JUST TRY NOT TO DO IT IN YOUR HEAD.\n"
  },
  {
    "path": "NOTICE",
    "content": "This project includes code concepts from the MTB Nodes project (MIT)\nhttps://github.com/melMass/comfy_mtb\n\nThis project includes code concepts from the ComfyUI-Custom-Scripts project (MIT)\nhttps://github.com/pythongosssss/ComfyUI-Custom-Scripts\n\nThis project includes code concepts from the KJNodes for ComfyUI project (GPL 3.0)\nhttps://github.com/kijai/ComfyUI-KJNodes\n\nThis project includes code concepts from the UE Nodes project (Apache 2.0)\nhttps://github.com/chrisgoringe/cg-use-everywhere\n\nThis project includes code concepts from the WAS Node Suite project (MIT)\nhttps://github.com/WASasquatch/was-node-suite-comfyui\n\nThis project includes code concepts from the rgthree-comfy project (MIT)\nhttps://github.com/rgthree/rgthree-comfy\n\nThis project includes code concepts from the FizzNodes project (MIT)\nhttps://github.com/FizzleDorf/ComfyUI_FizzNodes"
  },
  {
    "path": "README.md",
    "content": "<picture>\n  <source media=\"(prefers-color-scheme: dark)\" srcset=\"https://github.com/Amorano/Jovimetrix-examples/blob/master/res/logo-jovimetrix.png\">\n  <source media=\"(prefers-color-scheme: light)\" srcset=\"https://github.com/Amorano/Jovimetrix-examples/blob/master/res/logo-jovimetrix-light.png\">\n  <img alt=\"ComfyUI Nodes for procedural masking, live composition and video manipulation\">\n</picture>\n\n<h2><div align=\"center\">\n<a href=\"https://github.com/comfyanonymous/ComfyUI\">COMFYUI</a> Nodes for procedural masking, live composition and video manipulation\n</div></h2>\n\n<h3><div align=\"center\">\nJOVIMETRIX IS ONLY GUARANTEED TO SUPPORT <a href=\"https://github.com/comfyanonymous/ComfyUI\">COMFYUI 0.1.3+</a> and <a href=\"https://github.com/Comfy-Org/ComfyUI_frontend\">FRONTEND 1.2.40+</a><br>\nIF YOU NEED AN OLDER VERSION, PLEASE DO NOT UPDATE.\n</div></h3>\n\n<h2><div align=\"center\">\n\n![KNIVES!](https://badgen.net/github/open-issues/amorano/jovimetrix)\n![FORKS!](https://badgen.net/github/forks/amorano/jovimetrix)\n\n</div></h2>\n\n<!---------------------------------------------------------------------------->\n\n# SPONSORSHIP\n\nPlease consider sponsoring me if you enjoy the results of my work, code or documentation or otherwise. 
A good way to keep code development open and free is through sponsorship.\n\n<div align=\"center\">\n\n&nbsp;|&nbsp;|&nbsp;|&nbsp;\n-|-|-|-\n[![BE A GITHUB SPONSOR ❤️](https://img.shields.io/badge/sponsor-30363D?style=for-the-badge&logo=GitHub-Sponsors&logoColor=#EA4AAA)](https://github.com/sponsors/Amorano) | [![DIRECTLY SUPPORT ME VIA PAYPAL](https://img.shields.io/badge/PayPal-00457C?style=for-the-badge&logo=paypal&logoColor=white)](https://www.paypal.com/paypalme/onarom) | [![PATREON SUPPORTER](https://img.shields.io/badge/Patreon-F96854?style=for-the-badge&logo=patreon&logoColor=white)](https://www.patreon.com/joviex) | [![SUPPORT ME ON KO-FI!](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/alexandermorano)\n</div>\n\n## HIGHLIGHTS\n\n* 30 function `BLEND` node -- subtract, multiply and overlay like the best\n* Vector support for 2, 3, 4 size tuples of integer or float type\n* Specific RGB/RGBA color vector support that provides a color picker\n* All Image inputs support RGBA, RGB or pure MASK input\n* Full Text generation support using installed system fonts\n* Basic parametric shape (Circle, Square, Polygon) generator~~\n* `COLOR BLIND` check support\n* `COLOR MATCH` against existing images or create a custom LUT\n* Generate `COLOR THEORY` spreads from an existing image\n* `COLOR MEANS` to generate palettes for existing images to keep other images in the same tonal ranges\n* `PIXEL SPLIT` separate the channels of an image to manipulate and `PIXEL MERGE` them back together\n* `STACK` a series of images into a new single image vertically, horizontally or in a grid\n* Or `FLATTEN` a batch of images into a single image with each image subsequently added on top (slap comp)\n* `VALUE` Node has conversion support for all ComfyUI types and some 3rd party types (2DCoords, Mixlab Layers)\n* `LERP` node to linear interpolate all ComfyUI and Jovimetrix value types\n* Automatic conversion of Mixlab Layer types into Image types\n* Generic `ARRAY` that can 
Merge, Split, Select, Slice or Randomize a list of ANY type\n* `STRINGER` node to perform specific string manipulation operations: Split, Join, Replace, Slice.\n* A `QUEUE` Node that supports recursing directories, filtering multiple file types and batch loading\n* Use the `OP UNARY` and `OP BINARY` nodes to perform single and double type functions across all ComfyUI and Jovimetrix value types\n* Manipulate vectors with the `SWIZZLE` node to swap their XYZW positions\n* `DELAY` execution at certain parts in a workflow, with or without a timeout\n* Generate curve data with the `TICK` and `WAVE GEN` nodes\n\n<br>\n\n<h1>AS OF VERSION 2.0.0, THESE NODES HAVE MIGRATED TO OTHER, SMALLER PACKAGES</h1>\n\nMigrated to [Jovi_GLSL](https://github.com/Amorano/Jovi_GLSL)\n\n~~* GLSL shader support~~\n~~* * `GLSL Node`  provides raw access to Vertex and Fragment shaders~~\n~~* * `Dynamic GLSL` dynamically convert existing GLSL scripts file into ComfyUI nodes at runtime~~\n~~* * Over 20+ Hand written GLSL nodes to speed up specific tasks better done on the GPU (10x speedup in most cases)~~\n\nMigrated to [Jovi_Capture](https://github.com/Amorano/Jovi_Capture)\n\n~~* `STREAM READER` node to capture monitor, webcam or url media~~\n~~* `STREAM WRITER` node to export media to a HTTP/HTTPS server for OBS or other 3rd party streaming software~~\n\nMigrated to [Jovi_Spout](https://github.com/Amorano/Jovi_Spout)\n\n~~* `SPOUT` streaming support *WINDOWS ONLY*~~\n\nMigrated to [Jovi_MIDI](https://github.com/Amorano/Jovi_MIDI)\n\n~~* `MIDI READER` Captures MIDI messages from an external MIDI device or controller~~\n~~* `MIDI MESSAGE` Processes MIDI messages received from an external MIDI controller or device~~\n~~* `MIDI FILTER` (advanced filter) to select messages from MIDI streams and devices~~\n~~* `MIDI FILTER EZ` simpler interface to filter single messages from MIDI streams and devices~~\n\nMigrated to [Jovi_Help](https://github.com/Amorano/Jovi_Help)\n\n~~* Help System for *ALL 
NODES* that will auto-parse unknown knows for their type data and descriptions~~\n\nMigrated to [Jovi_Colorizer](https://github.com/Amorano/Jovi_Colorizer)\n\n~~* Colorization for *ALL NODES* using their own node settings, their node group or via regex pattern matching~~\n\n## UPDATES\n\n<h2>DO NOT UPDATE JOVIMETRIX PAST VERSION 1.7.48 IF YOU DONT WANT TO LOSE A BUNCH OF NODES</h2>\n\nNodes that have been removed are in various other packages now. You can install those specific packages to get the functionality back, but I have no way to migrate the actual connections -- you will need to do that manually. **\n\nNodes that have been migrated:\n\n* ALL MIDI NODES:\n* * MIDIMessageNode\n* * MIDIReaderNode\n* * MIDIFilterNode\n* * MIDIFilterEZNode\n\n[Migrated to Jovi_MIDI](https://github.com/Amorano/Jovi_MIDI)\n\n* ALL STREAMING NODES:\n* * StreamReaderNode\n* * StreamWriterNode\n\n[Migrated to Jovi_Capture](https://github.com/Amorano/Jovi_Capture)\n\n* * SpoutWriterNode\n\n[Migrated to Jovi_Spout](https://github.com/Amorano/Jovi_Spout)\n\n* ALL GLSL NODES:\n* * GLSL\n* * GLSL BLEND LINEAR\n* * GLSL COLOR CONVERSION\n* * GLSL COLOR PALETTE\n* * GLSL CONICAL GRADIENT\n* * GLSL DIRECTIONAL WARP\n* * GLSL FILTER RANGE\n* * GLSL GRAYSCALE\n* * GLSL HSV ADJUST\n* * GLSL INVERT\n* * GLSL NORMAL\n* * GLSL NORMAL BLEND\n* * GLSL POSTERIZE\n* * GLSL TRANSFORM\n\n[Migrated to Jovi_GLSL](https://github.com/Amorano/Jovi_GLSL)\n\n**2025/09/04** @2.1.25:\n* Auto-level for `LEVEL` node\n* `HISTOGRAM` node\n* new support for cozy_comfy (v3+ comfy node spec)\n\n**2025/08/15** @2.1.23:\n* fixed regression in `FLATTEN` node\n\n**2025/08/12** @2.1.22:\n* tick allows for float/int start\n\n**2025/08/03** @2.1.21:\n* fixed css for `DELAY` node\n* delay node timer extended to 150+ days\n* all tooltips checked to be TUPLE entries\n\n**2025/07/31** @2.1.20:\n* support for tensors in `OP UNARY` or `OP BINARY`\n\n**2025/07/27** @2.1.19:\n* added `BATCH TO LIST` node\n* `VECTOR` node(s) default 
step changed to 0.1\n\n**2025/07/13** @2.1.18:\n* allow numpy>=1.25.0\n\n**2025/07/07** @2.1.17:\n* updated to cozy_comfyui 0.0.39\n\n**2025/07/04** @2.1.16:\n* Type hint updates\n\n**2025/06/28** @2.1.15:\n* `GRAPH NODE` updated to use new mechanism in cozy_comfyui 0.0.37 for list of list parse on dynamics\n\n**2025/06/18** @2.1.14:\n* fixed resize_matte mode to use full mask/alpha\n\n**2025/06/18** @2.1.13:\n* allow hex codes for vectors\n* updated to cozy_comfyui 0.0.36\n\n**2025/06/07** @2.1.11:\n* cleaned up image_convert for grayscale/mask\n* updated to cozy_comfyui 0.0.35\n\n**2025/06/06** @2.1.10:\n* updated to comfy_cozy 0.0.34\n* default width and height to 1\n* removed old debug string\n* akashic try to parse unicode emoji strings\n\n**2025/06/02** @2.1.9:\n* fixed dynamic nodes that already start with inputs (dynamic input wouldnt show up)\n* patched Queue node to work with new `COMBO` style of inputs\n\n**2025/05/29** @2.1.8:\n* updated to comfy_cozy 0.0.32\n\n**2025/05/27** @2.1.7:\n* re-ranged all FLOAT to their maximum representations\n* clerical cleanup for JS callbacks\n* added `SPLIT` node to break images into vertical or horizontal slices\n\n**2025/05/25** @2.1.6:\n* loosened restriction for python 3.11+ to allow for 3.10+\n* * I make zero guarantee that will actually let 3.10 work and I will not support 3.10\n\n**2025/05/16** @2.1.5:\n* Full compatibility with [ComfyMath Vector](https://github.com/evanspearman/ComfyMath) nodes\n* Masks can be inverted at inputs\n* `EnumScaleInputMode` for `BLEND` node to adjust inputs prior to operation\n* Allow images or mask inputs in `CONSTANT` node to fall through\n* `VALUE` nodes return all items as list, not just >1\n* Added explicit MASK option for `PIXEL SPLIT` node\n* Split `ADJUST` node into `BLUR`, `EDGE`, `LIGHT`, `PIXEL`,\n* Migrated most of image lib to cozy_comfyui\n* widget_vector tweaked to disallow non-numerics\n* widgetHookControl streamlined\n\n**2025/05/08** @2.1.4:\n* Support for NUMERICAL 
(bool, int, float, vecN) inputs on value inputs\n\n**2025/05/08** @2.1.3:\n* fixed for VEC* types using MIN/MAX\n\n**2025/05/07** @2.1.2:\n* `TICK` with normalization and new series generator\n\n**2025/05/06** @2.1.1:\n* fixed IS_CHANGED in graphnode\n* updated `TICK SIMPLE` in situ of `TICK` to be inclusive of the end range\n* migrated ease, normalization and wave functions to cozy_comfyui\n* first pass preserving values in multi-type fields\n\n**2025/05/05** @2.1.0:\n* Cleaned up all node defaults\n* Vector nodes aligned for list outputs\n* Cleaned all emoji from input/output\n* Clear all EnumConvertTypes and align with new comfy_cozy\n* Lexicon defines come from Comfy_Cozy module\n* `OP UNARY` fixed factorial\n* Added fill array mode for `OP UNARY`\n* removed `STEREOGRAM` and `STEROSCOPIC` -- they were designed poorly\n\n**2025/05/01** @2.0.11:\n* unified widget_vector.js\n* new comfy_cozy support\n* auto-convert all VEC*INT -> VEC* float types\n* readability for node definitions\n\n**2025/04/24** @2.0.10:\n* `SHAPE NODE` fixed for transparency blends when using blurred masks\n\n**2025/04/24** @2.0.9:\n* removed inversion in pixel splitter\n\n**2025/04/23** @2.0.8:\n* categories aligned to new comfy-cozy support\n\n**2025/04/19** @2.0.7:\n* all JS messages fixed\n\n**2025/04/19** @2.0.6:\n* fixed reset message from JS\n\n**2025/04/19** @2.0.5:\n* patched new frontend input mechanism for dynamic inputs\n* reduced requirements\n* removed old vector conversions waiting for new frontend mechanism\n\n**2025/04/17** @2.0.4:\n* fixed bug in resize_matte `MODE` that would fail when the matte was smaller than the input image\n* migrated to image_crop functions to cozy_comfyui\n\n**2025/04/12** @2.0.0:\n* REMOVED ALL STREAMING, MIDI and GLSL nodes for new packages, HELP System and Node Colorization system:\n\n   [Jovi_Capture - Web camera, Monitor Capture, Window Capture](https://github.com/Amorano/Jovi_Capture)\n\n   [Jovi_MIDI - MIDI capture and MIDI message 
parsing](https://github.com/Amorano/Jovi_MIDI)\n\n   [Jovi_GLSL - GLSL Shaders](https://github.com/Amorano/Jovi_GLSL)\n\n   [Jovi_Spout - SPOUT Streaming support](https://github.com/Amorano/Jovi_Spout)\n\n   [Jovi_Colorizer - Node Colorization](https://github.com/Amorano/Jovi_Colorizer)\n\n   [Jovi_Help - Node Help](https://github.com/Amorano/Jovi_Help)\n\n* all nodes will accept `LIST` or `BATCH` and process as if all elements are in a list.\n* patched constant node to work with `MATTE_RESIZE`\n* patched import loader to work with old/new comfyui\n* missing array web node partial\n* removed array and no one even noticed.\n* all inputs should be treated as a list even single elements []\n\n<div align=\"center\">\n<img src=\"https://github.com/user-attachments/assets/8ed13e6a-218c-468a-a480-53ab55b04d21\" alt=\"explicit vector node supports\" width=\"640\"/>\n<img src=\"https://github.com/user-attachments/assets/4459855c-c4e6-4739-811e-a6c90aa5a90c\" alt=\"TICK Node Batch Support Output\" width=\"384\"/>\n</div>\n\n# INSTALLATION\n\n[Please see the wiki for advanced use of the environment variables used during startup](https://github.com/Amorano/Jovimetrix/wiki/B.-ASICS)\n\n## COMFYUI MANAGER\n\nIf you have [ComfyUI Manager](https://github.com/ltdrdata/ComfyUI-Manager) installed, simply search for Jovimetrix and install from the manager's database.\n\n## MANUAL INSTALL\nClone the repository into your ComfyUI custom_nodes directory. You can clone the repository with the command:\n```\ngit clone https://github.com/Amorano/Jovimetrix.git\n```\nYou can then install the requirements by using the command:\n```\n.\\python_embed\\python.exe -s -m pip install -r requirements.txt\n```\nIf you are using a <code>virtual environment</code> (<code><i>venv</i></code>), make sure it is activated before installation. 
Then install the requirements with the command:\n```\npip install -r requirements.txt\n```\n# WHERE TO FIND ME\n\nYou can find me on [![DISCORD](https://dcbadge.vercel.app/api/server/62TJaZ3Z5r?style=flat-square)](https://discord.gg/62TJaZ3Z5r).\n"
  },
  {
    "path": "__init__.py",
    "content": "\"\"\"\n     ██  ██████  ██    ██ ██ ███    ███ ███████ ████████ ██████  ██ ██   ██ \n     ██ ██    ██ ██    ██ ██ ████  ████ ██         ██    ██   ██ ██  ██ ██  \n     ██ ██    ██ ██    ██ ██ ██ ████ ██ █████      ██    ██████  ██   ███  \n██   ██ ██    ██  ██  ██  ██ ██  ██  ██ ██         ██    ██   ██ ██  ██ ██ \n █████   ██████    ████   ██ ██      ██ ███████    ██    ██   ██ ██ ██   ██ \n\n              Animation, Image Compositing & Procedural Creation\n\n@title: Jovimetrix\n@author: Alexander G. Morano\n@category: Compositing\n@reference: https://github.com/Amorano/Jovimetrix\n@tags: adjust, animate, compose, compositing, composition, device, flow, video,\nmask, shape, animation, logic\n@description: Animation via tick. Parameter manipulation with wave generator.\nUnary and Binary math support. Value convert int/float/bool, VectorN and Image,\nMask types. Shape mask generator. Stack images, do channel ops, split, merge\nand randomize arrays and batches. Load images & video from anywhere. Dynamic\nbus routing. Save output anywhere! Flatten, crop, transform; check\ncolorblindness or linear interpolate values.\n@node list:\n    TickNode, TickSimpleNode, WaveGeneratorNode\n    BitSplitNode, ComparisonNode, LerpNode, OPUnaryNode, OPBinaryNode, StringerNode, SwizzleNode,\n    ColorBlindNode, ColorMatchNode, ColorKMeansNode, ColorTheoryNode, GradientMapNode,\n    AdjustNode, BlendNode, FilterMaskNode, PixelMergeNode, PixelSplitNode, PixelSwapNode, ThresholdNode,\n    ConstantNode, ShapeNode, TextNode,\n    CropNode, FlattenNode, StackNode, TransformNode,\n\n    ArrayNode, QueueNode, QueueTooNode,\n    AkashicNode, GraphNode, ImageInfoNode,\n    DelayNode, ExportNode, RouteNode, SaveOutputNode\n\n    ValueNode, Vector2Node, Vector3Node, Vector4Node,\n\"\"\"\n\n__author__ = \"Alexander G. 
Morano\"\n__email__ = \"amorano@gmail.com\"\n\nfrom pathlib import Path\n\nfrom cozy_comfyui import \\\n    logger\n\nfrom cozy_comfyui.node import \\\n    loader\n\nJOV_DOCKERENV = False\ntry:\n    with open('/proc/1/cgroup', 'rt') as f:\n        content = f.read()\n        JOV_DOCKERENV = any(x in content for x in ['docker', 'kubepods', 'containerd'])\nexcept FileNotFoundError:\n    pass\n\nif JOV_DOCKERENV:\n    logger.info(\"RUNNING IN A DOCKER\")\n\n# ==============================================================================\n# === GLOBAL ===\n# ==============================================================================\n\nPACKAGE = \"JOVIMETRIX\"\nWEB_DIRECTORY = \"./web\"\nROOT = Path(__file__).resolve().parent\nNODE_CLASS_MAPPINGS, NODE_DISPLAY_NAME_MAPPINGS = loader(ROOT,\n                                                         PACKAGE,\n                                                         \"core\",\n                                                         f\"{PACKAGE} 🔺🟩🔵\",\n                                                         False)\n"
  },
  {
    "path": "core/__init__.py",
    "content": "\nfrom enum import Enum\n\nclass EnumFillOperation(Enum):\n    DEFAULT = 0\n    FILL_ZERO = 20\n    FILL_ALL = 10\n"
  },
  {
    "path": "core/adjust.py",
    "content": "\"\"\" Jovimetrix - Adjust \"\"\"\n\nimport sys\nfrom enum import Enum\nfrom typing import Any\nfrom typing_extensions import override\n\nimport comfy.model_management\nfrom comfy_api.latest import ComfyExtension, io\nfrom comfy.utils import ProgressBar\n\nfrom cozy_comfyui import \\\n    InputType, RGBAMaskType, EnumConvertType, \\\n    deep_merge, parse_param, zip_longest_fill\n\nfrom cozy_comfyui.lexicon import \\\n    Lexicon\n\nfrom cozy_comfy.node import \\\n    COZY_TYPE_IMAGE as COZY_TYPE_IMAGEv3, \\\n    CozyImageNode as CozyImageNodev3\n\nfrom cozy_comfyui.node import \\\n    COZY_TYPE_IMAGE, \\\n    CozyImageNode\n\nfrom cozy_comfyui.image.adjust import \\\n    EnumAdjustBlur, EnumAdjustColor, EnumAdjustEdge, EnumAdjustMorpho, \\\n    image_contrast, image_brightness, image_equalize, image_gamma, \\\n    image_exposure, image_pixelate, image_pixelscale, \\\n    image_posterize, image_quantize, image_sharpen, image_morphology, \\\n    image_emboss, image_blur, image_edge, image_color, \\\n    image_autolevel, image_autolevel_histogram\n\nfrom cozy_comfyui.image.channel import \\\n    channel_solid\n\nfrom cozy_comfyui.image.compose import \\\n    image_levels\n\nfrom cozy_comfyui.image.convert import \\\n    tensor_to_cv, cv_to_tensor_full, image_mask, image_mask_add\n\nfrom cozy_comfyui.image.misc import \\\n    image_stack\n\n# ==============================================================================\n# === GLOBAL ===\n# ==============================================================================\n\nJOV_CATEGORY = \"ADJUST\"\n\n# ==============================================================================\n# === ENUMERATION ===\n# ==============================================================================\n\nclass EnumAutoLevel(Enum):\n    MANUAL = 10\n    AUTO = 20\n    HISTOGRAM = 30\n\nclass EnumAdjustLight(Enum):\n    EXPOSURE = 10\n    GAMMA = 20\n    BRIGHTNESS = 30\n    CONTRAST = 40\n    EQUALIZE = 50\n\nclass 
EnumAdjustPixel(Enum):\n    PIXELATE = 10\n    PIXELSCALE = 20\n    QUANTIZE = 30\n    POSTERIZE = 40\n\n# ==============================================================================\n# === CLASS ===\n# ==============================================================================\n\nclass AdjustBlurNode(CozyImageNode):\n    NAME = \"ADJUST: BLUR (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nEnhance and modify images with various blur effects.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.FUNCTION: (EnumAdjustBlur._member_names_, {\n                    \"default\": EnumAdjustBlur.BLUR.name,}),\n                Lexicon.RADIUS: (\"INT\", {\n                    \"default\": 3, \"min\": 3}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustBlur, EnumAdjustBlur.BLUR.name)\n        radius = parse_param(kw, Lexicon.RADIUS, EnumConvertType.INT, 3)\n        params = list(zip_longest_fill(pA, op, radius))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, op, radius) in enumerate(params):\n            pA = channel_solid() if pA is None else tensor_to_cv(pA)\n            pA = image_blur(pA, op, radius)\n            images.append(cv_to_tensor_full(pA))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass AdjustColorNode(CozyImageNode):\n    NAME = \"ADJUST: COLOR (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nEnhance and modify images with various color adjustments.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.FUNCTION: (EnumAdjustColor._member_names_, {\n                    \"default\": EnumAdjustColor.RGB.name,}),\n                Lexicon.VEC: (\"VEC3\", {\n                    \"default\": (0,0,0), \"mij\": -1, \"maj\": 1, \"step\": 0.025})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustColor, EnumAdjustColor.RGB.name)\n        vec = parse_param(kw, Lexicon.VEC, EnumConvertType.VEC3, (0,0,0))\n        params = list(zip_longest_fill(pA, op, vec))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, op, vec) in enumerate(params):\n            pA = channel_solid() if pA is None else tensor_to_cv(pA)\n            pA = image_color(pA, op, vec[0], vec[1], vec[2])\n            images.append(cv_to_tensor_full(pA))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass AdjustEdgeNode(CozyImageNode):\n    NAME = \"ADJUST: EDGE (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nEnhanced edge detection.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.FUNCTION: (EnumAdjustEdge._member_names_, {\n                    \"default\": EnumAdjustEdge.CANNY.name,}),\n                Lexicon.RADIUS: (\"INT\", {\n                    \"default\": 1, \"min\": 1}),\n                Lexicon.ITERATION: (\"INT\", {\n                    \"default\": 1, \"min\": 1, \"max\": 1000}),\n                Lexicon.LOHI: (\"VEC2\", {\n                    \"default\": (0, 1), \"mij\": 0, \"maj\": 1, \"step\": 0.01})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustEdge, EnumAdjustEdge.CANNY.name)\n        radius = parse_param(kw, Lexicon.RADIUS, EnumConvertType.INT, 1)\n        count = parse_param(kw, Lexicon.ITERATION, EnumConvertType.INT, 1)\n        lohi = parse_param(kw, Lexicon.LOHI, EnumConvertType.VEC2, (0,1))\n        params = list(zip_longest_fill(pA, op, radius, count, lohi))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, op, radius, count, lohi) in enumerate(params):\n            pA = channel_solid() if pA is None else tensor_to_cv(pA)\n            alpha = image_mask(pA)\n            pA = image_edge(pA, op, radius, count, lohi[0], lohi[1])\n            pA = image_mask_add(pA, alpha)\n            images.append(cv_to_tensor_full(pA))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass AdjustEmbossNode(CozyImageNode):\n    NAME = \"ADJUST: EMBOSS (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nEmboss boss mode.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.HEADING: (\"FLOAT\", {\n                    \"default\": -45, \"min\": -sys.float_info.max, \"max\": sys.float_info.max, \"step\": 0.1}),\n                Lexicon.ELEVATION: (\"FLOAT\", {\n                    \"default\": 45, \"min\": -sys.float_info.max, \"max\": sys.float_info.max, \"step\": 0.1}),\n                Lexicon.DEPTH: (\"FLOAT\", {\n                    \"default\": 10, \"min\": 0, \"max\": sys.float_info.max, \"step\": 0.1,\n                    \"tooltip\": \"Depth perceived from the light angles above\"}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        heading = parse_param(kw, Lexicon.HEADING, EnumConvertType.FLOAT, -45)\n        elevation = parse_param(kw, Lexicon.ELEVATION, EnumConvertType.FLOAT, 45)\n        depth = parse_param(kw, Lexicon.DEPTH, EnumConvertType.FLOAT, 10)\n        params = list(zip_longest_fill(pA, heading, elevation, depth))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, heading, elevation, depth) in enumerate(params):\n            pA = channel_solid() if pA is None else tensor_to_cv(pA)\n            alpha = image_mask(pA)\n            pA = image_emboss(pA, heading, elevation, depth)\n            pA = image_mask_add(pA, alpha)\n            images.append(cv_to_tensor_full(pA))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass AdjustLevelNode(CozyImageNode):\n    NAME = \"ADJUST: LEVELS (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nManually or automatically adjust image levels so that the darkest pixel becomes black\nand the brightest pixel becomes white, enhancing overall contrast.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.LMH: (\"VEC3\", {\n                    \"default\": (0,0.5,1), \"mij\": 0, \"maj\": 1, \"step\": 0.01,\n                    \"label\": [\"LOW\", \"MID\", \"HIGH\"]}),\n                Lexicon.RANGE: (\"VEC2\", {\n                    \"default\": (0, 1), \"mij\": 0, \"maj\": 1, \"step\": 0.01,\n                    \"label\": [\"IN\", \"OUT\"]}),\n                Lexicon.MODE: (EnumAutoLevel._member_names_, {\n                    \"default\": EnumAutoLevel.MANUAL.name,\n                    \"tooltip\": \"Autolevel linearly or with Histogram bin values, per channel\"\n                }),\n                \"clip\": (\"FLOAT\", {\n                    \"default\": 0.5, \"min\": 0, \"max\": 1.0, \"step\": 0.01\n                })\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        LMH = parse_param(kw, Lexicon.LMH, EnumConvertType.VEC3, (0,0.5,1))\n        inout = parse_param(kw, Lexicon.RANGE, EnumConvertType.VEC2, (0,1))\n        mode = parse_param(kw, Lexicon.MODE, EnumAutoLevel, EnumAutoLevel.MANUAL.name)\n        clip = parse_param(kw, \"clip\", EnumConvertType.FLOAT, 0.5, 0, 1)\n        params = list(zip_longest_fill(pA, LMH, inout, mode, clip))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, LMH, inout, mode, clip) in enumerate(params):\n            pA = channel_solid() if pA is None else tensor_to_cv(pA)\n            match mode:\n                case EnumAutoLevel.MANUAL:\n                    low, mid, high = LMH\n                    start, end = inout\n                    pA = image_levels(pA, low, mid, high, start, end)\n\n                case EnumAutoLevel.AUTO:\n                    pA = image_autolevel(pA)\n\n                case EnumAutoLevel.HISTOGRAM:\n                    pA = image_autolevel_histogram(pA, clip)\n\n            images.append(cv_to_tensor_full(pA))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass AdjustLightNode(CozyImageNode):\n    NAME = \"ADJUST: LIGHT (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nTonal adjustments. They can be applied individually or all at the same time in order: brightness, contrast, histogram equalization, exposure, and gamma correction.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.BRIGHTNESS: (\"FLOAT\", {\n                    \"default\": 0.5, \"min\": 0, \"max\": 1, \"step\": 0.01}),\n                Lexicon.CONTRAST: (\"FLOAT\", {\n                    \"default\": 0, \"min\": -1, \"max\": 1, \"step\": 0.01}),\n                Lexicon.EQUALIZE: (\"BOOLEAN\", {\n                    \"default\": False}),\n                Lexicon.EXPOSURE: (\"FLOAT\", {\n                    \"default\": 1, \"min\": -8, \"max\": 8, \"step\": 0.01}),\n                Lexicon.GAMMA: (\"FLOAT\", {\n                    \"default\": 1, \"min\": 0, \"max\": 8, \"step\": 0.01}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        brightness = parse_param(kw, Lexicon.BRIGHTNESS, EnumConvertType.FLOAT, 0.5)\n        contrast = parse_param(kw, Lexicon.CONTRAST, EnumConvertType.FLOAT, 0)\n        equalize = parse_param(kw, Lexicon.EQUALIZE, EnumConvertType.BOOLEAN, False)\n        exposure = parse_param(kw, Lexicon.EXPOSURE, EnumConvertType.FLOAT, 1)\n        gamma = parse_param(kw, Lexicon.GAMMA, EnumConvertType.FLOAT, 1)\n        params = list(zip_longest_fill(pA, brightness, contrast, equalize, exposure, gamma))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, brightness, contrast, equalize, exposure, gamma) in enumerate(params):\n            pA = channel_solid() if pA is None else tensor_to_cv(pA)\n            alpha = image_mask(pA)\n\n            # remap brightness from the [0, 1] widget range to [-1, 1]\n            brightness = 2. * (brightness - 0.5)\n            if brightness != 0:\n                pA = image_brightness(pA, brightness)\n\n            if contrast != 0:\n                pA = image_contrast(pA, contrast)\n\n            if equalize:\n                pA = image_equalize(pA)\n\n            if exposure != 1:\n                pA = image_exposure(pA, exposure)\n\n            if gamma != 1:\n                pA = image_gamma(pA, gamma)\n\n            pA = image_mask_add(pA, alpha)\n            images.append(cv_to_tensor_full(pA))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass AdjustMorphNode(CozyImageNode):\n    NAME = \"ADJUST: MORPHOLOGY (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nOperations based on the image shape.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.FUNCTION: (EnumAdjustMorpho._member_names_, {\n                    \"default\": EnumAdjustMorpho.DILATE.name,}),\n                Lexicon.RADIUS: (\"INT\", {\n                    \"default\": 1, \"min\": 1}),\n                Lexicon.ITERATION: (\"INT\", {\n                    \"default\": 1, \"min\": 1, \"max\": 1000}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustMorpho, EnumAdjustMorpho.DILATE.name)\n        kernel = parse_param(kw, Lexicon.RADIUS, EnumConvertType.INT, 1)\n        count = parse_param(kw, Lexicon.ITERATION, EnumConvertType.INT, 1)\n        params = list(zip_longest_fill(pA, op, kernel, count))\n        images: list[Any] = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, op, kernel, count) in enumerate(params):\n            pA = channel_solid() if pA is None else tensor_to_cv(pA)\n            alpha = image_mask(pA)\n            pA = image_morphology(pA, op, kernel, count)\n            pA = image_mask_add(pA, alpha)\n            images.append(cv_to_tensor_full(pA))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass AdjustPixelNode(CozyImageNode):\n    NAME = \"ADJUST: PIXEL (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nPixel-level transformations. The val parameter controls the intensity or resolution of the effect, depending on the operation.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.FUNCTION: (EnumAdjustPixel._member_names_, {\n                    \"default\": EnumAdjustPixel.PIXELATE.name,}),\n                Lexicon.VALUE: (\"FLOAT\", {\n                    \"default\": 0, \"min\": 0, \"max\": 1, \"step\": 0.01})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        op = parse_param(kw, Lexicon.FUNCTION, EnumAdjustPixel, EnumAdjustPixel.PIXELATE.name)\n        val = parse_param(kw, Lexicon.VALUE, EnumConvertType.FLOAT, 0)\n        params = list(zip_longest_fill(pA, op, val))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, op, val) in enumerate(params):\n            pA = channel_solid() if pA is None else tensor_to_cv(pA, chan=4)\n            alpha = image_mask(pA)\n\n            match op:\n                case EnumAdjustPixel.PIXELATE:\n                    pA = 
image_pixelate(pA, val / 2.)\n\n                case EnumAdjustPixel.PIXELSCALE:\n                    pA = image_pixelscale(pA, val)\n\n                case EnumAdjustPixel.QUANTIZE:\n                    pA = image_quantize(pA, val)\n\n                case EnumAdjustPixel.POSTERIZE:\n                    pA = image_posterize(pA, val)\n\n            pA = image_mask_add(pA, alpha)\n            images.append(cv_to_tensor_full(pA))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass AdjustSharpenNode(CozyImageNode):\n    NAME = \"ADJUST: SHARPEN (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nSharpen the pixels of an image.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.AMOUNT: (\"FLOAT\", {\n                    \"default\": 0, \"min\": 0, \"max\": 1, \"step\": 0.01}),\n                Lexicon.THRESHOLD: (\"FLOAT\", {\n                    \"default\": 0, \"min\": 0, \"max\": 1, \"step\": 0.01})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        amount = parse_param(kw, Lexicon.AMOUNT, EnumConvertType.FLOAT, 0)\n        threshold = parse_param(kw, Lexicon.THRESHOLD, EnumConvertType.FLOAT, 0)\n        params = list(zip_longest_fill(pA, amount, threshold))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, amount, threshold) in enumerate(params):\n            pA = channel_solid() if pA is None else tensor_to_cv(pA)\n            pA = image_sharpen(pA, amount / 2., threshold=threshold / 25.5)\n            images.append(cv_to_tensor_full(pA))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass AdjustSharpenNodev3(CozyImageNodev3):\n    
@classmethod\n    def define_schema(cls, **kwarg) -> io.Schema:\n        schema = super().define_schema(**kwarg)\n        schema.display_name = \"ADJUST: SHARPEN (JOV)\"\n        schema.category = JOV_CATEGORY\n        schema.description = \"Sharpen the pixels of an image.\"\n\n        schema.inputs.extend([\n            io.MultiType.Input(\n                id=Lexicon.IMAGE[0],\n                types=COZY_TYPE_IMAGEv3,\n                display_name=Lexicon.IMAGE[0],\n                optional=True,\n                tooltip=Lexicon.IMAGE[1]\n            ),\n            io.Float.Input(\n                id=Lexicon.AMOUNT[0],\n                display_name=Lexicon.AMOUNT[0],\n                optional=True,\n                default=0,\n                min=0,\n                max=1,\n                step=0.01,\n                tooltip=Lexicon.AMOUNT[1]\n            ),\n            io.Float.Input(\n                id=Lexicon.THRESHOLD[0],\n                display_name=Lexicon.THRESHOLD[0],\n                optional=True,\n                default=0,\n                min=0,\n                max=1,\n                step=0.01,\n                tooltip=Lexicon.THRESHOLD[1]\n            )\n        ])\n        return schema\n\n    @classmethod\n    def execute(cls, *arg, **kw) -> io.NodeOutput:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        amount = parse_param(kw, Lexicon.AMOUNT, EnumConvertType.FLOAT, 0)\n        threshold = parse_param(kw, Lexicon.THRESHOLD, EnumConvertType.FLOAT, 0)\n        params = list(zip_longest_fill(pA, amount, threshold))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, amount, threshold) in enumerate(params):\n            pA = channel_solid() if pA is None else tensor_to_cv(pA)\n            pA = image_sharpen(pA, amount / 2., threshold=threshold / 25.5)\n            images.append(cv_to_tensor_full(pA))\n            pbar.update_absolute(idx)\n        return io.NodeOutput(image_stack(images))\n\nclass AdjustExtension(ComfyExtension):\n    @override\n    async def get_node_list(self) -> list[type[io.ComfyNode]]:\n        return [\n            AdjustSharpenNodev3\n        ]\n\nasync def comfy_entrypoint() -> AdjustExtension:\n    return AdjustExtension()"
  },
  {
    "path": "core/anim.py",
    "content": "\"\"\" Jovimetrix - Animation \"\"\"\n\nimport sys\n\nimport numpy as np\n\nfrom comfy.utils import ProgressBar\n\nfrom cozy_comfyui import \\\n    InputType, EnumConvertType, \\\n    deep_merge, parse_param, zip_longest_fill\n\nfrom cozy_comfyui.lexicon import \\\n    Lexicon\n\nfrom cozy_comfyui.node import \\\n    CozyBaseNode\n\nfrom cozy_comfyui.maths.ease import \\\n    EnumEase, \\\n    ease_op\n\nfrom cozy_comfyui.maths.norm import \\\n    EnumNormalize, \\\n    norm_op\n\nfrom cozy_comfyui.maths.wave import \\\n    EnumWave, \\\n    wave_op\n\nfrom cozy_comfyui.maths.series import \\\n    seriesLinear\n\n# ==============================================================================\n# === GLOBAL ===\n# ==============================================================================\n\nJOV_CATEGORY = \"ANIMATION\"\n\n# ==============================================================================\n# === CLASS ===\n# ==============================================================================\n\nclass ResultObject(object):\n    def __init__(self, *arg, **kw) -> None:\n        self.frame = []\n        self.lin = []\n        self.fixed = []\n        self.trigger = []\n        self.batch = []\n\nclass TickNode(CozyBaseNode):\n    NAME = \"TICK (JOV) ⏱\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (\"FLOAT\", \"FLOAT\", \"FLOAT\", \"FLOAT\", \"FLOAT\")\n    RETURN_NAMES = (\"VALUE\", \"LINEAR\", \"EASED\", \"SCALAR_LIN\", \"SCALAR_EASE\")\n    OUTPUT_IS_LIST = (True, True, True, True, True,)\n    OUTPUT_TOOLTIPS = (\n        \"List of values\",\n        \"Normalized values\",\n        \"Eased values\",\n        \"Scalar normalized values\",\n        \"Scalar eased values\",\n    )\n    DESCRIPTION = \"\"\"\nValue generator with normalized values based on a time interval.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n   
             # forces a MOD on CYCLE\n                Lexicon.START: (\"FLOAT\", {\n                    \"default\": 0, \"min\": -sys.maxsize, \"max\": sys.maxsize\n                }),\n                # interval between frames\n                Lexicon.STEP: (\"FLOAT\", {\n                    \"default\": 0, \"min\": -sys.float_info.max, \"max\": sys.float_info.max, \"precision\": 3,\n                    \"tooltip\": \"Amount to add to each frame per tick\"\n                }),\n                # how many frames to dump....\n                Lexicon.COUNT: (\"INT\", {\n                    \"default\": 1, \"min\": 1, \"max\": 1500\n                }),\n                Lexicon.LOOP: (\"INT\", {\n                    \"default\": 0, \"min\": 0, \"max\": sys.maxsize,\n                    \"tooltip\": \"What value before looping starts. 0 means linear playback (no loop point)\"\n                }),\n                Lexicon.PINGPONG: (\"BOOLEAN\", {\n                    \"default\": False\n                }),\n                Lexicon.EASE: (EnumEase._member_names_, {\n                    \"default\": EnumEase.LINEAR.name}),\n                Lexicon.NORMALIZE: (EnumNormalize._member_names_, {\n                    \"default\": EnumNormalize.MINMAX2.name}),\n                Lexicon.SCALAR: (\"FLOAT\", {\n                    \"default\": 1, \"min\": 0, \"max\": sys.float_info.max\n                })\n\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[float, ...]:\n        \"\"\"\n        Generates a series of numbers with various options including:\n        - Custom start value (supporting floating point and negative numbers)\n        - Custom step value (supporting floating point and negative numbers)\n        - Fixed number of frames\n        - Custom loop point (series restarts after reaching this many steps)\n        - Ping-pong option (reverses direction at end points)\n        - Support for easing functions\n        - 
Normalized output 0..1, -1..1, L2 or ZScore\n        \"\"\"\n\n        start = parse_param(kw, Lexicon.START, EnumConvertType.FLOAT, 0)[0]\n        step = parse_param(kw, Lexicon.STEP, EnumConvertType.FLOAT, 0)[0]\n        count = parse_param(kw, Lexicon.COUNT, EnumConvertType.INT, 1, 1, 1500)[0]\n        loop = parse_param(kw, Lexicon.LOOP, EnumConvertType.INT, 0, 0)[0]\n        pingpong = parse_param(kw, Lexicon.PINGPONG, EnumConvertType.BOOLEAN, False)[0]\n        ease = parse_param(kw, Lexicon.EASE, EnumEase, EnumEase.LINEAR.name)[0]\n        normalize = parse_param(kw, Lexicon.NORMALIZE, EnumNormalize, EnumNormalize.MINMAX2.name)[0]\n        scalar = parse_param(kw, Lexicon.SCALAR, EnumConvertType.FLOAT, 1, 0)[0]\n\n        if step == 0:\n            step = 1\n\n        cycle = seriesLinear(start, step, count, loop, pingpong)\n        linear = norm_op(normalize, np.array(cycle))\n        eased = ease_op(ease, linear, len(linear))\n        scalar_linear = linear * scalar\n        scalar_eased = eased * scalar\n\n        return (\n            cycle,\n            linear.tolist(),\n            eased.tolist(),\n            scalar_linear.tolist(),\n            scalar_eased.tolist(),\n        )\n\nclass WaveGeneratorNode(CozyBaseNode):\n    NAME = \"WAVE GEN (JOV) 🌊\"\n    NAME_PRETTY = \"WAVE GEN (JOV) 🌊\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (\"FLOAT\", \"INT\", )\n    RETURN_NAMES = (\"FLOAT\", \"INT\", )\n    DESCRIPTION = \"\"\"\nProduce waveforms like sine, square, or sawtooth with adjustable frequency, amplitude, phase, and offset. It's handy for creating oscillating patterns or controlling animation dynamics. 
This node emits both continuous floating-point values and integer representations of the generated waves.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.WAVE: (EnumWave._member_names_, {\n                    \"default\": EnumWave.SIN.name}),\n                Lexicon.FREQ: (\"FLOAT\", {\n                    \"default\": 1, \"min\": 0, \"max\": sys.float_info.max, \"step\": 0.01,}),\n                Lexicon.AMP: (\"FLOAT\", {\n                    \"default\": 1, \"min\": 0, \"max\": sys.float_info.max, \"step\": 0.01,}),\n                Lexicon.PHASE: (\"FLOAT\", {\n                    \"default\": 0, \"min\": 0, \"max\": 1, \"step\": 0.01}),\n                Lexicon.OFFSET: (\"FLOAT\", {\n                    \"default\": 0, \"min\": 0, \"max\": 1, \"step\": 0.001}),\n                Lexicon.TIME: (\"FLOAT\", {\n                    \"default\": 0, \"min\": 0, \"max\": sys.float_info.max, \"step\": 0.0001}),\n                Lexicon.INVERT: (\"BOOLEAN\", {\n                    \"default\": False}),\n                Lexicon.ABSOLUTE: (\"BOOLEAN\", {\n                    \"default\": False,}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[float, int]:\n        op = parse_param(kw, Lexicon.WAVE, EnumWave, EnumWave.SIN.name)\n        freq = parse_param(kw, Lexicon.FREQ, EnumConvertType.FLOAT, 1, 0)\n        amp = parse_param(kw, Lexicon.AMP, EnumConvertType.FLOAT, 1, 0)\n        phase = parse_param(kw, Lexicon.PHASE, EnumConvertType.FLOAT, 0, 0)\n        shift = parse_param(kw, Lexicon.OFFSET, EnumConvertType.FLOAT, 0, 0)\n        delta_time = parse_param(kw, Lexicon.TIME, EnumConvertType.FLOAT, 0, 0)\n        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)\n        absolute = parse_param(kw, Lexicon.ABSOLUTE, EnumConvertType.BOOLEAN, False)\n        results = 
[]\n        params = list(zip_longest_fill(op, freq, amp, phase, shift, delta_time, invert, absolute))\n        pbar = ProgressBar(len(params))\n        for idx, (op, freq, amp, phase, shift, delta_time, invert, absolute) in enumerate(params):\n            # freq = 1. / freq\n            val = wave_op(op, phase, freq, amp, shift, delta_time)\n            if invert:\n                val = -val\n            if absolute:\n                val = np.abs(val)\n            val = max(-sys.float_info.max, min(val, sys.float_info.max))\n            results.append([val, int(val)])\n            pbar.update_absolute(idx)\n        return *list(zip(*results)),\n\n'''\nclass TickOldNode(CozyBaseNode):\n    NAME = \"TICK OLD (JOV) ⏱\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (\"INT\", \"FLOAT\", \"FLOAT\", COZY_TYPE_ANY, COZY_TYPE_ANY,)\n    RETURN_NAMES = (\"VAL\", \"LINEAR\", \"FPS\", \"TRIGGER\", \"BATCH\",)\n    OUTPUT_IS_LIST = (True, False, False, False, False,)\n    OUTPUT_TOOLTIPS = (\n        \"Current value for the configured tick as ComfyUI List\",\n        \"Normalized tick value (0..1) based on BPM and Loop\",\n        \"Current 'frame' in the tick based on FPS setting\",\n        \"Based on the BPM settings, on beat hit, output the input at '⚡'\",\n        \"Current batch of values for the configured tick as standard list which works in other Jovimetrix nodes\",\n    )\n    DESCRIPTION = \"\"\"\nA timer and frame counter, emitting pulses or signals based on time intervals. It allows precise synchronization and control over animation sequences, with options to adjust FPS, BPM, and loop points. 
This node is useful for generating time-based events or driving animations with rhythmic precision.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                # data to pass on a pulse of the loop\n                Lexicon.TRIGGER: (COZY_TYPE_ANY, {\n                    \"default\": None,\n                    \"tooltip\": \"Output to send when beat (BPM setting) is hit\"\n                }),\n                # forces a MOD on CYCLE\n                Lexicon.START: (\"INT\", {\n                    \"default\": 0, \"min\": 0, \"max\": sys.maxsize,\n                }),\n                Lexicon.LOOP: (\"INT\", {\n                    \"default\": 0, \"min\": 0, \"max\": sys.maxsize,\n                    \"tooltip\": \"Number of frames before looping starts. 0 means continuous playback (no loop point)\"\n                }),\n                Lexicon.FPS: (\"INT\", {\n                    \"default\": 24, \"min\": 1\n                }),\n                Lexicon.BPM: (\"INT\", {\n                    \"default\": 120, \"min\": 1, \"max\": 60000,\n                    \"tooltip\": \"BPM trigger rate to send the input. If input is empty, TRUE is sent on trigger\"\n                }),\n                Lexicon.NOTE: (\"INT\", {\n                    \"default\": 4, \"min\": 1, \"max\": 256,\n                    \"tooltip\": \"Number of beats per measure. 
Quarter note is 4, Eighth is 8, 16 is 16, etc.\"}),\n                # how many frames to dump....\n                Lexicon.BATCH: (\"INT\", {\n                    \"default\": 1, \"min\": 1, \"max\": 32767,\n                    \"tooltip\": \"Number of frames wanted\"\n                }),\n                Lexicon.STEP: (\"INT\", {\n                    \"default\": 0, \"min\": 0, \"max\": sys.maxsize\n                }),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, ident, **kw) -> tuple[int, float, float, Any]:\n        passthru = parse_param(kw, Lexicon.TRIGGER, EnumConvertType.ANY, None)[0]\n        stride = parse_param(kw, Lexicon.STEP, EnumConvertType.INT, 0)[0]\n        loop = parse_param(kw, Lexicon.LOOP, EnumConvertType.INT, 0)[0]\n        start = parse_param(kw, Lexicon.START, EnumConvertType.INT, self.__frame)[0]\n        if loop != 0:\n            self.__frame %= loop\n        fps = parse_param(kw, Lexicon.FPS, EnumConvertType.INT, 24, 1)[0]\n        bpm = parse_param(kw, Lexicon.BPM, EnumConvertType.INT, 120, 1)[0]\n        divisor = parse_param(kw, Lexicon.NOTE, EnumConvertType.INT, 4, 1)[0]\n        beat = 60. / max(1., bpm) / divisor\n        batch = parse_param(kw, Lexicon.BATCH, EnumConvertType.INT, 1, 1)[0]\n        step_fps = 1. 
/ max(1., float(fps))\n\n        trigger = None\n        results = ResultObject()\n        pbar = ProgressBar(batch)\n        step = stride if stride != 0 else max(1, loop / batch)\n        for idx in range(batch):\n            trigger = False\n            lin = start if loop == 0 else start / loop\n            fixed_step = math.fmod(start * step_fps, fps)\n            if (math.fmod(fixed_step, beat) == 0):\n                trigger = [passthru]\n            if loop != 0:\n                start %= loop\n            results.frame.append(start)\n            results.lin.append(float(lin))\n            results.fixed.append(float(fixed_step))\n            results.trigger.append(trigger)\n            results.batch.append(start)\n            start += step\n            pbar.update_absolute(idx)\n\n        return (results.frame, results.lin, results.fixed, results.trigger, results.batch,)\n\n'''"
  },
  {
    "path": "core/calc.py",
    "content": "\"\"\" Jovimetrix - Calculation \"\"\"\n\nimport sys\nimport math\nimport struct\nfrom enum import Enum\nfrom typing import Any\nfrom collections import Counter\n\nimport torch\nfrom scipy.special import gamma\n\nfrom comfy.utils import ProgressBar\n\nfrom cozy_comfyui import \\\n    logger, \\\n    TensorType, InputType, EnumConvertType, \\\n    deep_merge, parse_dynamic, parse_param, parse_value, zip_longest_fill\n\nfrom cozy_comfyui.lexicon import \\\n    Lexicon\n\nfrom cozy_comfyui.node import \\\n    COZY_TYPE_ANY, COZY_TYPE_NUMERICAL, COZY_TYPE_FULL, \\\n    CozyBaseNode\n\n# ==============================================================================\n# === GLOBAL ===\n# ==============================================================================\n\nJOV_CATEGORY = \"CALC\"\n\n# ==============================================================================\n# === ENUMERATION ===\n# ==============================================================================\n\nclass EnumBinaryOperation(Enum):\n    ADD = 0\n    SUBTRACT = 1\n    MULTIPLY   = 2\n    DIVIDE = 3\n    DIVIDE_FLOOR = 4\n    MODULUS = 5\n    POWER = 6\n    # TERNARY WITHOUT THE NEED\n    MAXIMUM = 20\n    MINIMUM = 21\n    # VECTOR\n    DOT_PRODUCT = 30\n    CROSS_PRODUCT = 31\n    # MATRIX\n\n    # BITS\n    # BIT_NOT = 39\n    BIT_AND = 60\n    BIT_NAND = 61\n    BIT_OR = 62\n    BIT_NOR = 63\n    BIT_XOR = 64\n    BIT_XNOR = 65\n    BIT_LSHIFT = 66\n    BIT_RSHIFT = 67\n    # GROUP\n    UNION = 80\n    INTERSECTION = 81\n    DIFFERENCE = 82\n    # WEIRD ONES\n    BASE = 90\n\nclass EnumComparison(Enum):\n    EQUAL = 0\n    NOT_EQUAL = 1\n    LESS_THAN = 2\n    LESS_THAN_EQUAL = 3\n    GREATER_THAN = 4\n    GREATER_THAN_EQUAL = 5\n    # LOGIC\n    # NOT = 10\n    AND = 20\n    NAND = 21\n    OR = 22\n    NOR = 23\n    XOR = 24\n    XNOR = 25\n    # TYPE\n    IS = 80\n    IS_NOT = 81\n    # GROUPS\n    IN = 82\n    NOT_IN = 83\n\nclass EnumConvertString(Enum):\n    SPLIT = 
10\n    JOIN = 30\n    FIND = 40\n    REPLACE = 50\n    SLICE = 70  # start - end - step  = -1, -1, 1\n\nclass EnumSwizzle(Enum):\n    A_X = 0\n    A_Y = 10\n    A_Z = 20\n    A_W = 30\n    B_X = 9\n    B_Y = 11\n    B_Z = 21\n    B_W = 31\n    CONSTANT = 40\n\nclass EnumUnaryOperation(Enum):\n    ABS = 0\n    FLOOR = 1\n    CEIL = 2\n    SQRT = 3\n    SQUARE = 4\n    LOG = 5\n    LOG10 = 6\n    SIN = 7\n    COS = 8\n    TAN = 9\n    NEGATE = 10\n    RECIPROCAL = 12\n    FACTORIAL = 14\n    EXP = 16\n    # COMPOUND\n    MINIMUM = 20\n    MAXIMUM = 21\n    MEAN = 22\n    MEDIAN = 24\n    MODE = 26\n    MAGNITUDE = 30\n    NORMALIZE = 32\n    # LOGICAL\n    NOT = 40\n    # BITWISE\n    BIT_NOT = 45\n    COS_H = 60\n    SIN_H = 62\n    TAN_H = 64\n    RADIANS = 70\n    DEGREES = 72\n    GAMMA = 80\n    # IS_EVEN\n    IS_EVEN = 90\n    IS_ODD = 91\n\n# Dictionary to map each operation to its corresponding function\nOP_UNARY = {\n    EnumUnaryOperation.ABS: lambda x: math.fabs(x),\n    EnumUnaryOperation.FLOOR: lambda x: math.floor(x),\n    EnumUnaryOperation.CEIL: lambda x: math.ceil(x),\n    EnumUnaryOperation.SQRT: lambda x: math.sqrt(x),\n    EnumUnaryOperation.SQUARE: lambda x: math.pow(x, 2),\n    EnumUnaryOperation.LOG: lambda x: math.log(x) if x != 0 else -math.inf,\n    EnumUnaryOperation.LOG10: lambda x: math.log10(x) if x != 0 else -math.inf,\n    EnumUnaryOperation.SIN: lambda x: math.sin(x),\n    EnumUnaryOperation.COS: lambda x: math.cos(x),\n    EnumUnaryOperation.TAN: lambda x: math.tan(x),\n    EnumUnaryOperation.NEGATE: lambda x: -x,\n    EnumUnaryOperation.RECIPROCAL: lambda x: 1 / x if x != 0 else 0,\n    EnumUnaryOperation.FACTORIAL: lambda x: math.factorial(abs(int(x))),\n    EnumUnaryOperation.EXP: lambda x: math.exp(x),\n    EnumUnaryOperation.NOT: lambda x: not x,\n    EnumUnaryOperation.BIT_NOT: lambda x: ~int(x),\n    EnumUnaryOperation.IS_EVEN: lambda x: x % 2 == 0,\n    EnumUnaryOperation.IS_ODD: lambda x: x % 2 == 1,\n    
EnumUnaryOperation.COS_H: lambda x: math.cosh(x),\n    EnumUnaryOperation.SIN_H: lambda x: math.sinh(x),\n    EnumUnaryOperation.TAN_H: lambda x: math.tanh(x),\n    EnumUnaryOperation.RADIANS: lambda x: math.radians(x),\n    EnumUnaryOperation.DEGREES: lambda x: math.degrees(x),\n    EnumUnaryOperation.GAMMA: lambda x: gamma(x) if x > 0 else 0,\n}\n\n# ==============================================================================\n# === SUPPORT ===\n# ==============================================================================\n\ndef to_bits(value: Any):\n    if isinstance(value, int):\n        return bin(value)[2:]\n    elif isinstance(value, float):\n        packed = struct.pack('>d', value)\n        return ''.join(f'{byte:08b}' for byte in packed)\n    elif isinstance(value, str):\n        return ''.join(f'{ord(c):08b}' for c in value)\n    else:\n        raise TypeError(f\"Unsupported type: {type(value)}\")\n\ndef vector_swap(pA: Any, pB: Any, swap_x: EnumSwizzle, swap_y:EnumSwizzle,\n                swap_z:EnumSwizzle, swap_w:EnumSwizzle, default:list[float]) -> list[float]:\n    \"\"\"Swap out a vector's values with another vector's values, or a constant fill.\"\"\"\n\n    def parse(target, targetB, swap, val) -> float:\n        if swap == EnumSwizzle.CONSTANT:\n            return val\n        if swap in [EnumSwizzle.B_X, EnumSwizzle.B_Y, EnumSwizzle.B_Z, EnumSwizzle.B_W]:\n            target = targetB\n        swap = int(swap.value / 10)\n        return target[swap]\n\n    while len(pA) < 4:\n        pA.append(0)\n\n    while len(pB) < 4:\n        pB.append(0)\n\n    while len(default) < 4:\n        default.append(0)\n\n    return [\n        parse(pA, pB, swap_x, default[0]),\n        parse(pA, pB, swap_y, default[1]),\n        parse(pA, pB, swap_z, default[2]),\n        parse(pA, pB, swap_w, default[3])\n    ]\n\n# ==============================================================================\n# === CLASS ===\n# 
==============================================================================\n\nclass BitSplitNode(CozyBaseNode):\n    NAME = \"BIT SPLIT (JOV) ⭄\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (COZY_TYPE_ANY, \"BOOLEAN\",)\n    RETURN_NAMES = (\"BIT\", \"BOOL\",)\n    OUTPUT_IS_LIST = (True, True,)\n    OUTPUT_TOOLTIPS = (\n        \"Bits as Numerical output (0 or 1)\",\n        \"Bits as Boolean output (True or False)\"\n    )\n    DESCRIPTION = \"\"\"\nSplit an input into separate bits.\nBOOL, INT and FLOAT use their numbers,\nSTRING is treated as a list of CHARACTER.\nIMAGE and MASK will return a TRUE bit for any non-black pixel, as a stream of bits for all pixels in the image.\n\"\"\"\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.VALUE: (COZY_TYPE_NUMERICAL, {\n                    \"default\": None,\n                    \"tooltip\": \"Value to convert into bits\"}),\n                Lexicon.BITS: (\"INT\", {\n                    \"default\": 8, \"min\": 0, \"max\": 64,\n                    \"tooltip\": \"Number of output bits requested\"})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[list[int], list[bool]]:\n        value = parse_param(kw, Lexicon.VALUE, EnumConvertType.LIST, 0)\n        bits = parse_param(kw, Lexicon.BITS, EnumConvertType.INT, 8)\n        params = list(zip_longest_fill(value, bits))\n        pbar = ProgressBar(len(params))\n        results = []\n        for idx, (value, bits) in enumerate(params):\n            bit_repr = to_bits(value[0])[::-1]\n            if bits > 0:\n                if len(bit_repr) > bits:\n                    bit_repr = bit_repr[0:bits]\n                else:\n                    bit_repr = bit_repr.ljust(bits, '0')\n\n            int_bits = []\n            bool_bits = []\n            for b in bit_repr:\n                bit = int(b)\n   
             int_bits.append(bit)\n                bool_bits.append(bool(bit))\n            results.append([int_bits, bool_bits])\n            pbar.update_absolute(idx)\n        return *list(zip(*results)),\n\nclass ComparisonNode(CozyBaseNode):\n    NAME = \"COMPARISON (JOV) 🕵🏽\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (COZY_TYPE_ANY, COZY_TYPE_ANY,)\n    RETURN_NAMES = (\"OUT\", \"VAL\",)\n    OUTPUT_IS_LIST = (True, True,)\n    OUTPUT_TOOLTIPS = (\n        \"Outputs the input at PASS or FAIL depending on the evaluation\",\n        \"The comparison result value\"\n    )\n    DESCRIPTION = \"\"\"\nEvaluates two inputs (A and B) with a specified comparison operator and optional values for successful and failed comparisons. The node performs the specified operation element-wise between corresponding elements of A and B. If the comparison is successful for all elements, it returns the success value; otherwise, it returns the failure value. The node supports various comparison operators such as EQUAL, GREATER_THAN, LESS_THAN, AND, OR, IS, IN, etc.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IN_A: (COZY_TYPE_NUMERICAL, {\n                    \"default\": 0,\n                    \"tooltip\": \"First value to compare\"}),\n                Lexicon.IN_B: (COZY_TYPE_NUMERICAL, {\n                    \"default\": 0,\n                    \"tooltip\": \"Second value to compare\"}),\n                Lexicon.SUCCESS: (COZY_TYPE_ANY, {\n                    \"default\": 0,\n                    \"tooltip\": \"Sent to OUT on a successful condition\"}),\n                Lexicon.FAIL: (COZY_TYPE_ANY, {\n                    \"default\": 0,\n                    \"tooltip\": \"Sent to OUT on a failure condition\"}),\n                Lexicon.FUNCTION: (EnumComparison._member_names_, {\n                    \"default\": 
EnumComparison.EQUAL.name,\n                    \"tooltip\": \"Comparison function. Sends the data in PASS on successful comparison to OUT, otherwise sends the value in FAIL\"}),\n                Lexicon.SWAP: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"Reverse the A and B inputs\"}),\n                Lexicon.INVERT: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"Reverse the PASS and FAIL inputs\"}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[Any, Any]:\n        in_a = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, 0)\n        in_b = parse_param(kw, Lexicon.IN_B, EnumConvertType.ANY, 0)\n        size = max(len(in_a), len(in_b))\n        good = parse_param(kw, Lexicon.SUCCESS, EnumConvertType.ANY, 0)[:size]\n        fail = parse_param(kw, Lexicon.FAIL, EnumConvertType.ANY, 0)[:size]\n        op = parse_param(kw, Lexicon.FUNCTION, EnumComparison, EnumComparison.EQUAL.name)[:size]\n        swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False)[:size]\n        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)[:size]\n        params = list(zip_longest_fill(in_a, in_b, good, fail, op, swap, invert))\n        pbar = ProgressBar(len(params))\n        vals = []\n        results = []\n        for idx, (A, B, good, fail, op, swap, invert) in enumerate(params):\n            if not isinstance(A, (tuple, list,)):\n                A = [A]\n            if not isinstance(B, (tuple, list,)):\n                B = [B]\n\n            size = min(4, max(len(A), len(B))) - 1\n            typ = [EnumConvertType.FLOAT, EnumConvertType.VEC2, EnumConvertType.VEC3, EnumConvertType.VEC4][size]\n\n            val_a = parse_value(A, typ, [A[-1]] * size)\n            if not isinstance(val_a, (list,)):\n                val_a = [val_a]\n\n            val_b = parse_value(B, typ, [B[-1]] * size)\n            if not 
isinstance(val_b, (list,)):\n                val_b = [val_b]\n\n            if swap:\n                val_a, val_b = val_b, val_a\n\n            match op:\n                case EnumComparison.EQUAL:\n                    val = [a == b for a, b in zip(val_a, val_b)]\n                case EnumComparison.GREATER_THAN:\n                    val = [a > b for a, b in zip(val_a, val_b)]\n                case EnumComparison.GREATER_THAN_EQUAL:\n                    val = [a >= b for a, b in zip(val_a, val_b)]\n                case EnumComparison.LESS_THAN:\n                    val = [a < b for a, b in zip(val_a, val_b)]\n                case EnumComparison.LESS_THAN_EQUAL:\n                    val = [a <= b for a, b in zip(val_a, val_b)]\n                case EnumComparison.NOT_EQUAL:\n                    val = [a != b for a, b in zip(val_a, val_b)]\n                # LOGIC\n                # case EnumBinaryOperation.NOT = 10\n                case EnumComparison.AND:\n                    val = [a and b for a, b in zip(val_a, val_b)]\n                case EnumComparison.NAND:\n                    val = [not(a and b) for a, b in zip(val_a, val_b)]\n                case EnumComparison.OR:\n                    val = [a or b for a, b in zip(val_a, val_b)]\n                case EnumComparison.NOR:\n                    val = [not(a or b) for a, b in zip(val_a, val_b)]\n                case EnumComparison.XOR:\n                    val = [(a and not b) or (not a and b) for a, b in zip(val_a, val_b)]\n                case EnumComparison.XNOR:\n                    val = [not((a and not b) or (not a and b)) for a, b in zip(val_a, val_b)]\n                # IDENTITY\n                case EnumComparison.IS:\n                    val = [a is b for a, b in zip(val_a, val_b)]\n                case EnumComparison.IS_NOT:\n                    val = [a is not b for a, b in zip(val_a, val_b)]\n                # GROUP\n                case EnumComparison.IN:\n                    val = [a in val_b 
for a in val_a]\n                case EnumComparison.NOT_IN:\n                    val = [a not in val_b for a in val_a]\n\n            output = all(bool(v) for v in val)\n            if invert:\n                output = not output\n\n            output = good if output else fail\n            results.append([output, val])\n            pbar.update_absolute(idx)\n\n        outs, vals = zip(*results)\n        if isinstance(outs[0], (TensorType,)):\n            if len(outs) > 1:\n                outs = torch.stack(outs)\n            else:\n                outs = outs[0].unsqueeze(0)\n            outs = [outs]\n        else:\n            outs = list(outs)\n        return outs, *vals,\n\nclass LerpNode(CozyBaseNode):\n    NAME = \"LERP (JOV) 🔰\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (COZY_TYPE_ANY,)\n    RETURN_NAMES = (\"❔\",)\n    OUTPUT_IS_LIST = (True,)\n    OUTPUT_TOOLTIPS = (\n        \"Output can vary depending on the type chosen in the TYPE parameter\",\n    )\n    DESCRIPTION = \"\"\"\nCalculate linear interpolation between two values or vectors based on a blending factor (alpha).\n\nThe node accepts optional start (IN_A) and end (IN_B) points, a blending factor (FLOAT), and various input types for both start and end points, such as single values (X, Y), 2-value vectors (IN_A2, IN_B2), 3-value vectors (IN_A3, IN_B3), and 4-value vectors (IN_A4, IN_B4).\n\nAdditionally, you can specify the easing function (EASE) and the desired output type (TYPE). 
It supports various easing functions for smoother transitions.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IN_A: (COZY_TYPE_NUMERICAL, {\n                    \"tooltip\": \"Custom Start Point\"}),\n                Lexicon.IN_B: (COZY_TYPE_NUMERICAL, {\n                    \"tooltip\": \"Custom End Point\"}),\n                Lexicon.ALPHA: (\"VEC4\", {\n                    \"default\": (0.5, 0.5, 0.5, 0.5), \"mij\": 0, \"maj\": 1,}),\n                Lexicon.TYPE: (EnumConvertType._member_names_[:6], {\n                    \"default\": EnumConvertType.FLOAT.name,\n                    \"tooltip\": \"Output type desired from resultant operation\"}),\n                Lexicon.DEFAULT_A: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 0)}),\n                Lexicon.DEFAULT_B: (\"VEC4\", {\n                    \"default\": (1,1,1,1)})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[Any, Any]:\n        A = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, 0)\n        B = parse_param(kw, Lexicon.IN_B, EnumConvertType.ANY, 0)\n        alpha = parse_param(kw, Lexicon.ALPHA,EnumConvertType.VEC4, (0.5,0.5,0.5,0.5))\n        typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.FLOAT.name)\n        a_xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0))\n        b_xyzw = parse_param(kw, Lexicon.DEFAULT_B, EnumConvertType.VEC4, (1, 1, 1, 1))\n        values = []\n        params = list(zip_longest_fill(A, B, alpha, typ, a_xyzw, b_xyzw))\n        pbar = ProgressBar(len(params))\n        for idx, (A, B, alpha, typ, a_xyzw, b_xyzw) in enumerate(params):\n            size = int(typ.value / 10)\n\n            if A is None:\n                A = a_xyzw[:size]\n            if B is None:\n                B = b_xyzw[:size]\n\n            val_a = 
parse_value(A, EnumConvertType.VEC4, a_xyzw)\n            val_b = parse_value(B, EnumConvertType.VEC4, b_xyzw)\n            alpha = parse_value(alpha, EnumConvertType.VEC4, alpha)\n\n            if size > 1:\n                val_a = val_a[:size + 1]\n                val_b = val_b[:size + 1]\n            else:\n                val_a = [val_a[0]]\n                val_b = [val_b[0]]\n\n            val = [val_b[x] * alpha[x] + val_a[x] * (1 - alpha[x]) for x in range(size)]\n            convert = int if \"INT\" in typ.name else float\n            ret = []\n            for v in val:\n                try:\n                    ret.append(convert(v))\n                except OverflowError:\n                    ret.append(0)\n                except Exception as e:\n                    ret.append(0)\n            val = ret[0] if size == 1 else ret[:size+1]\n            values.append(val)\n            pbar.update_absolute(idx)\n        return [values]\n\nclass OPUnaryNode(CozyBaseNode):\n    NAME = \"OP UNARY (JOV) 🎲\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (COZY_TYPE_ANY,)\n    RETURN_NAMES = (\"❔\",)\n    OUTPUT_IS_LIST = (True,)\n    OUTPUT_TOOLTIPS = (\n        \"Output type will match the input type\",\n    )\n    DESCRIPTION = \"\"\"\nPerform single function operations like absolute value, mean, median, mode, magnitude, normalization, maximum, or minimum on input values.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        typ = EnumConvertType._member_names_[:6]\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IN_A: (COZY_TYPE_FULL, {\n                    \"default\": 0}),\n                Lexicon.FUNCTION: (EnumUnaryOperation._member_names_, {\n                    \"default\": EnumUnaryOperation.ABS.name}),\n                Lexicon.TYPE: (typ, {\n                    \"default\": EnumConvertType.FLOAT.name,}),\n                Lexicon.DEFAULT_A: (\"VEC4\", {\n           
         \"default\": (0,0,0,0), \"mij\": -sys.float_info.max, \"maj\": sys.float_info.max,\n                    \"precision\": 2,\n                    \"label\": [\"X\", \"Y\", \"Z\", \"W\"]})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[bool]:\n        results = []\n        A = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, 0)\n        op = parse_param(kw, Lexicon.FUNCTION, EnumUnaryOperation, EnumUnaryOperation.ABS.name)\n        out = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.FLOAT.name)\n        a_xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0))\n        params = list(zip_longest_fill(A, op, out, a_xyzw))\n        pbar = ProgressBar(len(params))\n        for idx, (A, op, out, a_xyzw) in enumerate(params):\n            if not isinstance(A, (list, tuple,)):\n                A = [A]\n            best_type = [EnumConvertType.FLOAT, EnumConvertType.VEC2, EnumConvertType.VEC3, EnumConvertType.VEC4][len(A)-1]\n            val = parse_value(A, best_type, a_xyzw)\n            val = parse_value(val, EnumConvertType.VEC4, a_xyzw)\n            match op:\n                case EnumUnaryOperation.MEAN:\n                    val = [sum(val) / len(val)]\n                case EnumUnaryOperation.MEDIAN:\n                    val = [sorted(val)[len(val) // 2]]\n                case EnumUnaryOperation.MODE:\n                    counts = Counter(val)\n                    val = [max(counts, key=counts.get)]\n                case EnumUnaryOperation.MAGNITUDE:\n                    val = [math.sqrt(sum(x ** 2 for x in val))]\n                case EnumUnaryOperation.NORMALIZE:\n                    if len(val) == 1:\n                        val = [1]\n                    else:\n                        m = math.sqrt(sum(x ** 2 for x in val))\n                        if m > 0:\n                            val = [v / m for v in val]\n                        else:\n                         
   val = [0] * len(val)\n                case EnumUnaryOperation.MAXIMUM:\n                    val = [max(val)]\n                case EnumUnaryOperation.MINIMUM:\n                    val = [min(val)]\n                case _:\n                    # Apply unary operation to each item in the list\n                    ret = []\n                    for v in val:\n                        try:\n                            v = OP_UNARY[op](v)\n                        except Exception as e:\n                            logger.error(f\"{e} :: {op}\")\n                            v = 0\n                        ret.append(v)\n                    val = ret\n\n            val = parse_value(val, out, 0)\n            results.append(val)\n            pbar.update_absolute(idx)\n        return (results,)\n\nclass OPBinaryNode(CozyBaseNode):\n    NAME = \"OP BINARY (JOV) 🌟\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (COZY_TYPE_ANY,)\n    RETURN_NAMES = (\"❔\",)\n    OUTPUT_IS_LIST = (True,)\n    OUTPUT_TOOLTIPS = (\n        \"Output type will match the input type\",\n    )\n    DESCRIPTION = \"\"\"\nExecute binary operations like addition, subtraction, multiplication, division, and bitwise operations on input values, supporting various data types and vector sizes.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        names_convert = EnumConvertType._member_names_[:6]\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IN_A: (COZY_TYPE_FULL, {\n                    \"default\": None}),\n                Lexicon.IN_B: (COZY_TYPE_FULL, {\n                    \"default\": None}),\n                Lexicon.FUNCTION: (EnumBinaryOperation._member_names_, {\n                    \"default\": EnumBinaryOperation.ADD.name,}),\n                Lexicon.TYPE: (names_convert, {\n                    \"default\": names_convert[2],\n                    \"tooltip\":\"Output type desired from resultant 
operation\"}),\n                Lexicon.SWAP: (\"BOOLEAN\", {\n                    \"default\": False}),\n                Lexicon.DEFAULT_A: (\"VEC4\", {\n                    \"default\": (0,0,0,0), \"mij\": -sys.float_info.max, \"maj\": sys.float_info.max,\n                    \"label\": [\"X\", \"Y\", \"Z\", \"W\"]}),\n                Lexicon.DEFAULT_B: (\"VEC4\", {\n                    \"default\": (0,0,0,0), \"mij\": -sys.float_info.max, \"maj\": sys.float_info.max,\n                    \"label\": [\"X\", \"Y\", \"Z\", \"W\"]})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[bool]:\n        results = []\n        A = parse_param(kw, Lexicon.IN_A, EnumConvertType.ANY, None)\n        B = parse_param(kw, Lexicon.IN_B, EnumConvertType.ANY, None)\n        op = parse_param(kw, Lexicon.FUNCTION, EnumBinaryOperation, EnumBinaryOperation.ADD.name)\n        typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.FLOAT.name)\n        swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False)\n        a_xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0))\n        b_xyzw = parse_param(kw, Lexicon.DEFAULT_B, EnumConvertType.VEC4, (0, 0, 0, 0))\n        params = list(zip_longest_fill(A, B, a_xyzw, b_xyzw, op, typ, swap))\n        pbar = ProgressBar(len(params))\n        for idx, (A, B, a_xyzw, b_xyzw, op, typ, swap) in enumerate(params):\n            if not isinstance(A, (list, tuple,)):\n                A = [A]\n            if not isinstance(B, (list, tuple,)):\n                B = [B]\n            size = min(3, max(len(A)-1, len(B)-1))\n            best_type = [EnumConvertType.FLOAT, EnumConvertType.VEC2, EnumConvertType.VEC3, EnumConvertType.VEC4][size]\n            val_a = parse_value(A, best_type, a_xyzw)\n            val_a = parse_value(val_a, EnumConvertType.VEC4, a_xyzw)\n            val_b = parse_value(B, best_type, b_xyzw)\n            val_b = parse_value(val_b, 
EnumConvertType.VEC4, b_xyzw)\n\n            if swap:\n                val_a, val_b = val_b, val_a\n\n            size = max(1, int(typ.value / 10))\n            val_a = val_a[:size+1]\n            val_b = val_b[:size+1]\n\n            match op:\n                # VECTOR\n                case EnumBinaryOperation.DOT_PRODUCT:\n                    val = [sum(a * b for a, b in zip(val_a, val_b))]\n                case EnumBinaryOperation.CROSS_PRODUCT:\n                    val = [0, 0, 0]\n                    if len(val_a) < 3 or len(val_b) < 3:\n                        logger.warning(\"Cross product only defined for 3D vectors\")\n                    else:\n                        val = [\n                            val_a[1] * val_b[2] - val_a[2] * val_b[1],\n                            val_a[2] * val_b[0] - val_a[0] * val_b[2],\n                            val_a[0] * val_b[1] - val_a[1] * val_b[0]\n                        ]\n\n                # ARITHMETIC\n                case EnumBinaryOperation.ADD:\n                    val = [sum(pair) for pair in zip(val_a, val_b)]\n                case EnumBinaryOperation.SUBTRACT:\n                    val = [a - b for a, b in zip(val_a, val_b)]\n                case EnumBinaryOperation.MULTIPLY:\n                    val = [a * b for a, b in zip(val_a, val_b)]\n                case EnumBinaryOperation.DIVIDE:\n                    val = [a / b if b != 0 else 0 for a, b in zip(val_a, val_b)]\n                case EnumBinaryOperation.DIVIDE_FLOOR:\n                    val = [a // b if b != 0 else 0 for a, b in zip(val_a, val_b)]\n                case EnumBinaryOperation.MODULUS:\n                    val = [a % b if b != 0 else 0 for a, b in zip(val_a, val_b)]\n                case EnumBinaryOperation.POWER:\n                    val = [a ** b if b >= 0 else 0 for a, b in zip(val_a, val_b)]\n                case EnumBinaryOperation.MAXIMUM:\n                    val = [max(a, val_b[i]) for i, a in enumerate(val_a)]\n                
case EnumBinaryOperation.MINIMUM:\n                    # val = min(val_a, val_b)\n                    val = [min(a, val_b[i]) for i, a in enumerate(val_a)]\n\n                # BITS\n                # case EnumBinaryOperation.BIT_NOT:\n                case EnumBinaryOperation.BIT_AND:\n                    val = [int(a) & int(b) for a, b in zip(val_a, val_b)]\n                case EnumBinaryOperation.BIT_NAND:\n                    val = [not(int(a) & int(b)) for a, b in zip(val_a, val_b)]\n                case EnumBinaryOperation.BIT_OR:\n                    val = [int(a) | int(b) for a, b in zip(val_a, val_b)]\n                case EnumBinaryOperation.BIT_NOR:\n                    val = [not(int(a) | int(b)) for a, b in zip(val_a, val_b)]\n                case EnumBinaryOperation.BIT_XOR:\n                    val = [int(a) ^ int(b) for a, b in zip(val_a, val_b)]\n                case EnumBinaryOperation.BIT_XNOR:\n                    val = [not(int(a) ^ int(b)) for a, b in zip(val_a, val_b)]\n                case EnumBinaryOperation.BIT_LSHIFT:\n                    val = [int(a) << int(b) if b >= 0 else 0 for a, b in zip(val_a, val_b)]\n                case EnumBinaryOperation.BIT_RSHIFT:\n                    val = [int(a) >> int(b) if b >= 0 else 0 for a, b in zip(val_a, val_b)]\n\n                # GROUP\n                case EnumBinaryOperation.UNION:\n                    val = list(set(val_a) | set(val_b))\n                case EnumBinaryOperation.INTERSECTION:\n                    val = list(set(val_a) & set(val_b))\n                case EnumBinaryOperation.DIFFERENCE:\n                    val = list(set(val_a) - set(val_b))\n\n                # WEIRD\n                case EnumBinaryOperation.BASE:\n                    val = list(set(val_a) - set(val_b))\n\n            # cast into correct type....\n            default = val\n            if len(val) == 0:\n                default = [0]\n\n            val = parse_value(val, typ, default)\n            
results.append(val)\n            pbar.update_absolute(idx)\n        return (results,)\n\nclass StringerNode(CozyBaseNode):\n    NAME = \"STRINGER (JOV) 🪀\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (\"STRING\", \"INT\",)\n    RETURN_NAMES = (\"STRING\", \"COUNT\",)\n    OUTPUT_IS_LIST = (True, False,)\n    DESCRIPTION = \"\"\"\nManipulate strings through filtering\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                # split, join, replace, trim/lift\n                Lexicon.FUNCTION: (EnumConvertString._member_names_, {\n                    \"default\": EnumConvertString.SPLIT.name}),\n                Lexicon.KEY: (\"STRING\", {\n                    \"default\":\"\", \"dynamicPrompt\":False,\n                    \"tooltip\": \"Delimiter (SPLIT/JOIN) or string to use as search string (FIND/REPLACE).\"}),\n                Lexicon.REPLACE: (\"STRING\", {\n                    \"default\":\"\", \"dynamicPrompt\":False}),\n                Lexicon.RANGE: (\"VEC3\", {\n                    \"default\":(0, -1, 1), \"int\": True,\n                    \"tooltip\": \"Start, End and Step. 
Values will clip to the actual list size(s).\"}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[list[str], int]:\n        # gather all dynamic inputs into a single flat list\n        data_list = parse_dynamic(kw, Lexicon.STRING, EnumConvertType.ANY, \"\")\n        if data_list is None:\n            logger.warning(\"no data for list\")\n            return ([], 0)\n\n        op = parse_param(kw, Lexicon.FUNCTION, EnumConvertString, EnumConvertString.SPLIT.name)[0]\n        key = parse_param(kw, Lexicon.KEY, EnumConvertType.STRING, \"\")[0]\n        replace = parse_param(kw, Lexicon.REPLACE, EnumConvertType.STRING, \"\")[0]\n        stenst = parse_param(kw, Lexicon.RANGE, EnumConvertType.VEC3INT, (0, -1, 1))[0]\n        results = []\n        match op:\n            case EnumConvertString.SPLIT:\n                results = data_list\n                if key != \"\":\n                    results = []\n                    for d in data_list:\n                        d = [key if len(r) == 0 else r for r in d.split(key)]\n                        results.extend(d)\n            case EnumConvertString.JOIN:\n                results = [key.join(data_list)]\n            case EnumConvertString.FIND:\n                results = [r for r in data_list if r.find(key) > -1]\n            case EnumConvertString.REPLACE:\n                results = data_list\n                if key != \"\":\n                    results = [r.replace(key, replace) for r in data_list]\n            case EnumConvertString.SLICE:\n                start, end, step = stenst\n                for x in data_list:\n                    # clip per-item so one string's length does not clobber the range for the next\n                    s = len(x) if start < 0 else min(max(0, start), len(x))\n                    e = len(x) if end < 0 else min(max(0, end), len(x))\n                    if step != 0:\n                        results.append(x[s:e:step])\n                    else:\n                        results.append(x)\n        return (results, len(results),)\n\nclass 
SwizzleNode(CozyBaseNode):\n    NAME = \"SWIZZLE (JOV) 😵\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (COZY_TYPE_ANY,)\n    RETURN_NAMES = (\"❔\",)\n    OUTPUT_IS_LIST = (True,)\n    DESCRIPTION = \"\"\"\nSwap components between two vectors based on specified swizzle patterns and values. It provides flexibility in rearranging vector elements dynamically.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        names_convert = EnumConvertType._member_names_[3:6]\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IN_A: (COZY_TYPE_NUMERICAL, {}),\n                Lexicon.IN_B: (COZY_TYPE_NUMERICAL, {}),\n                Lexicon.TYPE: (names_convert, {\n                    \"default\": names_convert[0]}),\n                Lexicon.SWAP_X: (EnumSwizzle._member_names_, {\n                    \"default\": EnumSwizzle.A_X.name,}),\n                Lexicon.SWAP_Y: (EnumSwizzle._member_names_, {\n                    \"default\": EnumSwizzle.A_Y.name,}),\n                Lexicon.SWAP_Z: (EnumSwizzle._member_names_, {\n                    \"default\": EnumSwizzle.A_Z.name,}),\n                Lexicon.SWAP_W: (EnumSwizzle._member_names_, {\n                    \"default\": EnumSwizzle.A_W.name,}),\n                Lexicon.DEFAULT: (\"VEC4\", {\n                    \"default\": (0,0,0,0), \"mij\": -sys.float_info.max, \"maj\": sys.float_info.max})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[float, ...]:\n        pA = parse_param(kw, Lexicon.IN_A, EnumConvertType.LIST, None)\n        pB = parse_param(kw, Lexicon.IN_B, EnumConvertType.LIST, None)\n        typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.VEC2.name)\n        swap_x = parse_param(kw, Lexicon.SWAP_X, EnumSwizzle, EnumSwizzle.A_X.name)\n        swap_y = parse_param(kw, Lexicon.SWAP_Y, EnumSwizzle, EnumSwizzle.A_Y.name)\n        swap_z = parse_param(kw, 
Lexicon.SWAP_Z, EnumSwizzle, EnumSwizzle.A_Z.name)\n        swap_w = parse_param(kw, Lexicon.SWAP_W, EnumSwizzle, EnumSwizzle.A_W.name)\n        default = parse_param(kw, Lexicon.DEFAULT, EnumConvertType.VEC4, (0, 0, 0, 0))\n        params = list(zip_longest_fill(pA, pB, typ, swap_x, swap_y, swap_z, swap_w, default))\n        results = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, pB, typ, swap_x, swap_y, swap_z, swap_w, default) in enumerate(params):\n            default = list(default)\n            # unconnected inputs arrive as None; pad both vectors from the defaults\n            pA = [] if pA is None else list(pA)\n            pB = [] if pB is None else list(pB)\n            pA = pA + default[len(pA):]\n            pB = pB + default[len(pB):]\n            val = vector_swap(pA, pB, swap_x, swap_y, swap_z, swap_w, default)\n            val = parse_value(val, typ, val)\n            results.append(val)\n            pbar.update_absolute(idx)\n        return (results,)\n"
  },
  {
    "path": "core/color.py",
    "content": "\"\"\" Jovimetrix - Color \"\"\"\n\nfrom enum import Enum\n\nimport cv2\nimport torch\n\nfrom comfy.utils import ProgressBar\n\nfrom cozy_comfyui import \\\n    IMAGE_SIZE_MIN, \\\n    InputType, RGBAMaskType, EnumConvertType, TensorType, \\\n    deep_merge, parse_param, zip_longest_fill\n\nfrom cozy_comfyui.lexicon import \\\n    Lexicon\n\nfrom cozy_comfyui.node import \\\n    COZY_TYPE_IMAGE, \\\n    CozyBaseNode, CozyImageNode\n\nfrom cozy_comfyui.image.adjust import \\\n    image_invert\n\nfrom cozy_comfyui.image.color import \\\n    EnumCBDeficiency, EnumCBSimulator, EnumColorMap, EnumColorTheory, \\\n    color_lut_full, color_lut_match, color_lut_palette, \\\n    color_lut_tonal, color_lut_visualize, color_match_reinhard, \\\n    color_theory, color_blind, color_top_used, image_gradient_expand, \\\n    image_gradient_map\n\nfrom cozy_comfyui.image.channel import \\\n    channel_solid\n\nfrom cozy_comfyui.image.compose import \\\n    EnumScaleMode, EnumInterpolation, \\\n    image_scalefit\n\nfrom cozy_comfyui.image.convert import \\\n    tensor_to_cv, cv_to_tensor, cv_to_tensor_full, image_mask, image_mask_add\n\nfrom cozy_comfyui.image.misc import \\\n    image_stack\n\n# ==============================================================================\n# === GLOBAL ===\n# ==============================================================================\n\nJOV_CATEGORY = \"COLOR\"\n\n# ==============================================================================\n# === ENUMERATION ===\n# ==============================================================================\n\nclass EnumColorMatchMode(Enum):\n    REINHARD = 30\n    LUT = 10\n    # HISTOGRAM = 20\n\nclass EnumColorMatchMap(Enum):\n    USER_MAP = 0\n    PRESET_MAP = 10\n\n# ==============================================================================\n# === CLASS ===\n# ==============================================================================\n\nclass ColorBlindNode(CozyImageNode):\n  
  NAME = \"COLOR BLIND (JOV) 👁‍🗨\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nSimulate color blindness effects on images. You can select various types of color deficiencies, adjust the severity of the effect, and apply the simulation using different simulators. This node is ideal for accessibility testing and design adjustments, ensuring inclusivity in your visual content.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.DEFICIENCY: (EnumCBDeficiency._member_names_, {\n                    \"default\": EnumCBDeficiency.PROTAN.name,}),\n                Lexicon.SOLVER: (EnumCBSimulator._member_names_, {\n                    \"default\": EnumCBSimulator.AUTOSELECT.name,})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        deficiency = parse_param(kw, Lexicon.DEFICIENCY, EnumCBDeficiency, EnumCBDeficiency.PROTAN.name)\n        simulator = parse_param(kw, Lexicon.SOLVER, EnumCBSimulator, EnumCBSimulator.AUTOSELECT.name)\n        severity = parse_param(kw, Lexicon.VALUE, EnumConvertType.FLOAT, 1)\n        params = list(zip_longest_fill(pA, deficiency, simulator, severity))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, deficiency, simulator, severity) in enumerate(params):\n            pA = channel_solid() if pA is None else tensor_to_cv(pA)\n            pA = color_blind(pA, deficiency, simulator, severity)\n            images.append(cv_to_tensor_full(pA))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass ColorMatchNode(CozyImageNode):\n    NAME = \"COLOR MATCH (JOV) 💞\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nAdjust the color scheme of one image to 
match another with the Color Match Node. Choose from various color matching LUTs or Reinhard matching. You can specify a custom user color map, the number of colors, and whether to flip or invert the images.\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE_SOURCE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.IMAGE_TARGET: (COZY_TYPE_IMAGE, {}),\n                Lexicon.MODE: (EnumColorMatchMode._member_names_, {\n                    \"default\": EnumColorMatchMode.REINHARD.name,\n                    \"tooltip\": \"Match colors from an image or built-in (LUT), Histogram lookups or Reinhard method\"}),\n                Lexicon.MAP: (EnumColorMatchMap._member_names_, {\n                    \"default\": EnumColorMatchMap.USER_MAP.name, }),\n                Lexicon.COLORMAP: (EnumColorMap._member_names_, {\n                    \"default\": EnumColorMap.HSV.name,}),\n                Lexicon.VALUE: (\"INT\", {\n                    \"default\": 255, \"min\": 0, \"max\": 255,\n                    \"tooltip\":\"The number of colors to use from the LUT during the remap. 
Will quantize the LUT range.\"}),\n                Lexicon.SWAP: (\"BOOLEAN\", {\n                    \"default\": False,}),\n                Lexicon.INVERT: (\"BOOLEAN\", {\n                    \"default\": False,}),\n                Lexicon.MATTE: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 255), \"rgb\": True,}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE_SOURCE, EnumConvertType.IMAGE, None)\n        pB = parse_param(kw, Lexicon.IMAGE_TARGET, EnumConvertType.IMAGE, None)\n        mode = parse_param(kw, Lexicon.MODE, EnumColorMatchMode, EnumColorMatchMode.REINHARD.name)\n        cmap = parse_param(kw, Lexicon.MAP, EnumColorMatchMap, EnumColorMatchMap.USER_MAP.name)\n        colormap = parse_param(kw, Lexicon.COLORMAP, EnumColorMap, EnumColorMap.HSV.name)\n        num_colors = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 255)\n        swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False)\n        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4, (0, 0, 0, 255), 0, 255)\n        params = list(zip_longest_fill(pA, pB, mode, cmap, colormap, num_colors, swap, invert, matte))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, pB, mode, cmap, colormap, num_colors, swap, invert, matte) in enumerate(params):\n            if swap == True:\n                pA, pB = pB, pA\n\n            mask = None\n            if pA is None:\n                pA = channel_solid()\n            else:\n                pA = tensor_to_cv(pA)\n                if pA.ndim == 3 and pA.shape[2] == 4:\n                    mask = image_mask(pA)\n\n            # h, w = pA.shape[:2]\n            if pB is None:\n                pB = channel_solid()\n            else:\n                pB = tensor_to_cv(pB)\n\n            match mode:\n          
      case EnumColorMatchMode.LUT:\n                    if cmap == EnumColorMatchMap.PRESET_MAP:\n                        pB = None\n                    pA = color_lut_match(pA, colormap.value, pB, num_colors)\n\n                case EnumColorMatchMode.REINHARD:\n                    pA = color_match_reinhard(pA, pB)\n\n            if invert == True:\n                pA = image_invert(pA, 1)\n\n            if mask is not None:\n                pA = image_mask_add(pA, mask)\n\n            images.append(cv_to_tensor_full(pA, matte))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass ColorKMeansNode(CozyBaseNode):\n    NAME = \"COLOR MEANS (JOV) 〰️\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (\"IMAGE\", \"IMAGE\", \"IMAGE\", \"JLUT\", \"IMAGE\",)\n    RETURN_NAMES = (\"IMAGE\", \"PALETTE\", \"GRADIENT\", \"LUT\", \"RGB\", )\n    OUTPUT_TOOLTIPS = (\n        \"Sequence of top-K colors. Count depends on value in `VAL`.\",\n        \"Simple Tone palette based on result top-K colors. Width is taken from input.\",\n        \"Gradient of top-K colors.\",\n        \"Full 3D LUT of the image mapped to the resultant top-K colors chosen.\",\n        \"Visualization of full 3D .cube LUT in JLUT output\"\n    )\n    DESCRIPTION = \"\"\"\nThe top-k colors ordered from most->least used as a strip, tonal palette and 3D LUT.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.VALUE: (\"INT\", {\n                    \"default\": 12, \"min\": 1, \"max\": 255,\n                    \"tooltip\": \"The top K colors to select\"}),\n                Lexicon.SIZE: (\"INT\", {\n                    \"default\": 32, \"min\": 1, \"max\": 256,\n                    \"tooltip\": \"Height of the tones in the strip. 
Width is based on input\"}),\n                Lexicon.COUNT: (\"INT\", {\n                    \"default\": 33, \"min\": 1, \"max\": 255,\n                    \"tooltip\": \"Number of nodes to use in interpolation of full LUT (256 is every pixel)\"}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (256, 256), \"mij\":IMAGE_SIZE_MIN, \"int\": True,\n                    \"label\": [\"W\", \"H\"]\n                }),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        kcolors = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 12, 1, 255)\n        lut_height = parse_param(kw, Lexicon.SIZE, EnumConvertType.INT, 32, 1, 256)\n        nodes = parse_param(kw, Lexicon.COUNT, EnumConvertType.INT, 33, 1, 255)\n        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (256, 256), IMAGE_SIZE_MIN)\n\n        params = list(zip_longest_fill(pA, kcolors, nodes, lut_height, wihi))\n        top_colors = []\n        lut_tonal = []\n        lut_full = []\n        lut_visualized = []\n        gradients = []\n        # progress total matches the per-batch update_absolute(idx) calls below\n        pbar = ProgressBar(len(params))\n        for idx, (pA, kcolors, nodes, lut_height, wihi) in enumerate(params):\n            if pA is None:\n                pA = channel_solid()\n\n            pA = tensor_to_cv(pA)\n            colors = color_top_used(pA, kcolors)\n\n            # size down to 1px strip then expand to 256 for full gradient\n            top_colors.extend([cv_to_tensor(channel_solid(*wihi, color=c)) for c in colors])\n            lut = color_lut_tonal(colors, width=pA.shape[1], height=lut_height)\n            lut_tonal.append(cv_to_tensor(lut))\n            full = color_lut_full(colors, nodes)\n            lut_full.append(torch.from_numpy(full))\n            lut = color_lut_visualize(full, wihi[1])\n            lut_visualized.append(cv_to_tensor(lut))\n            palette = 
color_lut_palette(colors, 1)\n            gradient = image_gradient_expand(palette)\n            gradient = cv2.resize(gradient, wihi)\n            gradients.append(cv_to_tensor(gradient))\n            pbar.update_absolute(idx)\n\n        return torch.stack(top_colors), torch.stack(lut_tonal), torch.stack(gradients), lut_full, torch.stack(lut_visualized),\n\nclass ColorTheoryNode(CozyBaseNode):\n    NAME = \"COLOR THEORY (JOV) 🛞\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (\"IMAGE\", \"IMAGE\", \"IMAGE\", \"IMAGE\", \"IMAGE\")\n    RETURN_NAMES = (\"C1\", \"C2\", \"C3\", \"C4\", \"C5\")\n    DESCRIPTION = \"\"\"\nGenerate a color harmony based on the selected scheme.\n\nSupported schemes include complementary, analogous, triadic, tetradic, and more.\n\nUsers can customize the angle of separation for color calculations, offering flexibility in color manipulation and exploration of different color palettes.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.SCHEME: (EnumColorTheory._member_names_, {\n                    \"default\": EnumColorTheory.COMPLIMENTARY.name}),\n                Lexicon.VALUE: (\"INT\", {\n                    \"default\": 45, \"min\": -90, \"max\": 90,\n                    \"tooltip\": \"Custom angle of separation to use when calculating colors\"}),\n                Lexicon.INVERT: (\"BOOLEAN\", {\n                    \"default\": False})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[list[TensorType], list[TensorType]]:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        scheme = parse_param(kw, Lexicon.SCHEME, EnumColorTheory, EnumColorTheory.COMPLIMENTARY.name)\n        value = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 45, -90, 90)\n        invert = 
parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)\n        params = list(zip_longest_fill(pA, scheme, value, invert))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (img, scheme, value, invert) in enumerate(params):\n            img = channel_solid() if img is None else tensor_to_cv(img)\n            img = color_theory(img, value, scheme)\n            if invert:\n                img = [image_invert(s, 1) for s in img]\n            images.append([cv_to_tensor(a) for a in img])\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass GradientMapNode(CozyImageNode):\n    NAME = \"GRADIENT MAP (JOV) 🇲🇺\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nRemaps an input image using a gradient lookup table (LUT).\n\nThe gradient image will be translated into a single-row lookup table.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {\n                    \"tooltip\": \"Image to remap with gradient input\"}),\n                Lexicon.GRADIENT: (COZY_TYPE_IMAGE, {\n                    \"tooltip\": \"Look up table (LUT) to remap the input image in `IMAGE`\"}),\n                Lexicon.REVERSE: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"Reverse the gradient from left-to-right\"}),\n                Lexicon.MODE: (EnumScaleMode._member_names_, {\n                    \"default\": EnumScaleMode.MATTE.name,}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (512, 512), \"mij\":IMAGE_SIZE_MIN, \"int\": True,\n                    \"label\": [\"W\", \"H\"] }),\n                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {\n                    \"default\": EnumInterpolation.LANCZOS4.name,}),\n                Lexicon.MATTE: (\"VEC4\", {\n                   
 \"default\": (0, 0, 0, 255), \"rgb\": True,})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        gradient = parse_param(kw, Lexicon.GRADIENT, EnumConvertType.IMAGE, None)\n        reverse = parse_param(kw, Lexicon.REVERSE, EnumConvertType.BOOLEAN, False)\n        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)\n        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)\n        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)\n        images = []\n        params = list(zip_longest_fill(pA, gradient, reverse, mode, sample, wihi, matte))\n        pbar = ProgressBar(len(params))\n        for idx, (pA, gradient, reverse, mode, sample, wihi, matte) in enumerate(params):\n            pA = channel_solid() if pA is None else tensor_to_cv(pA)\n            mask = None\n            if pA.ndim == 3 and pA.shape[2] == 4:\n                mask = image_mask(pA)\n\n            gradient = channel_solid() if gradient is None else tensor_to_cv(gradient)\n            pA = image_gradient_map(pA, gradient)\n            if mode != EnumScaleMode.MATTE:\n                w, h = wihi\n                pA = image_scalefit(pA, w, h, mode, sample)\n\n            if mask is not None:\n                pA = image_mask_add(pA, mask)\n\n            images.append(cv_to_tensor_full(pA, matte))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n"
  },
  {
    "path": "core/compose.py",
    "content": "\"\"\" Jovimetrix - Composition \"\"\"\n\nimport numpy as np\n\nfrom comfy.utils import ProgressBar\n\nfrom cozy_comfyui import \\\n    IMAGE_SIZE_MIN, \\\n    InputType, RGBAMaskType, EnumConvertType, \\\n    deep_merge, parse_param, zip_longest_fill\n\nfrom cozy_comfyui.lexicon import \\\n    Lexicon\n\nfrom cozy_comfyui.node import \\\n    COZY_TYPE_IMAGE, \\\n    CozyBaseNode, CozyImageNode\n\nfrom cozy_comfyui.image import \\\n    EnumImageType\n\nfrom cozy_comfyui.image.adjust import \\\n    EnumThreshold, EnumThresholdAdapt, \\\n    image_histogram2, image_invert, image_filter, image_threshold\n\nfrom cozy_comfyui.image.channel import \\\n    EnumPixelSwizzle, \\\n    channel_merge, channel_solid, channel_swap\n\nfrom cozy_comfyui.image.compose import \\\n    EnumBlendType, EnumScaleMode, EnumScaleInputMode, EnumInterpolation, \\\n    image_resize, \\\n    image_scalefit, image_split, image_blend, image_matte\n\nfrom cozy_comfyui.image.convert import \\\n    image_mask, image_convert, tensor_to_cv, cv_to_tensor, cv_to_tensor_full\n\nfrom cozy_comfyui.image.misc import \\\n    image_by_size, image_minmax, image_stack\n\n# ==============================================================================\n# === GLOBAL ===\n# ==============================================================================\n\nJOV_CATEGORY = \"COMPOSE\"\n\n# ==============================================================================\n# === CLASS ===\n# ==============================================================================\n\nclass BlendNode(CozyImageNode):\n    NAME = \"BLEND (JOV) ⚗️\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nCombine two input images using various blending modes, such as normal, screen, multiply, overlay, etc. It also supports alpha blending and masking to achieve complex compositing effects. 
This node is essential for creating layered compositions and adding visual richness to images.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE_BACK: (COZY_TYPE_IMAGE, {}),\n                Lexicon.IMAGE_FORE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.MASK: (COZY_TYPE_IMAGE, {\n                    \"tooltip\": \"Optional Mask for Alpha Blending. If empty, it will use the ALPHA of the FOREGROUND\"}),\n                Lexicon.FUNCTION: (EnumBlendType._member_names_, {\n                    \"default\": EnumBlendType.NORMAL.name,}),\n                Lexicon.ALPHA: (\"FLOAT\", {\n                    \"default\": 1, \"min\": 0, \"max\": 1, \"step\": 0.01,}),\n                Lexicon.SWAP: (\"BOOLEAN\", {\n                    \"default\": False}),\n                Lexicon.INVERT: (\"BOOLEAN\", {\n                    \"default\": False, \"tooltip\": \"Invert the mask input\"}),\n                Lexicon.MODE: (EnumScaleMode._member_names_, {\n                    \"default\": EnumScaleMode.MATTE.name,}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (512, 512), \"mij\":IMAGE_SIZE_MIN, \"int\": True,\n                    \"label\": [\"W\", \"H\"]}),\n                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {\n                    \"default\": EnumInterpolation.LANCZOS4.name,}),\n                Lexicon.MATTE: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 255), \"rgb\": True,}),\n                Lexicon.INPUT: (EnumScaleInputMode._member_names_, {\n                    \"default\": EnumScaleInputMode.NONE.name,}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        back = parse_param(kw, Lexicon.IMAGE_BACK, EnumConvertType.IMAGE, None)\n        fore = parse_param(kw, Lexicon.IMAGE_FORE, EnumConvertType.IMAGE, None)\n      
  mask = parse_param(kw, Lexicon.MASK, EnumConvertType.MASK, None)\n        func = parse_param(kw, Lexicon.FUNCTION, EnumBlendType, EnumBlendType.NORMAL.name)\n        alpha = parse_param(kw, Lexicon.ALPHA, EnumConvertType.FLOAT, 1)\n        swap = parse_param(kw, Lexicon.SWAP, EnumConvertType.BOOLEAN, False)\n        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)\n        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)\n        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)\n        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)\n        inputMode = parse_param(kw, Lexicon.INPUT, EnumScaleInputMode, EnumScaleInputMode.NONE.name)\n        params = list(zip_longest_fill(back, fore, mask, func, alpha, swap, invert, mode, wihi, sample, matte, inputMode))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (back, fore, mask, func, alpha, swap, invert, mode, wihi, sample, matte, inputMode) in enumerate(params):\n            if swap:\n                back, fore = fore, back\n\n            width, height = IMAGE_SIZE_MIN, IMAGE_SIZE_MIN\n            if back is None:\n                if fore is None:\n                    if mask is None:\n                        if mode != EnumScaleMode.MATTE:\n                            width, height = wihi\n                    else:\n                        height, width = mask.shape[:2]\n                else:\n                    height, width = fore.shape[:2]\n            else:\n                height, width = back.shape[:2]\n\n            if back is None:\n                back = channel_solid(width, height, matte)\n            else:\n                back = tensor_to_cv(back)\n                #matted = pixel_eval(matte)\n                #back = image_matte(back, 
matted)\n\n            if fore is None:\n                clear = list(matte[:3]) + [0]\n                fore = channel_solid(width, height, clear)\n            else:\n                fore = tensor_to_cv(fore)\n\n            if mask is None:\n                mask = image_mask(fore, 255)\n            else:\n                mask = tensor_to_cv(mask, 1)\n\n            if invert:\n                mask = 255 - mask\n\n            if inputMode != EnumScaleInputMode.NONE:\n                # get the min/max of back, fore; and mask?\n                imgs = [back, fore]\n                _, w, h = image_by_size(imgs)\n                back = image_scalefit(back, w, h, inputMode, sample, matte)\n                fore = image_scalefit(fore, w, h, inputMode, sample, matte)\n                mask = image_scalefit(mask, w, h, inputMode, sample)\n\n                back = image_scalefit(back, w, h, EnumScaleMode.RESIZE_MATTE, sample, matte)\n                fore = image_scalefit(fore, w, h, EnumScaleMode.RESIZE_MATTE, sample, (0,0,0,255))\n                mask = image_scalefit(mask, w, h, EnumScaleMode.RESIZE_MATTE, sample, (255,255,255,255))\n\n            img = image_blend(back, fore, mask, func, alpha)\n            mask = image_mask(img)\n\n            if mode != EnumScaleMode.MATTE:\n                width, height = wihi\n                img = image_scalefit(img, width, height, mode, sample, matte)\n\n            img = cv_to_tensor_full(img, matte)\n            #img = [cv_to_tensor(back), cv_to_tensor(fore), cv_to_tensor(mask, True)]\n            images.append(img)\n            pbar.update_absolute(idx)\n\n        return image_stack(images)\n\nclass FilterMaskNode(CozyImageNode):\n    NAME = \"FILTER MASK (JOV) 🤿\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nCreate masks based on specific color ranges within an image. Specify the color range using start and end values and an optional fuzziness factor to adjust the range. 
This node allows for precise color-based mask creation, ideal for tasks like object isolation, background removal, or targeted color adjustments.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.START: (\"VEC3\", {\n                    \"default\": (128, 128, 128), \"rgb\": True}),\n                Lexicon.RANGE: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"Use an end point (start->end) when calculating the filter range\"}),\n                Lexicon.END: (\"VEC3\", {\n                    \"default\": (128, 128, 128), \"rgb\": True}),\n                Lexicon.FUZZ: (\"VEC3\", {\n                    \"default\": (0.5,0.5,0.5), \"mij\":0, \"maj\":1,}),\n                Lexicon.MATTE: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 255), \"rgb\": True,}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        start = parse_param(kw, Lexicon.START, EnumConvertType.VEC3INT, (128,128,128), 0, 255)\n        use_range = parse_param(kw, Lexicon.RANGE, EnumConvertType.BOOLEAN, False)\n        end = parse_param(kw, Lexicon.END, EnumConvertType.VEC3INT, (128,128,128), 0, 255)\n        fuzz = parse_param(kw, Lexicon.FUZZ, EnumConvertType.VEC3, (0.5,0.5,0.5), 0, 1)\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)\n        params = list(zip_longest_fill(pA, start, use_range, end, fuzz, matte))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, start, use_range, end, fuzz, matte) in enumerate(params):\n            img = np.zeros((IMAGE_SIZE_MIN, IMAGE_SIZE_MIN, 3), dtype=np.uint8) if pA is None else tensor_to_cv(pA)\n\n        
    img, mask = image_filter(img, start, end, fuzz, use_range)\n            if img.shape[2] == 3:\n                alpha_channel = np.zeros((img.shape[0], img.shape[1], 1), dtype=img.dtype)\n                img = np.concatenate((img, alpha_channel), axis=2)\n            img[..., 3] = mask[:,:]\n            images.append(cv_to_tensor_full(img, matte))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass HistogramNode(CozyImageNode):\n    NAME = \"HISTOGRAM (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nThe Histogram Node generates a histogram representation of the input image, showing the distribution of pixel intensity values across different bins. This visualization is useful for understanding the overall brightness and contrast characteristics of an image. Additionally, the node performs histogram normalization, which adjusts the pixel values to enhance the contrast of the image. Histogram normalization can be helpful for improving the visual quality of images or preparing them for further image processing tasks.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {\n                    \"tooltip\": \"Pixel Data (RGBA, RGB or Grayscale)\"}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (512, 512), \"mij\":IMAGE_SIZE_MIN, \"int\": True,\n                    \"label\": [\"W\", \"H\"]}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)\n        params = list(zip_longest_fill(pA, wihi))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, wihi) in enumerate(params):\n            
pA = tensor_to_cv(pA) if pA is not None else channel_solid()\n            hist_img = image_histogram2(pA, bins=256)\n            width, height = wihi\n            hist_img = image_resize(hist_img, width, height, EnumInterpolation.NEAREST)\n            images.append(cv_to_tensor_full(hist_img))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass PixelMergeNode(CozyImageNode):\n    NAME = \"PIXEL MERGE (JOV) 🫂\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nCombines individual color channels (red, green, blue) along with an optional mask channel to create a composite image.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.CHAN_RED: (COZY_TYPE_IMAGE, {}),\n                Lexicon.CHAN_GREEN: (COZY_TYPE_IMAGE, {}),\n                Lexicon.CHAN_BLUE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.CHAN_ALPHA: (COZY_TYPE_IMAGE, {}),\n                Lexicon.MATTE: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 255), \"rgb\": True,}),\n                Lexicon.FLIP: (\"VEC4\", {\n                    \"default\": (0,0,0,0), \"mij\":0, \"maj\":1, \"step\": 0.01,\n                    \"tooltip\": \"Invert specific input prior to merging. 
R, G, B, A.\"}),\n                Lexicon.INVERT: (\"BOOLEAN\", {\n                    \"default\": False,})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        rgba = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        R = parse_param(kw, Lexicon.CHAN_RED, EnumConvertType.MASK, None)\n        G = parse_param(kw, Lexicon.CHAN_GREEN, EnumConvertType.MASK, None)\n        B = parse_param(kw, Lexicon.CHAN_BLUE, EnumConvertType.MASK, None)\n        A = parse_param(kw, Lexicon.CHAN_ALPHA, EnumConvertType.MASK, None)\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)\n        flip = parse_param(kw, Lexicon.FLIP, EnumConvertType.VEC4, (0, 0, 0, 0), 0, 1)\n        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)\n        params = list(zip_longest_fill(rgba, R, G, B, A, matte, flip, invert))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (rgba, r, g, b, a, matte, flip, invert) in enumerate(params):\n            replace = r, g, b, a\n            if rgba is not None:\n                rgba = image_split(tensor_to_cv(rgba, chan=4))\n                img = [tensor_to_cv(replace[i]) if replace[i] is not None else x for i, x in enumerate(rgba)]\n            else:\n                img = [tensor_to_cv(x) if x is not None else x for x in replace]\n\n            _, _, w_max, h_max = image_minmax(img)\n            for i, x in enumerate(img):\n                if x is None:\n                    x = np.full((h_max, w_max, 1), matte[i], dtype=np.uint8)\n                else:\n                    x = image_convert(x, 1)\n                    x = image_scalefit(x, w_max, h_max, EnumScaleMode.ASPECT)\n\n                if flip[i] != 0:\n                    x = image_invert(x, flip[i])\n                img[i] = x\n\n            img = channel_merge(img)\n\n            #if invert == True:\n            #    img = 
image_invert(img, 1)\n\n            images.append(cv_to_tensor_full(img, matte))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass PixelSplitNode(CozyBaseNode):\n    NAME = \"PIXEL SPLIT (JOV) 💔\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (\"MASK\", \"MASK\", \"MASK\", \"MASK\", \"IMAGE\")\n    RETURN_NAMES = (\"❤️\", \"💚\", \"💙\", \"🤍\", \"RGB\")\n    OUTPUT_TOOLTIPS = (\n        \"Single channel output of Red Channel.\",\n        \"Single channel output of Green Channel\",\n        \"Single channel output of Blue Channel\",\n        \"Single channel output of Alpha Channel\",\n        \"RGB pack of the input\",\n    )\n    DESCRIPTION = \"\"\"\nSplit an input into individual color channels (red, green, blue, alpha).\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        images = []\n        pbar = ProgressBar(len(pA))\n        for idx, pA in enumerate(pA):\n            pA = channel_solid(chan=EnumImageType.RGBA) if pA is None else tensor_to_cv(pA, chan=4)\n            out = [cv_to_tensor(x, True) for x in image_split(pA)] + [cv_to_tensor(image_convert(pA, 3))]\n            images.append(out)\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass PixelSwapNode(CozyImageNode):\n    NAME = \"PIXEL SWAP (JOV) 🔃\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nSwap pixel values between two input images based on specified channel swizzle operations. Options include pixel inputs, swap operations for red, green, blue, and alpha channels, and constant values for each channel. 
The swap operations allow for flexible pixel manipulation by determining the source of each channel in the output image, whether it be from the first image, the second image, or a constant value.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE_SOURCE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.IMAGE_TARGET: (COZY_TYPE_IMAGE, {}),\n                Lexicon.SWAP_R: (EnumPixelSwizzle._member_names_, {\n                    \"default\": EnumPixelSwizzle.RED_A.name,}),\n                Lexicon.SWAP_G: (EnumPixelSwizzle._member_names_, {\n                    \"default\": EnumPixelSwizzle.GREEN_A.name,}),\n                Lexicon.SWAP_B: (EnumPixelSwizzle._member_names_, {\n                    \"default\": EnumPixelSwizzle.BLUE_A.name,}),\n                Lexicon.SWAP_A: (EnumPixelSwizzle._member_names_, {\n                    \"default\": EnumPixelSwizzle.ALPHA_A.name,}),\n                Lexicon.MATTE: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 255), \"rgb\": True,})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE_SOURCE, EnumConvertType.IMAGE, None)\n        pB = parse_param(kw, Lexicon.IMAGE_TARGET, EnumConvertType.IMAGE, None)\n        swap_r = parse_param(kw, Lexicon.SWAP_R, EnumPixelSwizzle, EnumPixelSwizzle.RED_A.name)\n        swap_g = parse_param(kw, Lexicon.SWAP_G, EnumPixelSwizzle, EnumPixelSwizzle.GREEN_A.name)\n        swap_b = parse_param(kw, Lexicon.SWAP_B, EnumPixelSwizzle, EnumPixelSwizzle.BLUE_A.name)\n        swap_a = parse_param(kw, Lexicon.SWAP_A, EnumPixelSwizzle, EnumPixelSwizzle.ALPHA_A.name)\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)\n        params = list(zip_longest_fill(pA, pB, swap_r, swap_g, swap_b, swap_a, matte))\n        images 
= []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, pB, swap_r, swap_g, swap_b, swap_a, matte) in enumerate(params):\n            if pA is None:\n                if pB is None:\n                    out = channel_solid()\n                    images.append(cv_to_tensor_full(out))\n                    pbar.update_absolute(idx)\n                    continue\n\n                h, w = pB.shape[:2]\n                pA = channel_solid(w, h)\n            else:\n                h, w = pA.shape[:2]\n                pA = tensor_to_cv(pA)\n                pA = image_convert(pA, 4)\n\n            pB = tensor_to_cv(pB) if pB is not None else channel_solid(w, h)\n            pB = image_convert(pB, 4)\n            pB = image_matte(pB, (0,0,0,0), w, h)\n            pB = image_scalefit(pB, w, h, EnumScaleMode.CROP)\n\n            out = channel_swap(pA, pB, (swap_r, swap_g, swap_b, swap_a), matte)\n\n            images.append(cv_to_tensor_full(out))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass ThresholdNode(CozyImageNode):\n    NAME = \"THRESHOLD (JOV) 📉\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nDefine a range and apply it to an image for segmentation and feature extraction. Choose from various threshold modes, such as binary and adaptive, and adjust the threshold value and block size to suit your needs. You can also invert the resulting mask if necessary. 
This node is versatile for a variety of image processing tasks.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.ADAPT: (EnumThresholdAdapt._member_names_, {\n                    \"default\": EnumThresholdAdapt.ADAPT_NONE.name,}),\n                Lexicon.FUNCTION: (EnumThreshold._member_names_, {\n                    \"default\": EnumThreshold.BINARY.name}),\n                Lexicon.THRESHOLD: (\"FLOAT\", {\n                    \"default\": 0.5, \"min\": 0, \"max\": 1, \"step\": 0.005}),\n                Lexicon.SIZE: (\"INT\", {\n                    \"default\": 3, \"min\": 3, \"max\": 103}),\n                Lexicon.INVERT: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"Invert the mask input\"})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        mode = parse_param(kw, Lexicon.FUNCTION, EnumThreshold, EnumThreshold.BINARY.name)\n        adapt = parse_param(kw, Lexicon.ADAPT, EnumThresholdAdapt, EnumThresholdAdapt.ADAPT_NONE.name)\n        threshold = parse_param(kw, Lexicon.THRESHOLD, EnumConvertType.FLOAT, 0.5, 0, 1)\n        block = parse_param(kw, Lexicon.SIZE, EnumConvertType.INT, 3, 3, 103)\n        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)\n        params = list(zip_longest_fill(pA, mode, adapt, threshold, block, invert))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, mode, adapt, th, block, invert) in enumerate(params):\n            pA = tensor_to_cv(pA) if pA is not None else channel_solid()\n            pA = image_threshold(pA, th, mode, adapt, block)\n            if invert:\n                pA = image_invert(pA, 1)\n            images.append(cv_to_tensor_full(pA))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n"
  },
  {
    "path": "core/create.py",
    "content": "\"\"\" Jovimetrix - Creation \"\"\"\n\nimport numpy as np\nfrom PIL import ImageFont\nfrom skimage.filters import gaussian\n\nfrom comfy.utils import ProgressBar\n\nfrom cozy_comfyui import \\\n    IMAGE_SIZE_MIN, \\\n    InputType, EnumConvertType, RGBAMaskType, \\\n    deep_merge, parse_param, zip_longest_fill\n\nfrom cozy_comfyui.lexicon import \\\n    Lexicon\n\nfrom cozy_comfyui.node import \\\n    COZY_TYPE_IMAGE, \\\n    CozyImageNode\n\nfrom cozy_comfyui.image import \\\n    EnumImageType\n\nfrom cozy_comfyui.image.adjust import \\\n    image_invert\n\nfrom cozy_comfyui.image.channel import \\\n    channel_solid\n\nfrom cozy_comfyui.image.compose import \\\n    EnumEdge, EnumScaleMode, EnumInterpolation, \\\n    image_rotate, image_scalefit, image_transform, image_translate, image_blend\n\nfrom cozy_comfyui.image.convert import \\\n    image_convert, pil_to_cv, cv_to_tensor, cv_to_tensor_full, tensor_to_cv, \\\n    image_mask, image_mask_add, image_mask_binary\n\nfrom cozy_comfyui.image.misc import \\\n    image_stack\n\nfrom cozy_comfyui.image.shape import \\\n    EnumShapes, \\\n    shape_ellipse, shape_polygon, shape_quad\n\nfrom cozy_comfyui.image.text import \\\n    EnumAlignment, EnumJustify, \\\n    font_names, text_autosize, text_draw\n\n# ==============================================================================\n# === GLOBAL ===\n# ==============================================================================\n\nJOV_CATEGORY = \"CREATE\"\n\n# ==============================================================================\n# === CLASS ===\n# ==============================================================================\n\nclass ConstantNode(CozyImageNode):\n    NAME = \"CONSTANT (JOV) 🟪\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nGenerate a constant image or mask of a specified size and color. It can be used to create solid color backgrounds or matte images for compositing with other visual elements. 
The node allows you to define the desired width and height of the output and specify the RGBA color value for the constant output. Additionally, you can input an optional image to use as a matte with the selected color.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {\n                    \"tooltip\":\"Optional Image to Matte with Selected Color\"}),\n                Lexicon.MASK: (COZY_TYPE_IMAGE, {\n                    \"tooltip\":\"Override Image mask\"}),\n                Lexicon.COLOR: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 255), \"rgb\": True,\n                    \"tooltip\": \"Constant Color to Output\"}),\n                Lexicon.MODE: (EnumScaleMode._member_names_, {\n                    \"default\": EnumScaleMode.MATTE.name,}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (512, 512), \"mij\": 1, \"int\": True,\n                    \"label\": [\"W\", \"H\"],}),\n                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {\n                    \"default\": EnumInterpolation.LANCZOS4.name,})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        mask = parse_param(kw, Lexicon.MASK, EnumConvertType.MASK, None)\n        matte = parse_param(kw, Lexicon.COLOR, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)\n        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)\n        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), 1)\n        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)\n        images = []\n        params = list(zip_longest_fill(pA, mask, matte, mode, wihi, sample))\n        pbar = 
ProgressBar(len(params))\n        for idx, (pA, mask, matte, mode, wihi, sample) in enumerate(params):\n            width, height = wihi\n            w, h = width, height\n\n            if pA is None:\n                pA = channel_solid(width, height, (0,0,0,255))\n            else:\n                pA = tensor_to_cv(pA)\n                pA = image_convert(pA, 4)\n                h, w = pA.shape[:2]\n\n            if mask is None:\n                mask = image_mask(pA, 0)\n            else:\n                mask = tensor_to_cv(mask, invert=1, chan=1)\n                mask = image_scalefit(mask, w, h, matte=(0,0,0,255), mode=EnumScaleMode.FIT)\n\n            pB = channel_solid(w, h, matte)\n            pA = image_blend(pB, pA, mask)\n            #mask = image_invert(mask, 1)\n            pA = image_mask_add(pA, mask)\n\n            if mode != EnumScaleMode.MATTE:\n                pA = image_scalefit(pA, width, height, mode, sample, matte)\n            images.append(cv_to_tensor_full(pA, matte))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass ShapeNode(CozyImageNode):\n    NAME = \"SHAPE GEN (JOV) ✨\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nCreate n-sided polygons. These shapes can be customized by adjusting parameters such as size, color, position, rotation angle, and edge blur. The node provides options to specify the shape type, the number of sides for polygons, the RGBA color value for the main shape, and the RGBA color value for the background. 
Additionally, you can control the width and height of the output images, the position offset, and the amount of edge blur applied to the shapes.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.SHAPE: (EnumShapes._member_names_, {\n                    \"default\": EnumShapes.CIRCLE.name}),\n                Lexicon.SIDES: (\"INT\", {\n                    \"default\": 3, \"min\": 3, \"max\": 100}),\n                Lexicon.COLOR: (\"VEC4\", {\n                    \"default\": (255, 255, 255, 255), \"rgb\": True,\n                    \"tooltip\": \"Main Shape Color\"}),\n                Lexicon.MATTE: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 255), \"rgb\": True,}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (256, 256), \"mij\":IMAGE_SIZE_MIN, \"int\": True,\n                    \"label\": [\"W\", \"H\"],}),\n                Lexicon.XY: (\"VEC2\", {\n                    \"default\": (0, 0,), \"mij\": -1, \"maj\": 1,\n                    \"label\": [\"X\", \"Y\"]}),\n                Lexicon.ANGLE: (\"FLOAT\", {\n                    \"default\": 0, \"min\": -180, \"max\": 180, \"step\": 0.01,}),\n                Lexicon.SIZE: (\"VEC2\", {\n                    \"default\": (1, 1), \"mij\": 0, \"maj\": 1,\n                    \"label\": [\"X\", \"Y\"]}),\n                Lexicon.EDGE: (EnumEdge._member_names_, {\n                    \"default\": EnumEdge.CLIP.name}),\n                Lexicon.BLUR: (\"FLOAT\", {\n                    \"default\": 0, \"min\": 0, \"step\": 0.01,}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        shape = parse_param(kw, Lexicon.SHAPE, EnumShapes, EnumShapes.CIRCLE.name)\n        sides = parse_param(kw, Lexicon.SIDES, EnumConvertType.INT, 3, 3)\n        color = parse_param(kw, Lexicon.COLOR, 
EnumConvertType.VEC4INT, (255, 255, 255, 255), 0, 255)\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)\n        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (256, 256), IMAGE_SIZE_MIN)\n        offset = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0), -1, 1)\n        angle = parse_param(kw, Lexicon.ANGLE, EnumConvertType.FLOAT, 0, -180, 180)\n        size = parse_param(kw, Lexicon.SIZE, EnumConvertType.VEC2, (1, 1), 0, 1, zero=0.001)\n        edge = parse_param(kw, Lexicon.EDGE, EnumEdge, EnumEdge.CLIP.name)\n        blur = parse_param(kw, Lexicon.BLUR, EnumConvertType.FLOAT, 0, 0)\n        params = list(zip_longest_fill(shape, sides, color, matte, wihi, offset, angle, size, edge, blur))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (shape, sides, color, matte, wihi, offset, angle, size, edge, blur) in enumerate(params):\n            width, height = wihi\n            sizeX, sizeY = size\n            fill = color[:3][::-1]\n\n            match shape:\n                case EnumShapes.SQUARE:\n                    rgb = shape_quad(width, height, sizeX, sizeY, fill)\n\n                case EnumShapes.CIRCLE:\n                    rgb = shape_ellipse(width, height, sizeX, sizeY, fill)\n\n                case EnumShapes.POLYGON:\n                    rgb = shape_polygon(width, height, sizeX, sides, fill)\n\n            rgb = pil_to_cv(rgb)\n            rgb = image_transform(rgb, offset, angle, edge=edge)\n            mask = image_mask_binary(rgb)\n\n            if blur > 0:\n                # @TODO: Do blur on larger canvas to remove wrap bleed.\n                rgb = (gaussian(rgb, sigma=blur, channel_axis=2) * 255).astype(np.uint8)\n                mask = (gaussian(mask, sigma=blur, channel_axis=2) * 255).astype(np.uint8)\n\n            mask = (mask * (color[3] / 255.)).astype(np.uint8)\n            back = list(matte[:3]) + [255]\n            canvas = 
np.full((height, width, 4), back, dtype=rgb.dtype)\n            rgba = image_blend(canvas, rgb, mask)\n            rgba = image_mask_add(rgba, mask)\n            rgb = image_convert(rgba, 3)\n\n            images.append([cv_to_tensor(rgba), cv_to_tensor(rgb), cv_to_tensor(mask, True)])\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass TextNode(CozyImageNode):\n    NAME = \"TEXT GEN (JOV) 📝\"\n    CATEGORY = JOV_CATEGORY\n    FONTS = font_names()\n    FONT_NAMES = sorted(FONTS.keys())\n    DESCRIPTION = \"\"\"\nGenerates images containing text based on parameters such as font, size, alignment, color, and position. Users can input custom text messages, select fonts from a list of available options, adjust font size, and specify the alignment and justification of the text. Additionally, the node provides options for auto-sizing text to fit within specified dimensions, controlling letter-by-letter rendering, and applying edge effects such as clipping and inversion.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.STRING: (\"STRING\", {\n                    \"default\": \"jovimetrix\", \"multiline\": True,\n                    \"dynamicPrompts\": False,\n                    \"tooltip\": \"Your Message\"}),\n                Lexicon.FONT: (cls.FONT_NAMES, {\n                    \"default\": cls.FONT_NAMES[0]}),\n                Lexicon.LETTER: (\"BOOLEAN\", {\n                    \"default\": False,}),\n                Lexicon.AUTOSIZE: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"Scale based on Width & Height\"}),\n                Lexicon.COLOR: (\"VEC4\", {\n                    \"default\": (255, 255, 255, 255), \"rgb\": True,\n                    \"tooltip\": \"Color of the letters\"}),\n                Lexicon.MATTE: (\"VEC4\", {\n                    
\"default\": (0, 0, 0, 255), \"rgb\": True,}),\n                Lexicon.COLUMNS: (\"INT\", {\n                    \"default\": 0, \"min\": 0}),\n                # if auto on, hide these...\n                Lexicon.SIZE: (\"INT\", {\n                    \"default\": 16, \"min\": 8}),\n                Lexicon.ALIGN: (EnumAlignment._member_names_, {\n                    \"default\": EnumAlignment.CENTER.name,}),\n                Lexicon.JUSTIFY: (EnumJustify._member_names_, {\n                    \"default\": EnumJustify.CENTER.name,}),\n                Lexicon.MARGIN: (\"INT\", {\n                    \"default\": 0, \"min\": -1024, \"max\": 1024,}),\n                Lexicon.SPACING: (\"INT\", {\n                    \"default\": 0, \"min\": -1024, \"max\": 1024}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (256, 256), \"mij\":IMAGE_SIZE_MIN, \"int\": True,\n                    \"label\": [\"W\", \"H\"],}),\n                Lexicon.XY: (\"VEC2\", {\n                    \"default\": (0, 0,), \"mij\": -1, \"maj\": 1,\n                    \"label\": [\"X\", \"Y\"],\n                    \"tooltip\":\"Offset the position\"}),\n                Lexicon.ANGLE: (\"FLOAT\", {\n                    \"default\": 0, \"step\": 0.01,}),\n                Lexicon.EDGE: (EnumEdge._member_names_, {\n                    \"default\": EnumEdge.CLIP.name}),\n                Lexicon.INVERT: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"Invert the mask input\"})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        full_text = parse_param(kw, Lexicon.STRING, EnumConvertType.STRING, \"jovimetrix\")\n        font_idx = parse_param(kw, Lexicon.FONT, EnumConvertType.STRING, self.FONT_NAMES[0])\n        autosize = parse_param(kw, Lexicon.AUTOSIZE, EnumConvertType.BOOLEAN, False)\n        letter = parse_param(kw, Lexicon.LETTER, EnumConvertType.BOOLEAN, False)\n  
      color = parse_param(kw, Lexicon.COLOR, EnumConvertType.VEC4INT, (255,255,255,255), 0, 255)\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0,0,0,255), 0, 255)\n        columns = parse_param(kw, Lexicon.COLUMNS, EnumConvertType.INT, 0)\n        font_size = parse_param(kw, Lexicon.SIZE, EnumConvertType.INT, 16, 8)\n        align = parse_param(kw, Lexicon.ALIGN, EnumAlignment, EnumAlignment.CENTER.name)\n        justify = parse_param(kw, Lexicon.JUSTIFY, EnumJustify, EnumJustify.CENTER.name)\n        margin = parse_param(kw, Lexicon.MARGIN, EnumConvertType.INT, 0)\n        line_spacing = parse_param(kw, Lexicon.SPACING, EnumConvertType.INT, 0)\n        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (256, 256), IMAGE_SIZE_MIN)\n        pos = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0))\n        angle = parse_param(kw, Lexicon.ANGLE, EnumConvertType.FLOAT, 0)\n        edge = parse_param(kw, Lexicon.EDGE, EnumEdge, EnumEdge.CLIP.name)\n        invert = parse_param(kw, Lexicon.INVERT, EnumConvertType.BOOLEAN, False)\n        images = []\n        params = list(zip_longest_fill(full_text, font_idx, autosize, letter, color,\n                                matte, columns, font_size, align, justify, margin,\n                                line_spacing, wihi, pos, angle, edge, invert))\n\n        pbar = ProgressBar(len(params))\n        for idx, (full_text, font_idx, autosize, letter, color, matte, columns,\n                font_size, align, justify, margin, line_spacing, wihi, pos,\n                angle, edge, invert) in enumerate(params):\n\n            width, height = wihi\n            font_name = self.FONTS[font_idx]\n            full_text = str(full_text)\n\n            if letter:\n                full_text = full_text.replace('\\n', '')\n                if autosize:\n                    _, font_size = text_autosize(full_text[0].upper(), font_name, width, height)[:2]\n                    margin = 0\n                    line_spacing = 
0\n            else:\n                if autosize:\n                    wm = width - margin * 2\n                    hm = height - margin * 2 - line_spacing\n                    columns = 0 if columns == 0 else columns * 2 + 2\n                    full_text, font_size = text_autosize(full_text, font_name, wm, hm, columns)[:2]\n                full_text = [full_text]\n            font_size *= 2.5\n\n            font = ImageFont.truetype(font_name, font_size)\n            for ch in full_text:\n                img = text_draw(ch, font, width, height, align, justify, margin, line_spacing, color)\n                img = image_rotate(img, angle, edge=edge)\n                img = image_translate(img, pos, edge=edge)\n                if invert:\n                    img = image_invert(img, 1)\n                images.append(cv_to_tensor_full(img, matte))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n"
  },
  {
    "path": "core/trans.py",
    "content": "\"\"\" Jovimetrix - Transform \"\"\"\n\nimport sys\nfrom enum import Enum\n\nfrom comfy.utils import ProgressBar\n\nfrom cozy_comfyui import \\\n    logger, \\\n    IMAGE_SIZE_MIN, \\\n    InputType, RGBAMaskType, EnumConvertType, \\\n    deep_merge, parse_param, parse_dynamic, zip_longest_fill\n\nfrom cozy_comfyui.lexicon import \\\n    Lexicon\n\nfrom cozy_comfyui.node import \\\n    COZY_TYPE_IMAGE, \\\n    CozyImageNode, CozyBaseNode\n\nfrom cozy_comfyui.image.channel import \\\n    channel_solid\n\nfrom cozy_comfyui.image.convert import \\\n    tensor_to_cv, cv_to_tensor_full, cv_to_tensor, image_mask, image_mask_add\n\nfrom cozy_comfyui.image.compose import \\\n    EnumOrientation, EnumEdge, EnumMirrorMode, EnumScaleMode, EnumInterpolation, \\\n    image_edge_wrap, image_mirror, image_scalefit, image_transform, \\\n    image_crop, image_crop_center, image_crop_polygonal, image_stacker, \\\n    image_flatten\n\nfrom cozy_comfyui.image.misc import \\\n    image_stack\n\nfrom cozy_comfyui.image.mapping import \\\n    EnumProjection, \\\n    remap_fisheye, remap_perspective, remap_polar, remap_sphere\n\n# ==============================================================================\n# === GLOBAL ===\n# ==============================================================================\n\nJOV_CATEGORY = \"TRANSFORM\"\n\n# ==============================================================================\n# === ENUMERATION ===\n# ==============================================================================\n\nclass EnumCropMode(Enum):\n    CENTER = 20\n    XY = 0\n    FREE = 10\n\n# ==============================================================================\n# === CLASS ===\n# ==============================================================================\n\nclass CropNode(CozyImageNode):\n    NAME = \"CROP (JOV) ✂️\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nExtract a portion of an input image or resize it. 
It supports various cropping modes, including center cropping, custom XY cropping, and free-form polygonal cropping. This node is useful for preparing image data for specific tasks or extracting regions of interest.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.FUNCTION: (EnumCropMode._member_names_, {\n                    \"default\": EnumCropMode.CENTER.name}),\n                Lexicon.XY: (\"VEC2\", {\n                    \"default\": (0, 0), \"mij\": 0, \"maj\": 1,\n                    \"label\": [\"X\", \"Y\"]}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (512, 512), \"mij\": IMAGE_SIZE_MIN, \"int\": True,\n                    \"label\": [\"W\", \"H\"]}),\n                Lexicon.TLTR: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 1), \"mij\": 0, \"maj\": 1,\n                    \"label\": [\"TOP\", \"LEFT\", \"TOP\", \"RIGHT\"],}),\n                Lexicon.BLBR: (\"VEC4\", {\n                    \"default\": (1, 0, 1, 1), \"mij\": 0, \"maj\": 1,\n                    \"label\": [\"BOTTOM\", \"LEFT\", \"BOTTOM\", \"RIGHT\"],}),\n                Lexicon.MATTE: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 255), \"rgb\": True,})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        func = parse_param(kw, Lexicon.FUNCTION, EnumCropMode, EnumCropMode.CENTER.name)\n        # if less than 1 then use as scalar, over 1 = int(size)\n        xy = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0,))\n        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)\n        tltr = parse_param(kw, Lexicon.TLTR, EnumConvertType.VEC4, (0, 0, 0, 1,))\n        blbr 
= parse_param(kw, Lexicon.BLBR, EnumConvertType.VEC4, (1, 0, 1, 1,))\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)\n        params = list(zip_longest_fill(pA, func, xy, wihi, tltr, blbr, matte))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, func, xy, wihi, tltr, blbr, matte) in enumerate(params):\n            width, height = wihi\n            pA = tensor_to_cv(pA) if pA is not None else channel_solid(width, height)\n            alpha = None\n            if pA.ndim == 3 and pA.shape[2] == 4:\n                alpha = image_mask(pA)\n\n            if func == EnumCropMode.FREE:\n                x1, y1, x2, y2 = tltr\n                x4, y4, x3, y3 = blbr\n                points = (x1 * width, y1 * height), (x2 * width, y2 * height), \\\n                    (x3 * width, y3 * height), (x4 * width, y4 * height)\n                pA = image_crop_polygonal(pA, points)\n                if alpha is not None:\n                    alpha = image_crop_polygonal(alpha, points)\n                    pA[..., 3] = alpha[..., 0][:,:]\n            elif func == EnumCropMode.XY:\n                pA = image_crop(pA, width, height, xy)\n            else:\n                pA = image_crop_center(pA, width, height)\n            images.append(cv_to_tensor_full(pA, matte))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass FlattenNode(CozyImageNode):\n    NAME = \"FLATTEN (JOV) ⬇️\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nCombine multiple input images into a single image by summing their pixel values. This operation is useful for merging multiple layers or images into one composite image, such as combining different elements of a design or merging masks. Users can specify the blending mode and interpolation method to control how the images are combined. 
Additionally, a matte can be applied to adjust the transparency of the final composite image.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.MODE: (EnumScaleMode._member_names_, {\n                    \"default\": EnumScaleMode.MATTE.name,}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (512, 512), \"mij\":1, \"int\": True,\n                    \"label\": [\"W\", \"H\"]}),\n                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {\n                    \"default\": EnumInterpolation.LANCZOS4.name,}),\n                Lexicon.MATTE: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 255), \"rgb\": True,}),\n                Lexicon.OFFSET: (\"VEC2\", {\n                    \"default\": (0, 0), \"mij\":0, \"int\": True,\n                    \"label\": [\"X\", \"Y\"]}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        imgs = parse_dynamic(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        if imgs is None or len(imgs) == 0:\n            logger.warning(\"no images to flatten\")\n            return ()\n\n        # be less dumb when merging\n        pA = [tensor_to_cv(i) for i in imgs]\n        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)[0]\n        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), 1)[0]\n        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)[0]\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0]\n        offset = parse_param(kw, Lexicon.OFFSET, EnumConvertType.VEC2INT, (0, 0), 0)[0]\n        w, h = wihi\n        x, y = offset\n        pA = image_flatten(pA, x, y, w, h, mode=mode, sample=sample)\n        pA = [cv_to_tensor_full(pA, matte)]\n        return 
image_stack(pA)\n\nclass SplitNode(CozyBaseNode):\n    NAME = \"SPLIT (JOV) 🎭\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (\"IMAGE\", \"IMAGE\",)\n    RETURN_NAMES = (\"IMAGEA\", \"IMAGEB\",)\n    OUTPUT_TOOLTIPS = (\n        \"Left/Top image\",\n        \"Right/Bottom image\"\n    )\n    DESCRIPTION = \"\"\"\nSplit an image into two or four images based on the percentages for width and height.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.VALUE: (\"FLOAT\", {\n                    \"default\": 0.5, \"min\": 0, \"max\": 1, \"step\": 0.001\n                }),\n                Lexicon.FLIP: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"Horizontal split (False) or Vertical split (True)\"\n                }),\n                Lexicon.MODE: (EnumScaleMode._member_names_, {\n                    \"default\": EnumScaleMode.MATTE.name,}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (512, 512), \"mij\":IMAGE_SIZE_MIN, \"int\": True,\n                    \"label\": [\"W\", \"H\"]}),\n                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {\n                    \"default\": EnumInterpolation.LANCZOS4.name,}),\n                Lexicon.MATTE: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 255), \"rgb\": True,})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        percent = parse_param(kw, Lexicon.VALUE, EnumConvertType.FLOAT, 0.5, 0, 1)\n        flip = parse_param(kw, Lexicon.FLIP, EnumConvertType.BOOLEAN, False)\n        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)\n        wihi = parse_param(kw, Lexicon.WH, 
EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)\n        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)\n        params = list(zip_longest_fill(pA, percent, flip, mode, wihi, sample, matte))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, percent, flip, mode, wihi, sample, matte) in enumerate(params):\n            w, h = wihi\n            pA = channel_solid(w, h, matte) if pA is None else tensor_to_cv(pA)\n\n            if flip:\n                size = pA.shape[1]\n                percent = max(1, min(size-1, int(size * percent)))\n                image_a = pA[:, :percent]\n                image_b = pA[:, percent:]\n            else:\n                size = pA.shape[0]\n                percent = max(1, min(size-1, int(size * percent)))\n                image_a = pA[:percent, :]\n                image_b = pA[percent:, :]\n\n            if mode != EnumScaleMode.MATTE:\n                image_a = image_scalefit(image_a, w, h, mode, sample)\n                image_b = image_scalefit(image_b, w, h, mode, sample)\n\n            images.append([cv_to_tensor(img) for img in [image_a, image_b]])\n            pbar.update_absolute(idx)\n        return image_stack(images)\n\nclass StackNode(CozyImageNode):\n    NAME = \"STACK (JOV) ➕\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nMerge multiple input images into a single composite image by stacking them along a specified axis.\n\nOptions include axis, stride, scaling mode, width and height, interpolation method, and matte color.\n\nThe axis parameter allows for horizontal, vertical, or grid stacking of images, while stride controls the spacing between them.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                
Lexicon.AXIS: (EnumOrientation._member_names_, {\n                    \"default\": EnumOrientation.GRID.name,}),\n                Lexicon.STEP: (\"INT\", {\n                    \"default\": 1, \"min\": 0,\n                    \"tooltip\":\"How many images are placed before a new row starts (stride)\"}),\n                Lexicon.MODE: (EnumScaleMode._member_names_, {\n                    \"default\": EnumScaleMode.MATTE.name,}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (512, 512), \"mij\": IMAGE_SIZE_MIN, \"int\": True,\n                    \"label\": [\"W\", \"H\"]}),\n                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {\n                    \"default\": EnumInterpolation.LANCZOS4.name,}),\n                Lexicon.MATTE: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 255), \"rgb\": True,})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        images = parse_dynamic(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        if images is None or len(images) == 0:\n            logger.warning(\"no images to stack\")\n            return ()\n\n        images = [tensor_to_cv(i) for i in images]\n        axis = parse_param(kw, Lexicon.AXIS, EnumOrientation, EnumOrientation.GRID.name)[0]\n        stride = parse_param(kw, Lexicon.STEP, EnumConvertType.INT, 1, 0)[0]\n        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)[0]\n        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)[0]\n        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)[0]\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0]\n        img = image_stacker(images, axis, stride) #, matte)\n        if mode != EnumScaleMode.MATTE:\n            w, h = wihi\n            img = image_scalefit(img, w, h, mode, sample)\n        rgba, rgb, mask = 
cv_to_tensor_full(img, matte)\n        return rgba.unsqueeze(0), rgb.unsqueeze(0), mask.unsqueeze(0)\n\nclass TransformNode(CozyImageNode):\n    NAME = \"TRANSFORM (JOV) 🏝️\"\n    CATEGORY = JOV_CATEGORY\n    DESCRIPTION = \"\"\"\nApply various geometric transformations to images, including translation, rotation, scaling, mirroring, tiling and perspective projection. It offers extensive control over image manipulation to achieve desired visual effects.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES(prompt=True, dynprompt=True)\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.MASK: (COZY_TYPE_IMAGE, {\n                    \"tooltip\": \"Override Image mask\"}),\n                Lexicon.XY: (\"VEC2\", {\n                    \"default\": (0, 0,), \"mij\": -1, \"maj\": 1,\n                    \"label\": [\"X\", \"Y\"]}),\n                Lexicon.ANGLE: (\"FLOAT\", {\n                    \"default\": 0, \"min\": -sys.float_info.max, \"max\": sys.float_info.max, \"step\": 0.1,}),\n                Lexicon.SIZE: (\"VEC2\", {\n                    \"default\": (1, 1), \"mij\": 0.001,\n                    \"label\": [\"X\", \"Y\"]}),\n                Lexicon.TILE: (\"VEC2\", {\n                    \"default\": (1, 1), \"mij\": 1,\n                    \"label\": [\"X\", \"Y\"]}),\n                Lexicon.EDGE: (EnumEdge._member_names_, {\n                    \"default\": EnumEdge.CLIP.name}),\n                Lexicon.MIRROR: (EnumMirrorMode._member_names_, {\n                    \"default\": EnumMirrorMode.NONE.name}),\n                Lexicon.PIVOT: (\"VEC2\", {\n                    \"default\": (0.5, 0.5), \"mij\": 0, \"maj\": 1, \"step\": 0.01,\n                    \"label\": [\"X\", \"Y\"]}),\n                Lexicon.PROJECTION: (EnumProjection._member_names_, {\n                    \"default\": EnumProjection.NORMAL.name}),\n       
         Lexicon.TLTR: (\"VEC4\", {\n                    \"default\": (0, 0, 1, 0), \"mij\": 0, \"maj\": 1, \"step\": 0.005,\n                    \"label\": [\"TOP\", \"LEFT\", \"TOP\", \"RIGHT\"],}),\n                Lexicon.BLBR: (\"VEC4\", {\n                    \"default\": (0, 1, 1, 1), \"mij\": 0, \"maj\": 1, \"step\": 0.005,\n                    \"label\": [\"BOTTOM\", \"LEFT\", \"BOTTOM\", \"RIGHT\"],}),\n                Lexicon.STRENGTH: (\"FLOAT\", {\n                    \"default\": 1, \"min\": 0, \"max\": 1, \"step\": 0.005}),\n                Lexicon.MODE: (EnumScaleMode._member_names_, {\n                    \"default\": EnumScaleMode.MATTE.name,}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (512, 512), \"mij\": IMAGE_SIZE_MIN, \"int\": True,\n                    \"label\": [\"W\", \"H\"]}),\n                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {\n                    \"default\": EnumInterpolation.LANCZOS4.name,}),\n                Lexicon.MATTE: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 255), \"rgb\": True,})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> RGBAMaskType:\n        pA = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        mask = parse_param(kw, Lexicon.MASK, EnumConvertType.IMAGE, None)\n        offset = parse_param(kw, Lexicon.XY, EnumConvertType.VEC2, (0, 0), -1, 1)\n        angle = parse_param(kw, Lexicon.ANGLE, EnumConvertType.FLOAT, 0)\n        size = parse_param(kw, Lexicon.SIZE, EnumConvertType.VEC2, (1, 1), 0.001)\n        edge = parse_param(kw, Lexicon.EDGE, EnumEdge, EnumEdge.CLIP.name)\n        mirror = parse_param(kw, Lexicon.MIRROR, EnumMirrorMode, EnumMirrorMode.NONE.name)\n        mirror_pivot = parse_param(kw, Lexicon.PIVOT, EnumConvertType.VEC2, (0.5, 0.5), 0, 1)\n        tile_xy = parse_param(kw, Lexicon.TILE, EnumConvertType.VEC2, (1, 1), 1)\n        proj = parse_param(kw, Lexicon.PROJECTION, 
EnumProjection, EnumProjection.NORMAL.name)\n        tltr = parse_param(kw, Lexicon.TLTR, EnumConvertType.VEC4, (0, 0, 1, 0), 0, 1)\n        blbr = parse_param(kw, Lexicon.BLBR, EnumConvertType.VEC4, (0, 1, 1, 1), 0, 1)\n        strength = parse_param(kw, Lexicon.STRENGTH, EnumConvertType.FLOAT, 1, 0, 1)\n        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)\n        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)\n        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)\n        params = list(zip_longest_fill(pA, mask, offset, angle, size, edge, tile_xy, mirror, mirror_pivot, proj, strength, tltr, blbr, mode, wihi, sample, matte))\n        images = []\n        pbar = ProgressBar(len(params))\n        for idx, (pA, mask, offset, angle, size, edge, tile_xy, mirror, mirror_pivot, proj, strength, tltr, blbr, mode, wihi, sample, matte) in enumerate(params):\n            pA = tensor_to_cv(pA) if pA is not None else channel_solid()\n            if mask is None:\n                mask = image_mask(pA, 255)\n            else:\n                mask = tensor_to_cv(mask)\n            pA = image_mask_add(pA, mask)\n\n            h, w = pA.shape[:2]\n            pA = image_transform(pA, offset, angle, size, sample, edge)\n            pA = image_crop_center(pA, w, h)\n\n            if mirror != EnumMirrorMode.NONE:\n                mpx, mpy = mirror_pivot\n                pA = image_mirror(pA, mirror, mpx, mpy)\n                pA = image_scalefit(pA, w, h, EnumScaleMode.FIT, sample)\n\n            tx, ty = tile_xy\n            if tx != 1. 
or ty != 1.:\n                pA = image_edge_wrap(pA, tx / 2 - 0.5, ty / 2 - 0.5)\n                pA = image_scalefit(pA, w, h, EnumScaleMode.FIT, sample)\n\n            match proj:\n                case EnumProjection.PERSPECTIVE:\n                    x1, y1, x2, y2 = tltr\n                    x4, y4, x3, y3 = blbr\n                    sh, sw = pA.shape[:2]\n                    x1, x2, x3, x4 = map(lambda x: x * sw, [x1, x2, x3, x4])\n                    y1, y2, y3, y4 = map(lambda y: y * sh, [y1, y2, y3, y4])\n                    pA = remap_perspective(pA, [[x1, y1], [x2, y2], [x3, y3], [x4, y4]])\n                case EnumProjection.SPHERICAL:\n                    pA = remap_sphere(pA, strength)\n                case EnumProjection.FISHEYE:\n                    pA = remap_fisheye(pA, strength)\n                case EnumProjection.POLAR:\n                    pA = remap_polar(pA)\n\n            if proj != EnumProjection.NORMAL:\n                pA = image_scalefit(pA, w, h, EnumScaleMode.FIT, sample)\n\n            if mode != EnumScaleMode.MATTE:\n                w, h = wihi\n                pA = image_scalefit(pA, w, h, mode, sample)\n\n            images.append(cv_to_tensor_full(pA, matte))\n            pbar.update_absolute(idx)\n        return image_stack(images)\n"
  },
  {
    "path": "core/utility/__init__.py",
    "content": ""
  },
  {
    "path": "core/utility/batch.py",
    "content": "\"\"\" Jovimetrix - Utility \"\"\"\n\nimport os\nimport sys\nimport json\nimport glob\nimport random\nfrom enum import Enum\nfrom pathlib import Path\nfrom itertools import zip_longest\nfrom typing import Any\n\nimport torch\nimport numpy as np\n\nfrom comfy.utils import ProgressBar\nfrom nodes import interrupt_processing\n\nfrom cozy_comfyui import \\\n    logger, \\\n    IMAGE_SIZE_MIN, \\\n    InputType, EnumConvertType, TensorType, \\\n    deep_merge, parse_dynamic, parse_param\n\nfrom cozy_comfyui.lexicon import \\\n    Lexicon\n\nfrom cozy_comfyui.node import \\\n    COZY_TYPE_ANY, \\\n    CozyBaseNode\n\nfrom cozy_comfyui.image import \\\n    IMAGE_FORMATS\n\nfrom cozy_comfyui.image.compose import \\\n    EnumScaleMode, EnumInterpolation, \\\n    image_matte, image_scalefit\n\nfrom cozy_comfyui.image.convert import \\\n    image_convert, cv_to_tensor, cv_to_tensor_full, tensor_to_cv\n\nfrom cozy_comfyui.image.misc import \\\n    image_by_size\n\nfrom cozy_comfyui.image.io import \\\n    image_load\n\nfrom cozy_comfyui.api import \\\n    parse_reset, comfy_api_post\n\nfrom ... 
import \\\n    ROOT\n\nJOV_CATEGORY = \"UTILITY/BATCH\"\n\n# ==============================================================================\n# === ENUMERATION ===\n# ==============================================================================\n\nclass EnumBatchMode(Enum):\n    MERGE = 30\n    PICK = 10\n    SLICE = 15\n    INDEX_LIST = 20\n    RANDOM = 5\n\n# ==============================================================================\n# === CLASS ===\n# ==============================================================================\n\nclass ArrayNode(CozyBaseNode):\n    NAME = \"ARRAY (JOV) 📚\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (COZY_TYPE_ANY, \"INT\",)\n    RETURN_NAMES = (\"ARRAY\", \"LENGTH\",)\n    OUTPUT_IS_LIST = (True, True,)\n    OUTPUT_TOOLTIPS = (\n        \"Output list from selected operation\",\n        \"Length of output list\",\n    )\n    DESCRIPTION = \"\"\"\nProcesses a batch of data based on the selected mode. Merge, pick, slice, random select, or index items. 
Can also reverse the order of items.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.MODE: (EnumBatchMode._member_names_, {\n                    \"default\": EnumBatchMode.MERGE.name,\n                    \"tooltip\": \"Select a single index, specific range, custom index list or randomized\"}),\n                Lexicon.RANGE: (\"VEC3\", {\n                    \"default\": (0, 0, 1), \"mij\": 0, \"int\": True,\n                    \"tooltip\": \"The start, end and step for the range\"}),\n                Lexicon.INDEX: (\"STRING\", {\n                    \"default\": \"\",\n                    \"tooltip\": \"Comma separated list of indices to export\"}),\n                Lexicon.COUNT: (\"INT\", {\n                    \"default\": 0, \"min\": 0, \"max\": sys.maxsize,\n                    \"tooltip\": \"How many items to return\"}),\n                Lexicon.REVERSE: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"Reverse the calculated output list\"}),\n                Lexicon.SEED: (\"INT\", {\n                    \"default\": 0, \"min\": 0, \"max\": sys.maxsize}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    @classmethod\n    def batched(cls, iterable, chunk_size, expand:bool=False, fill:Any=None) -> list[Any]:\n        if expand:\n            iterator = iter(iterable)\n            return zip_longest(*[iterator] * chunk_size, fillvalue=fill)\n        return [iterable[i: i + chunk_size] for i in range(0, len(iterable), chunk_size)]\n\n    def run(self, **kw) -> tuple[Any, list]:\n        data_list = parse_dynamic(kw, Lexicon.DYNAMIC, EnumConvertType.ANY, None)\n        mode = parse_param(kw, Lexicon.MODE, EnumBatchMode, EnumBatchMode.MERGE.name)[0]\n        slice_range = parse_param(kw, Lexicon.RANGE, EnumConvertType.VEC3INT, (0, 0, 1))[0]\n        index = 
parse_param(kw, Lexicon.INDEX, EnumConvertType.STRING, \"\")[0]\n        count = parse_param(kw, Lexicon.COUNT, EnumConvertType.INT, 0, 0)[0]\n        reverse = parse_param(kw, Lexicon.REVERSE, EnumConvertType.BOOLEAN, False)[0]\n        seed = parse_param(kw, Lexicon.SEED, EnumConvertType.INT, 0, 0)[0]\n\n        data = []\n        # track latents since they need to be added back to Dict['samples']\n        output_type = None\n        for b in data_list:\n            if isinstance(b, dict) and \"samples\" in b:\n                # latents are batched in the x.samples key\n                if output_type and output_type != EnumConvertType.LATENT:\n                    raise Exception(f\"Cannot mix input types {output_type} vs {EnumConvertType.LATENT}\")\n                data.extend(b[\"samples\"])\n                output_type = EnumConvertType.LATENT\n\n            elif isinstance(b, TensorType):\n                if output_type and output_type not in (EnumConvertType.IMAGE, EnumConvertType.MASK):\n                    raise Exception(f\"Cannot mix input types {output_type} vs {EnumConvertType.IMAGE}\")\n\n                if b.ndim == 4:\n                    b = [i for i in b]\n                else:\n                    b = [b]\n\n                for x in b:\n                    if x.ndim == 2:\n                        x = x.unsqueeze(-1)\n                    data.append(x)\n\n                output_type = EnumConvertType.IMAGE\n\n            elif b is not None:\n                idx_type = type(b)\n                if output_type and output_type != idx_type:\n                    raise Exception(f\"Cannot mix input types {output_type} vs {idx_type}\")\n                data.append(b)\n\n        if len(data) == 0:\n            logger.warning(\"no data for list\")\n            return ([], [0])\n\n        if mode == EnumBatchMode.PICK:\n            start, end, step = slice_range\n            start = start if start < len(data) else -1\n            data = [data[start]]\n
       elif mode == EnumBatchMode.SLICE:\n            start, end, step = slice_range\n            start = abs(start)\n            end = len(data) if end == 0 else abs(end+1)\n            if step == 0:\n                step = 1\n            elif step < 0:\n                data = data[::-1]\n                step = abs(step)\n            data = data[start:end:step]\n        elif mode == EnumBatchMode.RANDOM:\n            random.seed(seed)\n            if count == 0:\n                count = len(data)\n            else:\n                count = max(1, min(len(data), count))\n            data = random.sample(data, k=count)\n        elif mode == EnumBatchMode.INDEX_LIST:\n            junk = []\n            for x in index.split(','):\n                if '-' in x:\n                    x = x.split('-')\n                    for idx, v in enumerate(x):\n                        try:\n                            x[idx] = max(0, min(len(data)-1, int(v)))\n                        except ValueError as e:\n                            logger.error(e)\n                            x[idx] = 0\n\n                    if x[0] > x[1]:\n                        tmp = list(range(x[0], x[1]-1, -1))\n                    else:\n                        tmp = list(range(x[0], x[1]+1))\n                    junk.extend(tmp)\n                else:\n                    idx = max(0, min(len(data)-1, int(x)))\n                    junk.append(idx)\n            if len(junk) > 0:\n                data = [data[i] for i in junk]\n\n        if len(data) == 0:\n            logger.warning(\"no data for list\")\n            return ([], [0])\n\n        # reverse before?\n        if reverse:\n            data.reverse()\n\n        # cut the list down first\n        if count > 0:\n            data = data[0:count]\n\n        size = len(data)\n        if output_type == EnumConvertType.IMAGE:\n            _, w, h = image_by_size(data)\n            result = []\n            for d in data:\n                w2, h2, 
cc = d.shape\n                if w != w2 or h != h2 or cc != 4:\n                    d = tensor_to_cv(d)\n                    d = image_convert(d, 4)\n                    d = image_matte(d, (0,0,0,0), w, h)\n                    d = cv_to_tensor(d)\n                d = d.unsqueeze(0)\n                result.append(d)\n\n            size = len(result)\n            data = torch.stack(result)\n        else:\n            data = [data]\n\n        return (data, [size],)\n\nclass BatchToList(CozyBaseNode):\n    NAME = \"BATCH TO LIST (JOV)\"\n    NAME_PRETTY = \"BATCH TO LIST (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (COZY_TYPE_ANY, )\n    RETURN_NAMES = (\"LIST\", )\n    DESCRIPTION = \"\"\"\nConvert a batch of values into a pure python list of values.\n\"\"\"\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        return deep_merge(d, {\n            \"optional\": {\n                Lexicon.BATCH: (COZY_TYPE_ANY, {}),\n            }\n        })\n\n    def run(self, **kw) -> tuple[list[Any]]:\n        batch = parse_param(kw, Lexicon.BATCH, EnumConvertType.LIST, [])\n        batch = [f[0] for f in batch]\n        return (batch,)\n\nclass QueueBaseNode(CozyBaseNode):\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (COZY_TYPE_ANY, COZY_TYPE_ANY, \"STRING\", \"INT\", \"INT\", \"BOOLEAN\")\n    RETURN_NAMES = (\"❔\", \"QUEUE\", \"CURRENT\", \"INDEX\", \"TOTAL\", \"TRIGGER\", )\n    #OUTPUT_IS_LIST = (True, True, True, True, True, True,)\n    VIDEO_FORMATS = ['.wav', '.mp3', '.webm', '.mp4', '.avi', '.wmv', '.mkv', '.mov', '.mxf']\n\n    @classmethod\n    def IS_CHANGED(cls, **kw) -> float:\n        return float('nan')\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.QUEUE: (\"STRING\", {\n                    \"default\": \"./res/img/test-a.png\", \"multiline\": True,\n                    
\"tooltip\": \"Current items to process during Queue iteration\"}),\n                Lexicon.RECURSE: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"Recurse through all subdirectories found\"}),\n                Lexicon.BATCH: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"Load all items, if they are loadable items, i.e. batch load images from the Queue's list\"}),\n                Lexicon.SELECT: (\"INT\", {\n                    \"default\": 0, \"min\": 0,\n                    \"tooltip\": \"The index to use for the current queue item. 0 will move to the next item each queue run\"}),\n                Lexicon.HOLD: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"Hold the item at the current queue index\"}),\n                Lexicon.STOP: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"When the Queue is out of items, send a `HALT` to ComfyUI\"}),\n                Lexicon.LOOP: (\"BOOLEAN\", {\n                    \"default\": True,\n                    \"tooltip\": \"If the queue should loop. 
If `False` and if there are more iterations, will send the previous image\"}),\n                Lexicon.RESET: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\": \"Reset the queue back to index 1\"}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def __init__(self) -> None:\n        self.__index = 0\n        self.__q = None\n        self.__index_last = None\n        self.__len = 0\n        self.__current = None\n        self.__previous = None\n        self.__ident = None\n        self.__last_q_value = {}\n\n    # consume the list into iterable items to load/process\n    def __parseQ(self, data: Any, recurse: bool=False) -> list[str]:\n        entries = []\n        for line in data.strip().split('\\n'):\n            if len(line) == 0:\n                continue\n\n            data = [line]\n            if not line.lower().startswith(\"http\"):\n                # <directory>;*.png;*.gif;*.jpg\n                base_path_str, tail = os.path.split(line)\n                filters = [p.strip() for p in tail.split(';')]\n\n                base_path = Path(base_path_str)\n                if base_path.is_absolute():\n                    search_dir = base_path if base_path.is_dir() else base_path.parent\n                else:\n                    search_dir = (ROOT / base_path).resolve()\n\n                # Check if the base directory exists\n                if search_dir.exists():\n                    if search_dir.is_dir():\n                        new_data = []\n                        filters = filters if len(filters) > 0 and isinstance(filters[0], str) else IMAGE_FORMATS\n                        for pattern in filters:\n                            found = glob.glob(str(search_dir / pattern), recursive=recurse)\n                            new_data.extend([str(Path(f).resolve()) for f in found if Path(f).is_file()])\n                        if len(new_data):\n                            data = new_data\n         
           elif search_dir.is_file():\n                        path = str(search_dir.resolve())\n                        if path.lower().endswith('.txt'):\n                            with open(path, 'r', encoding='utf-8') as f:\n                                data = f.read().split('\\n')\n                        else:\n                            data = [path]\n                elif len(results := glob.glob(str(search_dir))) > 0:\n                    data = [x.replace('\\\\', '/') for x in results]\n\n            if len(data):\n                ret = []\n                for x in data:\n                    try: ret.append(float(x))\n                    except ValueError: ret.append(x)\n                entries.extend(ret)\n        return entries\n\n    # turn Q element into actual hard type\n    def process(self, q_data: Any) -> TensorType | str | dict:\n        # single Q cache to skip loading single entries over and over\n        # @TODO: MRU cache strategy\n        if (val := self.__last_q_value.get(q_data, None)) is not None:\n            return val\n        if isinstance(q_data, (str,)):\n            _, ext = os.path.splitext(q_data)\n            if ext in IMAGE_FORMATS:\n                data = image_load(q_data)[0]\n                self.__last_q_value[q_data] = data\n            #elif ext in self.VIDEO_FORMATS:\n            #    data = load_file(q_data)\n            #    self.__last_q_value[q_data] = data\n            elif ext == '.json':\n                with open(q_data, 'r', encoding='utf-8') as f:\n                    self.__last_q_value[q_data] = json.load(f)\n        return self.__last_q_value.get(q_data, q_data)\n\n    def run(self, ident, **kw) -> tuple[Any, list[str], str, int, int, bool]:\n\n        self.__ident = ident\n        # should work headless as well\n\n        if (new_val := parse_param(kw, Lexicon.SELECT, EnumConvertType.INT, 0)[0]) > 0:\n            self.__index = new_val - 1\n\n        reset = parse_reset(ident) > 0\n        if reset or parse_param(kw, 
Lexicon.RESET, EnumConvertType.BOOLEAN, False)[0]:\n            self.__q = None\n            self.__index = 0\n\n        mode = parse_param(kw, Lexicon.MODE, EnumScaleMode, EnumScaleMode.MATTE.name)[0]\n        sample = parse_param(kw, Lexicon.SAMPLE, EnumInterpolation, EnumInterpolation.LANCZOS4.name)[0]\n        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)[0]\n        w, h = wihi\n        matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0]\n\n        if self.__q is None:\n            # process Q into ...\n            # check if folder first, file, then string.\n            # entry is: data, <filter if folder:*.png,*.jpg>, <repeats:1+>\n            recurse = parse_param(kw, Lexicon.RECURSE, EnumConvertType.BOOLEAN, False)[0]\n            q = parse_param(kw, Lexicon.QUEUE, EnumConvertType.STRING, \"\")[0]\n            self.__q = self.__parseQ(q, recurse)\n            self.__len = len(self.__q)\n            self.__index_last = 0\n            self.__previous = self.__q[0] if len(self.__q) else None\n            if self.__previous:\n                self.__previous = self.process(self.__previous)\n\n        # make sure we have more to process if this is a single-fire queue\n        stop = parse_param(kw, Lexicon.STOP, EnumConvertType.BOOLEAN, False)[0]\n        if stop and self.__index >= self.__len:\n            comfy_api_post(\"jovi-queue-done\", ident, self.status)\n            interrupt_processing()\n            return self.__previous, self.__q, self.__current, self.__index_last+1, self.__len, True\n\n        if (wait := parse_param(kw, Lexicon.HOLD, EnumConvertType.BOOLEAN, False)[0]) == True:\n            self.__index = self.__index_last\n\n        # otherwise loop around the end\n        loop = parse_param(kw, Lexicon.LOOP, EnumConvertType.BOOLEAN, True)[0]\n        if loop == True:\n            self.__index %= self.__len\n        else:\n            self.__index = min(self.__index, 
self.__len-1)\n\n        self.__current = self.__q[self.__index]\n        data = self.__previous\n        self.__index_last = self.__index\n        info = f\"QUEUE #{ident} [{self.__current}] ({self.__index})\"\n        batched = False\n        if (batched := parse_param(kw, Lexicon.BATCH, EnumConvertType.BOOLEAN, False)[0]) == True:\n            data = []\n            mw, mh, mc = 0, 0, 0\n            for idx in range(self.__len):\n                ret = self.process(self.__q[idx])\n                if isinstance(ret, (np.ndarray,)):\n                    h2, w2, c = ret.shape\n                    mw, mh, mc = max(mw, w2), max(mh, h2), max(mc, c)\n                data.append(ret)\n\n            if mw != 0 or mh != 0 or mc != 0:\n                ret = []\n                # matte = [matte[0], matte[1], matte[2], 0]\n                pbar = ProgressBar(self.__len)\n                for idx, d in enumerate(data):\n                    d = image_convert(d, mc)\n                    if mode != EnumScaleMode.MATTE:\n                        d = image_scalefit(d, w, h, mode, sample, matte)\n                        d = image_scalefit(d, w, h, EnumScaleMode.RESIZE_MATTE, sample, matte)\n                    else:\n                        d = image_matte(d, matte, mw, mh)\n                    ret.append(cv_to_tensor(d))\n                    pbar.update_absolute(idx)\n                data = torch.stack(ret)\n        elif wait == True:\n            info += f\" PAUSED\"\n        else:\n            data = self.process(self.__q[self.__index])\n            if isinstance(data, (np.ndarray,)):\n                if mode != EnumScaleMode.MATTE:\n                    data = image_scalefit(data, w, h, mode, sample)\n                data = cv_to_tensor(data).unsqueeze(0)\n            self.__index += 1\n\n        self.__previous = data\n        comfy_api_post(\"jovi-queue-ping\", ident, self.status)\n        if stop and batched:\n            interrupt_processing()\n        return data, self.__q, 
self.__current, self.__index, self.__len, self.__index == self.__len or batched\n\n    @property\n    def status(self) -> dict[str, Any]:\n        return {\n            \"id\": self.__ident,\n            \"c\": self.__current,\n            \"i\": self.__index_last,\n            \"s\": self.__len,\n            \"l\": self.__q\n        }\n\nclass QueueNode(QueueBaseNode):\n    NAME = \"QUEUE (JOV) 🗃\"\n    OUTPUT_TOOLTIPS = (\n        \"Current item selected from the Queue list\",\n        \"The entire Queue list\",\n        \"Current item selected from the Queue list as a string\",\n        \"Current index for the selected item in the Queue list\",\n        \"Total items in the current Queue List\",\n        \"Send a True signal when the queue end index is reached\"\n    )\n    DESCRIPTION = \"\"\"\nManage a queue of items, such as file paths or data. Supports various formats including images, videos, text files, and JSON files. You can specify the current index for the queue item, enable pausing the queue, or reset it back to the first index. The node outputs the current item in the queue, the entire queue, the current index, and the total number of items in the queue.\n\"\"\"\n\nclass QueueTooNode(QueueBaseNode):\n    NAME = \"QUEUE TOO (JOV) 🗃\"\n    RETURN_TYPES = (\"IMAGE\", \"IMAGE\", \"MASK\", \"STRING\", \"INT\", \"INT\", \"BOOLEAN\")\n    RETURN_NAMES = (\"RGBA\", \"RGB\", \"MASK\", \"CURRENT\", \"INDEX\", \"TOTAL\", \"TRIGGER\", )\n    #OUTPUT_IS_LIST = (False, False, False, True, True, True, True,)\n    OUTPUT_TOOLTIPS = (\n        \"Full channel [RGBA] image. If there is an alpha, the image will be masked out with it when using this output\",\n        \"Three channel [RGB] image. 
There will be no alpha\",\n        \"Single channel mask output\",\n        \"Current item selected from the Queue list as a string\",\n        \"Current index for the selected item in the Queue list\",\n        \"Total items in the current Queue List\",\n        \"Send a True signal when the queue end index is reached\"\n    )\n    DESCRIPTION = \"\"\"\nManage a queue of specific items: media files. Supports various image and video formats. You can specify the current index for the queue item, enable pausing the queue, or reset it back to the first index. The node outputs the current item in the queue, the entire queue, the current index, and the total number of items in the queue.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.MODE: (EnumScaleMode._member_names_, {\n                    \"default\": EnumScaleMode.MATTE.name}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (512, 512), \"mij\":IMAGE_SIZE_MIN, \"int\": True,\n                    \"label\": [\"W\", \"H\"],}),\n                Lexicon.SAMPLE: (EnumInterpolation._member_names_, {\n                    \"default\": EnumInterpolation.LANCZOS4.name,}),\n                Lexicon.MATTE: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 255), \"rgb\": True,}),\n            },\n            \"hidden\": d.get(\"hidden\", {})\n        })\n        return Lexicon._parse(d)\n\n    def run(self, ident, **kw) -> tuple[TensorType, TensorType, TensorType, str, int, int, bool]:\n        data, _, current, index, total, trigger = super().run(ident, **kw)\n        if not isinstance(data, (TensorType, )):\n            data = [None, None, None]\n        else:\n            matte = parse_param(kw, Lexicon.MATTE, EnumConvertType.VEC4INT, (0, 0, 0, 255), 0, 255)[0]\n            data = [tensor_to_cv(d) for d in data]\n            data = [cv_to_tensor_full(d, 
matte) for d in data]\n            data = [torch.stack(d) for d in zip(*data)]\n        return *data, current, index, total, trigger\n"
  },
  {
    "path": "core/utility/info.py",
    "content": "\"\"\" Jovimetrix - Utility \"\"\"\n\nimport io\nimport json\nfrom typing import Any\n\nimport torch\nimport numpy as np\nfrom PIL import Image\nimport matplotlib.pyplot as plt\n\nfrom cozy_comfyui import \\\n    IMAGE_SIZE_MIN, \\\n    InputType, EnumConvertType, TensorType, \\\n    deep_merge, parse_dynamic, parse_param\n\nfrom cozy_comfyui.lexicon import \\\n    Lexicon\n\nfrom cozy_comfyui.node import \\\n    COZY_TYPE_IMAGE, \\\n    CozyBaseNode\n\nfrom cozy_comfyui.image.convert import \\\n    pil_to_tensor\n\nfrom cozy_comfyui.api import \\\n    parse_reset\n\nJOV_CATEGORY = \"UTILITY/INFO\"\n\n# ==============================================================================\n# === SUPPORT ===\n# ==============================================================================\n\ndef decode_tensor(tensor: TensorType) -> str:\n    if tensor.ndim > 3:\n        b, h, w, cc = tensor.shape\n    elif tensor.ndim > 2:\n        cc = 1\n        b, h, w = tensor.shape\n    else:\n        b = 1\n        cc = 1\n        h, w = tensor.shape\n    return f\"{b}x{w}x{h}x{cc}\"\n\n# ==============================================================================\n# === CLASS ===\n# ==============================================================================\n\nclass AkashicData:\n    def __init__(self, **kw) -> None:\n        for k, v in kw.items():\n            setattr(self, k, v)\n\nclass AkashicNode(CozyBaseNode):\n    NAME = \"AKASHIC (JOV) 📓\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_NAMES = ()\n    OUTPUT_NODE = True\n    DESCRIPTION = \"\"\"\nVisualize data. It accepts various types of data, including images, text, and other types. If no input is provided, it returns an empty result. 
The output consists of a dictionary containing UI-related information, such as base64-encoded images and text representations of the input data.\n\"\"\"\n\n    def run(self, **kw) -> tuple[Any, Any]:\n        kw.pop('ident', None)\n        o = kw.values()\n        output = {\"ui\": {\"b64_images\": [], \"text\": []}}\n        if o is None or len(o) == 0:\n            output[\"ui\"][\"result\"] = (None, None, )\n            return output\n\n        def __parse(val) -> str:\n            ret = ''\n            typ = ''.join(repr(type(val)).split(\"'\")[1:2])\n            if isinstance(val, dict):\n                # mixlab layer?\n                if (image := val.get('image', None)) is not None:\n                    ret = image\n                    if (mask := val.get('mask', None)) is not None:\n                        while len(mask.shape) < len(image.shape):\n                            mask = mask.unsqueeze(-1)\n                        ret = torch.cat((image, mask), dim=-1)\n                    if ret.ndim < 4:\n                        ret = ret.unsqueeze(-1)\n                    ret = decode_tensor(ret)\n                    typ = \"Mixlab Layer\"\n\n                # vector patch....\n                elif 'xyzw' in val:\n                    val = val[\"xyzw\"]\n                    typ = \"VECTOR\"\n                # latents....\n                elif 'samples' in val:\n                    ret = decode_tensor(val['samples'][0])\n                    typ = \"LATENT\"\n                # empty bugger\n                elif len(val) == 0:\n                    ret = \"\"\n                else:\n                    try:\n                        ret = json.dumps(val, indent=3, separators=(',', ': '))\n                    except Exception as e:\n                        ret = str(e)\n            elif isinstance(val, (tuple, set, list,)):\n                if (size := len(val)) > 0:\n                    if isinstance(val, (np.ndarray,)):\n                        ret = str(val)\n  
                      typ = \"NUMPY ARRAY\"\n                    elif isinstance(val[0], (TensorType,)):\n                        ret = decode_tensor(val[0])\n                        typ = type(val[0])\n                    elif size == 1 and isinstance(val[0], (list,)) and isinstance(val[0][0], (TensorType,)):\n                        ret = decode_tensor(val[0][0])\n                        typ = \"CONDITIONING\"\n                    elif all(isinstance(i, (tuple, set, list)) for i in val):\n                        ret = \"[\\n\" + \",\\n\".join(f\"  {row}\" for row in val) + \"\\n]\"\n                        # ret = json.dumps(val, indent=4)\n                    elif all(isinstance(i, (bool, int, float)) for i in val):\n                        ret = ','.join([str(x) for x in val])\n                    else:\n                        ret = str(val)\n            elif isinstance(val, bool):\n                ret = \"True\" if val else \"False\"\n            elif isinstance(val, TensorType):\n                ret = decode_tensor(val)\n            else:\n                ret = str(val)\n            return json.dumps({typ: ret}, separators=(',', ': '))\n\n        for x in o:\n            data = \"\"\n            if len(x) > 1:\n                data += \"::\\n\"\n            for p in x:\n                data += __parse(p) + \"\\n\"\n            output[\"ui\"][\"text\"].append(data)\n        return output\n\nclass GraphNode(CozyBaseNode):\n    NAME = \"GRAPH (JOV) 📈\"\n    CATEGORY = JOV_CATEGORY\n    OUTPUT_NODE = True\n    RETURN_TYPES = (\"IMAGE\", )\n    RETURN_NAMES = (\"IMAGE\",)\n    OUTPUT_TOOLTIPS = (\n        \"The graphed image\",\n    )\n    DESCRIPTION = \"\"\"\nVisualize a series of data points over time. It accepts a dynamic number of values to graph and display, with options to reset the graph or specify the number of values. 
The output is an image displaying the graph, allowing users to analyze trends and patterns.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.RESET: (\"BOOLEAN\", {\n                    \"default\": False,\n                    \"tooltip\":\"Clear the graph history\"}),\n                Lexicon.VALUE: (\"INT\", {\n                    \"default\": 60, \"min\": 0,\n                    \"tooltip\":\"Number of values to graph and display\"}),\n                Lexicon.WH: (\"VEC2\", {\n                    \"default\": (512, 512), \"mij\":IMAGE_SIZE_MIN, \"int\": True,\n                    \"label\": [\"W\", \"H\"]}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    @classmethod\n    def IS_CHANGED(cls, **kw) -> float:\n        return float('nan')\n\n    def __init__(self, *arg, **kw) -> None:\n        super().__init__(*arg, **kw)\n        self.__history = []\n        self.__fig, self.__ax = plt.subplots(figsize=(5.12, 5.12))\n\n    def run(self, ident, **kw) -> tuple[TensorType]:\n        slice = parse_param(kw, Lexicon.VALUE, EnumConvertType.INT, 60)[0]\n        wihi = parse_param(kw, Lexicon.WH, EnumConvertType.VEC2INT, (512, 512), IMAGE_SIZE_MIN)[0]\n        if parse_reset(ident) > 0 or parse_param(kw, Lexicon.RESET, EnumConvertType.BOOLEAN, False)[0]:\n            self.__history = []\n        longest_edge = 0\n        dynamic = parse_dynamic(kw, Lexicon.DYNAMIC, EnumConvertType.FLOAT, 0, extend=False)\n        self.__ax.clear()\n        for idx, val in enumerate(dynamic):\n            if isinstance(val, (set, tuple,)):\n                val = list(val)\n            if not isinstance(val, (list, )):\n                val = [val]\n            while len(self.__history) <= idx:\n                self.__history.append([])\n            self.__history[idx].extend(val)\n            if slice > 0:\n                stride = 
max(0, -slice + len(self.__history[idx]) + 1)\n                longest_edge = max(longest_edge, stride)\n                self.__history[idx] = self.__history[idx][stride:]\n            self.__ax.plot(self.__history[idx], color=\"rgbcymk\"[idx])\n\n        self.__history = self.__history[:slice+1]\n        width, height = wihi\n        width, height = (width / 100., height / 100.)\n        self.__fig.set_figwidth(width)\n        self.__fig.set_figheight(height)\n        self.__fig.canvas.draw_idle()\n        buffer = io.BytesIO()\n        self.__fig.savefig(buffer, format=\"png\")\n        buffer.seek(0)\n        image = Image.open(buffer)\n        return (pil_to_tensor(image),)\n\nclass ImageInfoNode(CozyBaseNode):\n    NAME = \"IMAGE INFO (JOV) 📚\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (\"INT\", \"INT\", \"INT\", \"INT\", \"VEC2\", \"VEC3\")\n    RETURN_NAMES = (\"COUNT\", \"W\", \"H\", \"C\", \"WH\", \"WHC\")\n    OUTPUT_TOOLTIPS = (\n        \"Batch count\",\n        \"Width\",\n        \"Height\",\n        \"Channels\",\n        \"Width & Height as a VEC2\",\n        \"Width, Height and Channels as a VEC3\"\n    )\n    DESCRIPTION = \"\"\"\nExports and displays immediate information about images.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {})\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[int, list]:\n        image = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        height, width, cc = image[0].shape\n        return (len(image), width, height, cc, (width, height), (width, height, cc))\n"
  },
  {
    "path": "core/utility/io.py",
    "content": "\"\"\" Jovimetrix - Utility \"\"\"\n\nimport os\nimport json\nfrom uuid import uuid4\nfrom pathlib import Path\nfrom typing import Any\n\nimport torch\nimport numpy as np\nfrom PIL import Image\nfrom PIL.PngImagePlugin import PngInfo\n\nfrom comfy.utils import ProgressBar\nfrom folder_paths import get_output_directory\nfrom nodes import interrupt_processing\n\nfrom cozy_comfyui import \\\n    logger, \\\n    InputType, EnumConvertType, \\\n    deep_merge, parse_param, parse_param_list, zip_longest_fill\n\nfrom cozy_comfyui.lexicon import \\\n    Lexicon\n\nfrom cozy_comfyui.node import \\\n    COZY_TYPE_IMAGE, COZY_TYPE_ANY, \\\n    CozyBaseNode\n\nfrom cozy_comfyui.image.convert import \\\n    tensor_to_pil, tensor_to_cv\n\nfrom cozy_comfyui.api import \\\n    TimedOutException, ComfyAPIMessage, \\\n    comfy_api_post\n\n# ==============================================================================\n# === GLOBAL ===\n# ==============================================================================\n\nJOV_CATEGORY = \"UTILITY/IO\"\n\n# min amount of time before showing the cancel dialog\nJOV_DELAY_MIN = 5\ntry: JOV_DELAY_MIN = int(os.getenv(\"JOV_DELAY_MIN\", JOV_DELAY_MIN))\nexcept: pass\nJOV_DELAY_MIN = max(1, JOV_DELAY_MIN)\n\n# max 115 days\nJOV_DELAY_MAX = 10000000\ntry: JOV_DELAY_MAX = int(os.getenv(\"JOV_DELAY_MAX\", JOV_DELAY_MAX))\nexcept: pass\n\nFORMATS = [\"gif\", \"png\", \"jpg\"]\nif (JOV_GIFSKI := os.getenv(\"JOV_GIFSKI\", None)) is not None:\n    if not os.path.isfile(JOV_GIFSKI):\n        logger.error(f\"gifski missing [{JOV_GIFSKI}]\")\n        JOV_GIFSKI = None\n    else:\n        FORMATS = [\"gifski\"] + FORMATS\n        logger.info(\"gifski support\")\nelse:\n    logger.warning(\"no gifski support\")\n\n# ==============================================================================\n# === SUPPORT ===\n# ==============================================================================\n\ndef path_next(pattern: str) -> str:\n    
\"\"\"\n    Finds the next free path in an sequentially named list of files\n    \"\"\"\n    i = 1\n    while os.path.exists(pattern % i):\n        i = i * 2\n\n    a, b = (i // 2, i)\n    while a + 1 < b:\n        c = (a + b) // 2\n        a, b = (c, b) if os.path.exists(pattern % c) else (a, c)\n    return pattern % b\n\n# ==============================================================================\n# === CLASS ===\n# ==============================================================================\n\nclass DelayNode(CozyBaseNode):\n    NAME = \"DELAY (JOV) ✋🏽\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (COZY_TYPE_ANY,)\n    RETURN_NAMES = (\"OUT\",)\n    OUTPUT_TOOLTIPS = (\n        \"Pass through data when the delay ends\",\n    )\n    DESCRIPTION = \"\"\"\nIntroduce pauses in the workflow that accept an optional input to pass through and a timer parameter to specify the duration of the delay. If no timer is provided, it defaults to a maximum delay. During the delay, it periodically checks for messages to interrupt the delay. Once the delay is completed, it returns the input passed to it. You can disable the screensaver with the `ENABLE` option\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.PASS_IN: (COZY_TYPE_ANY, {\n                    \"default\": None,\n                    \"tooltip\":\"The data that should be held until the timer completes.\"}),\n                Lexicon.TIMER: (\"INT\", {\n                    \"default\" : 0, \"min\": -1,\n                    \"tooltip\":\"How long to delay if enabled. 
0 means no delay.\"}),\n                Lexicon.ENABLE: (\"BOOLEAN\", {\n                    \"default\": True,\n                    \"tooltip\":\"Enable or disable the screensaver.\"})\n            }\n        })\n        return Lexicon._parse(d)\n\n    @classmethod\n    def IS_CHANGED(cls, **kw) -> float:\n        return float('nan')\n\n    def run(self, ident, **kw) -> tuple[Any]:\n        delay = parse_param(kw, Lexicon.TIMER, EnumConvertType.INT, -1, -1, JOV_DELAY_MAX)[0]\n        if delay < 0:\n            delay = JOV_DELAY_MAX\n        if delay > JOV_DELAY_MIN:\n            comfy_api_post(\"jovi-delay-user\", ident, {\"id\": ident, \"timeout\": delay})\n        # enable = parse_param(kw, Lexicon.ENABLE, EnumConvertType.BOOLEAN, True)[0]\n\n        step = 1\n        pbar = ProgressBar(delay)\n        while step <= delay:\n            try:\n                data = ComfyAPIMessage.poll(ident, timeout=1)\n                if data.get('id', None) == ident:\n                    if data.get('cmd', False) == False:\n                        interrupt_processing(True)\n                        logger.warning(f\"delay [cancelled] ({step}): {ident}\")\n                    break\n            except TimedOutException as _:\n                if step % 10 == 0:\n                    logger.info(f\"delay [continue] ({step}): {ident}\")\n            pbar.update_absolute(step)\n            step += 1\n\n        return (kw.get(Lexicon.PASS_IN, None),)\n\nclass ExportNode(CozyBaseNode):\n    NAME = \"EXPORT (JOV) 📽\"\n    CATEGORY = JOV_CATEGORY\n    NOT_IDEMPOTENT = True\n    OUTPUT_NODE = True\n    RETURN_TYPES = ()\n    DESCRIPTION = \"\"\"\nResponsible for saving images or animations to disk. It supports various output formats such as GIF and GIFSKI. Users can specify the output directory, filename prefix, image quality, frame rate, and other parameters. Additionally, it allows overwriting existing files or generating unique filenames to avoid conflicts. 
The node writes the files to disk and produces no workflow outputs.\n\"\"\"\n\n    @classmethod\n    def IS_CHANGED(cls, **kw) -> float:\n        return float('nan')\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (COZY_TYPE_IMAGE, {}),\n                Lexicon.PATH: (\"STRING\", {\n                    \"default\": get_output_directory(),\n                    \"default_top\": \"<comfy output dir>\",}),\n                Lexicon.FORMAT: (FORMATS, {\n                    \"default\": FORMATS[0],}),\n                Lexicon.PREFIX: (\"STRING\", {\n                    \"default\": \"jovi\",}),\n                Lexicon.OVERWRITE: (\"BOOLEAN\", {\n                    \"default\": False,}),\n                # GIF ONLY\n                Lexicon.OPTIMIZE: (\"BOOLEAN\", {\n                    \"default\": False,}),\n                # GIFSKI ONLY\n                Lexicon.QUALITY: (\"INT\", {\n                    \"default\": 90, \"min\": 1, \"max\": 100,}),\n                Lexicon.QUALITY_M: (\"INT\", {\n                    \"default\": 100, \"min\": 1, \"max\": 100,}),\n                # GIF OR GIFSKI\n                Lexicon.FPS: (\"INT\", {\n                    \"default\": 24, \"min\": 1, \"max\": 60,}),\n                # GIF OR GIFSKI\n                Lexicon.LOOP: (\"INT\", {\n                    \"default\": 0, \"min\": 0,}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> None:\n        images = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        suffix = parse_param(kw, Lexicon.PREFIX, EnumConvertType.STRING, uuid4().hex[:16])[0]\n        output_dir = parse_param(kw, Lexicon.PATH, EnumConvertType.STRING, \"\")[0]\n        format = parse_param(kw, Lexicon.FORMAT, EnumConvertType.STRING, \"gif\")[0]\n        overwrite = parse_param(kw, Lexicon.OVERWRITE, EnumConvertType.BOOLEAN, 
False)[0]\n        optimize = parse_param(kw, Lexicon.OPTIMIZE, EnumConvertType.BOOLEAN, False)[0]\n        quality = parse_param(kw, Lexicon.QUALITY, EnumConvertType.INT, 90, 0, 100)[0]\n        motion = parse_param(kw, Lexicon.QUALITY_M, EnumConvertType.INT, 100, 0, 100)[0]\n        fps = parse_param(kw, Lexicon.FPS, EnumConvertType.INT, 24, 1, 60)[0]\n        loop = parse_param(kw, Lexicon.LOOP, EnumConvertType.INT, 0, 0)[0]\n        output_dir = Path(output_dir)\n        output_dir.mkdir(parents=True, exist_ok=True)\n\n        def output(extension) -> Path:\n            path = output_dir / f\"{suffix}.{extension}\"\n            if not overwrite and os.path.isfile(path):\n                path = str(output_dir / f\"{suffix}_%s.{extension}\")\n                path = path_next(path)\n            return path\n\n        images = [tensor_to_pil(i) for i in images]\n        if format == \"gifski\":\n            root = output_dir / f\"{suffix}_{uuid4().hex[:16]}\"\n            # logger.debug(root)\n            try:\n                root.mkdir(parents=True, exist_ok=True)\n                for idx, i in enumerate(images):\n                    fname = str(root / f\"{suffix}_{idx}.png\")\n                    i.save(fname)\n            except Exception as e:\n                logger.warning(output_dir)\n                logger.error(str(e))\n                return\n            else:\n                out = output('gif')\n                fps = f\"--fps {fps}\" if fps > 0 else \"\"\n                q = f\"--quality {quality}\"\n                mq = f\"--motion-quality {motion}\"\n                cmd = f\"{JOV_GIFSKI} -o {out} {q} {mq} {fps} {str(root)}/{suffix}_*.png\"\n                logger.info(cmd)\n                try:\n                    os.system(cmd)\n                except Exception as e:\n                    logger.warning(cmd)\n                    logger.error(str(e))\n\n                # shutil.rmtree(root)\n\n        elif format == \"gif\":\n            
images[0].save(\n                output('gif'),\n                append_images=images[1:],\n                disposal=2,\n                duration=1 / fps * 1000 if fps else 0,\n                loop=loop,\n                optimize=optimize,\n                save_all=True,\n            )\n        else:\n            for img in images:\n                img.save(output(format), optimize=optimize)\n        return ()\n\nclass RouteNode(CozyBaseNode):\n    NAME = \"ROUTE (JOV) 🚌\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (\"BUS\",) + (COZY_TYPE_ANY,) * 10\n    RETURN_NAMES = (\"ROUTE\",)\n    OUTPUT_TOOLTIPS = (\n        \"Pass through for Route node\",\n    )\n    DESCRIPTION = \"\"\"\nRoutes the input data from the optional input ports to the output port, preserving the order of inputs. The `PASS_IN` optional input is directly passed through to the output, while other optional inputs are collected and returned as tuples, preserving the order of insertion.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.ROUTE: (\"BUS\", {\n                    \"default\": None,}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[Any, ...]:\n        inout = parse_param(kw, Lexicon.ROUTE, EnumConvertType.ANY, None)\n        vars = kw.copy()\n        vars.pop(Lexicon.ROUTE, None)\n        vars.pop('ident', None)\n\n        parsed = []\n        values = list(vars.values())\n        for x in values:\n            p = parse_param_list(x, EnumConvertType.ANY, None)\n            parsed.append(p)\n        return inout, *parsed,\n\nclass SaveOutputNode(CozyBaseNode):\n    NAME = \"SAVE OUTPUT (JOV) 💾\"\n    CATEGORY = JOV_CATEGORY\n    NOT_IDEMPOTENT = True\n    OUTPUT_NODE = True\n    RETURN_TYPES = ()\n    DESCRIPTION = \"\"\"\nSave images with metadata to any specified path. 
Can save user metadata and prompt information.\n\"\"\"\n\n    @classmethod\n    def IS_CHANGED(cls, **kw) -> float:\n        return float('nan')\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES(True, True)\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IMAGE: (\"IMAGE\", {}),\n                Lexicon.PATH: (\"STRING\", {\n                    \"default\": \"\", \"dynamicPrompts\":False}),\n                Lexicon.NAME: (\"STRING\", {\n                    \"default\": \"output\", \"dynamicPrompts\":False,}),\n                Lexicon.META: (\"JSON\", {\n                    \"default\": None,}),\n                Lexicon.USER: (\"STRING\", {\n                    \"default\": \"\", \"multiline\": True, \"dynamicPrompts\":False,}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> dict[str, Any]:\n        image = parse_param(kw, Lexicon.IMAGE, EnumConvertType.IMAGE, None)\n        path = parse_param(kw, Lexicon.PATH, EnumConvertType.STRING, \"\")\n        fname = parse_param(kw, Lexicon.NAME, EnumConvertType.STRING, \"output\")\n        metadata = parse_param(kw, Lexicon.META, EnumConvertType.DICT, {})\n        usermeta = parse_param(kw, Lexicon.USER, EnumConvertType.DICT, {})\n        prompt = parse_param(kw, 'prompt', EnumConvertType.STRING, \"\")\n        pnginfo = parse_param(kw, 'extra_pnginfo', EnumConvertType.DICT, {})\n        params = list(zip_longest_fill(image, path, fname, metadata, usermeta, prompt, pnginfo))\n        pbar = ProgressBar(len(params))\n        for idx, (image, path, fname, metadata, usermeta, prompt, pnginfo) in enumerate(params):\n            if image is None:\n                logger.warning(\"no image\")\n                image = torch.zeros((32, 32, 4), dtype=torch.uint8, device=\"cpu\")\n            try:\n                if not isinstance(usermeta, (dict,)):\n                    usermeta = json.loads(usermeta)\n            
    metadata.update(usermeta)\n            except json.decoder.JSONDecodeError:\n                pass\n            except Exception as e:\n                logger.error(e)\n                logger.error(usermeta)\n\n            metadata[\"prompt\"] = prompt\n            metadata[\"workflow\"] = json.dumps(pnginfo)\n            image = tensor_to_cv(image)\n            image = Image.fromarray(np.clip(image, 0, 255).astype(np.uint8))\n            meta_png = PngInfo()\n            for x in metadata:\n                try:\n                    data = json.dumps(metadata[x])\n                    meta_png.add_text(x, data)\n                except Exception as e:\n                    logger.error(e)\n                    logger.error(x)\n\n            if path == \"\" or path is None:\n                path = get_output_directory()\n\n            root = Path(path)\n            if not root.exists():\n                root = Path(get_output_directory())\n\n            root.mkdir(parents=True, exist_ok=True)\n\n            outname = fname\n            if len(params) > 1:\n                outname += f\"_{idx}\"\n            outname = (root / outname).with_suffix(\".png\")\n            logger.info(f\"wrote file: {outname}\")\n            image.save(outname, pnginfo=meta_png)\n            pbar.update_absolute(idx)\n        return ()\n"
  },
  {
    "path": "core/vars.py",
    "content": "\"\"\" Jovimetrix - Variables \"\"\"\n\nimport sys\nimport random\nfrom typing import Any\n\nfrom comfy.utils import ProgressBar\n\nfrom cozy_comfyui import \\\n    InputType, EnumConvertType, \\\n    deep_merge, parse_param, parse_value, zip_longest_fill\n\nfrom cozy_comfyui.lexicon import \\\n    Lexicon\n\nfrom cozy_comfyui.node import \\\n    COZY_TYPE_ANY, COZY_TYPE_NUMERICAL, \\\n    CozyBaseNode\n\n# ==============================================================================\n# === GLOBAL ===\n# ==============================================================================\n\nJOV_CATEGORY = \"VARIABLE\"\n\n# ==============================================================================\n# === CLASS ===\n# ==============================================================================\n\nclass ValueNode(CozyBaseNode):\n    NAME = \"VALUE (JOV) 🧬\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (COZY_TYPE_ANY, COZY_TYPE_ANY, COZY_TYPE_ANY, COZY_TYPE_ANY, COZY_TYPE_ANY,)\n    RETURN_NAMES = (\"❔\", Lexicon.X, Lexicon.Y, Lexicon.Z, Lexicon.W,)\n    OUTPUT_IS_LIST = (True, True, True, True, True,)\n    DESCRIPTION = \"\"\"\nSupplies raw or default values for various data types, supporting vector input with components for X, Y, Z, and W. 
It also provides a string input option.\n\"\"\"\n    UPDATE = False\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        typ = EnumConvertType._member_names_[:6]\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.IN_A: (COZY_TYPE_ANY, {\n                    \"default\": None,}),\n                Lexicon.X: (COZY_TYPE_NUMERICAL, {\n                    \"default\": 0, \"mij\": -sys.float_info.max, \"maj\": sys.float_info.max,\n                    \"forceInput\": True}),\n                Lexicon.Y: (COZY_TYPE_NUMERICAL, {\n                    \"default\": 0, \"mij\": -sys.float_info.max, \"maj\": sys.float_info.max,\n                    \"forceInput\": True}),\n                Lexicon.Z: (COZY_TYPE_NUMERICAL, {\n                    \"default\": 0, \"mij\": -sys.float_info.max, \"maj\": sys.float_info.max,\n                    \"forceInput\": True}),\n                Lexicon.W: (COZY_TYPE_NUMERICAL, {\n                    \"default\": 0, \"mij\": -sys.float_info.max, \"maj\": sys.float_info.max,\n                    \"forceInput\": True}),\n                Lexicon.TYPE: (typ, {\n                    \"default\": EnumConvertType.BOOLEAN.name}),\n                Lexicon.DEFAULT_A: (\"VEC4\", {\n                    \"default\": (0, 0, 0, 0), \"mij\": -sys.float_info.max, \"maj\": sys.float_info.max,\n                    \"label\": [Lexicon.X, Lexicon.Y, Lexicon.Z, Lexicon.W]}),\n                Lexicon.DEFAULT_B: (\"VEC4\", {\n                    \"default\": (1,1,1,1), \"mij\": -sys.float_info.max, \"maj\": sys.float_info.max,\n                    \"label\": [Lexicon.X, Lexicon.Y, Lexicon.Z, Lexicon.W]}),\n                Lexicon.SEED: (\"INT\", {\n                    \"default\": 0, \"min\": 0, \"max\": sys.maxsize}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[tuple[Any, ...]]:\n        raw = parse_param(kw, Lexicon.IN_A, 
EnumConvertType.ANY, 0)\n        r_x = parse_param(kw, Lexicon.X, EnumConvertType.FLOAT, None)\n        r_y = parse_param(kw, Lexicon.Y, EnumConvertType.FLOAT, None)\n        r_z = parse_param(kw, Lexicon.Z, EnumConvertType.FLOAT, None)\n        r_w = parse_param(kw, Lexicon.W, EnumConvertType.FLOAT, None)\n        typ = parse_param(kw, Lexicon.TYPE, EnumConvertType, EnumConvertType.BOOLEAN.name)\n        xyzw = parse_param(kw, Lexicon.DEFAULT_A, EnumConvertType.VEC4, (0, 0, 0, 0))\n        yyzw = parse_param(kw, Lexicon.DEFAULT_B, EnumConvertType.VEC4, (1, 1, 1, 1))\n        seed = parse_param(kw, Lexicon.SEED, EnumConvertType.INT, 0)\n        params = list(zip_longest_fill(raw, r_x, r_y, r_z, r_w, typ, xyzw, yyzw, seed))\n        results = []\n        pbar = ProgressBar(len(params))\n        old_seed = -1\n        for idx, (raw, r_x, r_y, r_z, r_w, typ, xyzw, yyzw, seed) in enumerate(params):\n            # default = [x_str]\n            default2 = None\n            a, b, c, d = xyzw\n            a2, b2, c2, d2 = yyzw\n            default = (a if r_x is None else r_x,\n                b if r_y is None else r_y,\n                c if r_z is None else r_z,\n                d if r_w is None else r_w)\n            default2 = (a2, b2, c2, d2)\n\n            val = parse_value(raw, typ, default)\n            val2 = parse_value(default2, typ, default2)\n\n            # check if set to randomize....\n            self.UPDATE = False\n            if seed != 0:\n                self.UPDATE = True\n                val = list(val) if isinstance(val, (tuple, list,)) else [val]\n                val2 = list(val2) if isinstance(val2, (tuple, list,)) else [val2]\n\n                for i in range(len(val)):\n                    mx = max(val[i], val2[i])\n                    mn = min(val[i], val2[i])\n                    if mn == mx:\n                        val[i] = mn\n                    else:\n                        if old_seed != seed:\n                            
random.seed(seed)\n                            old_seed = seed\n                        if typ in [EnumConvertType.INT, EnumConvertType.BOOLEAN]:\n                            val[i] = random.randint(mn, mx)\n                        else:\n                            val[i] = mn + random.random() * (mx - mn)\n\n            out = parse_value(val, typ, val)\n            items = [out,0,0,0] if not isinstance(out, (tuple, list,)) else out\n            results.append([out, *items])\n            pbar.update_absolute(idx)\n\n        return *list(zip(*results)),\n\nclass Vector2Node(CozyBaseNode):\n    NAME = \"VECTOR2 (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (\"VEC2\",)\n    RETURN_NAMES = (\"VEC2\",)\n    OUTPUT_IS_LIST = (True,)\n    OUTPUT_TOOLTIPS = (\n        \"Vector2 with float values\",\n    )\n    DESCRIPTION = \"\"\"\nOutputs a VECTOR2.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.X: (COZY_TYPE_NUMERICAL, {\n                    \"min\": -sys.float_info.max, \"max\": sys.float_info.max,\n                    \"tooltip\": \"X channel value\"}),\n                Lexicon.Y: (COZY_TYPE_NUMERICAL, {\n                    \"min\": -sys.float_info.max, \"max\": sys.float_info.max,\n                    \"tooltip\": \"Y channel value\"}),\n                Lexicon.DEFAULT: (\"VEC2\", {\n                    \"default\": (0,0), \"mij\": -sys.float_info.max, \"maj\": sys.float_info.max,\n                    \"tooltip\": \"Default vector value\"}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[tuple[float, ...]]:\n        x = parse_param(kw, Lexicon.X, EnumConvertType.FLOAT, None)\n        y = parse_param(kw, Lexicon.Y, EnumConvertType.FLOAT, None)\n        default = parse_param(kw, Lexicon.DEFAULT, EnumConvertType.VEC2, (0,0))\n        result = []\n        params = 
list(zip_longest_fill(x, y, default))\n        pbar = ProgressBar(len(params))\n        for idx, (x, y, default) in enumerate(params):\n            x = round(default[0], 9) if x is None else round(x, 9)\n            y = round(default[1], 9) if y is None else round(y, 9)\n            result.append((x, y,))\n            pbar.update_absolute(idx)\n        return result,\n\nclass Vector3Node(CozyBaseNode):\n    NAME = \"VECTOR3 (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (\"VEC3\",)\n    RETURN_NAMES = (\"VEC3\",)\n    OUTPUT_IS_LIST = (True,)\n    OUTPUT_TOOLTIPS = (\n        \"Vector3 with float values\",\n    )\n    DESCRIPTION = \"\"\"\nOutputs a VECTOR3.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.X: (COZY_TYPE_NUMERICAL, {\n                    \"min\": -sys.float_info.max, \"max\": sys.float_info.max,\n                    \"tooltip\": \"X channel value\"}),\n                Lexicon.Y: (COZY_TYPE_NUMERICAL, {\n                    \"min\": -sys.float_info.max, \"max\": sys.float_info.max,\n                    \"tooltip\": \"Y channel value\"}),\n                Lexicon.Z: (COZY_TYPE_NUMERICAL, {\n                    \"min\": -sys.float_info.max, \"max\": sys.float_info.max,\n                    \"tooltip\": \"Z channel value\"}),\n                Lexicon.DEFAULT: (\"VEC3\", {\n                    \"default\": (0,0,0), \"mij\": -sys.float_info.max, \"maj\": sys.float_info.max,\n                    \"tooltip\": \"Default vector value\"}),\n            }\n        })\n        return Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[tuple[float, ...]]:\n        x = parse_param(kw, Lexicon.X, EnumConvertType.FLOAT, None)\n        y = parse_param(kw, Lexicon.Y, EnumConvertType.FLOAT, None)\n        z = parse_param(kw, Lexicon.Z, EnumConvertType.FLOAT, None)\n        default = parse_param(kw, Lexicon.DEFAULT, 
EnumConvertType.VEC3, (0,0,0))\n        result = []\n        params = list(zip_longest_fill(x, y, z, default))\n        pbar = ProgressBar(len(params))\n        for idx, (x, y, z, default) in enumerate(params):\n            x = round(default[0], 9) if x is None else round(x, 9)\n            y = round(default[1], 9) if y is None else round(y, 9)\n            z = round(default[2], 9) if z is None else round(z, 9)\n            result.append((x, y, z,))\n            pbar.update_absolute(idx)\n        return result,\n\nclass Vector4Node(CozyBaseNode):\n    NAME = \"VECTOR4 (JOV)\"\n    CATEGORY = JOV_CATEGORY\n    RETURN_TYPES = (\"VEC4\",)\n    RETURN_NAMES = (\"VEC4\",)\n    OUTPUT_IS_LIST = (True,)\n    OUTPUT_TOOLTIPS = (\n        \"Vector4 with float values\",\n    )\n    DESCRIPTION = \"\"\"\nOutputs a VECTOR4.\n\"\"\"\n\n    @classmethod\n    def INPUT_TYPES(cls) -> InputType:\n        d = super().INPUT_TYPES()\n        d = deep_merge(d, {\n            \"optional\": {\n                Lexicon.X: (COZY_TYPE_NUMERICAL, {\n                    \"min\": -sys.float_info.max, \"max\": sys.float_info.max,\n                    \"tooltip\": \"X channel value\"}),\n                Lexicon.Y: (COZY_TYPE_NUMERICAL, {\n                    \"min\": -sys.float_info.max, \"max\": sys.float_info.max,\n                    \"tooltip\": \"Y channel value\"}),\n                Lexicon.Z: (COZY_TYPE_NUMERICAL, {\n                    \"min\": -sys.float_info.max, \"max\": sys.float_info.max,\n                    \"tooltip\": \"Z channel value\"}),\n                Lexicon.W: (COZY_TYPE_NUMERICAL, {\n                    \"min\": -sys.float_info.max, \"max\": sys.float_info.max,\n                    \"tooltip\": \"W channel value\"}),\n                Lexicon.DEFAULT: (\"VEC4\", {\n                    \"default\": (0,0,0,0), \"mij\": -sys.float_info.max, \"maj\": sys.float_info.max,\n                    \"tooltip\": \"Default vector value\"}),\n            }\n        })\n        return 
Lexicon._parse(d)\n\n    def run(self, **kw) -> tuple[tuple[float, ...]]:\n        x = parse_param(kw, Lexicon.X, EnumConvertType.FLOAT, None)\n        y = parse_param(kw, Lexicon.Y, EnumConvertType.FLOAT, None)\n        z = parse_param(kw, Lexicon.Z, EnumConvertType.FLOAT, None)\n        w = parse_param(kw, Lexicon.W, EnumConvertType.FLOAT, None)\n        default = parse_param(kw, Lexicon.DEFAULT, EnumConvertType.VEC4, (0,0,0,0))\n        result = []\n        params = list(zip_longest_fill(x, y, z, w, default))\n        pbar = ProgressBar(len(params))\n        for idx, (x, y, z, w, default) in enumerate(params):\n            x = round(default[0], 9) if x is None else round(x, 9)\n            y = round(default[1], 9) if y is None else round(y, 9)\n            z = round(default[2], 9) if z is None else round(z, 9)\n            w = round(default[3], 9) if w is None else round(w, 9)\n            result.append((x, y, z, w,))\n            pbar.update_absolute(idx)\n        return result,\n"
  },
  {
    "path": "node_list.json",
    "content": "{\n    \"ADJUST: BLUR (JOV)\": \"Enhance and modify images with various blur effects\",\n    \"ADJUST: COLOR (JOV)\": \"Enhance and modify images with various blur effects\",\n    \"ADJUST: EDGE (JOV)\": \"Enhanced edge detection\",\n    \"ADJUST: EMBOSS (JOV)\": \"Emboss boss mode\",\n    \"ADJUST: LEVELS (JOV)\": \"Manual or automatic adjust image levels so that the darkest pixel becomes black\\nand the brightest pixel becomes white, enhancing overall contrast\",\n    \"ADJUST: LIGHT (JOV)\": \"Tonal adjustments\",\n    \"ADJUST: MORPHOLOGY (JOV)\": \"Operations based on the image shape\",\n    \"ADJUST: PIXEL (JOV)\": \"Pixel-level transformations\",\n    \"ADJUST: SHARPEN (JOV)\": \"Sharpen the pixels of an image\",\n    \"AKASHIC (JOV) \\ud83d\\udcd3\": \"Visualize data\",\n    \"ARRAY (JOV) \\ud83d\\udcda\": \"Processes a batch of data based on the selected mode\",\n    \"BATCH TO LIST (JOV)\": \"Convert a batch of values into a pure python list of values\",\n    \"BIT SPLIT (JOV) \\u2b44\": \"Split an input into separate bits\",\n    \"BLEND (JOV) \\u2697\\ufe0f\": \"Combine two input images using various blending modes, such as normal, screen, multiply, overlay, etc\",\n    \"COLOR BLIND (JOV) \\ud83d\\udc41\\u200d\\ud83d\\udde8\": \"Simulate color blindness effects on images\",\n    \"COLOR MATCH (JOV) \\ud83d\\udc9e\": \"Adjust the color scheme of one image to match another with the Color Match Node\",\n    \"COLOR MEANS (JOV) \\u3030\\ufe0f\": \"The top-k colors ordered from most->least used as a strip, tonal palette and 3D LUT\",\n    \"COLOR THEORY (JOV) \\ud83d\\udede\": \"Generate a color harmony based on the selected scheme\",\n    \"COMPARISON (JOV) \\ud83d\\udd75\\ud83c\\udffd\": \"Evaluates two inputs (A and B) with a specified comparison operators and optional values for successful and failed comparisons\",\n    \"CONSTANT (JOV) \\ud83d\\udfea\": \"Generate a constant image or mask of a specified size and color\",\n    \"CROP 
(JOV) \\u2702\\ufe0f\": \"Extract a portion of an input image or resize it\",\n    \"DELAY (JOV) \\u270b\\ud83c\\udffd\": \"Introduce pauses in the workflow that accept an optional input to pass through and a timer parameter to specify the duration of the delay\",\n    \"EXPORT (JOV) \\ud83d\\udcfd\": \"Responsible for saving images or animations to disk\",\n    \"FILTER MASK (JOV) \\ud83e\\udd3f\": \"Create masks based on specific color ranges within an image\",\n    \"FLATTEN (JOV) \\u2b07\\ufe0f\": \"Combine multiple input images into a single image by summing their pixel values\",\n    \"GRADIENT MAP (JOV) \\ud83c\\uddf2\\ud83c\\uddfa\": \"Remaps an input image using a gradient lookup table (LUT)\",\n    \"GRAPH (JOV) \\ud83d\\udcc8\": \"Visualize a series of data points over time\",\n    \"HISTOGRAM (JOV)\": \"The Histogram Node generates a histogram representation of the input image, showing the distribution of pixel intensity values across different bins\",\n    \"IMAGE INFO (JOV) \\ud83d\\udcda\": \"Exports and Displays immediate information about images\",\n    \"LERP (JOV) \\ud83d\\udd30\": \"Calculate linear interpolation between two values or vectors based on a blending factor (alpha)\",\n    \"OP BINARY (JOV) \\ud83c\\udf1f\": \"Execute binary operations like addition, subtraction, multiplication, division, and bitwise operations on input values, supporting various data types and vector sizes\",\n    \"OP UNARY (JOV) \\ud83c\\udfb2\": \"Perform single function operations like absolute value, mean, median, mode, magnitude, normalization, maximum, or minimum on input values\",\n    \"PIXEL MERGE (JOV) \\ud83e\\udec2\": \"Combines individual color channels (red, green, blue) along with an optional mask channel to create a composite image\",\n    \"PIXEL SPLIT (JOV) \\ud83d\\udc94\": \"Split an input into individual color channels (red, green, blue, alpha)\",\n    \"PIXEL SWAP (JOV) \\ud83d\\udd03\": \"Swap pixel values between two input images based on 
specified channel swizzle operations\",\n    \"QUEUE (JOV) \\ud83d\\uddc3\": \"Manage a queue of items, such as file paths or data\",\n    \"QUEUE TOO (JOV) \\ud83d\\uddc3\": \"Manage a queue of specific items: media files\",\n    \"ROUTE (JOV) \\ud83d\\ude8c\": \"Routes the input data from the optional input ports to the output port, preserving the order of inputs\",\n    \"SAVE OUTPUT (JOV) \\ud83d\\udcbe\": \"Save images with metadata to any specified path\",\n    \"SHAPE GEN (JOV) \\u2728\": \"Create n-sided polygons\",\n    \"SPLIT (JOV) \\ud83c\\udfad\": \"Split an image into two or four images based on the percentages for width and height\",\n    \"STACK (JOV) \\u2795\": \"Merge multiple input images into a single composite image by stacking them along a specified axis\",\n    \"STRINGER (JOV) \\ud83e\\ude80\": \"Manipulate strings through filtering\",\n    \"SWIZZLE (JOV) \\ud83d\\ude35\": \"Swap components between two vectors based on specified swizzle patterns and values\",\n    \"TEXT GEN (JOV) \\ud83d\\udcdd\": \"Generates images containing text based on parameters such as font, size, alignment, color, and position\",\n    \"THRESHOLD (JOV) \\ud83d\\udcc9\": \"Define a range and apply it to an image for segmentation and feature extraction\",\n    \"TICK (JOV) \\u23f1\": \"Value generator with normalized values based on a time interval\",\n    \"TRANSFORM (JOV) \\ud83c\\udfdd\\ufe0f\": \"Apply various geometric transformations to images, including translation, rotation, scaling, mirroring, tiling and perspective projection\",\n    \"VALUE (JOV) \\ud83e\\uddec\": \"Supplies raw or default values for various data types, supporting vector input with components for X, Y, Z, and W\",\n    \"VECTOR2 (JOV)\": \"Outputs a VECTOR2\",\n    \"VECTOR3 (JOV)\": \"Outputs a VECTOR3\",\n    \"VECTOR4 (JOV)\": \"Outputs a VECTOR4\",\n    \"WAVE GEN (JOV) \\ud83c\\udf0a\": \"Produce waveforms like sine, square, or sawtooth with adjustable frequency, amplitude, 
phase, and offset\"\n}"
  },
  {
    "path": "pyproject.toml",
    "content": "[project]\nname = \"jovimetrix\"\ndescription = \"Animation via tick. Parameter manipulation with wave generator. Unary and Binary math support. Value convert int/float/bool, VectorN and Image, Mask types. Shape mask generator. Stack images, do channel ops, split, merge and randomize arrays and batches. Load images & video from anywhere. Dynamic bus routing. Save output anywhere! Flatten, crop, transform; check colorblindness or linear interpolate values.\"\nversion = \"2.1.25\"\nlicense = { file = \"LICENSE\" }\nreadme = \"README.md\"\nauthors = [{ name = \"Alexander G. Morano\", email = \"amorano@gmail.com\" }]\nclassifiers = [\n  \"License :: OSI Approved :: MIT License\",\n  \"Operating System :: OS Independent\",\n  \"Programming Language :: Python\",\n  \"Programming Language :: Python :: 3\",\n  \"Programming Language :: Python :: 3.10\",\n  \"Programming Language :: Python :: 3.11\",\n  \"Programming Language :: Python :: 3.12\",\n  \"Intended Audience :: Developers\",\n]\nrequires-python = \">=3.10\"\ndependencies = [\n    \"aenum\",\n    \"git+https://github.com/cozy-comfyui/cozy_comfyui@main#egg=cozy_comfyui\",\n    \"git+https://github.com/cozy-comfyui/cozy_comfy@main#egg=cozy_comfy\",\n    \"matplotlib\",\n    \"numpy>=1.25.0\",\n    \"opencv-contrib-python\",\n    \"Pillow\"\n]\n\n[project.urls]\nHomepage = \"https://github.com/Amorano/Jovimetrix\"\nDocumentation = \"https://github.com/Amorano/Jovimetrix/wiki\"\nRepository = \"https://github.com/Amorano/Jovimetrix\"\nIssues = \"https://github.com/Amorano/Jovimetrix/issues\"\n\n[tool.comfy]\nPublisherId = \"amorano\"\nDisplayName = \"Jovimetrix\"\nIcon = \"https://raw.githubusercontent.com/Amorano/Jovimetrix-examples/refs/heads/master/res/logo-jvmx.png\"\n"
  },
  {
    "path": "requirements.txt",
    "content": "aenum\ngit+https://github.com/cozy-comfyui/cozy_comfyui@main#egg=cozy_comfyui\ngit+https://github.com/cozy-comfyui/cozy_comfy@main#egg=cozy_comfy\nmatplotlib\nnumpy>=1.25.0\nopencv-contrib-python\nPillow"
  },
  {
    "path": "web/core.js",
    "content": "/**\n    ASYNC\n    init\n    setup\n    registerCustomNodes\n    nodeCreated\n    beforeRegisterNodeDef\n    getCustomWidgets\n    afterConfigureGraph\n    refreshComboInNodes\n\n    NON-ASYNC\n    onNodeOutputsUpdated\n    beforeRegisterVueAppNodeDefs\n    loadedGraphNode\n  */\n\nimport { app } from \"../../scripts/app.js\"\n\napp.registerExtension({\n    name: \"jovimetrix\",\n    async init() {\n        const styleTagId = 'jovimetrix-stylesheet';\n        let styleTag = document.getElementById(styleTagId);\n        if (styleTag) {\n            return;\n        }\n\n        document.head.appendChild(Object.assign(document.createElement('script'), {\n            src: \"https://cdn.jsdelivr.net/npm/@jaames/iro@5\"\n        }));\n\n        document.head.appendChild(Object.assign(document.createElement('link'), {\n            id: styleTagId,\n            rel: 'stylesheet',\n            type: 'text/css',\n            href: 'extensions/jovimetrix/jovi_metrix.css'\n        }));\n\t}\n});"
  },
  {
    "path": "web/fun.js",
    "content": "/**/\n\nimport { app } from \"../../scripts/app.js\";\n\nexport const bewm = function(ex, ey) {\n    //- adapted from \"Anchor Click Canvas Animation\" by Nick Sheffield\n    //- https://codepen.io/nicksheffield/pen/NNEoLg/\n    const colors = [ '#ffc000', '#ff3b3b', '#ff8400' ];\n    const bubbles = 25;\n\n    const explode = () => {\n        let particles = [];\n        const ctx = app.canvas;\n        const canvas = ctx.getContext('2d');\n        ctx.style.pointerEvents = 'none';\n\n        for(var i = 0; i < bubbles; i++) {\n            particles.push({\n                x: canvas.width / 2,\n                y: canvas.height / 2,\n                radius: r(20, 30),\n                color: colors[Math.floor(Math.random() * colors.length)],\n                rotation: r(0, 360, true),\n                speed: r(12, 16),\n                friction: 0.9,\n                opacity: r(0, 0.5, true),\n                yVel: 0,\n                gravity: 0.15\n            });\n        }\n        render(particles, ctx);\n    }\n\n    const render = (particles, ctx) => {\n        requestAnimationFrame(() => render(particles, ctx));\n\n        particles.forEach((p) => {\n            p.x += p.speed * Math.cos(p.rotation * Math.PI / 180);\n            p.y += p.speed * Math.sin(p.rotation * Math.PI / 180);\n\n            p.opacity -= 0.01;\n            p.speed *= p.friction;\n            p.radius *= p.friction;\n            p.yVel += p.gravity;\n            p.y += p.yVel;\n\n            if(p.opacity < 0 || p.radius < 0) return;\n\n            ctx.beginPath();\n            ctx.globalAlpha = p.opacity;\n            ctx.fillStyle = p.color;\n            ctx.arc(p.x, p.y, p.radius, 0, 2 * Math.PI, false);\n            ctx.fill();\n        });\n    }\n\n    const r = (a, b, c) => parseFloat((Math.random() * ((a ? a : 1) - (b ? b : 0)) + (b ? b : 0)).toFixed(c ? 
c : 0));\n    explode(ex, ey);\n}\n\nexport const bubbles = function() {\n    const canvas = document.getElementById(\"graph-canvas\");\n    const context = canvas.getContext(\"2d\");\n    window.bubbles_alive = true;\n    let mouseX;\n    let mouseY;\n\n    const particleArray = [];\n    class Particle {\n        constructor() {\n            this.x = Math.random() * canvas.width * 0.85;\n            this.y = canvas.height * 0.85;\n            this.radius = Math.random() * 30;\n            this.dx = Math.random() - 0.5\n            this.dx = Math.sign(this.dx) * Math.random() * 1.27;\n            this.dy = 3 + Math.random() * 3;\n            this.hue = 25 + Math.random() * 250;\n            this.sat = 85 + Math.random() * 15;\n            this.val = 35 + Math.random() * 20;\n        }\n\n        draw() {\n            context.beginPath();\n            context.arc(this.x, this.y, this.radius, 0, 2 * Math.PI);\n            context.strokeStyle = `hsl(${this.hue} ${this.sat}% ${this.val}%)`;\n            context.stroke();\n\n            const gradient = context.createRadialGradient(\n                this.x,\n                this.y,\n                1,\n                this.x + 0.5,\n                this.y + 0.5,\n                this.radius\n            );\n\n            gradient.addColorStop(0.3, \"rgba(255, 255, 255, 0.3)\");\n            gradient.addColorStop(0.95, \"#E7FEFF7F\");\n            context.fillStyle = gradient;\n            context.fill();\n        }\n\n        move() {\n            this.x = this.x + this.dx + (Math.random() - 0.5) * 0.5;\n            this.y = this.y - this.dy + (Math.random() - 0.5) * 1.5;\n\n            // Check if the particle is outside the canvas boundaries\n            if (\n                this.x < -this.radius ||\n                this.x > canvas.width + this.radius ||\n                this.y < -this.radius ||\n                this.y > canvas.height + this.radius\n            ) {\n                // Remove the particle from the 
array\n                particleArray.splice(particleArray.indexOf(this), 1);\n            }\n        }\n    }\n\n    const animate = () => {\n        //context.clearRect(0, 0, canvas.width, canvas.height);\n        app.canvas.setDirty(true);\n\n        particleArray.forEach((particle) => {\n            particle?.move();\n            particle?.draw();\n        });\n\n        if (window.bubbles_alive) {\n            requestAnimationFrame(animate);\n            if (Math.random() > 0.5) {\n                const particle = new Particle(mouseX, mouseY);\n                particleArray.push(particle);\n            }\n        } else {\n            canvas.removeEventListener(\"mousemove\", handleMouseMove);\n            particleArray.length = 0; // Clear the particleArray\n            return;\n        }\n    };\n\n    const handleMouseMove = (event) => {\n        mouseX = event.clientX;\n        mouseY = event.clientY;\n    };\n\n    canvas.addEventListener(\"mousemove\", handleMouseMove);\n    animate();\n}\n\n// flash status for each element\nconst flashStatusMap = new Map();\n\nexport async function flashBackgroundColor(element, duration, flashCount, color=\"red\") {\n    if (flashStatusMap.get(element)) {\n        return;\n    }\n\n    flashStatusMap.set(element, true);\n    const originalColor = element.style.backgroundColor;\n\n    for (let i = 0; i < flashCount; i++) {\n        element.style.backgroundColor = color;\n        await new Promise(resolve => setTimeout(resolve, duration / 2));\n        element.style.backgroundColor = originalColor;\n        await new Promise(resolve => setTimeout(resolve, duration / 2));\n    }\n    flashStatusMap.set(element, false);\n}"
  },
  {
    "path": "web/jovi_metrix.css",
    "content": "/**/\n\n.jov-modal-content {\n  max-width: 100%;\n  max-height: 100%;\n  overflow: visible;\n  position: relative;\n  margin: 3px;\n  padding: 3px;\n  table-layout: fixed;\n  text-align: center;\n  background-color: var(--bg-color);\n  border-style: solid;\n  border-color: rgb(95, 85, 75);\n}\n\n.jov-delay-header {\n  font-weight: bold;\n  font-size: 1.5em;\n  text-align: center;\n}\n"
  },
  {
    "path": "web/nodes/akashic.js",
    "content": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { ComfyWidgets } from '../../../scripts/widgets.js';\nimport { nodeAddDynamic } from \"../util.js\"\n\nconst _prefix = '📥'\nconst _id = \"AKASHIC (JOV) 📓\"\n\napp.registerExtension({\n\tname: 'jovimetrix.node.' + _id,\n\tasync beforeRegisterNodeDef(nodeType, nodeData, app) {\n        if (nodeData.name !== _id) {\n            return\n        }\n\n        await nodeAddDynamic(nodeType, _prefix);\n\n        const onExecuted = nodeType.prototype.onExecuted;\n        nodeType.prototype.onExecuted = async function (message) {\n            const me = onExecuted?.apply(this, arguments)\n            if (this.widgets) {\n                for (let i = 0; i < this.widgets.length; i++) {\n                    this.widgets[i].onRemove?.();\n                    this.widgets.splice(i, 0);\n                }\n                this.widgets.length = 0;\n            }\n            if (this.inputs.length>1) {\n                for (let i = 0; i < this.inputs.length-1; i++) {\n                    let textWidget = ComfyWidgets[\"STRING\"](this, this.inputs[i].name, [\"STRING\", { multiline: true }], app).widget;\n                    textWidget.inputEl.readOnly = true;\n                    textWidget.inputEl.style.margin = \"1px\";\n                    textWidget.inputEl.style.padding = \"1px\";\n                    textWidget.inputEl.style.border = \"1px\";\n                    textWidget.inputEl.style.backgroundColor = \"#222\";\n                    textWidget.value = this.inputs[i].name + \" \";\n                    let raw = message[\"text\"][i]\n                        .replace(/\\\\n/g, '\\n')\n                        .replace(/\"/g, '');\n\n                    try {\n                        raw = JSON.parse('\"' + raw.replace(/\"/g, '\\\\\"') + '\"');\n                    } catch (e) {\n                    }\n\n                    textWidget.value += raw;\n                }\n            }\n            
return me;\n        }\n    }\n})\n"
  },
  {
    "path": "web/nodes/array.js",
    "content": "/**\n * File: array.js\n * Project: Jovimetrix\n *\n */\n\nimport { app } from \"../../../scripts/app.js\"\nimport { nodeAddDynamic } from \"../util.js\"\n\nconst _id = \"ARRAY (JOV) 📚\"\nconst _prefix = '❔'\n\napp.registerExtension({\n\tname: 'jovimetrix.node.' + _id,\n\tasync beforeRegisterNodeDef(nodeType, nodeData) {\n        if (nodeData.name !== _id) {\n            return;\n        }\n\n        await nodeAddDynamic(nodeType, _prefix);\n\t}\n})\n"
  },
  {
    "path": "web/nodes/delay.js",
    "content": "/**/\n\nimport { api } from \"../../../scripts/api.js\";\nimport { app } from \"../../../scripts/app.js\";\nimport { apiJovimetrix } from \"../util.js\"\nimport { bubbles } from '../fun.js'\n\nconst _id = \"DELAY (JOV) ✋🏽\"\nconst EVENT_JOVI_DELAY = \"jovi-delay-user\";\nconst EVENT_JOVI_UPDATE = \"jovi-delay-update\";\n\nfunction domShowModal(innerHTML, eventCallback, timeout=null) {\n    return new Promise((resolve, reject) => {\n        const modal = document.createElement(\"div\");\n        modal.className = \"modal\";\n        modal.innerHTML = innerHTML;\n        document.body.appendChild(modal);\n\n        // center\n        const modalContent = modal.querySelector(\".jov-modal-content\");\n        modalContent.style.position = \"absolute\";\n        modalContent.style.left = \"50%\";\n        modalContent.style.top = \"50%\";\n        modalContent.style.transform = \"translate(-50%, -50%)\";\n\n        let timeoutId;\n\n        const handleEvent = (event) => {\n            const targetId = event.target.id;\n            const result = eventCallback(targetId);\n\n            if (result != null) {\n                if (timeoutId) {\n                    clearTimeout(timeoutId);\n                    timeoutId = null;\n                }\n                modal.remove();\n                resolve(result);\n            }\n        };\n        modalContent.addEventListener(\"click\", handleEvent);\n        modalContent.addEventListener(\"dblclick\", handleEvent);\n\n        if (timeout) {\n            timeout *= 1000;\n            timeoutId = setTimeout(() => {\n                modal.remove();\n                reject(new Error(\"TIMEOUT\"));\n            }, timeout);\n        }\n\n        //setTimeout(() => {\n        //    modal.dispatchEvent(new Event('tick'));\n        //}, 1000);\n    });\n}\n\napp.registerExtension({\n\tname: 'jovimetrix.node.' 
+ _id,\n\tasync beforeRegisterNodeDef(nodeType, nodeData) {\n        if (nodeData.name !== _id) {\n            return\n        }\n\n        const onNodeCreated = nodeType.prototype.onNodeCreated;\n        nodeType.prototype.onNodeCreated = async function () {\n            const me = await onNodeCreated?.apply(this, arguments);\n            const widget_time = this.widgets.find(w => w.name == 'timer');\n            const widget_enable = this.widgets.find(w => w.name == 'enable');\n            this.total_timeout = 0;\n            let showing = false;\n            let delay_modal;\n            const self = this;\n\n            async function python_delay_user(event) {\n                if (showing || event.detail.id != self.id) {\n                    return;\n                }\n\n                const time = event.detail.timeout;\n                if (time > 4 && widget_enable.value == true) {\n                    bubbles();\n                    console.info(time, widget_enable.value);\n                }\n\n                showing = true;\n                delay_modal = domShowModal(`\n                    <div class=\"jov-modal-content\">\n                        <h3 id=\"jov-delay-header\">DELAY NODE #${event.detail?.title || event.detail.id}</h3>\n                        <h4>CANCEL OR CONTINUE RENDER?</h4>\n                        <div>\n                            <button id=\"jov-submit-continue\">CONTINUE</button>\n                            <button id=\"jov-submit-cancel\">CANCEL</button>\n                        </div>\n                    </div>`,\n                    (button) => {\n                        return (button != \"jov-submit-cancel\");\n                    },\n                    time);\n\n                let value = false;\n                try {\n                    value = await delay_modal;\n                } catch (e) {\n                    if (e.message != \"TIMEOUT\") {\n                        console.error(e);\n                    }\n         
       }\n                apiJovimetrix(event.detail.id, value);\n\n                showing = false;\n                window.bubbles_alive = false;\n            }\n\n            async function python_delay_update() {\n            }\n\n            api.addEventListener(EVENT_JOVI_DELAY, python_delay_user);\n            api.addEventListener(EVENT_JOVI_UPDATE, python_delay_update);\n\n            this.onDestroy = () => {\n                api.removeEventListener(EVENT_JOVI_DELAY, python_delay_user);\n                api.removeEventListener(EVENT_JOVI_UPDATE, python_delay_update);\n            };\n            return me;\n        }\n\n        const onExecutionStart = nodeType.prototype.onExecutionStart\n        nodeType.prototype.onExecutionStart = function() {\n            onExecutionStart?.apply(this, arguments);\n            this.total_timeout = 0;\n        }\n\n    }\n})\n"
  },
  {
    "path": "web/nodes/flatten.js",
    "content": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { nodeAddDynamic } from \"../util.js\"\n\nconst _id = \"FLATTEN (JOV) ⬇️\"\nconst _prefix = 'image'\n\napp.registerExtension({\n\tname: 'jovimetrix.node.' + _id,\n\tasync beforeRegisterNodeDef(nodeType, nodeData) {\n        if (nodeData.name !== _id) {\n            return;\n        }\n\n        await nodeAddDynamic(nodeType, _prefix, \"IMAGE,MASK\");\n\t}\n})\n"
  },
  {
    "path": "web/nodes/graph.js",
    "content": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { apiJovimetrix, nodeAddDynamic } from \"../util.js\"\n\nconst _id = \"GRAPH (JOV) 📈\"\nconst _prefix = '❔'\n\napp.registerExtension({\n\tname: 'jovimetrix.node.' + _id,\n    async init() {\n        LGraphCanvas.link_type_colors['JOV_VG_0'] = \"#A00\";\n        LGraphCanvas.link_type_colors['JOV_VG_1'] = \"#0A0\";\n        LGraphCanvas.link_type_colors['JOV_VG_2'] = \"#00A\";\n        LGraphCanvas.link_type_colors['JOV_VG_3'] = \"#0AA\";\n        LGraphCanvas.link_type_colors['JOV_VG_4'] = \"#AA0\";\n        LGraphCanvas.link_type_colors['JOV_VG_5'] = \"#A0A\";\n        LGraphCanvas.link_type_colors['JOV_VG_6'] = \"#000\";\n    },\n\tasync beforeRegisterNodeDef(nodeType, nodeData, app) {\n        if (nodeData.name !== _id) {\n            return;\n        }\n\n        await nodeAddDynamic(nodeType, _prefix);\n\n        const onNodeCreated = nodeType.prototype.onNodeCreated;\n        nodeType.prototype.onNodeCreated = async function () {\n            const me = await onNodeCreated?.apply(this, arguments);\n            const self = this;\n            const widget_reset = this.widgets.find(w => w.name == 'reset');\n            widget_reset.callback = async() => {\n                widget_reset.value = false;\n                apiJovimetrix(self.id, \"reset\");\n            }\n            return me;\n        }\n\n        const onConnectionsChange = nodeType.prototype.onConnectionsChange\n        nodeType.prototype.onConnectionsChange = function (slotType, slot, event, link_info) {\n            const me = onConnectionsChange?.apply(this, arguments);\n            if (!link_info || slot == this.inputs.length) {\n                return;\n            }\n            let count = 0;\n            for (let i = 0; i < this.inputs.length; i++) {\n                const link_id = this.inputs[i].link;\n                const link = app.graph.links[link_id];\n                const nameParts = 
this.inputs[i].name.split('_');\n                const isInteger = nameParts.length > 1 && !isNaN(nameParts[0]) && Number.isInteger(parseFloat(nameParts[0]));\n                if (link && isInteger && nameParts[1].substring(0, _prefix.length) == _prefix) {\n                //if(link && this.inputs[i].name.substring(0, _prefix.length) == _prefix) {\n                    link.type = `JOV_VG_${count}`;\n                    this.inputs[i].color_on = LGraphCanvas.link_type_colors[link.type];\n                    count += 1;\n                }\n            }\n            app.graph.setDirtyCanvas(true, true);\n            return me;\n        }\n\t}\n})\n"
  },
  {
    "path": "web/nodes/lerp.js",
    "content": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { widgetHookControl } from \"../util.js\"\n\nconst _id = \"LERP (JOV) 🔰\"\n\napp.registerExtension({\n\tname: 'jovimetrix.node.' + _id,\n\tasync beforeRegisterNodeDef(nodeType, nodeData) {\n        if (nodeData.name !== _id) {\n            return;\n        }\n\n        const onNodeCreated = nodeType.prototype.onNodeCreated\n        nodeType.prototype.onNodeCreated = async function () {\n            const me = await onNodeCreated?.apply(this, arguments);\n            await widgetHookControl(this, 'type', 'alpha', true);\n            await widgetHookControl(this, 'type', 'aa');\n            await widgetHookControl(this, 'type', 'bb');\n            return me;\n        }\n        return nodeType;\n\t}\n})\n"
  },
  {
    "path": "web/nodes/op_binary.js",
    "content": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { widgetHookControl } from \"../util.js\"\n\nconst _id = \"OP BINARY (JOV) 🌟\"\n\napp.registerExtension({\n\tname: 'jovimetrix.node.' + _id,\n\tasync beforeRegisterNodeDef(nodeType, nodeData) {\n        if (nodeData.name !== _id) {\n            return;\n        }\n\n        const onNodeCreated = nodeType.prototype.onNodeCreated\n        nodeType.prototype.onNodeCreated = async function () {\n            const me = await onNodeCreated?.apply(this, arguments);\n            await widgetHookControl(this, 'type', 'aa');\n            await widgetHookControl(this, 'type', 'bb');\n            return me;\n        }\n\n       return nodeType;\n\t}\n})\n"
  },
  {
    "path": "web/nodes/op_unary.js",
    "content": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { widgetHookControl } from \"../util.js\"\n\nconst _id = \"OP UNARY (JOV) 🎲\"\n\napp.registerExtension({\n\tname: 'jovimetrix.node.' + _id,\n\tasync beforeRegisterNodeDef(nodeType, nodeData) {\n\n        if (nodeData.name !== _id) {\n            return;\n        }\n\n        const onNodeCreated = nodeType.prototype.onNodeCreated\n        nodeType.prototype.onNodeCreated = async function () {\n            const me = await onNodeCreated?.apply(this, arguments);\n            await widgetHookControl(this, 'type', 'aa');\n            return me;\n        }\n        return nodeType;\n\t}\n})"
  },
  {
    "path": "web/nodes/queue.js",
    "content": "/**/\n\nimport { api } from \"../../../scripts/api.js\";\nimport { app } from \"../../../scripts/app.js\";\nimport { ComfyWidgets } from '../../../scripts/widgets.js';\nimport { apiJovimetrix, TypeSlotEvent, TypeSlot } from \"../util.js\"\nimport { flashBackgroundColor } from '../fun.js'\n\nconst _id1 = \"QUEUE (JOV) 🗃\";\nconst _id2 = \"QUEUE TOO (JOV) 🗃\";\nconst _prefix = '❔';\nconst EVENT_JOVI_PING = \"jovi-queue-ping\";\nconst EVENT_JOVI_DONE = \"jovi-queue-done\";\n\napp.registerExtension({\n\tname: 'jovimetrix.node.' + _id1,\n\tasync beforeRegisterNodeDef(nodeType, nodeData, app) {\n        if (nodeData.name != _id1 && nodeData.name != _id2) {\n            return;\n        }\n\n        function update_report(self) {\n            self.widget_report.value = `[${self.data_index+1} / ${self.data_all.length}]\\n${self.data_current}`;\n            app.canvas.setDirty(true);\n        }\n\n        function update_list(self, value) {\n            self.data_count = value.length;\n            self.data_index = 1;\n            self.data_current = \"\";\n            update_report(self);\n            apiJovimetrix(self.id, \"reset\");\n        }\n\n        const onNodeCreated = nodeType.prototype.onNodeCreated;\n        nodeType.prototype.onNodeCreated = async function () {\n            const me = await onNodeCreated?.apply(this, arguments);\n            const self = this;\n            this.data_index = 1;\n            this.data_current = \"\";\n            this.data_all = [];\n            this.widget_report = ComfyWidgets.STRING(this, 'QUEUE IS EMPTY 🔜', [\n                'STRING', {\n                    multiline: true,\n                },\n            ], app).widget;\n            this.widget_report.inputEl.readOnly = true;\n            this.widget_report.serializeValue = async () => { };\n\n            const widget_queue = this.widgets.find(w => w.name == 'queue');\n            const widget_batch = this.widgets.find(w => w.name == 'batch');\n           
 const widget_hold = this.widgets.find(w => w.name == 'hold');\n            const widget_reset = this.widgets.find(w => w.name == 'reset');\n\n            widget_queue.inputEl.addEventListener('input', function () {\n                const value = widget_queue.value.split('\\n');\n                update_list(self, value);\n            });\n\n            widget_reset.callback = () => {\n                widget_reset.value = false;\n                apiJovimetrix(self.id, \"reset\");\n            }\n\n            async function python_queue_ping(event) {\n                if (event.detail.id != self.id) {\n                    return;\n                }\n                self.data_index = event.detail.i;\n                self.data_all = event.detail.l;\n                self.data_current = event.detail.c;\n                update_report(self);\n            }\n\n            // Add names to list control that collapses. And counter to see where we are in the overall\n            async function python_queue_done(event) {\n                if (event.detail.id != self.id) {\n                    return;\n                }\n                await flashBackgroundColor(widget_queue.inputEl, 650, 4, \"#995242CC\");\n            }\n\n            api.addEventListener(EVENT_JOVI_PING, python_queue_ping);\n            api.addEventListener(EVENT_JOVI_DONE, python_queue_done);\n\n            this.onDestroy = () => {\n                api.removeEventListener(EVENT_JOVI_PING, python_queue_ping);\n                api.removeEventListener(EVENT_JOVI_DONE, python_queue_done);\n            };\n\n            setTimeout(() => { widget_hold.callback(); }, 5);\n            setTimeout(() => { widget_batch.callback(); }, 5);\n            return me;\n        }\n\n        const onConnectOutput = nodeType.prototype.onConnectOutput;\n        nodeType.prototype.onConnectOutput = function(outputIndex, inputType, inputSlot, inputNode) {\n            if (outputIndex == 0 && inputType == \"COMBO\") {\n         
       // can link the \"same\" list -- user breaks it past that, their problem atm\n                const widget_queue = this.widgets.find(w => w.name == 'queue');\n                const widget = inputNode.widgets.find(w => w.name == inputSlot.name);\n                widget_queue.value = widget.options.values.join('\\n');\n            }\n            return onConnectOutput?.apply(this, arguments);\n        }\n\n        const onConnectionsChange = nodeType.prototype.onConnectionsChange;\n        nodeType.prototype.onConnectionsChange = function (slotType, slot, event, link_info)\n        //side, slot, connected, link_info\n        {\n            if (slotType == TypeSlot.Output && slot == 0 && link_info && event == TypeSlotEvent.Connect) {\n                const node = app.graph.getNodeById(link_info.target_id);\n                if (node === undefined || node.inputs === undefined) {\n                    return;\n                }\n                const target = node.inputs[link_info.target_slot];\n                if (target === undefined) {\n                    return;\n                }\n\n                const widget = node.widgets?.find(w => w.name == target.name);\n                if (widget === undefined) {\n                    return;\n                }\n                this.outputs[0].name = widget.name;\n                if (widget?.origType == \"combo\" || widget.type == \"COMBO\") {\n                    const values = widget.options.values;\n                    const widget_queue = this.widgets.find(w => w.name == 'queue');\n                    // remove all connections that don't match the list?\n                    widget_queue.value = values.join('\\n');\n                    update_list(this, values);\n                }\n                this.outputs[0].name = _prefix;\n            }\n            return onConnectionsChange?.apply(this, arguments);\n        };\n    }\n})\n"
  },
  {
    "path": "web/nodes/route.js",
    "content": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport {\n    TypeSlot, TypeSlotEvent, nodeFitHeight,\n    nodeVirtualLinkRoot, nodeInputsClear, nodeOutputsClear\n}  from \"../util.js\"\n\nconst _id = \"ROUTE (JOV) 🚌\";\nconst _prefix = '🔮';\nconst _dynamic_type = \"*\";\n\napp.registerExtension({\n\tname: 'jovimetrix.node.' + _id,\n\tasync beforeRegisterNodeDef(nodeType, nodeData) {\n        if (nodeData.name !== _id) {\n            return;\n        }\n\n        const onNodeCreated = nodeType.prototype.onNodeCreated\n        nodeType.prototype.onNodeCreated = async function () {\n            const me = await onNodeCreated?.apply(this, arguments);\n            this.addInput(_prefix, _dynamic_type);\n            nodeOutputsClear(this, 1);\n            return me;\n        }\n\n        const onConnectionsChange = nodeType.prototype.onConnectionsChange\n        nodeType.prototype.onConnectionsChange = function (slotType, slot_idx, event, link_info, node_slot) {\n            const me = onConnectionsChange?.apply(this, arguments);\n            let bus_connected = false;\n            if (event == TypeSlotEvent.Connect && link_info) {\n                let fromNode = this.graph._nodes.find(\n                    (otherNode) => otherNode.id == link_info.origin_id\n                );\n                if (slotType == TypeSlot.Input) {\n                    if (slot_idx == 0) {\n                        fromNode = nodeVirtualLinkRoot(fromNode);\n                        if (fromNode?.outputs && fromNode.outputs[0].type == node_slot.type) {\n                            // bus connection\n                            bus_connected = true;\n                            nodeInputsClear(this, 1);\n                            nodeOutputsClear(this, 1);\n                        }\n                    } else {\n                        // normal connection\n                        const parent_link = fromNode?.outputs[link_info.origin_slot];\n                        
if (parent_link) {\n                            node_slot.type = parent_link.type;\n                            node_slot.name = parent_link.name ; //`${fromNode.id}_${parent_link.name}`;\n                            // make sure there is a matching output...\n                            while(this.outputs.length < slot_idx + 1) {\n                                this.addOutput(_prefix, _dynamic_type);\n                            }\n                            this.outputs[slot_idx].name = node_slot.name;\n                            this.outputs[slot_idx].type = node_slot.type;\n                        }\n                    }\n                }\n            } else if (event == TypeSlotEvent.Disconnect) {\n                bus_connected = false;\n                if (slot_idx == 0) {\n                    nodeInputsClear(this, 1);\n                    nodeOutputsClear(this, 1);\n                } else {\n                    this.removeInput(slot_idx);\n                    this.removeOutput(slot_idx);\n                }\n            }\n\n            // add extra input if we are not in BUS connection mode\n            if (!bus_connected) {\n                const last = this.inputs[this.inputs.length-1];\n                if (last.name != _prefix || last.type != _dynamic_type) {\n                    this.addInput(_prefix, _dynamic_type);\n                }\n            }\n            nodeFitHeight(this);\n            return me;\n        }\n\n        return nodeType;\n\t}\n})\n"
  },
  {
    "path": "web/nodes/stack.js",
    "content": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { nodeAddDynamic } from \"../util.js\"\n\nconst _id = \"STACK (JOV) ➕\"\nconst _prefix = 'image'\n\napp.registerExtension({\n\tname: 'jovimetrix.node.' + _id,\n\tasync beforeRegisterNodeDef(nodeType, nodeData) {\n        if (nodeData.name !== _id) {\n            return;\n        }\n\n        await nodeAddDynamic(nodeType, _prefix);\n\t}\n})\n"
  },
  {
    "path": "web/nodes/stringer.js",
    "content": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { nodeAddDynamic } from \"../util.js\"\n\nconst _id = \"STRINGER (JOV) 🪀\"\nconst _prefix = 'string'\n\napp.registerExtension({\n\tname: 'jovimetrix.node.' + _id,\n\tasync beforeRegisterNodeDef(nodeType, nodeData) {\n        if (nodeData.name !== _id) {\n            return;\n        }\n\n        await nodeAddDynamic(nodeType, _prefix);\n\t}\n})\n"
  },
  {
    "path": "web/nodes/value.js",
    "content": "/**/\n\nimport { app } from \"../../../scripts/app.js\"\nimport { widgetHookControl, nodeFitHeight } from \"../util.js\"\n\nconst _id = \"VALUE (JOV) 🧬\"\n\napp.registerExtension({\n\tname: 'jovimetrix.node.' + _id,\n\tasync beforeRegisterNodeDef(nodeType, nodeData) {\n        if (nodeData.name !== _id) {\n            return;\n        }\n\n        const onNodeCreated = nodeType.prototype.onNodeCreated\n        nodeType.prototype.onNodeCreated = async function () {\n            const me = await onNodeCreated?.apply(this, arguments);\n\n            this.outputs[1].type = \"*\";\n            this.outputs[2].type = \"*\";\n            this.outputs[3].type = \"*\";\n            this.outputs[4].type = \"*\";\n\n            const ab_data = await widgetHookControl(this, 'type', 'aa');\n            await widgetHookControl(this, 'type', 'bb');\n\n            const oldCallback = ab_data.callback;\n            // rest parameters: arrow functions have no 'arguments' binding of their own\n            ab_data.callback = (...args) => {\n                oldCallback?.apply(this, args);\n\n                this.outputs[0].name = ab_data.value;\n                this.outputs[0].type = ab_data.value;\n                // component outputs are FLOAT unless the control selects INT or BOOLEAN\n                let type = \"FLOAT\";\n                if (ab_data.value == \"INT\") {\n                    type = \"INT\";\n                } else if (ab_data.value == \"BOOLEAN\") {\n                    type = \"BOOLEAN\";\n                }\n                this.outputs[1].type = type;\n                this.outputs[2].type = type;\n                this.outputs[3].type = type;\n                this.outputs[4].type = type;\n                nodeFitHeight(this);\n            }\n            return me;\n        }\n    }\n})\n"
  },
  {
    "path": "web/util.js",
    "content": "/**/\n\nimport { app } from \"../../scripts/app.js\"\nimport { api } from \"../../scripts/api.js\"\n\nexport const TypeSlot = {\n    Input: 1,\n    Output: 2,\n};\n\nexport const TypeSlotEvent = {\n    Connect: true,\n    Disconnect: false,\n};\n\nexport async function apiJovimetrix(id, cmd, data=null, route=\"message\") {\n    try {\n        const response = await api.fetchApi(`/cozy_comfyui/${route}`, {\n            method: \"POST\",\n            headers: {\n                \"Content-Type\": \"application/json\",\n            },\n            body: JSON.stringify({\n                id: id,\n                cmd: cmd,\n                data: data\n            }),\n        });\n\n        if (!response.ok) {\n            throw new Error(`Error: ${response.status} - ${response.statusText}`);\n        }\n        return response;\n\n    } catch (error) {\n        console.error(\"API call to Jovimetrix failed:\", error);\n        throw error;\n    }\n}\n\n/**\n * Hook a child vector widget to a type-selector control so the child's\n * numeric type and component count follow the control's value\n * (e.g. VEC3 forces a float[3] target).\n */\nexport async function widgetHookControl(node, control_key, child_key) {\n    const control = node.widgets.find(w => w.name == control_key);\n    const target = node.widgets.find(w => w.name == child_key);\n    const target_input = node.inputs.find(w => w.name == child_key);\n\n    if (!control || !target || !target_input) {\n        throw new Error(\"Required widgets not found\");\n    }\n\n    const track_xyzw = {\n        0: target.options?.default?.[0] || 0,\n        1: target.options?.default?.[1] || 0,\n        2: target.options?.default?.[2] || 0,\n        3: target.options?.default?.[3] || 0,\n    };\n\n    const track_options = {}\n    Object.assign(track_options, target.options);\n\n    const controlCallback = control.callback;\n    // rest parameters: arrow functions have no 'arguments' binding of their own\n    control.callback = async (...args) => {\n        const me = await controlCallback?.apply(this, args);\n        Object.assign(target.options, track_options);\n\n        if ([\"VEC2\", 
 \"VEC3\", \"VEC4\", \"FLOAT\", \"INT\", \"BOOLEAN\"].includes(control.value)) {\n            target_input.type = control.value;\n\n            if ([\"INT\", \"FLOAT\", \"BOOLEAN\"].includes(control.value)) {\n                target.type = \"VEC1\";\n            } else {\n                target.type = control.value;\n            }\n            target.options.type = target.type;\n\n            let size = 1;\n            if ([\"VEC2\", \"VEC3\", \"VEC4\"].includes(target.type)) {\n                const match = /\\d/.exec(target.type);\n                size = parseInt(match[0], 10);\n            }\n\n            target.value = {};\n            if ([\"VEC2\", \"VEC3\", \"VEC4\", \"FLOAT\"].includes(control.value)) {\n                for (let i = 0; i < size; i++) {\n                    target.value[i] = parseFloat(track_xyzw[i]).toFixed(target.options.precision);\n                }\n            } else if (control.value == \"INT\") {\n                target.options.step = 1;\n                target.options.round = 0;\n                target.options.precision = 0;\n                target.options.int = true;\n\n                target.value[0] = Number(track_xyzw[0]);\n            } else if (control.value == \"BOOLEAN\") {\n                target.options.step = 1;\n                target.options.precision = 0;\n                target.options.mij = 0;\n                target.options.maj = 1;\n                target.options.int = true;\n                target.value[0] = track_xyzw[0] != 0 ? 1 : 0;\n            }\n        }\n        nodeFitHeight(node);\n        return me;\n    }\n\n    const targetCallback = target.callback;\n    target.callback = async (...args) => {\n        const me = await targetCallback?.apply(this, args);\n        if (target.type == \"toggle\") {\n            track_xyzw[0] = target.value != 0 ? 
1 : 0;\n        } else if ([\"INT\", \"FLOAT\"].includes(target.type)) {\n            track_xyzw[0] = target.value;\n        } else {\n            Object.keys(target.value).forEach((key) => {\n                track_xyzw[key] = target.value[key];\n            });\n        }\n        return me;\n    };\n\n    await control.callback();\n    return control;\n}\n\nexport function nodeFitHeight(node) {\n    const size_old = node.size;\n    node.computeSize();\n    node.setSize([Math.max(size_old[0], node.size[0]), Math.min(size_old[1], node.size[1])]);\n    node.setDirtyCanvas(!0, !1);\n    app.graph.setDirtyCanvas(!0, !1);\n}\n\n/**\n * Manage the slots on a node to allow a dynamic number of inputs\n*/\nexport async function nodeAddDynamic(nodeType, prefix, dynamic_type='*') {\n    /*\n    this one should just put the \"prefix\" as the last empty entry.\n    Means we have to pay attention not to collide key names in the\n    input list.\n\n    Also need to make sure that we keep any non-dynamic ports.\n    */\n\n    const onNodeCreated = nodeType.prototype.onNodeCreated\n    nodeType.prototype.onNodeCreated = async function () {\n        const me = await onNodeCreated?.apply(this, arguments);\n\n        if (this.inputs.length == 0 || this.inputs[this.inputs.length-1].name != prefix) {\n            this.addInput(prefix, dynamic_type);\n        }\n        return me;\n    }\n\n    function slot_name(slot) {\n        return slot.name.split('_');\n    }\n\n    const onConnectionsChange = nodeType.prototype.onConnectionsChange\n    nodeType.prototype.onConnectionsChange = async function (slotType, slot_idx, event, link_info, node_slot) {\n        const me = onConnectionsChange?.apply(this, arguments);\n        const slot_parts = slot_name(node_slot);\n        if ((node_slot.type === dynamic_type || slot_parts.length > 1) && slotType === TypeSlot.Input && link_info !== null) {\n            const fromNode = this.graph._nodes.find(\n                (otherNode) => otherNode.id == 
link_info.origin_id\n            )\n            const parent_slot = fromNode.outputs[link_info.origin_slot];\n            if (event === TypeSlotEvent.Connect) {\n                node_slot.type = parent_slot.type;\n                node_slot.name = `0_${parent_slot.name}`;\n            } else {\n                this.removeInput(slot_idx);\n                node_slot.type = dynamic_type;\n                node_slot.name = prefix;\n                node_slot.link = null;\n            }\n\n            // renumber the remaining dynamic slots to close any gaps\n            let idx = 0;\n            let offset = 0;\n            while (idx < this.inputs.length) {\n                const parts = slot_name(this.inputs[idx]);\n                if (parts.length > 1) {\n                    // rejoin with '_' so names that themselves contain underscores survive\n                    const name = parts.slice(1).join('_');\n                    this.inputs[idx].name = `${offset}_${name}`;\n                    offset += 1;\n                }\n                idx += 1;\n            }\n        }\n        // check that the last slot is a dynamic entry....\n        let last = this.inputs[this.inputs.length-1];\n        if (last.type != dynamic_type || last.name != prefix) {\n            this.addInput(prefix, dynamic_type);\n        }\n        nodeFitHeight(this);\n        return me;\n    }\n}\n\n/**\n * Trace to the root node that is not a virtual node.\n *\n * @param {Object} node - The starting node to trace from.\n * @returns {Object} - The first physical (non-virtual) node encountered, or the last node if no physical node is found.\n */\nexport function nodeVirtualLinkRoot(node) {\n    while (node) {\n        const { isVirtualNode, findSetter } = node;\n\n        if (!isVirtualNode || !findSetter) break;\n        const nextNode = findSetter(node.graph);\n\n        if (!nextNode) break;\n        node = nextNode;\n    }\n    return node;\n}\n\n/**\n * Trace through outputs until a physical (non-virtual) node is found.\n *\n * @param {Object} node - The starting node to trace from.\n * @returns {Object} - 
The first physical node encountered, or the last node if no physical node is found.\n */\nfunction nodeVirtualLinkChild(node) {\n    while (node) {\n        const { isVirtualNode, findGetter } = node;\n\n        if (!isVirtualNode || !findGetter) break;\n        const nextNode = findGetter(node.graph);\n\n        if (!nextNode) break;\n        node = nextNode;\n    }\n    return node;\n}\n\n/**\n * Remove inputs from a node until the stop condition is met.\n *\n * @param {Object} node - The node whose inputs are trimmed.\n * @param {number} stop - The minimum number of inputs to retain. Default is 0.\n */\nexport function nodeInputsClear(node, stop = 0) {\n    while (node.inputs?.length > stop) {\n        node.removeInput(node.inputs.length - 1);\n    }\n}\n\n/**\n * Remove outputs from a node until the stop condition is met.\n *\n * @param {Object} node - The node whose outputs are trimmed.\n * @param {number} stop - The minimum number of outputs to retain. Default is 0.\n */\nexport function nodeOutputsClear(node, stop = 0) {\n    while (node.outputs?.length > stop) {\n        node.removeOutput(node.outputs.length - 1);\n    }\n}\n"
  },
  {
    "path": "web/widget_vector.js",
    "content": "/**/\n\nimport { app } from \"../../scripts/app.js\"\nimport { $el } from \"../../scripts/ui.js\"\n/** @import { IWidget, LGraphCanvas } from '../../types/litegraph/litegraph.d.ts' */\n\nfunction arrayToObject(values, length, parseFn) {\n    const result = {};\n    for (let i = 0; i < length; i++) {\n        result[i] = parseFn(values[i]);\n    }\n    return result;\n}\n\nfunction domInnerValueChange(node, pos, widget, value, event=undefined) {\n    //const numtype = widget.type.includes(\"INT\") ? Number : parseFloat\n    widget.value = arrayToObject(value, Object.keys(value).length, widget.convert);\n    if (\n        widget.options &&\n        widget.options.property &&\n        node.properties[widget.options.property] !== undefined\n        ) {\n            node.setProperty(widget.options.property, widget.value)\n        }\n    if (widget.callback) {\n        widget.callback(widget.value, app.canvas, node, pos, event)\n    }\n}\n\nfunction colorHex2RGB(hex) {\n    hex = hex.replace(/^#/, '');\n    const bigint = parseInt(hex, 16);\n    const r = (bigint >> 16) & 255;\n    const g = (bigint >> 8) & 255;\n    const b = bigint & 255;\n    return [r, g, b];\n}\n\nfunction colorRGB2Hex(input) {\n    const rgbArray = typeof input == 'string' ? input.match(/\\d+/g) : input;\n    if (rgbArray.length < 3) {\n        throw new Error('input not 3 or 4 values');\n    }\n    const hexValues = rgbArray.map((value, index) => {\n        if (index == 3 && !value) return 'ff';\n        const hex = parseInt(value).toString(16);\n        return hex.length == 1 ? 
'0' + hex : hex;\n    });\n    return '#' + hexValues.slice(0, 3).join('') + (hexValues[3] || '');\n}\n\nconst VectorWidget = (app, inputName, options, initial) => {\n    const values = options[1]?.default || initial;\n    /** @type {IWidget} */\n    const widget = {\n        name: inputName,\n        type: options[0],\n        y: 0,\n        value: values,\n        options: options[1]\n    }\n\n    widget.convert = parseFloat;\n    widget.options.precision = widget.options?.precision || 2;\n    widget.options.step = widget.options?.step || 0.1;\n    widget.options.round = 1 / 10 ** widget.options.step;\n\n    if (widget.options?.rgb || widget.options?.int || false) {\n        widget.options.step = 1;\n        widget.options.round = 1;\n        widget.options.precision = 0;\n        widget.convert = Number;\n    }\n\n    if (widget.options?.rgb || false) {\n        widget.options.maj = 255;\n        widget.options.mij = 0;\n        widget.options.label = ['🟥', '🟩', '🟦', 'ALPHA'];\n    }\n\n    const offset_y = 4;\n    const widget_padding_left = 13;\n    const widget_padding = 30;\n    const label_full = 72;\n    const label_center = label_full / 2;\n\n    /** @type {HTMLInputElement} */\n    let picker;\n\n    widget.draw = function(ctx, node, width, Y, height) {\n        // if ((app.canvas.ds.scale < 0.50) || (!this.type2.startsWith(\"VEC\") && this.type2 != \"COORD2D\")) return;\n        if ((app.canvas.ds.scale < 0.50) || (!this.type.startsWith(\"VEC\"))) return;\n        ctx.save()\n        ctx.beginPath()\n        ctx.lineWidth = 1\n        ctx.fillStyle = LiteGraph.WIDGET_OUTLINE_COLOR\n        ctx.roundRect(widget_padding_left+2, Y, width - widget_padding, height, 15)\n        ctx.stroke()\n        ctx.lineWidth = 1\n        ctx.fillStyle = LiteGraph.WIDGET_BGCOLOR\n        ctx.roundRect(widget_padding_left+2, Y, width - widget_padding, height, 15)\n        ctx.fill()\n\n        // label\n        ctx.fillStyle = LiteGraph.WIDGET_SECONDARY_TEXT_COLOR\n       
 ctx.fillText(inputName, label_center - (inputName.length * 1.5), Y + height / 2 + offset_y)\n        let x = label_full + 1\n\n        const fields = Object.keys(this?.value || []);\n        let count = fields.length;\n        if (widget.options?.rgb) {\n            count += 0.23;\n        }\n        const element_width = (width - label_full - widget_padding) / count;\n        const element_width2 = element_width / 2;\n\n        let converted = [];\n        for (const idx of fields) {\n            ctx.save()\n            ctx.beginPath()\n            ctx.fillStyle = LiteGraph.WIDGET_OUTLINE_COLOR\n            // separation bar\n            if (idx != fields.length || (idx == fields.length && !this.options?.rgb)) {\n                ctx.moveTo(x, Y)\n                ctx.lineTo(x, Y+height)\n                ctx.stroke();\n            }\n\n            // value\n            ctx.fillStyle = LiteGraph.WIDGET_TEXT_COLOR\n            const it = this.value[idx.toString()];\n            let value = (widget.options.precision == 0) ? Number(it) : parseFloat(it).toFixed(widget.options.precision);\n            converted.push(value);\n            const text = value.toString();\n            ctx.fillText(text, x + element_width2 - text.length * 3.3, Y + height/2 + offset_y);\n            ctx.restore();\n            x += element_width;\n        }\n\n        if (this.options?.rgb && converted.length > 2) {\n            try {\n                ctx.fillStyle = colorRGB2Hex(converted);\n            } catch (e) {\n                console.error(converted, e);\n                ctx.fillStyle = \"#FFF\";\n            }\n            ctx.roundRect(width-1.17 * widget_padding, Y+1, 19, height-2, 16);\n            ctx.fill()\n        }\n        ctx.restore()\n    }\n\n    function clamp(widget, v, idx) {\n        v = Math.min(v, widget.options?.maj !== undefined ? widget.options.maj : v);\n        v = Math.max(v, widget.options?.mij !== undefined ? 
widget.options.mij : v);\n        widget.value[idx] = (widget.options.precision == 0) ? Number(v) : parseFloat(v).toFixed(widget.options.precision);\n    }\n\n    /**\n     * @todo ▶️, 🖱️, 😀\n     * @this IWidget\n     */\n    widget.onPointerDown = function (pointer, node, canvas) {\n        const e = pointer.eDown\n        const x = e.canvasX - node.pos[0] - label_full;\n        const size = Object.keys(this.value).length;\n        const element_width = (node.size[0] - label_full - widget_padding * 1.25) / size;\n        const index = Math.floor(x / element_width);\n\n        pointer.onClick = (eUp) => {\n            /* if click on header, reset to defaults */\n            if (index == -1 && eUp.shiftKey) {\n                widget.value = Object.assign({}, widget.options.default);\n                return;\n            }\n            else if (index >= 0 && index < size) {\n                const pos = [eUp.canvasX - node.pos[0], eUp.canvasY - node.pos[1]]\n                const old_value = { ...this.value };\n                const label = this.options?.label ? 
this.name + '➖' + this.options.label?.[index] : this.name;\n\n                LGraphCanvas.active_canvas.prompt(label, this.value[index], function(v) {\n                    if (/^[0-9+\\-*/()\\s]+|\\d+\\.\\d+$/.test(v)) {\n                        try {\n                            v = eval(v);\n                        } catch {\n                            v = old_value[index];\n                        }\n                    } else {\n                        v = old_value[index];\n                    }\n\n                    if (this.value[index] != v) {\n                        setTimeout(\n                            function () {\n                                clamp(this, v, index);\n                                domInnerValueChange(node, pos, this, this.value, eUp);\n                            }.bind(this), 5)\n                    }\n                }.bind(this), eUp);\n                return;\n            }\n            if (!this.options?.rgb) return;\n\n            const rgba = Object.values(this?.value || []);\n            const color = colorRGB2Hex(rgba.slice(0, 3));\n\n            if (index != size && (x < 0 && rgba.length > 2)) {\n                const target = Object.values(rgba.map((item) => 255 - item)).slice(0, 3);\n                this.value = Object.values(this.value);\n                this.value.splice(0, 3, ...target);\n                return\n            }\n\n            if (!picker) {\n                // firefox?\n                //position: \"absolute\", // Use absolute positioning for consistency\n                //left: `${eUp.pageX}px`, // Use pageX for more consistent placement\n                //top: `${eUp.pageY}px`,\n                picker = $el(\"input\", {\n                    type: \"color\",\n                    parent: document.body,\n                    style: {\n                        position: \"fixed\",\n                        left: `${eUp.clientX}px`,\n                        top: `${eUp.clientY}px`,\n                    
    height: \"0px\",\n                        width: \"0px\",\n                        padding: \"0px\",\n                        opacity: 0,\n                    },\n                });\n                picker.addEventListener('blur', () => picker.style.display = 'none')\n                picker.addEventListener('input', () => {\n                    if (!picker.value) return;\n\n                    widget.value = colorHex2RGB(picker.value);\n                    if (rgba.length > 3) {\n                        widget.value.push(rgba[3]);\n                    }\n                    canvas.setDirty(true)\n                })\n            } else {\n                picker.style.display = 'revert'\n                picker.style.left = `${eUp.clientX}px`\n                picker.style.top = `${eUp.clientY}px`\n            }\n            picker.value = color;\n            requestAnimationFrame(() => {\n                picker.showPicker()\n                picker.focus()\n            })\n        }\n\n        pointer.onDrag = (eMove) => {\n            if (!eMove.deltaX || !(index > -1)) return;\n            if (index >= size) return;\n            let v = parseFloat(this.value[index]);\n            v += this.options.step * Math.sign(eMove.deltaX);\n            clamp(this, v, index);\n            if (widget.callback) {\n                widget.callback(widget.value, app.canvas, node)\n            }\n        }\n    }\n\n    widget.serializeValue = async (node, index) => {\n        const rawValues = Array.isArray(widget.value)\n            ? widget.value\n            : Object.values(widget.value);\n        const funct = widget.options?.int ? 
Number : parseFloat;\n        return rawValues.map(v => funct(v));\n    };\n\n    return widget;\n}\n\napp.registerExtension({\n    name: \"jovi.widget.spinner\",\n    async getCustomWidgets(app) {\n        return {\n            VEC2: (node, inputName, inputData, app) => ({\n                widget: node.addCustomWidget(VectorWidget(app, inputName, inputData, [0, 0])),\n            }),\n            VEC3: (node, inputName, inputData, app) => ({\n                widget: node.addCustomWidget(VectorWidget(app, inputName, inputData, [0, 0, 0])),\n            }),\n            VEC4: (node, inputName, inputData, app) => ({\n                widget: node.addCustomWidget(VectorWidget(app, inputName, inputData, [0, 0, 0, 0])),\n            })\n        }\n    }\n})\n"
  }
]