Full Code of 1g0rrr/SimpleAutomation for AI

Repository: 1g0rrr/SimpleAutomation
Branch: main
Commit: 2bc237ebccdb
Files: 14
Total size: 50.2 KB

Directory structure:
SimpleAutomation/

├── .gitignore
├── LICENSE
├── README.md
├── colab/
│   └── SimpleAutomationTrainModel.ipynb
└── core/
    ├── configs/
    │   ├── chains/
    │   │   ├── chip_testing.yaml
    │   │   ├── eat_chips.yaml
    │   │   ├── glue_stick.yaml
    │   │   └── lamp_testing.yaml
    │   └── robot/
    │       ├── koch.yaml
    │       ├── so100.yaml
    │       └── so100_dummy.yaml
    ├── dummy_arm.py
    ├── models_chain.py
    └── test.py

================================================
FILE CONTENTS
================================================

================================================
FILE: .gitignore
================================================
# Logging
logs
tmp
wandb

# Data
data
outputs
videos
pretrained_model

# calibration
.cache/*

# Apple
.DS_Store

# VS Code
.vscode

# HPC
nautilus/*.yaml
*.key

# Slurm
sbatch*.sh

# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
!tests/data
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Ignore .cache except calibration (negation patterns re-include the calibration dir)
.cache/*
!.cache/calibration/
!.cache/calibration/**

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/


================================================
FILE: LICENSE
================================================
                                Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS


Copyright 2024 SimpleAutomation

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

================================================
FILE: README.md
================================================
# Make robots do something useful!

Our goal is to make robots affordable so more people can try them out, discover useful applications, and eventually make money using them to do work.

Currently, it's a set of helper scripts on top of LeRobot, plus a $300 robot arm compatible with π₀ and other robotics foundation models.

<img src="https://github.com/user-attachments/assets/4c30e970-5d89-48ea-86db-72ae8d1ab47a" width=450/>

# Intro

<table>
  <tr>
    <td><video src="https://github.com/user-attachments/assets/62472ce6-3084-41ec-8245-32a3c10f4b79" width=200/></td>
    <td></td>
  </tr>
</table>



# Installation

1. If you haven't installed LeRobot yet, install it:

```
git clone https://github.com/huggingface/lerobot.git
cd lerobot
pip install -e .
```

2. Clone the SimpleAutomation scripts into a separate folder:

```
git clone https://github.com/1g0rrr/SimpleAutomation.git
cd SimpleAutomation
```

3. Set up the serial ports for your robot in "core/configs/robot/so100.yaml" (a port-discovery sketch follows).
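
If you are not sure which `/dev/tty...` ports your arms are on, here is a minimal discovery sketch, assuming pyserial (`pip install pyserial`) is available; unplug an arm, run it, re-plug, run again, and diff the output:

```python
# List candidate serial ports with their descriptions.
import serial.tools.list_ports

for port in serial.tools.list_ports.comports():
    print(port.device, "-", port.description)
```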

# Run

### Use multiple ACT models to solve complex robotics tasks.

For example, in the lamp testing demo, we combined 3 models:

1. For getting the lamp from a random position
2. For precise insertion into the tester
3. For sorting working/not working bulbs

![unnamed](https://github.com/user-attachments/assets/d105cf69-1b82-4581-90b7-9a9cd0a4f595)

### Run evaluation

-   Edit "core/configs/chains/lamp_testing.yaml" to point to your models
-   While evaluating, press the right arrow key to move on to the next model

```
python core/models_chain.py evaluate \
  --robot-path core/configs/robot/so100.yaml \
  --chain-path core/configs/chains/lamp_testing.yaml
```
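
Each entry in the chain runs for its `control_time_s` (or until you skip ahead with the right arrow key); then the next model in the yaml takes over.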

### Use LLM agent to run models
```
python core/models_chain.py llm_agent \
  --robot-path core/configs/robot/so100.yaml
```
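
The agent reads `OPENAI_API_KEY` from your environment (loaded via `dotenv`), listens on the microphone, runs the matching model as a tool, and speaks the response; say "thank you" to exit.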

### Run recording

-   The difference from LeRobot's recording is the added teleoperation between episodes. This is useful for switching between models when the arm is not in its resting position.

```
python core/models_chain.py record \
  --robot-path core/configs/robot/so100.yaml \
  --fps 30 \
  --root data \
  --repo-id 1g0rrr/koch_test21 \
  --tags tutorial \
  --warmup-time-s 5 \
  --episode-time-s 5 \
  --reset-time-s 5 \
  --num-episodes 2
```
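
During recording, LeRobot's standard key bindings apply: the right arrow ends the current episode early, the left arrow re-records it, and escape stops the recording.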

### Run teleoperation:

Use it to check that everything is working. The `~cameras` override removes the cameras from the robot config, so no cameras are needed for this test.

```
python core/models_chain.py teleoperate \
 --robot-path core/configs/robot/so100.yaml \
 --robot-overrides '~cameras'
```



# Tips

-   Make sure the training data for each model covers the positions where the previous model finishes, to prevent sudden robot movements.
-   The "pick and place" task is hard for the model, and the gripper may grab the object off-center. To compensate, re-grab the object at the beginning of the next model.

# Training

### Train model in Google Colab:

You can train a model in Google Colab to save time:
https://colab.research.google.com/github/1g0rrr/SimpleAutomation/blob/main/colab/SimpleAutomationTrainModel.ipynb

-   It will take about 2.5 hours and $1.50 to train a typical 80K-step run.
-   Choose the A100, the fastest available GPU.
-   Don't disconnect Colab and don't close the browser, as all data will be deleted.

# Hardware
### S.A.M.01

![arm2](https://github.com/user-attachments/assets/35113d53-93b1-4678-af15-463d563cd238)

Join https://discord.gg/NFsqq4CVhs for the latest information.

### SO-100

Follow [SO-100](https://github.com/TheRobotStudio/SO-ARM100) to build your arm.

<img src="./media/so-100.png" width=500/>


# Join the community

Say 👋 in our [public discord channel](https://discord.gg/NFsqq4CVhs). We help each other with assembling, training models, and making the robot do something useful. 

Thank you for contributing to SimpleAutomation!

## Star History

[![Star History Chart](https://api.star-history.com/svg?repos=1g0rrr/simpleautomation&type=Timeline)](https://star-history.com/#1g0rrr/simpleautomation&Timeline)


================================================
FILE: colab/SimpleAutomationTrainModel.ipynb
================================================
{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "cX-BT9Wte6tV"
      },
      "outputs": [],
      "source": [
        "hugging_face_token = ''\n",
        "dataset_repo_id = 'test/test-dataset'\n",
        "model_repo_id = 'test/test-model'\n"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "bV6yVHzTq-dt"
      },
      "outputs": [],
      "source": [
        "!git clone https://github.com/huggingface/lerobot.git\n",
        "!cd lerobot && pip install -e . --ignore-installed"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "4vccJQqd_afv"
      },
      "outputs": [],
      "source": [
        "!pip install huggingface-hub\n",
        "!huggingface-cli login --token {hugging_face_token} --add-to-git-credential"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "EN9GQ7AurBFo"
      },
      "outputs": [],
      "source": [
        "!cd lerobot && python lerobot/scripts/train.py \\\n",
        "  dataset_repo_id={dataset_repo_id} \\\n",
        "  policy=act_koch_real \\\n",
        "  env=koch_real \\\n",
        "  hydra.run.dir=outputs/train/mymodel \\\n",
        "  hydra.job.name=mymodel \\\n",
        "  device=cuda \\\n",
        "  wandb.enable=false\n",
        "\n",
        "\n",
        "!huggingface-cli upload {model_repo_id} lerobot/outputs/train/mymodel/checkpoints/last/pretrained_model\n",
        "\n",
        "from google.colab import runtime\n",
        "runtime.unassign()"
      ]
    }
  ],
  "metadata": {
    "accelerator": "GPU",
    "colab": {
      "gpuType": "A100",
      "machine_shape": "hm",
      "provenance": []
    },
    "kernelspec": {
      "display_name": "Python 3",
      "name": "python3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 0
}


================================================
FILE: core/configs/chains/chip_testing.yaml
================================================
models:
    eat_chips:
        repo_id: "1g0rrr/eat_chips"
        control_time_s: 120


================================================
FILE: core/configs/chains/eat_chips.yaml
================================================
llm_agent:
    model: "gpt-4o-mini"
    system_prompt: "You're a helpful robot-arm assistant. Answer simple, fun and super concise."
    tools:
        eat_chips:
            repo_id: "1g0rrr/eat_chips"
            desc: "Grab opne chip and give it to user."
        grab_candy:
            repo_id: "1g0rrr/grab_candy"
            desc: "Grab candy and give it to user."
        grab_sponge:
            repo_id: "1g0rrr/grab_sponge"
            desc: "Clean the desktop."
        describe_area:
            desc: "Describe the area around you."
            prompt: 'Describe in one phrase what objects you see on the table. Not including robot. Start answer with "I see..."'


================================================
FILE: core/configs/chains/glue_stick.yaml
================================================
models:
    grab_stick:
        repo_id: "1g0rrr/grab_stick"
        control_time_s: 120
    pause:
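        # note: evaluate() in core/models_chain.py reads repo_id for every entry,
        # so a bare pause entry like this would need special handling as written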
        control_time_s: 120
    release_stick:
        repo_id: "1g0rrr/release_stick"
        control_time_s: 120


================================================
FILE: core/configs/chains/lamp_testing.yaml
================================================
models:
    grab_orange:
        repo_id: "1g0rrr/grab_sponge"
        control_time_s: 9999
    # grab_candy:
    #     repo_id: "1g0rrr/grab_candy"
    #     control_time_s: 120
    # grab_sponge:
    #     repo_id: "1g0rrr/grab_sponge"
    #     control_time_s: 120

    # insert_lamp:
    #     repo_id: "1g0rrr/insert_lamp"
    #     control_time_s: 120
    # testing_lamp:
    #     repo_id: "1g0rrr/testing_lamp"
    #     control_time_s: 120


================================================
FILE: core/configs/robot/koch.yaml
================================================
_target_: lerobot.common.robot_devices.robots.manipulator.ManipulatorRobot
robot_type: koch
calibration_dir: .cache/calibration/koch

# `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
# Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
# the number of motors in your follower arms.
max_relative_target: null

leader_arms:
    main:
        _target_: lerobot.common.robot_devices.motors.dynamixel.DynamixelMotorsBus
        port: /dev/tty.usbmodem58760433151
        motors:
            # name: (index, model)
            shoulder_pan: [1, "xl330-m077"]
            shoulder_lift: [2, "xl330-m077"]
            elbow_flex: [3, "xl330-m077"]
            wrist_flex: [4, "xl330-m077"]
            wrist_roll: [5, "xl330-m077"]
            gripper: [6, "xl330-m077"]

follower_arms:
    main:
        _target_: lerobot.common.robot_devices.motors.dynamixel.DynamixelMotorsBus
        port: /dev/tty.usbmodem58760434751
        motors:
            # name: (index, model)
            shoulder_pan: [1, "xl430-w250"]
            shoulder_lift: [2, "xl430-w250"]
            elbow_flex: [3, "xl330-m288"]
            wrist_flex: [4, "xl330-m288"]
            wrist_roll: [5, "xl330-m288"]
            gripper: [6, "xl330-m288"]

cameras:
    laptop:
        _target_: lerobot.common.robot_devices.cameras.opencv.OpenCVCamera
        camera_index: 0
        fps: 30
        width: 640
        height: 480
    phone:
        _target_: lerobot.common.robot_devices.cameras.opencv.OpenCVCamera
        camera_index: 1
        fps: 30
        width: 640
        height: 480

# ~ Koch specific settings ~
# Sets the leader arm in torque mode with the gripper motor set to this angle. This makes it possible
# to squeeze the gripper and have it spring back to an open position on its own.
gripper_open_degree: 35.156


================================================
FILE: core/configs/robot/so100.yaml
================================================
_target_: lerobot.common.robot_devices.robots.manipulator.ManipulatorRobot
robot_type: so100
calibration_dir: .cache/calibration/so_100

# `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
# Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
# the number of motors in your follower arms.
max_relative_target: null
# usbmodem58760434751
# dev/tty.usbmodem58760434751 - leader with original bus
leader_arms:
    main:
        _target_: lerobot.common.robot_devices.motors.feetech.FeetechMotorsBus
        # _target_: dummy_arm.SimulatedFollower
        port: /dev/tty.usbmodem58760434771
        motors:
            # name: (index, model)
            shoulder_pan: [1, "sts3215"]
            shoulder_lift: [2, "sts3215"]
            elbow_flex: [3, "sts3215"]
            wrist_flex: [4, "sts3215"]
            wrist_roll: [5, "sts3215"]
            gripper: [6, "sts3215"]

follower_arms:
    main:
        _target_: lerobot.common.robot_devices.motors.feetech.FeetechMotorsBus
        # _target_: dummy_arm.SimulatedFollower
        port: /dev/tty.usbmodem58760429711
        motors:
            # name: (index, model)
            shoulder_pan: [1, "sts3215"]
            shoulder_lift: [2, "sts3215"]
            elbow_flex: [3, "sts3215"]
            wrist_flex: [4, "sts3215"]
            wrist_roll: [5, "sts3215"]
            gripper: [6, "sts3215"]

cameras:
    laptop:
        _target_: lerobot.common.robot_devices.cameras.opencv.OpenCVCamera
        camera_index: 0
        fps: 30
        width: 640
        height: 480
    phone:
        _target_: lerobot.common.robot_devices.cameras.opencv.OpenCVCamera
        camera_index: 1
        fps: 30
        width: 640
        height: 480
# ~ Koch specific settings ~
# Sets the leader arm in torque mode with the gripper motor set to this angle. This makes it possible
# to squeeze the gripper and have it spring back to an open position on its own.
# gripper_open_degree: 35.156


================================================
FILE: core/configs/robot/so100_dummy.yaml
================================================
_target_: lerobot.common.robot_devices.robots.manipulator.ManipulatorRobot
robot_type: so100
calibration_dir: .cache/calibration/so_100

# `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
# Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
# the number of motors in your follower arms.
max_relative_target: null
# usbmodem58760434751
# dev/tty.usbmodem58760434751 - leader with original bus
leader_arms:
    main:
        # _target_: lerobot.common.robot_devices.motors.feetech.FeetechMotorsBus
        _target_: dummy_arm.SimulatedFollower
        port: /dev/tty.usbmodem58760434771
        motors:
            # name: (index, model)
            shoulder_pan: [1, "sts3215"]
            shoulder_lift: [2, "sts3215"]
            elbow_flex: [3, "sts3215"]
            wrist_flex: [4, "sts3215"]
            wrist_roll: [5, "sts3215"]
            gripper: [6, "sts3215"]

follower_arms:
    main:
        # _target_: lerobot.common.robot_devices.motors.feetech.FeetechMotorsBus
        _target_: dummy_arm.SimulatedFollower
        port: /dev/tty.usbmodem58760429711
        motors:
            # name: (index, model)
            shoulder_pan: [1, "sts3215"]
            shoulder_lift: [2, "sts3215"]
            elbow_flex: [3, "sts3215"]
            wrist_flex: [4, "sts3215"]
            wrist_roll: [5, "sts3215"]
            gripper: [6, "sts3215"]

cameras:
    laptop:
        _target_: lerobot.common.robot_devices.cameras.opencv.OpenCVCamera
        camera_index: 0
        fps: 30
        width: 640
        height: 480
    phone:
        _target_: lerobot.common.robot_devices.cameras.opencv.OpenCVCamera
        camera_index: 1
        fps: 30
        width: 640
        height: 480
# ~ Koch specific settings ~
# Sets the leader arm in torque mode with the gripper motor set to this angle. This makes it possible
# to squeeze the gripper and have it spring back to an open position on its own.
# gripper_open_degree: 35.156


================================================
FILE: core/dummy_arm.py
================================================
import numpy as np

class SimulatedFollower:
    """Stand-in for a motors bus so the stack can run without hardware."""

    def __init__(
        self,
        port: str,
        motors: dict[str, tuple[int, str]],
        extra_model_control_table: dict[str, list[tuple]] | None = None,
        extra_model_resolution: dict[str, int] | None = None,
        mock=False,
    ):
        self.old_pos = np.zeros(12)
        self.port = port
        # Use the motor layout passed in from the robot yaml config.
        self.motors = motors
        self.is_connected = False
    
    @property
    def motor_names(self) -> list[str]:
        return list(self.motors.keys())
    
    def connect(self):
        # A MuJoCo-backed simulation was originally sketched here; the stub
        # just flips the connection flag.
        self.is_connected = True

    def read(self, data_name, motor_names: str | list[str] | None = None):
        # Always report six motors at position 0, int32 like a real bus read.
        return np.zeros(6, dtype=np.int32)

    def set_calibration(self, calibration: dict[str, tuple[int, bool]]):
        self.calibration = calibration

    def write(self, data_name, values: int | float | np.ndarray, motor_names: str | list[str] | None = None):

        if data_name in ["Torque_Enable", "Operating_Mode", "Homing_Offset", "Drive_Mode", "Position_P_Gain", "Position_I_Gain", "Position_D_Gain"]:
            return np.array(None)

        pass

    def disconnect(self):
        self.is_connected = False

    def __del__(self):
        if getattr(self, "is_connected", False):
            self.disconnect()

if __name__ == "__main__":
    pass
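    # Minimal smoke test (an illustrative addition; this block was empty in the
    # original): exercise the stub without any hardware attached.
    arm = SimulatedFollower(port="/dev/null", motors={})
    arm.connect()
    print(arm.read("Present_Position"))  # -> [0 0 0 0 0 0] (int32)
    arm.disconnect()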


================================================
FILE: core/models_chain.py
================================================
import argparse
import logging
import time
from pathlib import Path
from typing import List
import os

from dotenv import load_dotenv, find_dotenv
import speech_recognition as sr
from langchain_openai import ChatOpenAI
import shutil
import tqdm
from langchain_core.tools import tool
from langchain.agents import create_tool_calling_agent, AgentExecutor

from lerobot.common.robot_devices.cameras.opencv import OpenCVCamera
# from safetensors.torch import load_file, save_file
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
from lerobot.common.datasets.populate_dataset import (
    create_lerobot_dataset,
    delete_current_episode,
    init_dataset,
    save_current_episode,
)
from lerobot.common.robot_devices.control_utils import (
    control_loop,
    has_method,
    init_keyboard_listener,
    init_policy,
    log_control_info,
    record_episode,
    reset_environment,
    sanity_check_dataset_name,
    stop_recording,
    warmup_record,
)
from lerobot.common.robot_devices.robots.factory import make_robot
from lerobot.common.robot_devices.robots.utils import Robot
from lerobot.common.robot_devices.utils import busy_wait, safe_disconnect
from lerobot.common.utils.utils import init_hydra_config, init_logging, log_say, none_or_int

########################################################################################
# Control modes
########################################################################################


@safe_disconnect
def calibrate(robot: Robot, arms: list[str] | None):
    # TODO(aliberts): move this code in robots' classes
    if robot.robot_type.startswith("stretch"):
        if not robot.is_connected:
            robot.connect()
        if not robot.is_homed():
            robot.home()
        return

    available_arms_str = " ".join(robot.available_arms)

    if arms is None or len(arms) == 0:
        raise ValueError(
            "No arm provided. Use `--arms` as argument with one or more available arms.\n"
            f"For instance, to recalibrate all arms add: `--arms {available_arms_str}`"
        )

    # Only look for unknown arms once we know `arms` is a non-empty list;
    # the comprehension below would raise if `arms` were None.
    unknown_arms = [arm_id for arm_id in arms if arm_id not in robot.available_arms]
    if len(unknown_arms) > 0:
        unknown_arms_str = " ".join(unknown_arms)
        raise ValueError(
            f"Unknown arms provided ('{unknown_arms_str}'). Available arms are `{available_arms_str}`."
        )

    for arm_id in arms:
        arm_calib_path = robot.calibration_dir / f"{arm_id}.json"
        if arm_calib_path.exists():
            print(f"Removing '{arm_calib_path}'")
            arm_calib_path.unlink()
        else:
            print(f"Calibration file not found '{arm_calib_path}'")

    if robot.is_connected:
        robot.disconnect()

    # Calling `connect` automatically runs calibration
    # when the calibration file is missing
    robot.connect()
    robot.disconnect()
    print("Calibration is done! You can now teleoperate and record datasets!")

@safe_disconnect
def teleoperate(
    robot: Robot, fps: int | None = None, teleop_time_s: float | None = None, display_cameras: bool = False
):
    control_loop(
        robot,
        control_time_s=teleop_time_s,
        fps=fps,
        teleoperate=True,
        display_cameras=display_cameras,
    )

@safe_disconnect
def llm_agent(
    robot: Robot, 
    chain_path: str | None = None,
    fps: int | None = None, 
    teleop_time_s: float | None = None, 
    display_cameras: bool = True
):
    import pyttsx3
    import base64
    import cv2

    from langchain.schema import SystemMessage
    from langchain_core.prompts import (
        ChatPromptTemplate,
        HumanMessagePromptTemplate,
        MessagesPlaceholder,
    )


    _ = load_dotenv(find_dotenv())
    api_key = os.getenv("OPENAI_API_KEY")
    

    robot.connect()
    engine = pyttsx3.init()



    models = {
        "grab_sponge": {"repo_id": "1g0rrr/grab_sponge", "control_time_s": 32},
        "grab_orange": {"repo_id": "1g0rrr/grab_orange", "control_time_s": 10},
        "grab_candy": {"repo_id": "1g0rrr/grab_candy", "control_time_s": 10},
    }
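    # NOTE: chain_path is accepted but not used; the models above and the tools
    # below are hardcoded (cf. core/configs/chains/eat_chips.yaml for the
    # matching config).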

    global policies 
    policies = {}

    for model_name in models:
        model = models[model_name]
        policy_overrides = ["device=cpu"]
        policy, policy_fps, device, use_amp = init_policy(model["repo_id"], policy_overrides)
        policies[model_name] = ({"policy": policy, "policy_fps": policy_fps, "device": device, "use_amp": use_amp, "control_time_s": model["control_time_s"]})


    @tool(return_direct=True)
    def grab_sponge():
        """Clean the desktop.
        """
        global policies
        do_control_loop(policies["grab_sponge"])

        return "Done"
    
    @tool(return_direct=True)
    def grab_orange():
        """Grab orange and give it to user.
        """
        global policies
        do_control_loop(policies["grab_orange"])

        return "Done"
        
    @tool(return_direct=True)
    def grab_candy():
        """Grab candy and give it to user.
        """
        global policies
        do_control_loop(policies["grab_candy"])

        return "Done"

    @tool(return_direct=True)
    def describe_area():
        """Describing what I can see.
        """

        llm = ChatOpenAI(temperature=0.1, model=llm_model, api_key=api_key)

        cam1 = OpenCVCamera(camera_index=0, fps=30, width=640, height=480, color_mode="bgr")
        cam1.connect()
        img = cam1.read()
        cam1.disconnect()  # release the camera handle once the frame is captured
        
        # cv2.imshow("Image", img)
        # cv2.waitKey(0)
        # cv2.destroyAllWindows()

        _, encoded_img = cv2.imencode('.png', img) 
        base64_img = base64.b64encode(encoded_img).decode("utf-8")    
        
        mime_type = 'image/png'
        encoded_image_url = f"data:{mime_type};base64,{base64_img}"


        chat_prompt_template = ChatPromptTemplate.from_messages(
            messages=[
                SystemMessage(content='Describe in one phrase what objects you see on the table. Not including robot. Start answer with "I see..."'),
                HumanMessagePromptTemplate.from_template(
                     [{'image_url': "{encoded_image_url}", 'type': 'image_url'}],
                )
            ]
        )

        chain = chat_prompt_template | llm
        res = chain.invoke({"encoded_image_url": encoded_image_url})

        return res.content
        
    def do_control_loop(policy_obj):
        control_loop(
            robot=robot,
            control_time_s=policy_obj["control_time_s"],
            display_cameras=display_cameras,
            policy=policy_obj["policy"],
            device=policy_obj["device"],
            use_amp=policy_obj["use_amp"],
            fps = policy_obj["policy_fps"],
            teleoperate=False,
        )

    agent_prompt = ChatPromptTemplate.from_messages([
        ("system", "You're a helpful robot-arm assistant. Answer super concise."), 
        ("human", "{input}"), 
        ("placeholder", "{agent_scratchpad}"),
    ])

    llm_model = "gpt-4o-mini"

    llm = ChatOpenAI(temperature=0.1, model=llm_model, api_key=api_key)

    tools = [grab_sponge, grab_orange, grab_candy, describe_area]

    agent = create_tool_calling_agent(llm, tools, agent_prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

    r = sr.Recognizer()    

    def listen():
        with sr.Microphone() as source:
            audio = r.listen(source)
            print("Processing...")
        try:
            text = r.recognize_google(audio)
            return text
        except Exception as e:
            print("Error: " + str(e))
            return None


    def generate_response(prompt):
        completions = agent_executor.invoke({"input": prompt, })
        message = completions["output"]
        return message

    while True:
        print("Listening...")

        audio_prompt = listen()
        if audio_prompt is not None:
            print("You: " + audio_prompt)
            response = generate_response(audio_prompt)
            engine.say(response)
            engine.runAndWait()

            print("Robot: " + response)
            # to lower case
            if audio_prompt.lower() == "thank you":
                # Exit the program
                exit()
        else:
            print("Can you repeat?")
            continue


@safe_disconnect
def evaluate(
    robot: Robot, 
    chain_path: str | None = None,
    fps: int | None = None, 
    teleop_time_s: float | None = None, 
    display_cameras: bool = True
):
    robot_cfg = init_hydra_config(chain_path)

    models = robot_cfg["models"]
    policies = []
    for model_name in models:
        model = models[model_name]
        policy_overrides = ["device=cpu"]
        policy, policy_fps, device, use_amp = init_policy(model["repo_id"], policy_overrides)
        policies.append({"policy": policy, "policy_fps": policy_fps, "device": device, "use_amp": use_amp, "control_time_s": model["control_time_s"]})

    listener, events = init_keyboard_listener()
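    # The right arrow key sets events["exit_early"], which ends the current
    # control_loop and advances the chain to the next policy.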
    for policy_obj in policies:
        control_loop(
            robot=robot,
            control_time_s=policy_obj["control_time_s"],
            display_cameras=display_cameras,
            events=events,
            policy=policy_obj["policy"],
            device=policy_obj["device"],
            use_amp=policy_obj["use_amp"],
            fps = policy_obj["policy_fps"],
            teleoperate=False,
        )
        print("Model is done!")

    print("Teleoperation is done!")


@safe_disconnect
def record(
    robot: Robot,
    root: str,
    repo_id: str,
    pretrained_policy_name_or_path: str | None = None,
    policy_overrides: List[str] | None = None,
    fps: int | None = None,
    warmup_time_s=2,
    episode_time_s=10,
    reset_time_s=5,
    num_episodes=50,
    video=True,
    run_compute_stats=True,
    push_to_hub=True,
    tags=None,
    num_image_writer_processes=1,
    num_image_writer_threads_per_camera=1,
    force_override=False,
    display_cameras=True,
    play_sounds=True,
):
    # TODO(rcadene): Add option to record logs
    listener = None
    events = None
    policy = None
    device = None
    use_amp = None

    # Load pretrained policy
    if pretrained_policy_name_or_path is not None:
        policy, policy_fps, device, use_amp = init_policy(pretrained_policy_name_or_path, policy_overrides)

        if fps is None:
            fps = policy_fps
            logging.warning(f"No fps provided, so using the fps from policy config ({policy_fps}).")
        elif fps != policy_fps:
            logging.warning(
                f"There is a mismatch between the provided fps ({fps}) and the one from policy config ({policy_fps})."
            )


    # Create empty dataset or load existing saved episodes
    sanity_check_dataset_name(repo_id, policy)
    dataset = init_dataset(
        repo_id,
        root,
        force_override,
        fps,
        video,
        write_images=robot.has_camera,
        num_image_writer_processes=num_image_writer_processes,
        num_image_writer_threads=num_image_writer_threads_per_camera * robot.num_cameras,
    )

    if not robot.is_connected:
        robot.connect()

    listener, events = init_keyboard_listener()

    # Execute a few seconds without recording to:
    # 1. teleoperate the robot to move it in starting position if no policy provided,
    # 2. give times to the robot devices to connect and start synchronizing,
    # 3. place the cameras windows on screen
    enable_teleoperation = policy is None
    log_say("Warmup record", play_sounds)
    warmup_record(robot, events, enable_teleoperation, warmup_time_s, display_cameras, fps)

    if has_method(robot, "teleop_safety_stop"):
        robot.teleop_safety_stop()

    while True:
        if dataset["num_episodes"] >= num_episodes:
            break

        episode_index = dataset["num_episodes"]

        # Visual sign in terminal that a recording is starting
        print("============================================")
        print("============================================")
        print("===========  START RECORDING  ==============")
        print("============================================")
        print("============================================")
        print("============================================")

        log_say(f"Recording episode {episode_index}", play_sounds)
        record_episode(
            dataset=dataset,
            robot=robot,
            events=events,
            episode_time_s=episode_time_s,
            display_cameras=display_cameras,
            policy=policy,
            device=device,
            use_amp=use_amp,
            fps=fps,
        )

        # Execute a few seconds without recording to give time to manually reset the environment
        # Current code logic doesn't allow to teleoperate during this time.
        # TODO(rcadene): add an option to enable teleoperation during reset
        # Skip reset for the last episode to be recorded
        if not events["stop_recording"] and (
            (episode_index < num_episodes - 1) or events["rerecord_episode"]
        ):
            log_say("Reset the environment", play_sounds)
            reset_environment(robot, events, reset_time_s)
            # log_say("Prepare position", play_sounds)
            # warmup_record(robot, events, enable_teleoperation, warmup_time_s, display_cameras, fps)

        if events["rerecord_episode"]:
            log_say("Re-record episode", play_sounds)
            events["rerecord_episode"] = False
            events["exit_early"] = False
            delete_current_episode(dataset)
            continue

        # Save the episode and increment dataset["current_episode_index"] by one
        save_current_episode(dataset)

        if events["stop_recording"]:
            break

    log_say("Stop recording", play_sounds, blocking=True)
    stop_recording(robot, listener, display_cameras)

    lerobot_dataset = create_lerobot_dataset(dataset, run_compute_stats, push_to_hub, tags, play_sounds)

    # Clean up the temporary per-episode image folders left next to the videos.
    image_keys = ["observation.images.laptop", "observation.images.phone"]
    local_dir = Path(root) / repo_id
    videos_dir = local_dir / "videos"

    for episode_index in tqdm.tqdm(range(num_episodes)):
        for key in image_keys:
            # key = f"observation.images.{name}"
            tmp_imgs_dir = videos_dir / f"{key}_episode_{episode_index:06d}"
            shutil.rmtree(tmp_imgs_dir, ignore_errors=True)

    log_say("Exiting", play_sounds)
    return lerobot_dataset


@safe_disconnect
def replay(
    robot: Robot, episode: int, fps: int | None = None, root="data", repo_id="lerobot/debug", play_sounds=True
):
    # TODO(rcadene, aliberts): refactor with control_loop, once `dataset` is an instance of LeRobotDataset
    # TODO(rcadene): Add option to record logs
    local_dir = Path(root) / repo_id
    if not local_dir.exists():
        raise ValueError(local_dir)

    dataset = LeRobotDataset(repo_id, root=root)
    items = dataset.hf_dataset.select_columns("action")
    from_idx = dataset.episode_data_index["from"][episode].item()
    to_idx = dataset.episode_data_index["to"][episode].item()

    if fps is None:
        # Fall back to the dataset's recorded fps; `busy_wait(1 / fps - dt_s)`
        # below fails if fps is None.
        fps = dataset.fps

    if not robot.is_connected:
        robot.connect()

    log_say("Replaying episode", play_sounds, blocking=True)
    for idx in range(from_idx, to_idx):
        start_episode_t = time.perf_counter()

        action = items[idx]["action"]
        robot.send_action(action)

        dt_s = time.perf_counter() - start_episode_t
        busy_wait(1 / fps - dt_s)

        dt_s = time.perf_counter() - start_episode_t
        log_control_info(robot, dt_s, fps=fps)


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    subparsers = parser.add_subparsers(dest="mode", required=True)

    # Set common options for all the subparsers
    base_parser = argparse.ArgumentParser(add_help=False)
    base_parser.add_argument(
        "--robot-path",
        type=str,
        default="lerobot/configs/robot/koch.yaml",
        help="Path to robot yaml file used to instantiate the robot using `make_robot` factory function.",
    )
    base_parser.add_argument(
        "--robot-overrides",
        type=str,
        nargs="*",
        help="Any key=value arguments to override config values (use dots for.nested=overrides)",
    )

    parser_calib = subparsers.add_parser("calibrate", parents=[base_parser])
    parser_calib.add_argument(
        "--arms",
        type=str,
        nargs="*",
        help="List of arms to calibrate (e.g. `--arms left_follower right_follower left_leader`)",
    )

    parser_teleop = subparsers.add_parser("teleoperate", parents=[base_parser])
    parser_teleop.add_argument(
        "--fps", type=none_or_int, default=None, help="Frames per second (set to None to disable)"
    )
    parser_teleop.add_argument(
        "--display-cameras",
        type=int,
        default=1,
        help="Display all cameras on screen (set to 1 to display or 0).",
    )


    parser_evaluate = subparsers.add_parser("evaluate", parents=[base_parser])
    parser_evaluate.add_argument(
        "--fps", type=none_or_int, default=None, help="Frames per second (set to None to disable)"
    )
    parser_evaluate.add_argument(
        "--chain-path",
        type=Path,
        default="core/configs/chains/lamp_testing.yaml",
        help="Path to chain configuration yaml file').",
    )    


    parser_llm_agent = subparsers.add_parser("llm_agent", parents=[base_parser])
    parser_llm_agent.add_argument(
        "--chain-path",
        type=Path,
        default="core/configs/chains/clean_whiteboard.yaml",
        help="Path to chain configuration yaml file').",
    )    

    parser_record = subparsers.add_parser("record", parents=[base_parser])
    parser_record.add_argument(
        "--fps", type=none_or_int, default=None, help="Frames per second (set to None to disable)"
    )
    parser_record.add_argument(
        "--root",
        type=Path,
        default="data",
        help="Root directory where the dataset will be stored locally at '{root}/{repo_id}' (e.g. 'data/hf_username/dataset_name').",
    )
    parser_record.add_argument(
        "--repo-id",
        type=str,
        default="lerobot/test",
        help="Dataset identifier. By convention it should match '{hf_username}/{dataset_name}' (e.g. `lerobot/test`).",
    )
    parser_record.add_argument(
        "--warmup-time-s",
        type=int,
        default=10,
        help="Number of seconds before starting data collection. It allows the robot devices to warmup and synchronize.",
    )
    parser_record.add_argument(
        "--episode-time-s",
        type=int,
        default=60,
        help="Number of seconds for data recording for each episode.",
    )
    parser_record.add_argument(
        "--reset-time-s",
        type=int,
        default=60,
        help="Number of seconds for resetting the environment after each episode.",
    )
    parser_record.add_argument("--num-episodes", type=int, default=50, help="Number of episodes to record.")
    parser_record.add_argument(
        "--run-compute-stats",
        type=int,
        default=1,
        help="By default, run the computation of the data statistics at the end of data collection. Compute intensive and not required to just replay an episode.",
    )
    parser_record.add_argument(
        "--push-to-hub",
        type=int,
        default=1,
        help="Upload dataset to Hugging Face hub.",
    )
    parser_record.add_argument(
        "--tags",
        type=str,
        nargs="*",
        help="Add tags to your dataset on the hub.",
    )
    parser_record.add_argument(
        "--num-image-writer-processes",
        type=int,
        default=0,
        help=(
            "Number of subprocesses handling the saving of frames as PNGs. Set to 0 to use threads only; "
            "set to ≥1 to use subprocesses, each using threads to write images. The best number of processes "
            "and threads depends on your system. We recommend 4 threads per camera with 0 processes. "
            "If fps is unstable, adjust the thread count. If still unstable, try using 1 or more subprocesses."
        ),
    )
    parser_record.add_argument(
        "--num-image-writer-threads-per-camera",
        type=int,
        default=8,
        help=(
            "Number of threads writing the frames as png images on disk, per camera. "
            "Too many threads might cause unstable teleoperation fps due to main thread being blocked. "
            "Not enough threads might cause low camera fps."
        ),
    )
    parser_record.add_argument(
        "--force-override",
        type=int,
        default=0,
        help="By default, data recording is resumed. When set to 1, delete the local directory and start data recording from scratch.",
    )
    parser_record.add_argument(
        "-p",
        "--pretrained-policy-name-or-path",
        type=str,
        help=(
            "Either the repo ID of a model hosted on the Hub or a path to a directory containing weights "
            "saved using `Policy.save_pretrained`."
        ),
    )
    parser_record.add_argument(
        "--policy-overrides",
        type=str,
        nargs="*",
        help="Any key=value arguments to override config values (use dots for.nested=overrides)",
    )

    parser_replay = subparsers.add_parser("replay", parents=[base_parser])
    parser_replay.add_argument(
        "--fps", type=none_or_int, default=None, help="Frames per second (set to None to disable)"
    )
    parser_replay.add_argument(
        "--root",
        type=Path,
        default="data",
        help="Root directory where the dataset will be stored locally at '{root}/{repo_id}' (e.g. 'data/hf_username/dataset_name').",
    )
    parser_replay.add_argument(
        "--repo-id",
        type=str,
        default="lerobot/test",
        help="Dataset identifier. By convention it should match '{hf_username}/{dataset_name}' (e.g. `lerobot/test`).",
    )
    parser_replay.add_argument("--episode", type=int, default=0, help="Index of the episode to replay.")

    args = parser.parse_args()

    init_logging()

    control_mode = args.mode
    robot_path = args.robot_path
    robot_overrides = args.robot_overrides
    kwargs = vars(args)
    del kwargs["mode"]
    del kwargs["robot_path"]
    del kwargs["robot_overrides"]

    robot_cfg = init_hydra_config(robot_path, robot_overrides)
    robot = make_robot(robot_cfg)

    if control_mode == "calibrate":
        calibrate(robot, **kwargs)

    elif control_mode == "teleoperate":
        teleoperate(robot, **kwargs)

    elif control_mode == "evaluate":
        evaluate(robot, **kwargs)

    elif control_mode == "llm_agent":
        llm_agent(robot, **kwargs)

    elif control_mode == "record":
        record(robot, **kwargs)

    elif control_mode == "replay":
        replay(robot, **kwargs)

    if robot.is_connected:
        # Disconnect manually to avoid a "Core dump" during process
        # termination due to camera threads not properly exiting.
        robot.disconnect()

================================================
FILE: core/test.py
================================================
(empty file)

================================================
SYMBOL INDEX (15 symbols across 2 files)
================================================

FILE: core/dummy_arm.py
  class SimulatedFollower (line 17) | class SimulatedFollower:
    method __init__ (line 19) | def __init__(
    method motor_names (line 43) | def motor_names(self) -> list[str]:
    method connect (line 46) | def connect(self):
    method read (line 59) | def read(self, data_name, motor_names: str | list[str] | None = None):
    method set_calibration (line 64) | def set_calibration(self, calibration: dict[str, tuple[int, bool]]):
    method write (line 67) | def write(self, data_name, values: int | float | np.ndarray, motor_nam...
    method disconnect (line 74) | def disconnect(self):
    method __del__ (line 77) | def __del__(self):

FILE: core/models_chain.py
  function calibrate (line 48) | def calibrate(robot: Robot, arms: list[str] | None):
  function teleoperate (line 90) | def teleoperate(
  function llm_agent (line 102) | def llm_agent(
  function evaluate (line 277) | def evaluate(
  function record (line 313) | def record(
  function replay (line 456) | def replay(
Condensed preview: 14 files, each showing path, character count, and a content snippet (full structured content: 55K chars).
[
  {
    "path": ".gitignore",
    "chars": 1801,
    "preview": "# Logging\nlogs\ntmp\nwandb\n\n# Data\ndata\noutputs\nvideos\npretrained_model\n\n# callibration\n.cache/*\n\n# Apple\n.DS_Store\n\n# VS "
  },
  {
    "path": "LICENSE",
    "chars": 10730,
    "preview": "                                Apache License\n                           Version 2.0, January 2004\n                    "
  },
  {
    "path": "README.md",
    "chars": 3814,
    "preview": "# Make robots do something useful!\n\nOur goal is to make robots affordable so more people can try them out, discover usef"
  },
  {
    "path": "colab/SimpleAutomationTrainModel.ipynb",
    "chars": 2016,
    "preview": "{\n  \"cells\": [\n    {\n      \"cell_type\": \"code\",\n      \"execution_count\": null,\n      \"metadata\": {\n        \"id\": \"cX-BT9"
  },
  {
    "path": "core/configs/chains/chip_testing.yaml",
    "chars": 87,
    "preview": "models:\n    eat_chips:\n        repo_id: \"1g0rrr/eat_chips\"\n        control_time_s: 120\n"
  },
  {
    "path": "core/configs/chains/eat_chips.yaml",
    "chars": 673,
    "preview": "llm_agent:\n    model: \"gpt-4o-mini\"\n    system_prompt: \"You're a helpful robot-arm assistant. Answer simple, fun and sup"
  },
  {
    "path": "core/configs/chains/glue_stick.yaml",
    "chars": 215,
    "preview": "models:\n    grab_stick:\n        repo_id: \"1g0rrr/grab_stick\"\n        control_time_s: 120\n    pause:\n        control_time"
  },
  {
    "path": "core/configs/chains/lamp_testing.yaml",
    "chars": 449,
    "preview": "models:\n    grab_orange:\n        repo_id: \"1g0rrr/grab_sponge\"\n        control_time_s: 9999\n    # grab_candy:\n    #     "
  },
  {
    "path": "core/configs/robot/koch.yaml",
    "chars": 1915,
    "preview": "_target_: lerobot.common.robot_devices.robots.manipulator.ManipulatorRobot\nrobot_type: koch\ncalibration_dir: .cache/cali"
  },
  {
    "path": "core/configs/robot/so100.yaml",
    "chars": 2049,
    "preview": "_target_: lerobot.common.robot_devices.robots.manipulator.ManipulatorRobot\nrobot_type: so100\ncalibration_dir: .cache/cal"
  },
  {
    "path": "core/configs/robot/so100_dummy.yaml",
    "chars": 2049,
    "preview": "_target_: lerobot.common.robot_devices.robots.manipulator.ManipulatorRobot\nrobot_type: so100\ncalibration_dir: .cache/cal"
  },
  {
    "path": "core/dummy_arm.py",
    "chars": 2475,
    "preview": "import enum\nimport logging\nimport math\nimport time\nimport traceback\nfrom copy import deepcopy\nfrom pathlib import Path\n\n"
  },
  {
    "path": "core/models_chain.py",
    "chars": 23178,
    "preview": "import argparse\nimport logging\nimport time\nfrom pathlib import Path\nfrom typing import List\nimport os\n\nfrom dotenv impor"
  },
  {
    "path": "core/test.py",
    "chars": 0,
    "preview": ""
  }
]

About this extraction

This page contains the full source code of the 1g0rrr/SimpleAutomation GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 14 files (50.2 KB), approximately 12.7k tokens, and a symbol index with 15 extracted functions, classes, methods, constants, and types. Use it with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input.

Extracted by GitExtract, a free GitHub-repo-to-text converter for AI, built by Nikandr Surkov.
