Repository: alvarosevilla95/autolang
Branch: master
Commit: 9a9eaba6999e
Files: 16
Total size: 21.6 KB
Directory structure:
gitextract_59b6scir/
├── .gitignore
├── Dockerfile
├── LICENSE
├── README.md
├── autolang/
│ ├── __main__.py
│ ├── agent/
│ │ ├── base.py
│ │ └── prompt.py
│ ├── auto.py
│ ├── executor.py
│ ├── learner.py
│ ├── planner.py
│ ├── printer.py
│ ├── reviewer.py
│ └── utils.py
├── requirements.txt
└── run_docker.sh
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Visual Studio Code
.vscode/*
!.vscode/settings.json
!.vscode/tasks.json
!.vscode/launch.json
!.vscode/extensions.json
!.vscode/*.code-snippets
================================================
FILE: Dockerfile
================================================
FROM python:3.11
ARG openai_key
ENV PYTHONUNBUFFERED 1
ENV OPENAI_API_KEY $openai_key
RUN python -m pip install --upgrade pip
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
ADD autolang autolang
CMD ["python3", "-u", "-m", "autolang"]
================================================
FILE: LICENSE
================================================
The MIT License
Copyright (c) Alvaro Sevilla
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.
================================================
FILE: README.md
================================================
# Autolang
Another take on BabyAGI, focused on workflows that complete. Powered by langchain.
Here's a simple demo: https://twitter.com/pictobit/status/1645504308874563584
## Running
To run Autolang, follow these steps:
(Optional) Customize the [tools provided to the agent](autolang/__main__.py)
Install dependencies:
```sh
pip install -r requirements.txt
```
Copy the `.env.example` file to `.env`, then edit it:
```sh
cp .env.example .env
```
Run the script:
```sh
python -m autolang
```
Alternatively, run with Docker:
```sh
./run_docker.sh
```
## Architecture
Autolang uses four main components:
<p align="center">
<img src="assets/diagram.svg">
</p>
### Planner
Runs once at the start: it devises a strategy to solve the problem and produces a task list.
### Executor
A custom langchain agent that implements ReAct to solve a single task in the plan. It can be given any tools in the langchain format.
### Learner
Here's the interesting part. The system holds an information context string, which starts empty.
After each step, the learner merges the latest result into the current context, acting as a sort of medium-term memory.
### Reviewer
Assesses the current task list based on the completed tasks and the generated info context, and reprioritizes the pending tasks accordingly.
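The control loop that ties the four components together (implemented in `autolang/auto.py`) can be sketched as below. This is a minimal sketch: the stub functions are illustrative stand-ins for the real LLM-backed chains, so the flow runs without an API key; none of these names are part of the repo's API.

```python
from collections import deque

# Stand-ins for the four LLM-backed components. The real classes live in
# autolang/planner.py, executor.py, learner.py, and reviewer.py.
def plan(objective):
    return deque([{"task_id": "1", "task_name": f"Research {objective}"},
                  {"task_id": "2", "task_name": f"Summarize {objective}"}])

def execute(task, memory):
    return f"result of {task['task_name']}"

def learn(memory, result):
    # Merge the latest result into the running context string.
    return (memory + " | " + result).strip(" |")

def review(pending, memory):
    # The real reviewer may reorder, drop, or add pending tasks.
    return pending

def run(objective, max_iterations=10):
    memory, complete = "", []
    pending = plan(objective)                  # Planner: runs once
    while pending and len(complete) < max_iterations:
        task = pending.popleft()
        result = execute(task, memory)         # Executor: solve one task
        complete.append(task)
        memory = learn(memory, result)         # Learner: update the context
        pending = review(pending, memory)      # Reviewer: reprioritize
    return memory
```

The real `AutoAgent.run` additionally prints progress after each step and, once the list is empty, asks the executor for a final answer.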
## Next steps
Right now, the main limitation is the limited info context. As a next step, I'm planning to add a "long-term memory agent" that extracts information from the context and replaces it with a key. The executor agent will be given a tool to retrieve these saved snippets if required.
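One possible shape for that long-term memory agent, sketched below; all names here are hypothetical and not part of the current codebase:

```python
# Hypothetical sketch of the planned long-term memory store: large
# snippets are swapped out of the context for a short key, and the
# executor would get a retrieval tool to pull them back on demand.
class LongTermMemory:
    def __init__(self):
        self._store = {}
        self._counter = 0

    def save(self, snippet: str) -> str:
        """Store a snippet and return the key that replaces it in the context."""
        self._counter += 1
        key = f"MEM-{self._counter}"
        self._store[key] = snippet
        return key

    def retrieve(self, key: str) -> str:
        """Look up a previously saved snippet by its key."""
        return self._store.get(key, "No snippet stored under that key.")

memory = LongTermMemory()
key = memory.save("Full text of a long web page ...")
print(key, "->", memory.retrieve(key))
# prints: MEM-1 -> Full text of a long web page ...
```

In the actual system the `retrieve` side would likely be exposed to the executor as a langchain `Tool`, the same way the tools in `autolang/__main__.py` are.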
================================================
FILE: autolang/__main__.py
================================================
import os
import faiss
import readline # for better CLI experience
from typing import List
from langchain import FAISS, InMemoryDocstore
from langchain.agents import Tool, load_tools
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms.base import BaseLLM
from .auto import AutoAgent
from dotenv import load_dotenv
# Load default environment variables (.env)
load_dotenv()
# API Keys
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", default="")
assert OPENAI_API_KEY, "OPENAI_API_KEY environment variable is missing from .env"
OPENAI_API_MODEL = os.getenv("OPENAI_API_MODEL", default="gpt-3.5-turbo")
assert OPENAI_API_MODEL, "OPENAI_API_MODEL environment variable is missing from .env"
objective = input('What is my purpose? ')
llm: BaseLLM = ChatOpenAI(model_name=OPENAI_API_MODEL, temperature=0, request_timeout=120) # type: ignore
embeddings = OpenAIEmbeddings() # type: ignore
"""
Customize the tools the agent uses here. Here are some others you can add:
os.environ["WOLFRAM_ALPHA_APPID"] = "<APPID>"
os.environ["SERPER_API_KEY"] = "<KEY>"
tool_names = ["terminal", "requests", "python_repl", "human", "google-serper", "wolfram-alpha"]
"""
tool_names = ["python_repl", "human"]
tools: List[Tool] = load_tools(tool_names, llm=llm) # type: ignore
index = faiss.IndexFlatL2(1536)
docstore = InMemoryDocstore({})
vectorstore = FAISS(embeddings.embed_query, index, docstore, {})
agent = AutoAgent.from_llm_and_objectives(llm, objective, tools, vectorstore, verbose=True)
agent.run()
================================================
FILE: autolang/agent/base.py
================================================
"""An agent designed to hold a conversation in addition to using tools."""
from __future__ import annotations
import re
from typing import Any, List, Optional, Sequence, Tuple
from langchain.agents.agent import Agent
from langchain.callbacks.base import BaseCallbackManager
from langchain.chains import LLMChain
from langchain.llms.base import BaseLLM
from langchain.prompts import PromptTemplate
from langchain.tools.base import BaseTool
from .prompt import FORMAT_INSTRUCTIONS, PREFIX, SUFFIX
class AutonomousAgent(Agent):
"""An agent designed to execute a single task within a larger workflow."""
ai_prefix: str = "Jarvis"
@property
def _agent_type(self) -> str:
return "autonomous"
@property
def observation_prefix(self) -> str:
return "Observation: "
@property
def llm_prefix(self) -> str:
return "Thought:"
@property
def finish_tool_name(self) -> str:
return self.ai_prefix
@classmethod
def create_prompt(
cls,
tools: Sequence[BaseTool],
prefix: str = PREFIX,
suffix: str = SUFFIX,
format_instructions: str = FORMAT_INSTRUCTIONS,
ai_prefix: str = "AI",
human_prefix: str = "Human",
objective: Optional[str] = None,
input_variables: Optional[List[str]] = None,
) -> PromptTemplate:
tool_strings = "\n".join(
[f"> {tool.name}: {tool.description}" for tool in tools]
)
tool_names = ", ".join([tool.name for tool in tools])
prefix = prefix.format(objective=objective)
format_instructions = format_instructions.format(tool_names=tool_names, ai_prefix=ai_prefix, human_prefix=human_prefix)
template = "\n\n".join([prefix, tool_strings, format_instructions, suffix])
input_variables = ["input", "context", "agent_scratchpad"]
return PromptTemplate(template=template, input_variables=input_variables)
def _extract_tool_and_input(self, llm_output: str) -> Optional[Tuple[str, str]]:
if f"{self.ai_prefix}:" in llm_output:
return self.ai_prefix, llm_output.split(f"{self.ai_prefix}:")[-1].strip()
regex = r"Action: (.*?)[\n]*Action Input: (.*)"
match = re.search(regex, llm_output)
if not match:
raise ValueError(f"Could not parse LLM output: `{llm_output}`")
action = match.group(1)
action_input = match.group(2)
return action.strip(), action_input.strip(" ").strip('"')
@classmethod
def from_llm_and_tools(
cls,
llm: BaseLLM,
tools: Sequence[BaseTool],
objective: Optional[str] = None,
callback_manager: Optional[BaseCallbackManager] = None,
prefix: str = PREFIX,
suffix: str = SUFFIX,
format_instructions: str = FORMAT_INSTRUCTIONS,
ai_prefix: str = "Jarvis",
human_prefix: str = "Human",
input_variables: Optional[List[str]] = None,
**kwargs: Any,
) -> "AutonomousAgent":
"""Construct an agent from an LLM and tools."""
cls._validate_tools(tools)
prompt = cls.create_prompt(
tools,
ai_prefix=ai_prefix,
human_prefix=human_prefix,
prefix=prefix,
suffix=suffix,
objective=objective,
format_instructions=format_instructions,
input_variables=input_variables,
)
llm_chain = LLMChain(
llm=llm,
prompt=prompt,
callback_manager=callback_manager, # type: ignore
)
tool_names = [tool.name for tool in tools]
return cls(llm_chain=llm_chain, allowed_tools=tool_names, ai_prefix=ai_prefix, **kwargs)
================================================
FILE: autolang/agent/prompt.py
================================================
PREFIX = """Jarvis is a general purpose AI model trained by OpenAI.
Jarvis is tasked with executing a single task within the context of a larger workflow trying to accomplish the following objective: {objective}. It should focus only on the current task and must not attempt to perform further work.
Jarvis is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions.
Overall, Jarvis is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics.
Jarvis is not having a conversation with a user, but rather producing the output of executing a task within a larger workflow.
TOOLS:
------
Jarvis has access to the following tools:"""
FORMAT_INSTRUCTIONS = """
Thought Process:
----------------
Jarvis always uses the following thought process and format to execute its tasks:
```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```
When Jarvis has a response to say to the Human, or if it doesn't need to use a tool, it always uses the format:
```
Thought: Do I need to use a tool? No
{ai_prefix}: [your response here]
```"""
SUFFIX = """Begin!
Current context:
{context}
Current task: {input}
{agent_scratchpad}"""
================================================
FILE: autolang/auto.py
================================================
from collections import deque
from typing import Any, Dict, List, Optional
from pydantic import BaseModel, Field
from langchain.agents import Tool
from langchain.llms.base import BaseLLM
from langchain.vectorstores import VectorStore
from .executor import ExecutionAgent
from .planner import PlanningChain
from .reviewer import ReviewingChain
from .learner import LearningChain
from .printer import print_objective, print_next_task, print_task_list, print_task_result, print_end
class AutoAgent(BaseModel):
planning_chain: PlanningChain = Field(...)
reviewing_chain: ReviewingChain = Field(...)
execution_agent: ExecutionAgent = Field(...)
learning_chain: LearningChain = Field(...)
objective: str = Field(alias="objective")
vectorstore: Any = Field(...)
memory: str = Field("", init=False)
complete_list: List[Dict[str, str]] = Field(default_factory=list)
pending_list: deque[Dict[str, str]] = Field(default_factory=deque)
@classmethod
def from_llm_and_objectives(
cls,
llm: BaseLLM,
objective: str,
tools: List[Tool],
vectorstore: VectorStore,
verbose: bool = False,
) -> "AutoAgent":
planning_chain = PlanningChain.from_llm(llm, objective, tools=tools, verbose=verbose)
reviewing_chain = ReviewingChain.from_llm(llm, objective, verbose=verbose)
execution_agent = ExecutionAgent.from_llm(llm, objective, tools, verbose=verbose)
learning_chain = LearningChain.from_llm(llm, objective, verbose=verbose)
return cls(
objective=objective,
planning_chain=planning_chain,
reviewing_chain = reviewing_chain,
execution_agent=execution_agent,
learning_chain=learning_chain,
vectorstore=vectorstore,
)
def add_task(self, task: Dict):
self.pending_list.append(task)
def run(self, max_iterations: Optional[int] = None):
num_iters = 0
print_objective(self.objective)
self.pending_list = deque(self.planning_chain.generate_tasks())
while len(self.pending_list) > 0 and (max_iterations is None or num_iters < max_iterations):
num_iters += 1
print_task_list(self.complete_list, self.pending_list)
task = self.pending_list.popleft()
print_next_task(task)
result = self.execution_agent.execute_task(task["task_name"], self.memory)
if not result: result = "Empty result"
print_task_result(result)
self.complete_list.append({"task_id": task["task_id"], "task_name": task["task_name"]})
self.memory = self.learning_chain.update_memory(
memory=self.memory,
observation=result,
completed_tasks=[t["task_name"] for t in self.complete_list],
pending_tasks=[t["task_name"] for t in self.pending_list],
)
self.pending_list = self.reviewing_chain.review_tasks(
this_task_id=len(self.complete_list),
completed_tasks=[t["task_name"] for t in self.complete_list],
pending_tasks=[t["task_name"] for t in self.pending_list],
context=self.memory)
final_answer = self.execution_agent.execute_task("Provide the final answer", self.memory)
print_end(final_answer)
================================================
FILE: autolang/executor.py
================================================
from typing import List
from pydantic import BaseModel, Field
from langchain.agents import AgentExecutor, Tool
from langchain.llms.base import BaseLLM
from .agent.base import AutonomousAgent
class ExecutionAgent(BaseModel):
agent: AgentExecutor = Field(...)
@classmethod
def from_llm(cls, llm: BaseLLM, objective: str, tools: List[Tool], verbose: bool = True) -> "ExecutionAgent":
agent = AutonomousAgent.from_llm_and_tools(llm=llm, tools=tools, objective=objective, verbose=verbose)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=verbose)
return cls(agent=agent_executor)
def execute_task(self, task: str, context: str) -> str:
for i in range(3):
try:
return self.agent.run({"input": task, "context": context})
except ValueError:
print(f"Value error running executor agent. Will retry {2-i} times")
return "Failed to execute task."
================================================
FILE: autolang/learner.py
================================================
from typing import List
from langchain import LLMChain, PromptTemplate
from langchain.llms.base import BaseLLM
learning_template = """Cass is an AI specialized in information consolidation, part of a larger system that is solving a complex problem in multiple steps. Cass is provided the current information context, and the result of the latest step, and updates the context incorporating the result.
Cass is also provided the list of completed and still pending tasks.
The rest of the system is provided the task lists and context in the same way, so the context should never contain the tasks themselves.
The information context is the only persistent memory the system has; after every step, the context must be updated with all relevant information, such that it contains everything needed to complete the objective.
The ultimate objective is: {objective}.
Completed tasks: {completed_tasks}
The last task output was:
{last_output}
The list of pending tasks: {pending_tasks}
Current context to update:
{context}
Cass will generate an updated context. This context will replace the current context.
Cass: """
learning_prompt = lambda objective: PromptTemplate(
template=learning_template,
partial_variables={"objective": objective},
input_variables=["completed_tasks", "pending_tasks", "last_output", "context"],
)
class LearningChain(LLMChain):
@classmethod
def from_llm(cls, llm: BaseLLM, objective: str, verbose: bool = True) -> "LearningChain":
return cls(prompt=learning_prompt(objective), llm=llm, verbose=verbose)
def update_memory(self, memory: str, observation: str, completed_tasks: List[str], pending_tasks: List[str]):
return self.run(
completed_tasks=completed_tasks,
pending_tasks=pending_tasks,
last_output=observation,
context=memory
)
================================================
FILE: autolang/planner.py
================================================
from typing import List, Dict
from pydantic import Field
from langchain import LLMChain, PromptTemplate
from langchain.agents import Tool
from langchain.llms.base import BaseLLM
from .utils import parse_task_list
planning_template = """You are a task creation AI tasked with generating a full, exhaustive list of tasks to accomplish the following objective: {objective}.
The AI system that will execute these tasks will have access to the following tools:
{tool_strings}
Each task may only use a single tool, but not all tasks need to use one. The task should not specify the tool. The final task should achieve the objective.
Each task will be performed by a capable agent; do not break the problem down into too many tasks.
Aim to keep the list short, and never generate more than 5 tasks. Your response should be each task in a separate line, one line per task.
Use the following format:
1. First task
2. Second task
"""
planning_prompt = lambda objective: PromptTemplate(
template=planning_template,
partial_variables={"objective": objective},
input_variables=["tool_strings"],
)
class PlanningChain(LLMChain):
tool_strings: str = Field(...)
@classmethod
def from_llm(cls, llm: BaseLLM, objective: str, tools: List[Tool] , verbose: bool = True) -> "PlanningChain":
tool_strings = "\n".join([f"> {tool.name}: {tool.description}" for tool in tools])
return cls(prompt=planning_prompt(objective), llm=llm, verbose=verbose, tool_strings=tool_strings)
def generate_tasks(self) -> List[Dict]:
response = self.run(tool_strings=self.tool_strings)
return parse_task_list(response)
================================================
FILE: autolang/printer.py
================================================
def print_objective(objective):
color_print("*****Objective*****", 4)
print(objective)
def print_task_list(complete_list, pending_list):
color_print("*****TASK LIST*****", 5)
print("Completed: ")
for task in complete_list:
print(str(task["task_id"]) + ": " + task["task_name"])
print("\nPending: ")
for task in pending_list:
print(str(task["task_id"]) + ": " + task["task_name"])
def print_next_task(task):
color_print("*****NEXT TASK*****", 2)
print(str(task["task_id"]) + ": " + task["task_name"])
def print_task_result(result):
color_print("*****TASK RESULT*****", 3)
print(result)
def print_end(final_result):
color_print("*****TASK ENDING*****", 1)
print(final_result)
# Keep this at the end of the file: the ANSI escape codes confuse auto-indenting in the rest of the file
def color_print(text: str, color: int):
print(f"\n\033[9{color}m\033[1m{text}\033[0m\033[0m\n")
================================================
FILE: autolang/reviewer.py
================================================
from collections import deque
from typing import List, Dict
from langchain import LLMChain, PromptTemplate
from langchain.llms.base import BaseLLM
from .utils import parse_task_list
reviewing_template = """Albus is a task reviewing and prioritization AI, tasked with cleaning the formatting of and reprioritizing the following tasks: {pending_tasks}.
Albus is provided with the list of completed tasks, the current pending tasks, and the information context that has been generated so far by the system.
Albus will decide if the current completed tasks and context are enough to generate a final answer. If this is the case, Albus will notify this using this exact format:
Review: Can answer
Albus will never generate the final answer.
If there is not enough information to answer, Albus will generate a new list of tasks. The tasks will be ordered by priority, with the most important task first. The tasks will be numbered, starting with {next_task_id}. The following format will be used:
Review: Must continue
#. First task
#. Second task
Albus will use the current pending tasks to generate this list, but it may remove tasks that are no longer required, or add new ones if strictly required.
The ultimate objective is: {objective}.
The following tasks have already been completed: {completed_tasks}.
This is the information context generated so far:
{context}
"""
reviewing_prompt = lambda objective: PromptTemplate(
template=reviewing_template,
partial_variables={"objective": objective},
input_variables=["completed_tasks", "pending_tasks", "context", "next_task_id"],
)
class ReviewingChain(LLMChain):
@classmethod
def from_llm(cls, llm: BaseLLM, objective: str, verbose: bool = True) -> "ReviewingChain":
return cls(prompt=reviewing_prompt(objective), llm=llm, verbose=verbose)
def review_tasks(self, this_task_id: int, completed_tasks: List[str], pending_tasks: List[str], context: str) -> deque[Dict]:
next_task_id = int(this_task_id) + 1
response = self.run(completed_tasks=completed_tasks, pending_tasks=pending_tasks, context=context, next_task_id=next_task_id)
return deque(parse_task_list(response))
================================================
FILE: autolang/utils.py
================================================
def parse_task_list(response):
new_tasks = response.split('\n')
prioritized_task_list = []
for task_string in new_tasks:
if not task_string.strip(): continue
task_parts = task_string.strip().split(".", 1)
if len(task_parts) == 2:
task_id = task_parts[0].strip()
task_name = task_parts[1].strip()
prioritized_task_list.append({"task_id": task_id, "task_name": task_name})
return prioritized_task_list
================================================
FILE: requirements.txt
================================================
faiss_cpu==1.7.3
langchain==0.0.136
pydantic==1.10.7
openai==0.27.4
python-dotenv==1.0.0
================================================
FILE: run_docker.sh
================================================
docker run -i $(docker build -q . --build-arg openai_key="$OPENAI_API_KEY")