Full Code of mem0ai/mem0 for AI

Repository: mem0ai/mem0
Branch: main
Commit: 6663b738d5ac
Files: 1580
Total size: 6.2 MB
Tokens: ~1.7M
Symbols: 4378

Directory structure:
gitextract_jergwx9d/

├── .github/
│   ├── ISSUE_TEMPLATE/
│   │   ├── bug_report.yml
│   │   ├── config.yml
│   │   ├── documentation_issue.yml
│   │   └── feature_request.yml
│   ├── PULL_REQUEST_TEMPLATE.md
│   └── workflows/
│       ├── cd.yml
│       ├── ci.yml
│       ├── openclaw-checks.yml
│       └── ts-sdk-ci.yml
├── .gitignore
├── .pre-commit-config.yaml
├── CONTRIBUTING.md
├── LICENSE
├── LLM.md
├── MIGRATION_GUIDE_v1.0.md
├── Makefile
├── README.md
├── cookbooks/
│   ├── customer-support-chatbot.ipynb
│   ├── helper/
│   │   ├── __init__.py
│   │   └── mem0_teachability.py
│   └── mem0-autogen.ipynb
├── docs/
│   ├── README.md
│   ├── _snippets/
│   │   ├── async-memory-add.mdx
│   │   ├── blank-notif.mdx
│   │   ├── get-help.mdx
│   │   └── paper-release.mdx
│   ├── api-reference/
│   │   ├── entities/
│   │   │   ├── delete-user.mdx
│   │   │   └── get-users.mdx
│   │   ├── events/
│   │   │   ├── get-event.mdx
│   │   │   └── get-events.mdx
│   │   ├── memory/
│   │   │   ├── add-memories.mdx
│   │   │   ├── batch-delete.mdx
│   │   │   ├── batch-update.mdx
│   │   │   ├── create-memory-export.mdx
│   │   │   ├── delete-memories.mdx
│   │   │   ├── delete-memory.mdx
│   │   │   ├── feedback.mdx
│   │   │   ├── get-memories.mdx
│   │   │   ├── get-memory-export.mdx
│   │   │   ├── get-memory.mdx
│   │   │   ├── history-memory.mdx
│   │   │   ├── search-memories.mdx
│   │   │   └── update-memory.mdx
│   │   ├── organization/
│   │   │   ├── add-org-member.mdx
│   │   │   ├── create-org.mdx
│   │   │   ├── delete-org.mdx
│   │   │   ├── get-org-members.mdx
│   │   │   ├── get-org.mdx
│   │   │   └── get-orgs.mdx
│   │   ├── organizations-projects.mdx
│   │   ├── project/
│   │   │   ├── add-project-member.mdx
│   │   │   ├── create-project.mdx
│   │   │   ├── delete-project.mdx
│   │   │   ├── get-project-members.mdx
│   │   │   ├── get-project.mdx
│   │   │   └── get-projects.mdx
│   │   └── webhook/
│   │       ├── create-webhook.mdx
│   │       ├── delete-webhook.mdx
│   │       ├── get-webhook.mdx
│   │       └── update-webhook.mdx
│   ├── api-reference.mdx
│   ├── changelog.mdx
│   ├── components/
│   │   ├── embedders/
│   │   │   ├── config.mdx
│   │   │   ├── models/
│   │   │   │   ├── aws_bedrock.mdx
│   │   │   │   ├── azure_openai.mdx
│   │   │   │   ├── google_AI.mdx
│   │   │   │   ├── huggingface.mdx
│   │   │   │   ├── langchain.mdx
│   │   │   │   ├── lmstudio.mdx
│   │   │   │   ├── ollama.mdx
│   │   │   │   ├── openai.mdx
│   │   │   │   ├── together.mdx
│   │   │   │   └── vertexai.mdx
│   │   │   └── overview.mdx
│   │   ├── llms/
│   │   │   ├── config.mdx
│   │   │   ├── models/
│   │   │   │   ├── anthropic.mdx
│   │   │   │   ├── aws_bedrock.mdx
│   │   │   │   ├── azure_openai.mdx
│   │   │   │   ├── deepseek.mdx
│   │   │   │   ├── google_AI.mdx
│   │   │   │   ├── groq.mdx
│   │   │   │   ├── langchain.mdx
│   │   │   │   ├── litellm.mdx
│   │   │   │   ├── lmstudio.mdx
│   │   │   │   ├── mistral_AI.mdx
│   │   │   │   ├── ollama.mdx
│   │   │   │   ├── openai.mdx
│   │   │   │   ├── sarvam.mdx
│   │   │   │   ├── together.mdx
│   │   │   │   ├── vllm.mdx
│   │   │   │   └── xAI.mdx
│   │   │   └── overview.mdx
│   │   ├── rerankers/
│   │   │   ├── config.mdx
│   │   │   ├── custom-prompts.mdx
│   │   │   ├── models/
│   │   │   │   ├── cohere.mdx
│   │   │   │   ├── huggingface.mdx
│   │   │   │   ├── llm.mdx
│   │   │   │   ├── llm_reranker.mdx
│   │   │   │   ├── sentence_transformer.mdx
│   │   │   │   └── zero_entropy.mdx
│   │   │   ├── optimization.mdx
│   │   │   └── overview.mdx
│   │   └── vectordbs/
│   │       ├── config.mdx
│   │       ├── dbs/
│   │       │   ├── azure.mdx
│   │       │   ├── azure_mysql.mdx
│   │       │   ├── baidu.mdx
│   │       │   ├── cassandra.mdx
│   │       │   ├── chroma.mdx
│   │       │   ├── databricks.mdx
│   │       │   ├── elasticsearch.mdx
│   │       │   ├── faiss.mdx
│   │       │   ├── langchain.mdx
│   │       │   ├── milvus.mdx
│   │       │   ├── mongodb.mdx
│   │       │   ├── neptune_analytics.mdx
│   │       │   ├── opensearch.mdx
│   │       │   ├── pgvector.mdx
│   │       │   ├── pinecone.mdx
│   │       │   ├── qdrant.mdx
│   │       │   ├── redis.mdx
│   │       │   ├── s3_vectors.mdx
│   │       │   ├── supabase.mdx
│   │       │   ├── upstash-vector.mdx
│   │       │   ├── valkey.mdx
│   │       │   ├── vectorize.mdx
│   │       │   ├── vertex_ai.mdx
│   │       │   └── weaviate.mdx
│   │       └── overview.mdx
│   ├── contributing/
│   │   ├── development.mdx
│   │   └── documentation.mdx
│   ├── cookbooks/
│   │   ├── companions/
│   │   │   ├── ai-tutor.mdx
│   │   │   ├── local-companion-ollama.mdx
│   │   │   ├── nodejs-companion.mdx
│   │   │   ├── quickstart-demo.mdx
│   │   │   ├── travel-assistant.mdx
│   │   │   ├── voice-companion-openai.mdx
│   │   │   └── youtube-research.mdx
│   │   ├── essentials/
│   │   │   ├── building-ai-companion.mdx
│   │   │   ├── choosing-memory-architecture-vector-vs-graph.mdx
│   │   │   ├── controlling-memory-ingestion.mdx
│   │   │   ├── entity-partitioning-playbook.mdx
│   │   │   ├── exporting-memories.mdx
│   │   │   ├── memory-expiration-short-and-long-term.mdx
│   │   │   └── tagging-and-organizing-memories.mdx
│   │   ├── frameworks/
│   │   │   ├── chrome-extension.mdx
│   │   │   ├── eliza-os-character.mdx
│   │   │   ├── gemini-3-with-mem0-mcp.mdx
│   │   │   ├── llamaindex-multiagent.mdx
│   │   │   ├── llamaindex-react.mdx
│   │   │   ├── mirofish-swarm-memory.mdx
│   │   │   └── multimodal-retrieval.mdx
│   │   ├── integrations/
│   │   │   ├── agents-sdk-tool.mdx
│   │   │   ├── aws-bedrock.mdx
│   │   │   ├── healthcare-google-adk.mdx
│   │   │   ├── mastra-agent.mdx
│   │   │   ├── neptune-analytics.mdx
│   │   │   ├── openai-tool-calls.mdx
│   │   │   └── tavily-search.mdx
│   │   ├── operations/
│   │   │   ├── content-writing.mdx
│   │   │   ├── deep-research.mdx
│   │   │   ├── email-automation.mdx
│   │   │   ├── support-inbox.mdx
│   │   │   └── team-task-agent.mdx
│   │   └── overview.mdx
│   ├── core-concepts/
│   │   ├── memory-operations/
│   │   │   ├── add.mdx
│   │   │   ├── delete.mdx
│   │   │   ├── search.mdx
│   │   │   └── update.mdx
│   │   └── memory-types.mdx
│   ├── docs.json
│   ├── integrations/
│   │   ├── agentops.mdx
│   │   ├── agno.mdx
│   │   ├── autogen.mdx
│   │   ├── aws-bedrock.mdx
│   │   ├── camel-ai.mdx
│   │   ├── crewai.mdx
│   │   ├── dify.mdx
│   │   ├── elevenlabs.mdx
│   │   ├── flowise.mdx
│   │   ├── google-ai-adk.mdx
│   │   ├── keywords.mdx
│   │   ├── langchain-tools.mdx
│   │   ├── langchain.mdx
│   │   ├── langgraph.mdx
│   │   ├── livekit.mdx
│   │   ├── llama-index.mdx
│   │   ├── mastra.mdx
│   │   ├── openai-agents-sdk.mdx
│   │   ├── openclaw.mdx
│   │   ├── pipecat.mdx
│   │   ├── raycast.mdx
│   │   └── vercel-ai-sdk.mdx
│   ├── integrations.mdx
│   ├── introduction.mdx
│   ├── llms.txt
│   ├── migration/
│   │   ├── api-changes.mdx
│   │   ├── breaking-changes.mdx
│   │   ├── oss-to-platform.mdx
│   │   └── v0-to-v1.mdx
│   ├── open-source/
│   │   ├── configuration.mdx
│   │   ├── features/
│   │   │   ├── async-memory.mdx
│   │   │   ├── custom-fact-extraction-prompt.mdx
│   │   │   ├── custom-update-memory-prompt.mdx
│   │   │   ├── graph-memory.mdx
│   │   │   ├── metadata-filtering.mdx
│   │   │   ├── multimodal-support.mdx
│   │   │   ├── openai_compatibility.mdx
│   │   │   ├── overview.mdx
│   │   │   ├── reranker-search.mdx
│   │   │   ├── reranking.mdx
│   │   │   └── rest-api.mdx
│   │   ├── multimodal-support.mdx
│   │   ├── node-quickstart.mdx
│   │   ├── overview.mdx
│   │   └── python-quickstart.mdx
│   ├── openapi.json
│   ├── openmemory/
│   │   ├── integrations.mdx
│   │   ├── overview.mdx
│   │   └── quickstart.mdx
│   ├── platform/
│   │   ├── advanced-memory-operations.mdx
│   │   ├── contribute.mdx
│   │   ├── faqs.mdx
│   │   ├── features/
│   │   │   ├── advanced-retrieval.mdx
│   │   │   ├── async-client.mdx
│   │   │   ├── async-mode-default-change.mdx
│   │   │   ├── contextual-add.mdx
│   │   │   ├── criteria-retrieval.mdx
│   │   │   ├── custom-categories.mdx
│   │   │   ├── custom-instructions.mdx
│   │   │   ├── direct-import.mdx
│   │   │   ├── entity-scoped-memory.mdx
│   │   │   ├── expiration-date.mdx
│   │   │   ├── feedback-mechanism.mdx
│   │   │   ├── graph-memory.mdx
│   │   │   ├── graph-threshold.mdx
│   │   │   ├── group-chat.mdx
│   │   │   ├── mcp-integration.mdx
│   │   │   ├── memory-export.mdx
│   │   │   ├── multimodal-support.mdx
│   │   │   ├── platform-overview.mdx
│   │   │   ├── timestamp.mdx
│   │   │   ├── v2-memory-filters.mdx
│   │   │   └── webhooks.mdx
│   │   ├── mem0-mcp.mdx
│   │   ├── overview.mdx
│   │   ├── platform-vs-oss.mdx
│   │   └── quickstart.mdx
│   └── templates/
│       ├── api_reference_template.mdx
│       ├── concept_guide_template.mdx
│       ├── cookbook_template.mdx
│       ├── feature_guide_template.mdx
│       ├── integration_guide_template.mdx
│       ├── migration_guide_template.mdx
│       ├── operation_guide_template.mdx
│       ├── parameters_reference_template.mdx
│       ├── quickstart_template.mdx
│       ├── release_notes_template.mdx
│       ├── section_overview_template.mdx
│       └── troubleshooting_playbook_template.mdx
├── embedchain/
│   ├── CITATION.cff
│   ├── CONTRIBUTING.md
│   ├── LICENSE
│   ├── Makefile
│   ├── README.md
│   ├── configs/
│   │   ├── anthropic.yaml
│   │   ├── aws_bedrock.yaml
│   │   ├── azure_openai.yaml
│   │   ├── chroma.yaml
│   │   ├── chunker.yaml
│   │   ├── clarifai.yaml
│   │   ├── cohere.yaml
│   │   ├── full-stack.yaml
│   │   ├── google.yaml
│   │   ├── gpt4.yaml
│   │   ├── gpt4all.yaml
│   │   ├── huggingface.yaml
│   │   ├── jina.yaml
│   │   ├── llama2.yaml
│   │   ├── ollama.yaml
│   │   ├── opensearch.yaml
│   │   ├── opensource.yaml
│   │   ├── pinecone.yaml
│   │   ├── pipeline.yaml
│   │   ├── together.yaml
│   │   ├── vertexai.yaml
│   │   ├── vllm.yaml
│   │   └── weaviate.yaml
│   ├── docs/
│   │   ├── Makefile
│   │   ├── README.md
│   │   ├── _snippets/
│   │   │   ├── get-help.mdx
│   │   │   ├── missing-data-source-tip.mdx
│   │   │   ├── missing-llm-tip.mdx
│   │   │   └── missing-vector-db-tip.mdx
│   │   ├── api-reference/
│   │   │   ├── advanced/
│   │   │   │   └── configuration.mdx
│   │   │   ├── app/
│   │   │   │   ├── add.mdx
│   │   │   │   ├── chat.mdx
│   │   │   │   ├── delete.mdx
│   │   │   │   ├── deploy.mdx
│   │   │   │   ├── evaluate.mdx
│   │   │   │   ├── get.mdx
│   │   │   │   ├── overview.mdx
│   │   │   │   ├── query.mdx
│   │   │   │   ├── reset.mdx
│   │   │   │   └── search.mdx
│   │   │   ├── overview.mdx
│   │   │   └── store/
│   │   │       ├── ai-assistants.mdx
│   │   │       └── openai-assistant.mdx
│   │   ├── community/
│   │   │   └── connect-with-us.mdx
│   │   ├── components/
│   │   │   ├── data-sources/
│   │   │   │   ├── audio.mdx
│   │   │   │   ├── beehiiv.mdx
│   │   │   │   ├── csv.mdx
│   │   │   │   ├── custom.mdx
│   │   │   │   ├── data-type-handling.mdx
│   │   │   │   ├── directory.mdx
│   │   │   │   ├── discord.mdx
│   │   │   │   ├── discourse.mdx
│   │   │   │   ├── docs-site.mdx
│   │   │   │   ├── docx.mdx
│   │   │   │   ├── dropbox.mdx
│   │   │   │   ├── excel-file.mdx
│   │   │   │   ├── github.mdx
│   │   │   │   ├── gmail.mdx
│   │   │   │   ├── google-drive.mdx
│   │   │   │   ├── image.mdx
│   │   │   │   ├── json.mdx
│   │   │   │   ├── mdx.mdx
│   │   │   │   ├── mysql.mdx
│   │   │   │   ├── notion.mdx
│   │   │   │   ├── openapi.mdx
│   │   │   │   ├── overview.mdx
│   │   │   │   ├── pdf-file.mdx
│   │   │   │   ├── postgres.mdx
│   │   │   │   ├── qna.mdx
│   │   │   │   ├── sitemap.mdx
│   │   │   │   ├── slack.mdx
│   │   │   │   ├── substack.mdx
│   │   │   │   ├── text-file.mdx
│   │   │   │   ├── text.mdx
│   │   │   │   ├── web-page.mdx
│   │   │   │   ├── xml.mdx
│   │   │   │   ├── youtube-channel.mdx
│   │   │   │   └── youtube-video.mdx
│   │   │   ├── embedding-models.mdx
│   │   │   ├── evaluation.mdx
│   │   │   ├── introduction.mdx
│   │   │   ├── llms.mdx
│   │   │   ├── retrieval-methods.mdx
│   │   │   ├── vector-databases/
│   │   │   │   ├── chromadb.mdx
│   │   │   │   ├── elasticsearch.mdx
│   │   │   │   ├── lancedb.mdx
│   │   │   │   ├── opensearch.mdx
│   │   │   │   ├── pinecone.mdx
│   │   │   │   ├── qdrant.mdx
│   │   │   │   ├── weaviate.mdx
│   │   │   │   └── zilliz.mdx
│   │   │   └── vector-databases.mdx
│   │   ├── contribution/
│   │   │   ├── dev.mdx
│   │   │   ├── docs.mdx
│   │   │   ├── guidelines.mdx
│   │   │   └── python.mdx
│   │   ├── deployment/
│   │   │   ├── fly_io.mdx
│   │   │   ├── gradio_app.mdx
│   │   │   ├── huggingface_spaces.mdx
│   │   │   ├── modal_com.mdx
│   │   │   ├── railway.mdx
│   │   │   ├── render_com.mdx
│   │   │   └── streamlit_io.mdx
│   │   ├── development.mdx
│   │   ├── examples/
│   │   │   ├── chat-with-PDF.mdx
│   │   │   ├── community/
│   │   │   │   └── showcase.mdx
│   │   │   ├── discord_bot.mdx
│   │   │   ├── full_stack.mdx
│   │   │   ├── nextjs-assistant.mdx
│   │   │   ├── notebooks-and-replits.mdx
│   │   │   ├── openai-assistant.mdx
│   │   │   ├── opensource-assistant.mdx
│   │   │   ├── poe_bot.mdx
│   │   │   ├── rest-api/
│   │   │   │   ├── add-data.mdx
│   │   │   │   ├── chat.mdx
│   │   │   │   ├── check-status.mdx
│   │   │   │   ├── create.mdx
│   │   │   │   ├── delete.mdx
│   │   │   │   ├── deploy.mdx
│   │   │   │   ├── get-all-apps.mdx
│   │   │   │   ├── get-data.mdx
│   │   │   │   ├── getting-started.mdx
│   │   │   │   └── query.mdx
│   │   │   ├── showcase.mdx
│   │   │   ├── slack-AI.mdx
│   │   │   ├── slack_bot.mdx
│   │   │   ├── telegram_bot.mdx
│   │   │   └── whatsapp_bot.mdx
│   │   ├── get-started/
│   │   │   ├── deployment.mdx
│   │   │   ├── faq.mdx
│   │   │   ├── full-stack.mdx
│   │   │   ├── integrations.mdx
│   │   │   ├── introduction.mdx
│   │   │   └── quickstart.mdx
│   │   ├── integration/
│   │   │   ├── chainlit.mdx
│   │   │   ├── helicone.mdx
│   │   │   ├── langsmith.mdx
│   │   │   ├── openlit.mdx
│   │   │   └── streamlit-mistral.mdx
│   │   ├── mint.json
│   │   ├── product/
│   │   │   └── release-notes.mdx
│   │   ├── rest-api.json
│   │   ├── support/
│   │   │   └── get-help.mdx
│   │   └── use-cases/
│   │       ├── chatbots.mdx
│   │       ├── introduction.mdx
│   │       ├── question-answering.mdx
│   │       └── semantic-search.mdx
│   ├── embedchain/
│   │   ├── __init__.py
│   │   ├── alembic.ini
│   │   ├── app.py
│   │   ├── bots/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── discord.py
│   │   │   ├── poe.py
│   │   │   ├── slack.py
│   │   │   └── whatsapp.py
│   │   ├── cache.py
│   │   ├── chunkers/
│   │   │   ├── __init__.py
│   │   │   ├── audio.py
│   │   │   ├── base_chunker.py
│   │   │   ├── beehiiv.py
│   │   │   ├── common_chunker.py
│   │   │   ├── discourse.py
│   │   │   ├── docs_site.py
│   │   │   ├── docx_file.py
│   │   │   ├── excel_file.py
│   │   │   ├── gmail.py
│   │   │   ├── google_drive.py
│   │   │   ├── image.py
│   │   │   ├── json.py
│   │   │   ├── mdx.py
│   │   │   ├── mysql.py
│   │   │   ├── notion.py
│   │   │   ├── openapi.py
│   │   │   ├── pdf_file.py
│   │   │   ├── postgres.py
│   │   │   ├── qna_pair.py
│   │   │   ├── rss_feed.py
│   │   │   ├── sitemap.py
│   │   │   ├── slack.py
│   │   │   ├── substack.py
│   │   │   ├── table.py
│   │   │   ├── text.py
│   │   │   ├── unstructured_file.py
│   │   │   ├── web_page.py
│   │   │   ├── xml.py
│   │   │   └── youtube_video.py
│   │   ├── cli.py
│   │   ├── client.py
│   │   ├── config/
│   │   │   ├── __init__.py
│   │   │   ├── add_config.py
│   │   │   ├── app_config.py
│   │   │   ├── base_app_config.py
│   │   │   ├── base_config.py
│   │   │   ├── cache_config.py
│   │   │   ├── embedder/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── aws_bedrock.py
│   │   │   │   ├── base.py
│   │   │   │   ├── google.py
│   │   │   │   └── ollama.py
│   │   │   ├── evaluation/
│   │   │   │   ├── __init__.py
│   │   │   │   └── base.py
│   │   │   ├── llm/
│   │   │   │   ├── __init__.py
│   │   │   │   └── base.py
│   │   │   ├── mem0_config.py
│   │   │   ├── model_prices_and_context_window.json
│   │   │   ├── vector_db/
│   │   │   │   ├── base.py
│   │   │   │   ├── chroma.py
│   │   │   │   ├── elasticsearch.py
│   │   │   │   ├── lancedb.py
│   │   │   │   ├── opensearch.py
│   │   │   │   ├── pinecone.py
│   │   │   │   ├── qdrant.py
│   │   │   │   ├── weaviate.py
│   │   │   │   └── zilliz.py
│   │   │   └── vectordb/
│   │   │       └── __init__.py
│   │   ├── constants.py
│   │   ├── core/
│   │   │   └── __init__.py
│   │   ├── data_formatter/
│   │   │   ├── __init__.py
│   │   │   └── data_formatter.py
│   │   ├── deployment/
│   │   │   ├── fly.io/
│   │   │   │   ├── .dockerignore
│   │   │   │   ├── Dockerfile
│   │   │   │   ├── app.py
│   │   │   │   └── requirements.txt
│   │   │   ├── gradio.app/
│   │   │   │   ├── app.py
│   │   │   │   └── requirements.txt
│   │   │   ├── modal.com/
│   │   │   │   ├── .gitignore
│   │   │   │   ├── app.py
│   │   │   │   └── requirements.txt
│   │   │   ├── render.com/
│   │   │   │   ├── .gitignore
│   │   │   │   ├── app.py
│   │   │   │   ├── render.yaml
│   │   │   │   └── requirements.txt
│   │   │   └── streamlit.io/
│   │   │       ├── .streamlit/
│   │   │       │   └── secrets.toml
│   │   │       ├── app.py
│   │   │       └── requirements.txt
│   │   ├── embedchain.py
│   │   ├── embedder/
│   │   │   ├── __init__.py
│   │   │   ├── aws_bedrock.py
│   │   │   ├── azure_openai.py
│   │   │   ├── base.py
│   │   │   ├── clarifai.py
│   │   │   ├── cohere.py
│   │   │   ├── google.py
│   │   │   ├── gpt4all.py
│   │   │   ├── huggingface.py
│   │   │   ├── mistralai.py
│   │   │   ├── nvidia.py
│   │   │   ├── ollama.py
│   │   │   ├── openai.py
│   │   │   └── vertexai.py
│   │   ├── evaluation/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   └── metrics/
│   │   │       ├── __init__.py
│   │   │       ├── answer_relevancy.py
│   │   │       ├── context_relevancy.py
│   │   │       └── groundedness.py
│   │   ├── factory.py
│   │   ├── helpers/
│   │   │   ├── __init__.py
│   │   │   ├── callbacks.py
│   │   │   └── json_serializable.py
│   │   ├── llm/
│   │   │   ├── __init__.py
│   │   │   ├── anthropic.py
│   │   │   ├── aws_bedrock.py
│   │   │   ├── azure_openai.py
│   │   │   ├── base.py
│   │   │   ├── clarifai.py
│   │   │   ├── cohere.py
│   │   │   ├── google.py
│   │   │   ├── gpt4all.py
│   │   │   ├── groq.py
│   │   │   ├── huggingface.py
│   │   │   ├── jina.py
│   │   │   ├── llama2.py
│   │   │   ├── mistralai.py
│   │   │   ├── nvidia.py
│   │   │   ├── ollama.py
│   │   │   ├── openai.py
│   │   │   ├── together.py
│   │   │   ├── vertex_ai.py
│   │   │   └── vllm.py
│   │   ├── loaders/
│   │   │   ├── __init__.py
│   │   │   ├── audio.py
│   │   │   ├── base_loader.py
│   │   │   ├── beehiiv.py
│   │   │   ├── csv.py
│   │   │   ├── directory_loader.py
│   │   │   ├── discord.py
│   │   │   ├── discourse.py
│   │   │   ├── docs_site_loader.py
│   │   │   ├── docx_file.py
│   │   │   ├── dropbox.py
│   │   │   ├── excel_file.py
│   │   │   ├── github.py
│   │   │   ├── gmail.py
│   │   │   ├── google_drive.py
│   │   │   ├── image.py
│   │   │   ├── json.py
│   │   │   ├── local_qna_pair.py
│   │   │   ├── local_text.py
│   │   │   ├── mdx.py
│   │   │   ├── mysql.py
│   │   │   ├── notion.py
│   │   │   ├── openapi.py
│   │   │   ├── pdf_file.py
│   │   │   ├── postgres.py
│   │   │   ├── rss_feed.py
│   │   │   ├── sitemap.py
│   │   │   ├── slack.py
│   │   │   ├── substack.py
│   │   │   ├── text_file.py
│   │   │   ├── unstructured_file.py
│   │   │   ├── web_page.py
│   │   │   ├── xml.py
│   │   │   ├── youtube_channel.py
│   │   │   └── youtube_video.py
│   │   ├── memory/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── message.py
│   │   │   └── utils.py
│   │   ├── migrations/
│   │   │   ├── env.py
│   │   │   ├── script.py.mako
│   │   │   └── versions/
│   │   │       └── 40a327b3debd_create_initial_migrations.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── data_type.py
│   │   │   ├── embedding_functions.py
│   │   │   ├── providers.py
│   │   │   └── vector_dimensions.py
│   │   ├── pipeline.py
│   │   ├── store/
│   │   │   ├── __init__.py
│   │   │   └── assistants.py
│   │   ├── telemetry/
│   │   │   ├── __init__.py
│   │   │   └── posthog.py
│   │   ├── utils/
│   │   │   ├── __init__.py
│   │   │   ├── cli.py
│   │   │   ├── evaluation.py
│   │   │   └── misc.py
│   │   └── vectordb/
│   │       ├── __init__.py
│   │       ├── base.py
│   │       ├── chroma.py
│   │       ├── elasticsearch.py
│   │       ├── lancedb.py
│   │       ├── opensearch.py
│   │       ├── pinecone.py
│   │       ├── qdrant.py
│   │       ├── weaviate.py
│   │       └── zilliz.py
│   ├── examples/
│   │   ├── api_server/
│   │   │   ├── .dockerignore
│   │   │   ├── .gitignore
│   │   │   ├── Dockerfile
│   │   │   ├── README.md
│   │   │   ├── api_server.py
│   │   │   ├── docker-compose.yml
│   │   │   ├── requirements.txt
│   │   │   └── variables.env
│   │   ├── chainlit/
│   │   │   ├── .gitignore
│   │   │   ├── README.md
│   │   │   ├── app.py
│   │   │   ├── chainlit.md
│   │   │   └── requirements.txt
│   │   ├── chat-pdf/
│   │   │   ├── README.md
│   │   │   ├── app.py
│   │   │   ├── embedchain.json
│   │   │   └── requirements.txt
│   │   ├── discord_bot/
│   │   │   ├── .dockerignore
│   │   │   ├── .gitignore
│   │   │   ├── Dockerfile
│   │   │   ├── README.md
│   │   │   ├── discord_bot.py
│   │   │   ├── docker-compose.yml
│   │   │   ├── requirements.txt
│   │   │   └── variables.env
│   │   ├── full_stack/
│   │   │   ├── .dockerignore
│   │   │   ├── README.md
│   │   │   ├── backend/
│   │   │   │   ├── .dockerignore
│   │   │   │   ├── .gitignore
│   │   │   │   ├── Dockerfile
│   │   │   │   ├── models.py
│   │   │   │   ├── paths.py
│   │   │   │   ├── requirements.txt
│   │   │   │   ├── routes/
│   │   │   │   │   ├── chat_response.py
│   │   │   │   │   ├── dashboard.py
│   │   │   │   │   └── sources.py
│   │   │   │   └── server.py
│   │   │   ├── docker-compose.yml
│   │   │   └── frontend/
│   │   │       ├── .dockerignore
│   │   │       ├── .eslintrc.json
│   │   │       ├── .gitignore
│   │   │       ├── Dockerfile
│   │   │       ├── jsconfig.json
│   │   │       ├── next.config.js
│   │   │       ├── package.json
│   │   │       ├── postcss.config.js
│   │   │       ├── src/
│   │   │       │   ├── components/
│   │   │       │   │   ├── PageWrapper.js
│   │   │       │   │   ├── chat/
│   │   │       │   │   │   ├── BotWrapper.js
│   │   │       │   │   │   └── HumanWrapper.js
│   │   │       │   │   └── dashboard/
│   │   │       │   │       ├── CreateBot.js
│   │   │       │   │       ├── DeleteBot.js
│   │   │       │   │       ├── PurgeChats.js
│   │   │       │   │       └── SetOpenAIKey.js
│   │   │       │   ├── containers/
│   │   │       │   │   ├── ChatWindow.js
│   │   │       │   │   ├── SetSources.js
│   │   │       │   │   └── Sidebar.js
│   │   │       │   ├── pages/
│   │   │       │   │   ├── [bot_slug]/
│   │   │       │   │   │   └── app.js
│   │   │       │   │   ├── _app.js
│   │   │       │   │   ├── _document.js
│   │   │       │   │   └── index.js
│   │   │       │   └── styles/
│   │   │       │       └── globals.css
│   │   │       └── tailwind.config.js
│   │   ├── mistral-streamlit/
│   │   │   ├── README.md
│   │   │   ├── app.py
│   │   │   ├── config.yaml
│   │   │   └── requirements.txt
│   │   ├── nextjs/
│   │   │   ├── README.md
│   │   │   ├── ec_app/
│   │   │   │   ├── .dockerignore
│   │   │   │   ├── Dockerfile
│   │   │   │   ├── app.py
│   │   │   │   ├── embedchain.json
│   │   │   │   ├── fly.toml
│   │   │   │   └── requirements.txt
│   │   │   ├── nextjs_discord/
│   │   │   │   ├── .dockerignore
│   │   │   │   ├── Dockerfile
│   │   │   │   ├── app.py
│   │   │   │   ├── embedchain.json
│   │   │   │   ├── fly.toml
│   │   │   │   └── requirements.txt
│   │   │   ├── nextjs_slack/
│   │   │   │   ├── .dockerignore
│   │   │   │   ├── Dockerfile
│   │   │   │   ├── app.py
│   │   │   │   ├── embedchain.json
│   │   │   │   ├── fly.toml
│   │   │   │   └── requirements.txt
│   │   │   └── requirements.txt
│   │   ├── private-ai/
│   │   │   ├── README.md
│   │   │   ├── config.yaml
│   │   │   ├── privateai.py
│   │   │   └── requirements.txt
│   │   ├── rest-api/
│   │   │   ├── .dockerignore
│   │   │   ├── .gitignore
│   │   │   ├── Dockerfile
│   │   │   ├── README.md
│   │   │   ├── __init__.py
│   │   │   ├── bruno/
│   │   │   │   └── ec-rest-api/
│   │   │   │       ├── bruno.json
│   │   │   │       ├── default_add.bru
│   │   │   │       ├── default_chat.bru
│   │   │   │       ├── default_query.bru
│   │   │   │       └── ping.bru
│   │   │   ├── configs/
│   │   │   │   └── README.md
│   │   │   ├── database.py
│   │   │   ├── default.yaml
│   │   │   ├── main.py
│   │   │   ├── models.py
│   │   │   ├── requirements.txt
│   │   │   ├── sample-config.yaml
│   │   │   ├── services.py
│   │   │   └── utils.py
│   │   ├── sadhguru-ai/
│   │   │   ├── README.md
│   │   │   ├── app.py
│   │   │   └── requirements.txt
│   │   ├── slack_bot/
│   │   │   ├── Dockerfile
│   │   │   └── requirements.txt
│   │   ├── telegram_bot/
│   │   │   ├── .gitignore
│   │   │   ├── Dockerfile
│   │   │   ├── README.md
│   │   │   ├── requirements.txt
│   │   │   └── telegram_bot.py
│   │   ├── unacademy-ai/
│   │   │   ├── README.md
│   │   │   ├── app.py
│   │   │   └── requirements.txt
│   │   └── whatsapp_bot/
│   │       ├── .gitignore
│   │       ├── Dockerfile
│   │       ├── README.md
│   │       ├── requirements.txt
│   │       ├── run.py
│   │       └── whatsapp_bot.py
│   ├── notebooks/
│   │   ├── anthropic.ipynb
│   │   ├── aws-bedrock.ipynb
│   │   ├── azure-openai.ipynb
│   │   ├── azure_openai.yaml
│   │   ├── chromadb.ipynb
│   │   ├── clarifai.ipynb
│   │   ├── cohere.ipynb
│   │   ├── elasticsearch.ipynb
│   │   ├── embedchain-chromadb-server.ipynb
│   │   ├── embedchain-docs-site-example.ipynb
│   │   ├── gpt4all.ipynb
│   │   ├── hugging_face_hub.ipynb
│   │   ├── jina.ipynb
│   │   ├── lancedb.ipynb
│   │   ├── llama2.ipynb
│   │   ├── ollama.ipynb
│   │   ├── openai.ipynb
│   │   ├── openai_azure.yaml
│   │   ├── opensearch.ipynb
│   │   ├── pinecone.ipynb
│   │   ├── together.ipynb
│   │   └── vertex_ai.ipynb
│   ├── poetry.toml
│   ├── pyproject.toml
│   └── tests/
│       ├── __init__.py
│       ├── chunkers/
│       │   ├── test_base_chunker.py
│       │   ├── test_chunkers.py
│       │   └── test_text.py
│       ├── conftest.py
│       ├── embedchain/
│       │   ├── test_add.py
│       │   ├── test_embedchain.py
│       │   └── test_utils.py
│       ├── embedder/
│       │   ├── test_aws_bedrock_embedder.py
│       │   ├── test_azure_openai_embedder.py
│       │   ├── test_embedder.py
│       │   └── test_huggingface_embedder.py
│       ├── evaluation/
│       │   ├── test_answer_relevancy_metric.py
│       │   ├── test_context_relevancy_metric.py
│       │   └── test_groundedness_metric.py
│       ├── helper_classes/
│       │   └── test_json_serializable.py
│       ├── llm/
│       │   ├── conftest.py
│       │   ├── test_anthrophic.py
│       │   ├── test_aws_bedrock.py
│       │   ├── test_azure_openai.py
│       │   ├── test_base_llm.py
│       │   ├── test_chat.py
│       │   ├── test_clarifai.py
│       │   ├── test_cohere.py
│       │   ├── test_generate_prompt.py
│       │   ├── test_google.py
│       │   ├── test_gpt4all.py
│       │   ├── test_huggingface.py
│       │   ├── test_jina.py
│       │   ├── test_llama2.py
│       │   ├── test_mistralai.py
│       │   ├── test_ollama.py
│       │   ├── test_openai.py
│       │   ├── test_query.py
│       │   ├── test_together.py
│       │   └── test_vertex_ai.py
│       ├── loaders/
│       │   ├── test_audio.py
│       │   ├── test_csv.py
│       │   ├── test_discourse.py
│       │   ├── test_docs_site.py
│       │   ├── test_docs_site_loader.py
│       │   ├── test_docx_file.py
│       │   ├── test_dropbox.py
│       │   ├── test_excel_file.py
│       │   ├── test_github.py
│       │   ├── test_gmail.py
│       │   ├── test_google_drive.py
│       │   ├── test_json.py
│       │   ├── test_local_qna_pair.py
│       │   ├── test_local_text.py
│       │   ├── test_mdx.py
│       │   ├── test_mysql.py
│       │   ├── test_notion.py
│       │   ├── test_openapi.py
│       │   ├── test_pdf_file.py
│       │   ├── test_postgres.py
│       │   ├── test_slack.py
│       │   ├── test_web_page.py
│       │   ├── test_xml.py
│       │   └── test_youtube_video.py
│       ├── memory/
│       │   ├── test_chat_memory.py
│       │   └── test_memory_messages.py
│       ├── models/
│       │   └── test_data_type.py
│       ├── telemetry/
│       │   └── test_posthog.py
│       ├── test_app.py
│       ├── test_client.py
│       ├── test_factory.py
│       ├── test_utils.py
│       └── vectordb/
│           ├── test_chroma_db.py
│           ├── test_elasticsearch_db.py
│           ├── test_lancedb.py
│           ├── test_pinecone.py
│           ├── test_qdrant.py
│           ├── test_weaviate.py
│           └── test_zilliz_db.py
├── evaluation/
│   ├── Makefile
│   ├── README.md
│   ├── evals.py
│   ├── generate_scores.py
│   ├── metrics/
│   │   ├── llm_judge.py
│   │   └── utils.py
│   ├── prompts.py
│   ├── run_experiments.py
│   └── src/
│       ├── langmem.py
│       ├── memzero/
│       │   ├── add.py
│       │   └── search.py
│       ├── openai/
│       │   └── predict.py
│       ├── rag.py
│       ├── utils.py
│       └── zep/
│           ├── add.py
│           └── search.py
├── examples/
│   ├── graph-db-demo/
│   │   ├── kuzu-example.ipynb
│   │   ├── memgraph-example.ipynb
│   │   ├── neo4j-example.ipynb
│   │   ├── neptune-db-example.ipynb
│   │   └── neptune-example.ipynb
│   ├── mem0-demo/
│   │   ├── .gitignore
│   │   ├── app/
│   │   │   ├── api/
│   │   │   │   └── chat/
│   │   │   │       └── route.ts
│   │   │   ├── assistant.tsx
│   │   │   ├── globals.css
│   │   │   ├── layout.tsx
│   │   │   └── page.tsx
│   │   ├── components/
│   │   │   ├── assistant-ui/
│   │   │   │   ├── markdown-text.tsx
│   │   │   │   ├── memory-indicator.tsx
│   │   │   │   ├── memory-ui.tsx
│   │   │   │   ├── theme-aware-logo.tsx
│   │   │   │   ├── thread-list.tsx
│   │   │   │   ├── thread.tsx
│   │   │   │   └── tooltip-icon-button.tsx
│   │   │   ├── mem0/
│   │   │   │   ├── github-button.tsx
│   │   │   │   ├── markdown.css
│   │   │   │   ├── markdown.tsx
│   │   │   │   └── theme-aware-logo.tsx
│   │   │   └── ui/
│   │   │       ├── alert-dialog.tsx
│   │   │       ├── avatar.tsx
│   │   │       ├── badge.tsx
│   │   │       ├── button.tsx
│   │   │       ├── popover.tsx
│   │   │       ├── scroll-area.tsx
│   │   │       └── tooltip.tsx
│   │   ├── components.json
│   │   ├── eslint.config.mjs
│   │   ├── lib/
│   │   │   └── utils.ts
│   │   ├── next-env.d.ts
│   │   ├── next.config.ts
│   │   ├── package.json
│   │   ├── postcss.config.mjs
│   │   ├── tailwind.config.ts
│   │   └── tsconfig.json
│   ├── misc/
│   │   ├── diet_assistant_voice_cartesia.py
│   │   ├── fitness_checker.py
│   │   ├── healthcare_assistant_google_adk.py
│   │   ├── movie_recommendation_grok3.py
│   │   ├── multillm_memory.py
│   │   ├── personal_assistant_agno.py
│   │   ├── personalized_search.py
│   │   ├── strands_agent_aws_elasticache_neptune.py
│   │   ├── study_buddy.py
│   │   ├── test.py
│   │   ├── vllm_example.py
│   │   └── voice_assistant_elevenlabs.py
│   ├── multiagents/
│   │   └── llamaindex_learning_system.py
│   ├── multimodal-demo/
│   │   ├── .gitattributes
│   │   ├── .gitignore
│   │   ├── components.json
│   │   ├── eslint.config.js
│   │   ├── index.html
│   │   ├── package.json
│   │   ├── postcss.config.js
│   │   ├── src/
│   │   │   ├── App.tsx
│   │   │   ├── components/
│   │   │   │   ├── api-settings-popup.tsx
│   │   │   │   ├── chevron-toggle.tsx
│   │   │   │   ├── header.tsx
│   │   │   │   ├── input-area.tsx
│   │   │   │   ├── memories.tsx
│   │   │   │   ├── messages.tsx
│   │   │   │   └── ui/
│   │   │   │       ├── avatar.tsx
│   │   │   │       ├── badge.tsx
│   │   │   │       ├── button.tsx
│   │   │   │       ├── card.tsx
│   │   │   │       ├── dialog.tsx
│   │   │   │       ├── input.tsx
│   │   │   │       ├── label.tsx
│   │   │   │       ├── scroll-area.tsx
│   │   │   │       └── select.tsx
│   │   │   ├── constants/
│   │   │   │   └── messages.ts
│   │   │   ├── contexts/
│   │   │   │   └── GlobalContext.tsx
│   │   │   ├── hooks/
│   │   │   │   ├── useAuth.ts
│   │   │   │   ├── useChat.ts
│   │   │   │   └── useFileHandler.ts
│   │   │   ├── index.css
│   │   │   ├── libs/
│   │   │   │   └── utils.ts
│   │   │   ├── main.tsx
│   │   │   ├── page.tsx
│   │   │   ├── pages/
│   │   │   │   └── home.tsx
│   │   │   ├── types.ts
│   │   │   ├── utils/
│   │   │   │   └── fileUtils.ts
│   │   │   └── vite-env.d.ts
│   │   ├── tailwind.config.js
│   │   ├── tsconfig.app.json
│   │   ├── tsconfig.json
│   │   ├── tsconfig.node.json
│   │   ├── useChat.ts
│   │   └── vite.config.ts
│   ├── openai-inbuilt-tools/
│   │   ├── index.js
│   │   └── package.json
│   ├── vercel-ai-sdk-chat-app/
│   │   ├── .gitattributes
│   │   ├── .gitignore
│   │   ├── components.json
│   │   ├── eslint.config.js
│   │   ├── index.html
│   │   ├── package.json
│   │   ├── postcss.config.js
│   │   ├── src/
│   │   │   ├── App.tsx
│   │   │   ├── components/
│   │   │   │   ├── api-settings-popup.tsx
│   │   │   │   ├── chevron-toggle.tsx
│   │   │   │   ├── header.tsx
│   │   │   │   ├── input-area.tsx
│   │   │   │   ├── memories.tsx
│   │   │   │   ├── messages.tsx
│   │   │   │   └── ui/
│   │   │   │       ├── avatar.tsx
│   │   │   │       ├── badge.tsx
│   │   │   │       ├── button.tsx
│   │   │   │       ├── card.tsx
│   │   │   │       ├── dialog.tsx
│   │   │   │       ├── input.tsx
│   │   │   │       ├── label.tsx
│   │   │   │       ├── scroll-area.tsx
│   │   │   │       └── select.tsx
│   │   │   ├── constants/
│   │   │   │   └── messages.ts
│   │   │   ├── contexts/
│   │   │   │   └── GlobalContext.tsx
│   │   │   ├── hooks/
│   │   │   │   ├── useAuth.ts
│   │   │   │   ├── useChat.ts
│   │   │   │   └── useFileHandler.ts
│   │   │   ├── index.css
│   │   │   ├── libs/
│   │   │   │   └── utils.ts
│   │   │   ├── main.tsx
│   │   │   ├── page.tsx
│   │   │   ├── pages/
│   │   │   │   └── home.tsx
│   │   │   ├── types.ts
│   │   │   ├── utils/
│   │   │   │   └── fileUtils.ts
│   │   │   └── vite-env.d.ts
│   │   ├── tailwind.config.js
│   │   ├── tsconfig.app.json
│   │   ├── tsconfig.json
│   │   ├── tsconfig.node.json
│   │   └── vite.config.ts
│   └── yt-assistant-chrome/
│       ├── .gitignore
│       ├── README.md
│       ├── manifest.json
│       ├── package.json
│       ├── public/
│       │   ├── options.html
│       │   └── popup.html
│       ├── src/
│       │   ├── background.js
│       │   ├── content.js
│       │   ├── options.js
│       │   └── popup.js
│       ├── styles/
│       │   ├── content.css
│       │   ├── options.css
│       │   └── popup.css
│       └── webpack.config.js
├── mem0/
│   ├── __init__.py
│   ├── client/
│   │   ├── __init__.py
│   │   ├── main.py
│   │   ├── project.py
│   │   └── utils.py
│   ├── configs/
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── embeddings/
│   │   │   ├── __init__.py
│   │   │   └── base.py
│   │   ├── enums.py
│   │   ├── llms/
│   │   │   ├── __init__.py
│   │   │   ├── anthropic.py
│   │   │   ├── aws_bedrock.py
│   │   │   ├── azure.py
│   │   │   ├── base.py
│   │   │   ├── deepseek.py
│   │   │   ├── lmstudio.py
│   │   │   ├── ollama.py
│   │   │   ├── openai.py
│   │   │   └── vllm.py
│   │   ├── prompts.py
│   │   ├── rerankers/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── cohere.py
│   │   │   ├── config.py
│   │   │   ├── huggingface.py
│   │   │   ├── llm.py
│   │   │   ├── sentence_transformer.py
│   │   │   └── zero_entropy.py
│   │   └── vector_stores/
│   │       ├── __init__.py
│   │       ├── azure_ai_search.py
│   │       ├── azure_mysql.py
│   │       ├── baidu.py
│   │       ├── cassandra.py
│   │       ├── chroma.py
│   │       ├── databricks.py
│   │       ├── elasticsearch.py
│   │       ├── faiss.py
│   │       ├── langchain.py
│   │       ├── milvus.py
│   │       ├── mongodb.py
│   │       ├── neptune.py
│   │       ├── opensearch.py
│   │       ├── pgvector.py
│   │       ├── pinecone.py
│   │       ├── qdrant.py
│   │       ├── redis.py
│   │       ├── s3_vectors.py
│   │       ├── supabase.py
│   │       ├── upstash_vector.py
│   │       ├── valkey.py
│   │       ├── vertex_ai_vector_search.py
│   │       └── weaviate.py
│   ├── embeddings/
│   │   ├── __init__.py
│   │   ├── aws_bedrock.py
│   │   ├── azure_openai.py
│   │   ├── base.py
│   │   ├── configs.py
│   │   ├── fastembed.py
│   │   ├── gemini.py
│   │   ├── huggingface.py
│   │   ├── langchain.py
│   │   ├── lmstudio.py
│   │   ├── mock.py
│   │   ├── ollama.py
│   │   ├── openai.py
│   │   ├── together.py
│   │   └── vertexai.py
│   ├── exceptions.py
│   ├── graphs/
│   │   ├── __init__.py
│   │   ├── configs.py
│   │   ├── neptune/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── neptunedb.py
│   │   │   └── neptunegraph.py
│   │   ├── tools.py
│   │   └── utils.py
│   ├── llms/
│   │   ├── __init__.py
│   │   ├── anthropic.py
│   │   ├── aws_bedrock.py
│   │   ├── azure_openai.py
│   │   ├── azure_openai_structured.py
│   │   ├── base.py
│   │   ├── configs.py
│   │   ├── deepseek.py
│   │   ├── gemini.py
│   │   ├── groq.py
│   │   ├── langchain.py
│   │   ├── litellm.py
│   │   ├── lmstudio.py
│   │   ├── ollama.py
│   │   ├── openai.py
│   │   ├── openai_structured.py
│   │   ├── sarvam.py
│   │   ├── together.py
│   │   ├── vllm.py
│   │   └── xai.py
│   ├── memory/
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── graph_memory.py
│   │   ├── kuzu_memory.py
│   │   ├── main.py
│   │   ├── memgraph_memory.py
│   │   ├── setup.py
│   │   ├── storage.py
│   │   ├── telemetry.py
│   │   └── utils.py
│   ├── proxy/
│   │   ├── __init__.py
│   │   └── main.py
│   ├── reranker/
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── cohere_reranker.py
│   │   ├── huggingface_reranker.py
│   │   ├── llm_reranker.py
│   │   ├── sentence_transformer_reranker.py
│   │   └── zero_entropy_reranker.py
│   ├── utils/
│   │   ├── factory.py
│   │   └── gcp_auth.py
│   └── vector_stores/
│       ├── __init__.py
│       ├── azure_ai_search.py
│       ├── azure_mysql.py
│       ├── baidu.py
│       ├── base.py
│       ├── cassandra.py
│       ├── chroma.py
│       ├── configs.py
│       ├── databricks.py
│       ├── elasticsearch.py
│       ├── faiss.py
│       ├── langchain.py
│       ├── milvus.py
│       ├── mongodb.py
│       ├── neptune_analytics.py
│       ├── opensearch.py
│       ├── pgvector.py
│       ├── pinecone.py
│       ├── qdrant.py
│       ├── redis.py
│       ├── s3_vectors.py
│       ├── supabase.py
│       ├── upstash_vector.py
│       ├── valkey.py
│       ├── vertex_ai_vector_search.py
│       └── weaviate.py
├── mem0-ts/
│   ├── .gitignore
│   ├── .prettierignore
│   ├── README.md
│   ├── jest.config.js
│   ├── jest.integration.config.js
│   ├── package.json
│   ├── src/
│   │   ├── client/
│   │   │   ├── index.ts
│   │   │   ├── mem0.ts
│   │   │   ├── mem0.types.ts
│   │   │   ├── telemetry.ts
│   │   │   ├── telemetry.types.ts
│   │   │   └── tests/
│   │   │       ├── helpers.ts
│   │   │       ├── integration/
│   │   │       │   ├── batch.test.ts
│   │   │       │   ├── crud.test.ts
│   │   │       │   ├── global-setup.ts
│   │   │       │   ├── global-teardown.ts
│   │   │       │   ├── helpers.ts
│   │   │       │   ├── initialization.test.ts
│   │   │       │   ├── management.test.ts
│   │   │       │   └── search.test.ts
│   │   │       ├── memoryClient.batch.test.ts
│   │   │       ├── memoryClient.crud.test.ts
│   │   │       ├── memoryClient.init.test.ts
│   │   │       ├── memoryClient.project.test.ts
│   │   │       ├── memoryClient.search.test.ts
│   │   │       ├── memoryClient.users.test.ts
│   │   │       ├── memoryClient.webhooks.test.ts
│   │   │       └── setup.ts
│   │   ├── common/
│   │   │   ├── exceptions.test.ts
│   │   │   └── exceptions.ts
│   │   ├── community/
│   │   │   ├── .prettierignore
│   │   │   ├── package.json
│   │   │   ├── src/
│   │   │   │   ├── index.ts
│   │   │   │   └── integrations/
│   │   │   │       └── langchain/
│   │   │   │           ├── index.ts
│   │   │   │           └── mem0.ts
│   │   │   └── tsconfig.json
│   │   └── oss/
│   │       ├── .gitignore
│   │       ├── README.md
│   │       ├── examples/
│   │       │   ├── basic.ts
│   │       │   ├── llms/
│   │       │   │   └── mistral-example.ts
│   │       │   ├── local-llms.ts
│   │       │   ├── utils/
│   │       │   │   └── test-utils.ts
│   │       │   └── vector-stores/
│   │       │       ├── azure-ai-search.ts
│   │       │       ├── index.ts
│   │       │       ├── memory.ts
│   │       │       ├── pgvector.ts
│   │       │       ├── qdrant.ts
│   │       │       ├── redis.ts
│   │       │       └── supabase.ts
│   │       ├── package.json
│   │       ├── src/
│   │       │   ├── config/
│   │       │   │   ├── defaults.ts
│   │       │   │   └── manager.ts
│   │       │   ├── embeddings/
│   │       │   │   ├── azure.ts
│   │       │   │   ├── base.ts
│   │       │   │   ├── google.ts
│   │       │   │   ├── langchain.ts
│   │       │   │   ├── lmstudio.ts
│   │       │   │   ├── ollama.ts
│   │       │   │   └── openai.ts
│   │       │   ├── graphs/
│   │       │   │   ├── configs.ts
│   │       │   │   ├── tools.ts
│   │       │   │   └── utils.ts
│   │       │   ├── index.ts
│   │       │   ├── llms/
│   │       │   │   ├── anthropic.ts
│   │       │   │   ├── azure.ts
│   │       │   │   ├── base.ts
│   │       │   │   ├── google.ts
│   │       │   │   ├── groq.ts
│   │       │   │   ├── langchain.ts
│   │       │   │   ├── lmstudio.ts
│   │       │   │   ├── mistral.ts
│   │       │   │   ├── ollama.ts
│   │       │   │   ├── openai.ts
│   │       │   │   └── openai_structured.ts
│   │       │   ├── memory/
│   │       │   │   ├── graph_memory.ts
│   │       │   │   ├── index.ts
│   │       │   │   └── memory.types.ts
│   │       │   ├── prompts/
│   │       │   │   └── index.ts
│   │       │   ├── storage/
│   │       │   │   ├── DummyHistoryManager.ts
│   │       │   │   ├── MemoryHistoryManager.ts
│   │       │   │   ├── SQLiteManager.ts
│   │       │   │   ├── SupabaseHistoryManager.ts
│   │       │   │   ├── base.ts
│   │       │   │   └── index.ts
│   │       │   ├── tests/
│   │       │   │   ├── better-sqlite3-migration.test.ts
│   │       │   │   ├── sqlite-backward-compat.test.ts
│   │       │   │   └── sqlite-path-resolution.test.ts
│   │       │   ├── types/
│   │       │   │   └── index.ts
│   │       │   ├── utils/
│   │       │   │   ├── bm25.ts
│   │       │   │   ├── factory.ts
│   │       │   │   ├── logger.ts
│   │       │   │   ├── memory.ts
│   │       │   │   ├── sqlite.ts
│   │       │   │   ├── telemetry.ts
│   │       │   │   └── telemetry.types.ts
│   │       │   └── vector_stores/
│   │       │       ├── azure_ai_search.ts
│   │       │       ├── base.ts
│   │       │       ├── langchain.ts
│   │       │       ├── memory.ts
│   │       │       ├── pgvector.ts
│   │       │       ├── qdrant.ts
│   │       │       ├── redis.ts
│   │       │       ├── supabase.ts
│   │       │       └── vectorize.ts
│   │       ├── tests/
│   │       │   ├── config-manager.test.ts
│   │       │   ├── dimension-autodetect.test.ts
│   │       │   ├── factory.unit.test.ts
│   │       │   ├── google-llm.test.ts
│   │       │   ├── graph-memory-parsing.test.ts
│   │       │   ├── graph-prompts.test.ts
│   │       │   ├── lmstudio-embedder.test.ts
│   │       │   ├── lmstudio-llm.test.ts
│   │       │   ├── memory.add.test.ts
│   │       │   ├── memory.crud.test.ts
│   │       │   ├── memory.init.test.ts
│   │       │   ├── ollama-embedder.test.ts
│   │       │   ├── remove-code-blocks.test.ts
│   │       │   ├── storage.unit.test.ts
│   │       │   ├── tsup-externals.test.ts
│   │       │   ├── vector-store.unit.test.ts
│   │       │   └── vector-stores-compat.test.ts
│   │       └── tsconfig.json
│   ├── tests/
│   │   └── .gitkeep
│   ├── tsconfig.json
│   ├── tsconfig.test.json
│   └── tsup.config.ts
├── openclaw/
│   ├── .gitignore
│   ├── .npmrc
│   ├── CHANGELOG.md
│   ├── README.md
│   ├── config.ts
│   ├── filtering.ts
│   ├── index.test.ts
│   ├── index.ts
│   ├── isolation.ts
│   ├── openclaw-plugin-sdk.d.ts
│   ├── openclaw.plugin.json
│   ├── package.json
│   ├── pnpm-workspace.yaml
│   ├── providers.ts
│   ├── sqlite-resilience.test.ts
│   ├── tsconfig.json
│   ├── tsup.config.ts
│   └── types.ts
├── openmemory/
│   ├── .gitignore
│   ├── CONTRIBUTING.md
│   ├── Makefile
│   ├── README.md
│   ├── api/
│   │   ├── .dockerignore
│   │   ├── .env.example
│   │   ├── .python-version
│   │   ├── Dockerfile
│   │   ├── README.md
│   │   ├── alembic/
│   │   │   ├── README
│   │   │   ├── env.py
│   │   │   ├── script.py.mako
│   │   │   └── versions/
│   │   │       ├── 0b53c747049a_initial_migration.py
│   │   │       ├── add_config_table.py
│   │   │       └── afd00efbd06b_add_unique_user_id_constraints.py
│   │   ├── alembic.ini
│   │   ├── app/
│   │   │   ├── __init__.py
│   │   │   ├── config.py
│   │   │   ├── database.py
│   │   │   ├── mcp_server.py
│   │   │   ├── models.py
│   │   │   ├── routers/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── apps.py
│   │   │   │   ├── backup.py
│   │   │   │   ├── config.py
│   │   │   │   ├── memories.py
│   │   │   │   └── stats.py
│   │   │   ├── schemas.py
│   │   │   └── utils/
│   │   │       ├── __init__.py
│   │   │       ├── categorization.py
│   │   │       ├── db.py
│   │   │       ├── memory.py
│   │   │       ├── permissions.py
│   │   │       └── prompts.py
│   │   ├── config.json
│   │   ├── default_config.json
│   │   ├── main.py
│   │   └── requirements.txt
│   ├── backup-scripts/
│   │   └── export_openmemory.sh
│   ├── compose/
│   │   ├── chroma.yml
│   │   ├── elasticsearch.yml
│   │   ├── faiss.yml
│   │   ├── milvus.yml
│   │   ├── opensearch.yml
│   │   ├── pgvector.yml
│   │   ├── qdrant.yml
│   │   ├── redis.yml
│   │   └── weaviate.yml
│   ├── docker-compose.yml
│   ├── run.sh
│   └── ui/
│       ├── .dockerignore
│       ├── .env.example
│       ├── Dockerfile
│       ├── app/
│       │   ├── apps/
│       │   │   ├── [appId]/
│       │   │   │   ├── components/
│       │   │   │   │   ├── AppDetailCard.tsx
│       │   │   │   │   └── MemoryCard.tsx
│       │   │   │   └── page.tsx
│       │   │   ├── components/
│       │   │   │   ├── AppCard.tsx
│       │   │   │   ├── AppFilters.tsx
│       │   │   │   └── AppGrid.tsx
│       │   │   └── page.tsx
│       │   ├── globals.css
│       │   ├── layout.tsx
│       │   ├── loading.tsx
│       │   ├── memories/
│       │   │   ├── components/
│       │   │   │   ├── CreateMemoryDialog.tsx
│       │   │   │   ├── FilterComponent.tsx
│       │   │   │   ├── MemoriesSection.tsx
│       │   │   │   ├── MemoryFilters.tsx
│       │   │   │   ├── MemoryPagination.tsx
│       │   │   │   ├── MemoryTable.tsx
│       │   │   │   └── PageSizeSelector.tsx
│       │   │   └── page.tsx
│       │   ├── memory/
│       │   │   └── [id]/
│       │   │       ├── components/
│       │   │       │   ├── AccessLog.tsx
│       │   │       │   ├── MemoryActions.tsx
│       │   │       │   ├── MemoryDetails.tsx
│       │   │       │   └── RelatedMemories.tsx
│       │   │       └── page.tsx
│       │   ├── not-found.tsx
│       │   ├── page.tsx
│       │   ├── providers.tsx
│       │   └── settings/
│       │       └── page.tsx
│       ├── components/
│       │   ├── Navbar.tsx
│       │   ├── dashboard/
│       │   │   ├── Install.tsx
│       │   │   └── Stats.tsx
│       │   ├── form-view.tsx
│       │   ├── json-editor.tsx
│       │   ├── shared/
│       │   │   ├── categories.tsx
│       │   │   ├── source-app.tsx
│       │   │   └── update-memory.tsx
│       │   ├── theme-provider.tsx
│       │   ├── types.ts
│       │   └── ui/
│       │       ├── accordion.tsx
│       │       ├── alert-dialog.tsx
│       │       ├── alert.tsx
│       │       ├── aspect-ratio.tsx
│       │       ├── avatar.tsx
│       │       ├── badge.tsx
│       │       ├── breadcrumb.tsx
│       │       ├── button.tsx
│       │       ├── calendar.tsx
│       │       ├── card.tsx
│       │       ├── carousel.tsx
│       │       ├── chart.tsx
│       │       ├── checkbox.tsx
│       │       ├── collapsible.tsx
│       │       ├── command.tsx
│       │       ├── context-menu.tsx
│       │       ├── dialog.tsx
│       │       ├── drawer.tsx
│       │       ├── dropdown-menu.tsx
│       │       ├── form.tsx
│       │       ├── hover-card.tsx
│       │       ├── input-otp.tsx
│       │       ├── input.tsx
│       │       ├── label.tsx
│       │       ├── menubar.tsx
│       │       ├── navigation-menu.tsx
│       │       ├── pagination.tsx
│       │       ├── popover.tsx
│       │       ├── progress.tsx
│       │       ├── radio-group.tsx
│       │       ├── resizable.tsx
│       │       ├── scroll-area.tsx
│       │       ├── select.tsx
│       │       ├── separator.tsx
│       │       ├── sheet.tsx
│       │       ├── sidebar.tsx
│       │       ├── skeleton.tsx
│       │       ├── slider.tsx
│       │       ├── sonner.tsx
│       │       ├── switch.tsx
│       │       ├── table.tsx
│       │       ├── tabs.tsx
│       │       ├── textarea.tsx
│       │       ├── toast.tsx
│       │       ├── toaster.tsx
│       │       ├── toggle-group.tsx
│       │       ├── toggle.tsx
│       │       ├── tooltip.tsx
│       │       ├── use-mobile.tsx
│       │       └── use-toast.ts
│       ├── components.json
│       ├── entrypoint.sh
│       ├── hooks/
│       │   ├── use-mobile.tsx
│       │   ├── use-toast.ts
│       │   ├── useAppsApi.ts
│       │   ├── useConfig.ts
│       │   ├── useFiltersApi.ts
│       │   ├── useMemoriesApi.ts
│       │   ├── useStats.ts
│       │   └── useUI.ts
│       ├── next-env.d.ts
│       ├── next.config.dev.mjs
│       ├── next.config.mjs
│       ├── package.json
│       ├── postcss.config.mjs
│       ├── skeleton/
│       │   ├── AppCardSkeleton.tsx
│       │   ├── AppDetailCardSkeleton.tsx
│       │   ├── AppFiltersSkeleton.tsx
│       │   ├── MemoryCardSkeleton.tsx
│       │   ├── MemorySkeleton.tsx
│       │   └── MemoryTableSkeleton.tsx
│       ├── store/
│       │   ├── appsSlice.ts
│       │   ├── configSlice.ts
│       │   ├── filtersSlice.ts
│       │   ├── memoriesSlice.ts
│       │   ├── profileSlice.ts
│       │   ├── store.ts
│       │   └── uiSlice.ts
│       ├── styles/
│       │   ├── animation.css
│       │   ├── globals.css
│       │   └── notfound.scss
│       ├── tailwind.config.ts
│       └── tsconfig.json
├── pyproject.toml
├── server/
│   ├── Dockerfile
│   ├── Makefile
│   ├── README.md
│   ├── dev.Dockerfile
│   ├── docker-compose.yaml
│   ├── main.py
│   └── requirements.txt
├── skills/
│   └── mem0/
│       ├── LICENSE
│       ├── README.md
│       ├── SKILL.md
│       ├── references/
│       │   ├── api-reference.md
│       │   ├── architecture.md
│       │   ├── features.md
│       │   ├── integration-patterns.md
│       │   ├── quickstart.md
│       │   ├── sdk-guide.md
│       │   └── use-cases.md
│       └── scripts/
│           └── mem0_doc_search.py
├── tests/
│   ├── __init__.py
│   ├── configs/
│   │   └── test_prompts.py
│   ├── embeddings/
│   │   ├── test_azure_openai_embeddings.py
│   │   ├── test_fastembed_embeddings.py
│   │   ├── test_gemini_emeddings.py
│   │   ├── test_huggingface_embeddings.py
│   │   ├── test_lm_studio_embeddings.py
│   │   ├── test_ollama_embeddings.py
│   │   ├── test_openai_embeddings.py
│   │   └── test_vertexai_embeddings.py
│   ├── llms/
│   │   ├── test_azure_openai.py
│   │   ├── test_azure_openai_structured.py
│   │   ├── test_deepseek.py
│   │   ├── test_gemini.py
│   │   ├── test_groq.py
│   │   ├── test_langchain.py
│   │   ├── test_litellm.py
│   │   ├── test_lm_studio.py
│   │   ├── test_ollama.py
│   │   ├── test_openai.py
│   │   ├── test_together.py
│   │   └── test_vllm.py
│   ├── memory/
│   │   ├── test_json_prompt_fix.py
│   │   ├── test_kuzu.py
│   │   ├── test_main.py
│   │   ├── test_memgraph_memory.py
│   │   ├── test_neo4j_cypher_syntax.py
│   │   ├── test_neptune_analytics_memory.py
│   │   ├── test_neptune_memory.py
│   │   ├── test_safe_deepcopy_config.py
│   │   └── test_storage.py
│   ├── rerankers/
│   │   ├── conftest.py
│   │   ├── test_llm_reranker_config.py
│   │   ├── test_llm_reranker_nested_config.py
│   │   └── test_llm_reranker_rerank.py
│   ├── test_main.py
│   ├── test_memory.py
│   ├── test_memory_integration.py
│   ├── test_proxy.py
│   ├── test_telemetry.py
│   └── vector_stores/
│       ├── test_azure_ai_search.py
│       ├── test_azure_mysql.py
│       ├── test_baidu.py
│       ├── test_cassandra.py
│       ├── test_chroma.py
│       ├── test_databricks.py
│       ├── test_elasticsearch.py
│       ├── test_faiss.py
│       ├── test_langchain_vector_store.py
│       ├── test_milvus.py
│       ├── test_mongodb.py
│       ├── test_neptune_analytics.py
│       ├── test_opensearch.py
│       ├── test_pgvector.py
│       ├── test_pinecone.py
│       ├── test_qdrant.py
│       ├── test_s3_vectors.py
│       ├── test_supabase.py
│       ├── test_upstash_vector.py
│       ├── test_valkey.py
│       ├── test_vertex_ai_vector_search.py
│       └── test_weaviate.py
└── vercel-ai-sdk/
    ├── .gitattributes
    ├── .gitignore
    ├── README.md
    ├── config/
    │   └── test-config.ts
    ├── jest.config.js
    ├── nodemon.json
    ├── package.json
    ├── src/
    │   ├── index.ts
    │   ├── mem0-facade.ts
    │   ├── mem0-generic-language-model.ts
    │   ├── mem0-provider-selector.ts
    │   ├── mem0-provider.ts
    │   ├── mem0-types.ts
    │   ├── mem0-utils.ts
    │   ├── provider-response-provider.ts
    │   └── stream-utils.ts
    ├── teardown.ts
    ├── tests/
    │   ├── generate-output.test.ts
    │   ├── mem0-provider-tests/
    │   │   ├── mem0-cohere.test.ts
    │   │   ├── mem0-google.test.ts
    │   │   ├── mem0-groq.test.ts
    │   │   ├── mem0-openai-structured-ouput.test.ts
    │   │   ├── mem0-openai.test.ts
    │   │   └── mem0_anthropic.test.ts
    │   ├── mem0-toolcalls.test.ts
    │   ├── memory-core.test.ts
    │   ├── text-properties.test.ts
    │   └── utils-test/
    │       ├── anthropic-integration.test.ts
    │       ├── cohere-integration.test.ts
    │       ├── google-integration.test.ts
    │       ├── groq-integration.test.ts
    │       └── openai-integration.test.ts
    ├── tsconfig.json
    └── tsup.config.ts

================================================
FILE CONTENTS
================================================

================================================
FILE: .github/ISSUE_TEMPLATE/bug_report.yml
================================================
name: 🐛 Bug Report
description: Create a report to help us reproduce and fix the bug

body:
- type: markdown
  attributes:
    value: >
      #### Before submitting a bug, please make sure the issue hasn't already been addressed by searching through [the existing and past issues](https://github.com/embedchain/embedchain/issues?q=is%3Aissue+sort%3Acreated-desc+).
- type: textarea
  attributes:
    label: 🐛 Describe the bug
    description: |
      Please provide a clear and concise description of what the bug is.

      If relevant, add a minimal example so that we can reproduce the error by running the code. It is very important for the snippet to be as succinct (minimal) as possible, so please take time to trim down any irrelevant code to help us debug efficiently. We are going to copy-paste your code and we expect to get the same result as you did: avoid any external data, and include the relevant imports, etc. For example:

      ```python
      # All necessary imports at the beginning
      import embedchain as ec
      # Your code goes here


      ```

      Please also paste or describe the results you observe instead of the expected results. If you observe an error, please paste the error message, including the **full** traceback of the exception. It may be helpful to wrap error messages in ```` ```triple-backtick blocks``` ````.
    placeholder: |
      A clear and concise description of what the bug is.

      ```python
      Sample code to reproduce the problem
      ```

      ```
      The error message you got, with the full traceback.
      ```
  validations:
    required: true
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!


================================================
FILE: .github/ISSUE_TEMPLATE/config.yml
================================================
blank_issues_enabled: true
contact_links:
  - name: 1-on-1 Session
    url: https://cal.com/taranjeetio/ec
    about: Speak directly with Taranjeet, the founder, to discuss issues, share feedback, or explore improvements for Embedchain
  - name: Discord
    url: https://discord.gg/6PzXDgEjG5
    about: General community discussions


================================================
FILE: .github/ISSUE_TEMPLATE/documentation_issue.yml
================================================
name: Documentation
description: Report an issue related to the Embedchain docs.
title: "DOC: <Please write a comprehensive title after the 'DOC: ' prefix>"

body:
- type: textarea
  attributes:
    label: "Issue with current documentation:"
    description: >
      Please make sure to leave a reference to the document/code you're
      referring to.


================================================
FILE: .github/ISSUE_TEMPLATE/feature_request.yml
================================================
name: 🚀 Feature request
description: Submit a proposal/request for a new Embedchain feature

body:
- type: textarea
  id: feature-request
  attributes:
    label: 🚀 The feature
    description: >
      A clear and concise description of the feature proposal
  validations:
    required: true
- type: textarea
  attributes:
    label: Motivation, pitch
    description: >
      Please outline the motivation for the proposal. Is your feature request related to a specific problem? e.g., *"I'm working on X and would like Y to be possible"*. If this is related to another GitHub issue, please link to it here too.
  validations:
    required: true
- type: markdown
  attributes:
    value: >
      Thanks for contributing 🎉!


================================================
FILE: .github/PULL_REQUEST_TEMPLATE.md
================================================
## Description

Please include a summary of the change and which issue is fixed. Please also include relevant motivation and context. List any dependencies that are required for this change.

Fixes # (issue)

## Type of change

Please delete options that are not relevant.

- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] Refactor (does not change functionality, e.g. code style improvements, linting)
- [ ] Documentation update

## How Has This Been Tested?

Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce them. Please also list any relevant details for your test configuration.

Please delete options that are not relevant.

- [ ] Unit Test
- [ ] Test Script (please provide)

## Checklist:

- [ ] My code follows the style guidelines of this project
- [ ] I have performed a self-review of my own code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] Any dependent changes have been merged and published in downstream modules
- [ ] I have checked my code and corrected any misspellings

## Maintainer Checklist

- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] Made sure Checks passed


================================================
FILE: .github/workflows/cd.yml
================================================
name: Publish Python 🐍 distributions 📦 to PyPI and TestPyPI

on:
  release:
    types: [published]

jobs:
  build-n-publish:
    name: Build and publish Python 🐍 distributions 📦 to PyPI and TestPyPI
    runs-on: ubuntu-latest
    permissions:
      id-token: write
    steps:
      - uses: actions/checkout@v2

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.11'

      - name: Install Hatch
        run: |
          pip install hatch

      - name: Install dependencies
        run: |
          hatch env create

      - name: Build a binary wheel and a source tarball
        run: |
          hatch build --clean

      # TODO: Needs to setup mem0 repo on Test PyPI
      # - name: Publish distribution 📦 to Test PyPI
      #   uses: pypa/gh-action-pypi-publish@release/v1
      #   with:
      #     repository_url: https://test.pypi.org/legacy/
      #     packages_dir: dist/

      - name: Publish distribution 📦 to PyPI
        if: startsWith(github.ref, 'refs/tags')
        uses: pypa/gh-action-pypi-publish@release/v1
        with:
          packages_dir: dist/


================================================
FILE: .github/workflows/ci.yml
================================================
name: ci

on:
  push:
    branches: [main]
    paths:
      - 'mem0/**'
      - 'tests/**'
      - 'embedchain/**'
      - '.github/workflows/**'
      - 'pyproject.toml'
  pull_request:
    paths:
      - 'mem0/**'
      - 'tests/**'
      - 'embedchain/**'

jobs:
  check_changes:
    runs-on: ubuntu-latest
    outputs:
      mem0_changed: ${{ steps.filter.outputs.mem0 }}
      embedchain_changed: ${{ steps.filter.outputs.embedchain }}
    steps:
    - uses: actions/checkout@v3
    - uses: dorny/paths-filter@v2
      id: filter
      with:
        filters: |
          mem0:
            - 'mem0/**'
            - 'tests/**'
            - '.github/workflows/**'
            - 'pyproject.toml'
          embedchain:
            - 'embedchain/**'

  build_mem0:
    needs: check_changes
    if: needs.check_changes.outputs.mem0_changed == 'true'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Clean up disk space
        run: |
          df -h
          sudo rm -rf /usr/share/dotnet /usr/local/lib/android /opt/ghc /opt/hostedtoolcache/CodeQL
          sudo docker image prune --all --force
          sudo docker builder prune -a
          df -h
      - name: Install Hatch
        run: pip install hatch
      - name: Load cached venv
        id: cached-hatch-dependencies
        uses: actions/cache@v3
        with:
          path: .venv
          key: venv-mem0-${{ runner.os }}-${{ hashFiles('**/pyproject.toml') }}
      - name: Install GEOS Libraries
        run: sudo apt-get update && sudo apt-get install -y libgeos-dev
      - name: Install dependencies
        run: |
          pip install --upgrade pip
          pip install -e ".[test,graph,vector_stores,llms,extras]"
          pip install ruff
        if: steps.cached-hatch-dependencies.outputs.cache-hit != 'true'
      - name: Run Linting
        run: make lint
      - name: Run tests and generate coverage report
        run: make test

  build_embedchain:
    needs: check_changes
    if: needs.check_changes.outputs.embedchain_changed == 'true'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Hatch
        run: pip install hatch
      - name: Load cached venv
        id: cached-hatch-dependencies
        uses: actions/cache@v3
        with:
          path: .venv
          key: venv-embedchain-${{ runner.os }}-${{ hashFiles('**/pyproject.toml') }}
      - name: Install dependencies
        run: cd embedchain && make install_all
        if: steps.cached-hatch-dependencies.outputs.cache-hit != 'true'
      - name: Run Formatting
        run: |
          mkdir -p embedchain/.ruff_cache && chmod -R 777 embedchain/.ruff_cache
          cd embedchain && hatch run format
      - name: Lint with ruff
        run: cd embedchain && make lint
      - name: Run tests and generate coverage report
        run: cd embedchain && make coverage
      - name: Upload coverage reports to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: coverage.xml
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}


================================================
FILE: .github/workflows/openclaw-checks.yml
================================================
name: openclaw checks

on:
  workflow_dispatch:
  push:
    branches: [main]
    paths:
      - 'openclaw/**'
      - '.github/workflows/openclaw-checks.yml'
  pull_request:
    paths:
      - 'openclaw/**'
      - '.github/workflows/openclaw-checks.yml'

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install pnpm
        uses: pnpm/action-setup@v4
        with:
          version: 9

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'pnpm'
          cache-dependency-path: openclaw/pnpm-lock.yaml

      - name: Install dependencies
        run: cd openclaw && pnpm install --frozen-lockfile

      - name: Type check
        run: cd openclaw && pnpm exec tsc --noEmit

  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [20, 22]
    steps:
      - uses: actions/checkout@v4

      - name: Install pnpm
        uses: pnpm/action-setup@v4
        with:
          version: 9

      - name: Setup Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'pnpm'
          cache-dependency-path: openclaw/pnpm-lock.yaml

      - name: Install dependencies
        run: cd openclaw && pnpm install --frozen-lockfile

      - name: Run tests with coverage
        run: cd openclaw && pnpm exec vitest run --coverage

      - name: Upload coverage to Codecov
        if: matrix.node-version == 20
        uses: codecov/codecov-action@v4
        with:
          flags: openclaw
          directory: openclaw/coverage
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install pnpm
        uses: pnpm/action-setup@v4
        with:
          version: 9

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: 20
          cache: 'pnpm'
          cache-dependency-path: openclaw/pnpm-lock.yaml

      - name: Install dependencies
        run: cd openclaw && pnpm install --frozen-lockfile

      - name: Build
        run: cd openclaw && pnpm build

      - name: Verify dist output exists
        run: |
          test -f openclaw/dist/index.js || (echo "Build output missing: dist/index.js" && exit 1)
          test -f openclaw/dist/index.d.ts || (echo "Build output missing: dist/index.d.ts" && exit 1)


================================================
FILE: .github/workflows/ts-sdk-ci.yml
================================================
name: TypeScript SDK CI

on:
  push:
    branches: [main]
    paths:
      - 'mem0-ts/**'
      - '.github/workflows/ts-sdk-ci.yml'
  pull_request:
    paths:
      - 'mem0-ts/**'

jobs:
  check_changes:
    runs-on: ubuntu-latest
    outputs:
      ts_sdk_changed: ${{ steps.filter.outputs.ts_sdk }}
    steps:
      - uses: actions/checkout@v4
      - uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            ts_sdk:
              - 'mem0-ts/**'

  build_ts_sdk:
    needs: check_changes
    if: needs.check_changes.outputs.ts_sdk_changed == 'true'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [20, 22]

    steps:
      - uses: actions/checkout@v4

      - uses: pnpm/action-setup@v4
        with:
          version: 10

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'pnpm'
          cache-dependency-path: mem0-ts/pnpm-lock.yaml

      - name: Install dependencies
        working-directory: mem0-ts
        run: pnpm install --frozen-lockfile

      - name: Lint
        working-directory: mem0-ts
        run: npx prettier --check .

      - name: Build
        working-directory: mem0-ts
        run: pnpm run build

      - name: Run unit tests
        working-directory: mem0-ts
        run: pnpm run test:unit

      - name: Verify package exports
        working-directory: mem0-ts
        run: |
          node -e "const m = require('./dist/index.js'); console.log('Client exports:', Object.keys(m).length)"
          node -e "const m = require('./dist/oss/index.js'); console.log('OSS exports:', Object.keys(m).length)"

      - name: Upload coverage
        if: matrix.node-version == 20
        uses: actions/upload-artifact@v4
        with:
          name: coverage-report
          path: mem0-ts/coverage/

  integration_ts_sdk:
    needs: build_ts_sdk
    runs-on: ubuntu-latest
    strategy:
      max-parallel: 1
      matrix:
        node-version: [20, 22]

    steps:
      - uses: actions/checkout@v4

      - uses: pnpm/action-setup@v4
        with:
          version: 10

      - uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'pnpm'
          cache-dependency-path: mem0-ts/pnpm-lock.yaml

      - name: Install dependencies
        working-directory: mem0-ts
        run: pnpm install --frozen-lockfile

      - name: Build
        working-directory: mem0-ts
        run: pnpm run build

      - name: Run integration tests (with cleanup)
        working-directory: mem0-ts
        env:
          MEM0_API_KEY: ${{ secrets.MEM0_API_KEY }}
        run: pnpm run test:integration


================================================
FILE: .gitignore
================================================
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
**/node_modules/

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
#  Usually these files are written by a python script from a template
#  before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
.pybuilder/
target/

# Jupyter Notebook

# IPython
profile_default/
ipython_config.py

# pyenv
#   For a library or package, you might want to ignore these files since the code is
#   intended to run in multiple environments; otherwise, check them in:
# .python-version

# pipenv
#   According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
#   However, in case of collaboration, if having platform-specific dependencies or dependencies
#   having no cross-platform support, pipenv may install dependencies that don't work, or not
#   install all needed dependencies.
#Pipfile.lock

# poetry
#   Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
#   This is especially recommended for binary packages to ensure reproducibility, and is more
#   commonly ignored for libraries.
#   https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock

# pdm
#   Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
#pdm.lock
#   pdm stores project-wide configurations in .pdm.toml, but it is recommended not to include it
#   in version control.
#   https://pdm.fming.dev/#use-with-ide
.pdm.toml

# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
pyenv/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# pytype static type analyzer
.pytype/

# Cython debug symbols
cython_debug/

# PyCharm
#  JetBrains specific template is maintained in a separate JetBrains.gitignore that can
#  be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
#  and can be added to the global gitignore or merged into this file.  For a more nuclear
#  option (not recommended) you can uncomment the following to ignore the entire idea folder.
#.idea/

.ideas.md
.todos.md

# Database
db
test-db
!embedchain/embedchain/core/db/

.vscode
.idea/

.DS_Store

notebooks/*.yaml
.ipynb_checkpoints/

!configs/*.yaml

# cache db
*.db

# local directories for testing
eval/
qdrant_storage/
.crossnote
testing.ipynb


================================================
FILE: .pre-commit-config.yaml
================================================
repos:
  - repo: local
    hooks:
      - id: ruff
        name: Ruff
        entry: ruff check
        language: system
        types: [python]
        args: [--fix] 

      - id: isort
        name: isort
        entry: isort
        language: system
        types: [python]
        args: ["--profile", "black"]


================================================
FILE: CONTRIBUTING.md
================================================
# Contributing to mem0

Let's make contributing easy, collaborative, and fun.

## Submit your Contribution through PR

To make a contribution, follow these steps:

1. Fork and clone this repository
2. Make your changes on your fork in a dedicated feature branch, e.g. `feature/f1`
3. If you modified code (new feature or bug fix), please add tests for it
4. Include proper documentation/docstrings and examples showing how to run the feature
5. Ensure that all tests pass
6. Submit a pull request

For more details about pull requests, please read [GitHub's guides](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request).


### 📦 Development Environment

We use `hatch` for managing development environments. To set up:

```bash
# Activate environment for specific Python version:
hatch shell dev_py_3_9   # Python 3.9
hatch shell dev_py_3_10  # Python 3.10  
hatch shell dev_py_3_11  # Python 3.11
hatch shell dev_py_3_12  # Python 3.12

# The environment will automatically install all dev dependencies
# Run tests within the activated shell:
make test
```

### 📌 Pre-commit

To enforce our standards, install the pre-commit hooks before you start contributing.

```bash
pre-commit install
```

### 🧪 Testing

We use `pytest` to test our code across multiple Python versions. You can run tests using:

```bash
# Run tests with default Python version
make test

# Test specific Python versions:
make test-py-3.9   # Python 3.9 environment
make test-py-3.10  # Python 3.10 environment
make test-py-3.11  # Python 3.11 environment
make test-py-3.12  # Python 3.12 environment

# When using hatch shells, run tests with:
make test  # After activating a shell with hatch shell dev_py_3_XX
```

Make sure that all tests pass across all supported Python versions before submitting a pull request.

We look forward to your pull requests and can't wait to see your contributions!


================================================
FILE: LICENSE
================================================
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!)  The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [2023] [Taranjeet Singh]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.


================================================
FILE: LLM.md
================================================
# Mem0 - The Memory Layer for Personalized AI

## Overview

Mem0 ("mem-zero") is an intelligent memory layer that enhances AI assistants and agents with persistent, personalized memory capabilities. It enables AI systems to remember user preferences, adapt to individual needs, and continuously learn over time—making it ideal for customer support chatbots, AI assistants, and autonomous systems.

**Key Benefits:**
- +26% Accuracy over OpenAI Memory on LOCOMO benchmark
- 91% Faster responses than full-context approaches
- 90% Lower token usage than full-context methods

## Installation

```bash
# Python
pip install mem0ai

# TypeScript/JavaScript
npm install mem0ai
```

## Quick Start

### Python - Self-Hosted
```python
from mem0 import Memory

# Initialize memory
memory = Memory()

# Add memories
memory.add([
    {"role": "user", "content": "I love pizza and hate broccoli"},
    {"role": "assistant", "content": "I'll remember your food preferences!"}
], user_id="user123")

# Search memories
results = memory.search("food preferences", user_id="user123")
print(results)

# Get all memories
all_memories = memory.get_all(user_id="user123")
```

### Python - Hosted Platform
```python
from mem0 import MemoryClient

# Initialize client
client = MemoryClient(api_key="your-api-key")

# Add memories
client.add([
    {"role": "user", "content": "My name is John and I'm a developer"}
], user_id="john")

# Search memories
results = client.search("What do you know about me?", user_id="john")
```

### TypeScript - Client SDK
```typescript
import { MemoryClient } from 'mem0ai';

const client = new MemoryClient({ apiKey: 'your-api-key' });

// Add memory
const memories = await client.add([
  { role: 'user', content: 'My name is John' }
], { user_id: 'john' });

// Search memories
const results = await client.search('What is my name?', { user_id: 'john' });
```

### TypeScript - OSS SDK
```typescript
import { Memory } from 'mem0ai/oss';

const memory = new Memory({
  embedder: { provider: 'openai', config: { apiKey: 'key' } },
  vectorStore: { provider: 'memory', config: { dimension: 1536 } },
  llm: { provider: 'openai', config: { apiKey: 'key' } }
});

const result = await memory.add('My name is John', { userId: 'john' });
```

## Core API Reference

### Memory Class (Self-Hosted)

**Import:** `from mem0 import Memory, AsyncMemory`

#### Initialization
```python
from mem0 import Memory
from mem0.configs.base import MemoryConfig

# Basic initialization
memory = Memory()

# With custom configuration
config = MemoryConfig(
    vector_store={"provider": "qdrant", "config": {"host": "localhost"}},
    llm={"provider": "openai", "config": {"model": "gpt-4.1-nano-2025-04-14"}},
    embedder={"provider": "openai", "config": {"model": "text-embedding-3-small"}}
)
memory = Memory(config)
```

#### Core Methods

**add(messages, *, user_id=None, agent_id=None, run_id=None, metadata=None, infer=True, memory_type=None, prompt=None)**
- **Purpose**: Create new memories from messages
- **Parameters**:
  - `messages`: str, dict, or list of message dicts
  - `user_id/agent_id/run_id`: Session identifiers (at least one required)
  - `metadata`: Additional metadata to store
  - `infer`: Whether to use LLM for fact extraction (default: True)
  - `memory_type`: "procedural_memory" for procedural memories
  - `prompt`: Custom prompt for memory creation
- **Returns**: Dict with "results" key containing memory operations

**search(query, *, user_id=None, agent_id=None, run_id=None, limit=100, filters=None, threshold=None)**
- **Purpose**: Search memories semantically
- **Parameters**:
  - `query`: Search query string
  - `user_id/agent_id/run_id`: Session filters (at least one required)
  - `limit`: Maximum results (default: 100)
  - `filters`: Additional search filters
  - `threshold`: Minimum similarity score
- **Returns**: Dict with "results" containing scored memories

**get(memory_id)**
- **Purpose**: Retrieve specific memory by ID
- **Returns**: Memory dict with id, memory, hash, timestamps, metadata

**get_all(*, user_id=None, agent_id=None, run_id=None, filters=None, limit=100)**
- **Purpose**: List all memories with optional filtering
- **Returns**: Dict with "results" containing list of memories

**update(memory_id, data)**
- **Purpose**: Update memory content or metadata
- **Returns**: Success message dict

**delete(memory_id)**
- **Purpose**: Delete specific memory
- **Returns**: Success message dict

**delete_all(user_id=None, agent_id=None, run_id=None)**
- **Purpose**: Delete all memories for a session (at least one ID required)
- **Returns**: Success message dict

**history(memory_id)**
- **Purpose**: Get memory change history
- **Returns**: List of memory change history

**reset()**
- **Purpose**: Reset entire memory store
- **Returns**: None
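
To illustrate the return shapes documented above, here is a small sketch that post-processes a `search()`-style response. This is a hedged sketch: the `results` payload below is fabricated for illustration, and real IDs and scores come from the library.

```python
# Hypothetical search() response, shaped as documented above:
# a dict with a "results" list of scored memories.
response = {
    "results": [
        {"id": "mem-1", "memory": "Loves pizza", "score": 0.91},
        {"id": "mem-2", "memory": "Dislikes broccoli", "score": 0.64},
    ]
}

def top_memories(resp, threshold=0.7):
    """Keep only memories at or above the similarity threshold,
    highest score first (mirrors the `threshold` parameter of search)."""
    hits = [m for m in resp["results"] if m["score"] >= threshold]
    return sorted(hits, key=lambda m: m["score"], reverse=True)

print([m["memory"] for m in top_memories(response)])  # ['Loves pizza']
```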

### MemoryClient Class (Hosted Platform)

**Import:** `from mem0 import MemoryClient, AsyncMemoryClient`

#### Initialization
```python
client = MemoryClient(
    api_key="your-api-key",  # or set MEM0_API_KEY env var
    host="https://api.mem0.ai",  # optional
    org_id="your-org-id",  # optional
    project_id="your-project-id"  # optional
)
```

#### Core Methods

**add(messages, **kwargs)**
- **Purpose**: Create memories from message conversations
- **Parameters**: messages (list of message dicts), user_id, agent_id, app_id, metadata, filters
- **Returns**: API response dict with memory creation results

**search(query, version="v1", **kwargs)**
- **Purpose**: Search memories based on query
- **Parameters**: query, version ("v1"/"v2"), user_id, agent_id, app_id, top_k, filters
- **Returns**: List of search result dictionaries

**get(memory_id)**
- **Purpose**: Retrieve specific memory by ID
- **Returns**: Memory data dictionary

**get_all(version="v1", **kwargs)**
- **Purpose**: Retrieve all memories with filtering
- **Parameters**: version, user_id, agent_id, app_id, top_k, page, page_size
- **Returns**: List of memory dictionaries

**update(memory_id, text=None, metadata=None)**
- **Purpose**: Update memory text or metadata
- **Returns**: Updated memory data

**delete(memory_id)**
- **Purpose**: Delete specific memory
- **Returns**: Success response

**delete_all(**kwargs)**
- **Purpose**: Delete all memories with filtering
- **Returns**: Success message
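
The `page`/`page_size` parameters of `get_all` suggest a simple pagination loop. A minimal sketch, where `fake_get_all` stands in for `client.get_all` (the real call and its exact response shape should be checked against the API reference):

```python
# Hypothetical pagination loop over get_all(page=..., page_size=...).
def paginate(fetch_page, page_size=2):
    """Yield items page by page until an empty page is returned."""
    page = 1
    while True:
        items = fetch_page(page=page, page_size=page_size)
        if not items:
            break
        yield from items
        page += 1

# Fake backend with 3 memories, served 2 per page, for illustration.
store = [{"id": f"mem-{i}"} for i in range(3)]

def fake_get_all(page, page_size):
    start = (page - 1) * page_size
    return store[start : start + page_size]

print(len(list(paginate(fake_get_all))))  # 3
```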

#### Batch Operations

**batch_update(memories)**
- **Purpose**: Update multiple memories in single request
- **Parameters**: List of memory update objects
- **Returns**: Batch operation result

**batch_delete(memories)**
- **Purpose**: Delete multiple memories in single request
- **Parameters**: List of memory objects
- **Returns**: Batch operation result
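
`batch_update` and `batch_delete` take lists of memory objects. A sketch of building such payloads and splitting a large batch into request-sized chunks; the `memory_id`/`text` field names are assumptions to confirm against the hosted API reference:

```python
# Hypothetical batch payloads for batch_update / batch_delete.
# Field names ("memory_id", "text") are assumptions, not confirmed API keys.
updates = [
    {"memory_id": "mem-1", "text": "Prefers deep-dish pizza"},
    {"memory_id": "mem-2", "text": "Now likes roasted broccoli"},
]
deletes = [{"memory_id": "mem-3"}]

def chunk(items, size):
    """Split a large batch into request-sized chunks."""
    return [items[i : i + size] for i in range(0, len(items), size)]

for batch in chunk(updates, 1):
    # client.batch_update(batch)  # one request per chunk
    pass

print(chunk(updates, 1))
```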

#### User Management

**users()**
- **Purpose**: Get all users, agents, and sessions with memories
- **Returns**: Dict with user/agent/session data

**delete_users(user_id=None, agent_id=None, app_id=None, run_id=None)**
- **Purpose**: Delete specific entities or all entities
- **Returns**: Success message

**reset()**
- **Purpose**: Reset client by deleting all users and memories
- **Returns**: Success message

#### Additional Features

**history(memory_id)**
- **Purpose**: Get memory change history
- **Returns**: List of memory changes

**`feedback(memory_id, feedback, **kwargs)`**
- **Purpose**: Provide feedback on memory
- **Returns**: Feedback response

**`create_memory_export(schema, **kwargs)`**
- **Purpose**: Create memory export with JSON schema
- **Returns**: Export creation response

**`get_memory_export(**kwargs)`**
- **Purpose**: Retrieve exported memory data
- **Returns**: Exported data
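
Exports are driven by a JSON schema describing the fields you want back. A hedged sketch, where the schema keys and filter kwargs are illustrative rather than an exhaustive list:

```python
# Hypothetical export schema -- adjust properties to the fields you need
export_schema = {
    "type": "object",
    "properties": {
        "memory": {"type": "string"},
        "categories": {"type": "array", "items": {"type": "string"}},
        "created_at": {"type": "string"},
    },
}

# With a configured MemoryClient:
# client.create_memory_export(schema=export_schema, user_id="user123")
# data = client.get_memory_export(user_id="user123")
```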


## Configuration System

### MemoryConfig

```python
from mem0.configs.base import MemoryConfig

config = MemoryConfig(
    vector_store=VectorStoreConfig(provider="qdrant", config={...}),
    llm=LlmConfig(provider="openai", config={...}),
    embedder=EmbedderConfig(provider="openai", config={...}),
    graph_store=GraphStoreConfig(provider="neo4j", config={...}),  # optional
    history_db_path="~/.mem0/history.db",
    version="v1.1",
    custom_fact_extraction_prompt="Custom prompt...",
    custom_update_memory_prompt="Custom prompt..."
)
```
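
`Memory.from_config` accepts the same configuration as a plain dict, which avoids importing each config class. A sketch, with illustrative provider settings:

```python
config_dict = {
    "llm": {"provider": "openai", "config": {"model": "gpt-4.1-nano-2025-04-14"}},
    "embedder": {"provider": "openai", "config": {"model": "text-embedding-3-small"}},
    "vector_store": {"provider": "qdrant", "config": {"collection_name": "memories"}},
}

# With mem0 installed:
# from mem0 import Memory
# memory = Memory.from_config(config_dict)
```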

### Supported Providers

#### LLM Providers (17 supported)
- **openai** - OpenAI GPT models (default)
- **anthropic** - Claude models
- **gemini** - Google Gemini
- **groq** - Groq inference
- **ollama** - Local Ollama models
- **together** - Together AI
- **aws_bedrock** - AWS Bedrock models
- **azure_openai** - Azure OpenAI
- **litellm** - LiteLLM proxy
- **deepseek** - DeepSeek models
- **xai** - xAI models
- **sarvam** - Sarvam AI
- **lmstudio** - LM Studio local server
- **vllm** - vLLM inference server
- **langchain** - LangChain integration
- **openai_structured** - OpenAI with structured output
- **azure_openai_structured** - Azure OpenAI with structured output

#### Embedding Providers (10 supported)
- **openai** - OpenAI embeddings (default)
- **ollama** - Ollama embeddings
- **huggingface** - HuggingFace models
- **azure_openai** - Azure OpenAI embeddings
- **gemini** - Google Gemini embeddings
- **vertexai** - Google Vertex AI
- **together** - Together AI embeddings
- **lmstudio** - LM Studio embeddings
- **langchain** - LangChain embeddings
- **aws_bedrock** - AWS Bedrock embeddings

#### Vector Store Providers (19 supported)
- **qdrant** - Qdrant vector database (default)
- **chroma** - ChromaDB
- **pinecone** - Pinecone vector database
- **pgvector** - PostgreSQL with pgvector
- **mongodb** - MongoDB Atlas Vector Search
- **milvus** - Milvus vector database
- **weaviate** - Weaviate
- **faiss** - Facebook AI Similarity Search
- **redis** - Redis vector search
- **elasticsearch** - Elasticsearch
- **opensearch** - OpenSearch
- **azure_ai_search** - Azure AI Search
- **vertex_ai_vector_search** - Google Vertex AI Vector Search
- **upstash_vector** - Upstash Vector
- **supabase** - Supabase vector
- **baidu** - Baidu vector database
- **langchain** - LangChain vector stores
- **s3_vectors** - Amazon S3 Vectors
- **databricks** - Databricks vector stores

#### Graph Store Providers (4 supported)
- **neo4j** - Neo4j graph database
- **memgraph** - Memgraph
- **neptune** - AWS Neptune Analytics
- **kuzu** - Kuzu Graph database

### Configuration Examples

#### OpenAI Configuration
```python
config = MemoryConfig(
    llm={
        "provider": "openai",
        "config": {
            "model": "gpt-4.1-nano-2025-04-14",
            "temperature": 0.1,
            "max_tokens": 1000
        }
    },
    embedder={
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-small"
        }
    }
)
```

#### Local Setup with Ollama
```python
config = MemoryConfig(
    llm={
        "provider": "ollama",
        "config": {
            "model": "llama3.1:8b",
            "ollama_base_url": "http://localhost:11434"
        }
    },
    embedder={
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text"
        }
    },
    vector_store={
        "provider": "chroma",
        "config": {
            "collection_name": "my_memories",
            "path": "./chroma_db"
        }
    }
)
```

#### Graph Memory with Neo4j
```python
config = MemoryConfig(
    graph_store={
        "provider": "neo4j",
        "config": {
            "url": "bolt://localhost:7687",
            "username": "neo4j",
            "password": "password",
            "database": "neo4j"
        }
    }
)
```

#### Enterprise Setup
```python
config = MemoryConfig(
    llm={
        "provider": "azure_openai",
        "config": {
            "model": "gpt-4",
            "azure_endpoint": "https://your-resource.openai.azure.com/",
            "api_key": "your-api-key",
            "api_version": "2024-02-01"
        }
    },
    vector_store={
        "provider": "pinecone",
        "config": {
            "api_key": "your-pinecone-key",
            "index_name": "mem0-index",
            "dimension": 1536
        }
    }
)
```

#### LLM Providers
- **OpenAI** - GPT-4, GPT-3.5-turbo, and structured outputs
- **Anthropic** - Claude models with advanced reasoning
- **Google AI** - Gemini models for multimodal applications
- **AWS Bedrock** - Enterprise-grade AWS managed models
- **Azure OpenAI** - Microsoft Azure hosted OpenAI models
- **Groq** - High-performance LPU optimized models
- **Together** - Open-source model inference platform
- **Ollama** - Local model deployment for privacy
- **vLLM** - High-performance inference framework
- **LM Studio** - Local model management
- **DeepSeek** - Advanced reasoning models
- **Sarvam** - Indian language models
- **XAI** - xAI models
- **LiteLLM** - Unified LLM interface
- **LangChain** - LangChain LLM integration

#### Vector Store Providers
- **Chroma** - AI-native open-source vector database
- **Qdrant** - High-performance vector similarity search
- **Pinecone** - Managed vector database with serverless options
- **Weaviate** - Open-source vector search engine
- **PGVector** - PostgreSQL extension for vector search
- **Milvus** - Open-source vector database for scale
- **Redis** - Real-time vector storage with Redis Stack
- **Supabase** - Open-source Firebase alternative
- **Upstash Vector** - Serverless vector database
- **Elasticsearch** - Distributed search and analytics
- **OpenSearch** - Open-source search and analytics
- **FAISS** - Facebook AI Similarity Search
- **MongoDB** - Document database with vector search
- **Azure AI Search** - Microsoft's search service
- **Vertex AI Vector Search** - Google Cloud vector search
- **Databricks Vector Search** - Delta Lake integration
- **Baidu** - Baidu vector database
- **LangChain** - LangChain vector store integration

#### Embedding Providers
- **OpenAI** - High-quality text embeddings
- **Azure OpenAI** - Enterprise Azure-hosted embeddings
- **Google AI** - Gemini embedding models
- **AWS Bedrock** - Amazon embedding models
- **Hugging Face** - Open-source embedding models
- **Vertex AI** - Google Cloud enterprise embeddings
- **Ollama** - Local embedding models
- **Together** - Open-source model embeddings
- **LM Studio** - Local model embeddings
- **LangChain** - LangChain embedder integration

## TypeScript/JavaScript SDK

### Client SDK (Hosted Platform)

```typescript
import { MemoryClient } from 'mem0ai';

const client = new MemoryClient({
  apiKey: 'your-api-key',
  host: 'https://api.mem0.ai',  // optional
  organizationId: 'org-id',     // optional
  projectId: 'project-id'       // optional
});

// Core operations
const memories = await client.add([
  { role: 'user', content: 'I love pizza' }
], { user_id: 'user123' });

const results = await client.search('food preferences', { user_id: 'user123' });
const memory = await client.get('memory-id');
const allMemories = await client.getAll({ user_id: 'user123' });

// Management operations
await client.update('memory-id', 'Updated content');
await client.delete('memory-id');
await client.deleteAll({ user_id: 'user123' });

// Batch operations
await client.batchUpdate([{ id: 'mem1', text: 'new text' }]);
await client.batchDelete(['mem1', 'mem2']);

// User management
const users = await client.users();
await client.deleteUsers({ user_ids: ['user1', 'user2'] });

// Webhooks
const webhooks = await client.getWebhooks();
await client.createWebhook({
  url: 'https://your-webhook.com',
  name: 'My Webhook',
  eventTypes: ['memory.created', 'memory.updated']
});
```

### OSS SDK (Self-Hosted)

```typescript
import { Memory } from 'mem0ai/oss';

const memory = new Memory({
  embedder: {
    provider: 'openai',
    config: { apiKey: 'your-key' }
  },
  vectorStore: {
    provider: 'qdrant',
    config: { host: 'localhost', port: 6333 }
  },
  llm: {
    provider: 'openai',
    config: { model: 'gpt-4.1-nano' }
  }
});

// Core operations
const result = await memory.add('I love pizza', { userId: 'user123' });
const searchResult = await memory.search('food preferences', { userId: 'user123' });
const memoryItem = await memory.get('memory-id');
const allMemories = await memory.getAll({ userId: 'user123' });

// Management
await memory.update('memory-id', 'Updated content');
await memory.delete('memory-id');
await memory.deleteAll({ userId: 'user123' });

// History and reset
const history = await memory.history('memory-id');
await memory.reset();
```

### Key TypeScript Types

```typescript
interface Message {
  role: 'user' | 'assistant';
  content: string | MultiModalMessages;
}

interface Memory {
  id: string;
  memory?: string;
  user_id?: string;
  categories?: string[];
  created_at?: Date;
  updated_at?: Date;
  metadata?: any;
  score?: number;
}

interface MemoryOptions {
  user_id?: string;
  agent_id?: string;
  app_id?: string;
  run_id?: string;
  metadata?: Record<string, any>;
  filters?: Record<string, any>;
  api_version?: 'v1' | 'v2';
  infer?: boolean;
  enable_graph?: boolean;
}

interface SearchResult {
  results: Memory[];
  relations?: any[];
}
```

## Advanced Features

### Graph Memory

Graph memory enables relationship tracking between entities mentioned in conversations.

```python
# Enable graph memory
config = MemoryConfig(
    graph_store={
        "provider": "neo4j",
        "config": {
            "url": "bolt://localhost:7687",
            "username": "neo4j",
            "password": "password"
        }
    }
)
memory = Memory(config)

# Add memory with relationship extraction
result = memory.add(
    "John works at OpenAI and is friends with Sarah",
    user_id="user123"
)

# Result includes both memories and relationships
print(result["results"])     # Memory entries
print(result["relations"])   # Graph relationships
```

**Supported Graph Databases:**
- **Neo4j**: Full-featured graph database with Cypher queries
- **Memgraph**: High-performance in-memory graph database
- **Neptune**: AWS managed graph database service
- **Kuzu**: Open-source embedded graph database

### Multimodal Memory

Store and retrieve memories from text, images, and PDFs.

```python
# Text + Image
messages = [
    {"role": "user", "content": "This is my travel setup"},
    {
        "role": "user",
        "content": {
            "type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{base64_image}"}
        }
    }
]
client.add(messages, user_id="user123")

# PDF processing
pdf_message = {
    "role": "user",
    "content": {
        "type": "pdf_url",
        "pdf_url": {"url": "https://example.com/document.pdf"}
    }
}
client.add([pdf_message], user_id="user123")
```

### Procedural Memory

Store step-by-step procedures and workflows.

```python
# Add procedural memory
result = memory.add(
    "To deploy the app: 1. Run tests 2. Build Docker image 3. Push to registry 4. Update k8s manifests",
    user_id="developer123",
    memory_type="procedural_memory"
)

# Search for procedures
procedures = memory.search(
    "How to deploy?",
    user_id="developer123"
)
```

### Custom Prompts

```python
custom_extraction_prompt = """
Extract key facts from the conversation focusing on:
1. Personal preferences
2. Technical skills
3. Project requirements
4. Important dates and deadlines

Conversation: {messages}
"""

config = MemoryConfig(
    custom_fact_extraction_prompt=custom_extraction_prompt
)
memory = Memory(config)
```


## Common Usage Patterns

### 1. Personal AI Assistant

```python
class PersonalAssistant:
    def __init__(self):
        self.memory = Memory()
        self.llm = OpenAI()  # Your LLM client
    
    def chat(self, user_input: str, user_id: str) -> str:
        # Retrieve relevant memories
        memories = self.memory.search(user_input, user_id=user_id, limit=5)
        
        # Build context from memories
        context = "\n".join([f"- {m['memory']}" for m in memories['results']])
        
        # Generate response with context
        prompt = f"""
        Context from previous conversations:
        {context}
        
        User: {user_input}
        Assistant:
        """
        
        response = self.llm.generate(prompt)
        
        # Store the conversation
        self.memory.add([
            {"role": "user", "content": user_input},
            {"role": "assistant", "content": response}
        ], user_id=user_id)
        
        return response
```

### 2. Customer Support Bot

```python
class SupportBot:
    def __init__(self):
        self.memory = MemoryClient(api_key="your-key")
    
    def handle_ticket(self, customer_id: str, issue: str) -> str:
        # Get customer history
        history = self.memory.search(
            issue,
            user_id=customer_id,
            limit=10
        )
        
        # Check for similar past issues
        similar_issues = [m for m in history['results'] if m['score'] > 0.8]
        
        if similar_issues:
            context = f"Previous similar issues: {similar_issues[0]['memory']}"
        else:
            context = "No previous similar issues found."
        
        # Generate response
        response = self.generate_support_response(issue, context)
        
        # Store interaction
        self.memory.add([
            {"role": "user", "content": f"Issue: {issue}"},
            {"role": "assistant", "content": response}
        ], user_id=customer_id, metadata={
            "category": "support_ticket",
            "timestamp": datetime.now().isoformat()
        })
        
        return response
```

### 3. Learning Assistant

```python
class StudyBuddy:
    def __init__(self):
        self.memory = Memory()
    
    def study_session(self, student_id: str, topic: str, content: str):
        # Store study material
        self.memory.add(
            f"Studied {topic}: {content}",
            user_id=student_id,
            metadata={
                "topic": topic,
                "session_date": datetime.now().isoformat(),
                "type": "study_session"
            }
        )
    
    def quiz_student(self, student_id: str, topic: str) -> list:
        # Get relevant study materials
        materials = self.memory.search(
            f"topic:{topic}",
            user_id=student_id,
            filters={"metadata.type": "study_session"}
        )
        
        # Generate quiz questions based on materials
        questions = self.generate_quiz_questions(materials)
        return questions
    
    def track_progress(self, student_id: str) -> dict:
        # Get all study sessions
        sessions = self.memory.get_all(
            user_id=student_id,
            filters={"metadata.type": "study_session"}
        )
        
        # Analyze progress
        topics_studied = {}
        for session in sessions['results']:
            topic = session['metadata']['topic']
            topics_studied[topic] = topics_studied.get(topic, 0) + 1
        
        return {
            "total_sessions": len(sessions['results']),
            "topics_covered": len(topics_studied),
            "topic_frequency": topics_studied
        }
```

### 4. Multi-Agent System

```python
class MultiAgentSystem:
    def __init__(self):
        self.shared_memory = Memory()
        self.agents = {
            "researcher": ResearchAgent(),
            "writer": WriterAgent(),
            "reviewer": ReviewAgent()
        }
    
    def collaborative_task(self, task: str, session_id: str):
        # Research phase
        research_results = self.agents["researcher"].research(task)
        self.shared_memory.add(
            f"Research findings: {research_results}",
            agent_id="researcher",
            run_id=session_id,
            metadata={"phase": "research"}
        )
        
        # Writing phase
        research_context = self.shared_memory.search(
            "research findings",
            run_id=session_id
        )
        draft = self.agents["writer"].write(task, research_context)
        self.shared_memory.add(
            f"Draft content: {draft}",
            agent_id="writer",
            run_id=session_id,
            metadata={"phase": "writing"}
        )
        
        # Review phase
        all_context = self.shared_memory.get_all(run_id=session_id)
        final_output = self.agents["reviewer"].review(draft, all_context)
        
        return final_output
```

### 5. Voice Assistant with Memory

```python
import speech_recognition as sr
from gtts import gTTS
import pygame

class VoiceAssistant:
    def __init__(self):
        self.memory = Memory()
        self.recognizer = sr.Recognizer()
        self.microphone = sr.Microphone()
    
    def listen_and_respond(self, user_id: str):
        # Listen to user
        with self.microphone as source:
            audio = self.recognizer.listen(source)
        
        try:
            # Convert speech to text
            user_input = self.recognizer.recognize_google(audio)
            print(f"User said: {user_input}")
            
            # Get relevant memories
            memories = self.memory.search(user_input, user_id=user_id)
            context = "\n".join([m['memory'] for m in memories['results'][:3]])
            
            # Generate response
            response = self.generate_response(user_input, context)
            
            # Store conversation
            self.memory.add([
                {"role": "user", "content": user_input},
                {"role": "assistant", "content": response}
            ], user_id=user_id)
            
            # Convert response to speech
            tts = gTTS(text=response, lang='en')
            tts.save("response.mp3")
            
            # Play response
            pygame.mixer.init()
            pygame.mixer.music.load("response.mp3")
            pygame.mixer.music.play()
            
            return response
            
        except sr.UnknownValueError:
            return "Sorry, I didn't understand that."
```

## Best Practices

### 1. Memory Organization

```python
# Use consistent user/agent/session IDs
user_id = f"user_{user_email.replace('@', '_')}"
agent_id = f"agent_{agent_name}"
run_id = f"session_{datetime.now().strftime('%Y%m%d_%H%M%S')}"

# Add meaningful metadata
metadata = {
    "category": "customer_support",
    "priority": "high",
    "department": "technical",
    "timestamp": datetime.now().isoformat(),
    "source": "chat_widget"
}

# Use descriptive memory content
memory.add(
    "Customer John Smith reported login issues with 2FA on mobile app. Resolved by clearing app cache.",
    user_id=customer_id,
    metadata=metadata
)
```

### 2. Search Optimization

```python
# Use specific search queries
results = memory.search(
    "login issues mobile app",  # Specific keywords
    user_id=customer_id,
    limit=5,  # Reasonable limit
    threshold=0.7  # Filter low-relevance results
)

# Combine multiple searches for comprehensive results
technical_issues = memory.search("technical problems", user_id=user_id)
recent_conversations = memory.get_all(
    user_id=user_id,
    filters={"metadata.timestamp": {"$gte": last_week}},
    limit=10
)
```

### 3. Memory Lifecycle Management

```python
# Regular cleanup of old memories
from datetime import datetime, timedelta

def cleanup_old_memories(memory_client, days_old=90):
    cutoff_date = datetime.now() - timedelta(days=days_old)

    all_memories = memory_client.get_all()
    for mem in all_memories['results']:
        if datetime.fromisoformat(mem['created_at']) < cutoff_date:
            memory_client.delete(mem['id'])

# Archive important memories
def archive_memory(memory_client, memory_id):
    memory = memory_client.get(memory_id)
    memory_client.update(memory_id, metadata={
        **memory.get('metadata', {}),
        'archived': True,
        'archive_date': datetime.now().isoformat()
    })
```

### 4. Error Handling

```python
def safe_memory_operation(memory_client, operation, *args, **kwargs):
    try:
        return operation(*args, **kwargs)
    except Exception as e:
        logger.error(f"Memory operation failed: {e}")
        # Fallback to basic response without memory
        return {"results": [], "message": "Memory temporarily unavailable"}

# Usage
results = safe_memory_operation(
    memory_client,
    memory_client.search,
    query,
    user_id=user_id
)
```

### 5. Performance Optimization

```python
# Batch related messages into a single add() call instead of many small ones
messages = [
    {"role": "user", "content": msg1},
    {"role": "user", "content": msg2},
    {"role": "user", "content": msg3}
]
memory.add(messages, user_id=user_id)

# Cache frequently accessed memories
from functools import lru_cache

@lru_cache(maxsize=100)
def get_user_preferences(user_id: str):
    return memory.search("preferences settings", user_id=user_id, limit=5)
```
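
One caveat with the `lru_cache` approach above: cached entries never expire, so memories added after the first lookup stay invisible. A time-bucketed variant refreshes roughly every `ttl` seconds (a sketch; the search call is stubbed out here so the pattern is self-contained):

```python
import time
from functools import lru_cache

@lru_cache(maxsize=100)
def _prefs_cached(user_id: str, bucket: int):
    # `bucket` changes every ttl seconds, so stale entries fall out of use.
    # Stand-in for: memory.search("preferences settings", user_id=user_id, limit=5)
    return {"results": [], "user_id": user_id}

def get_user_preferences(user_id: str, ttl: int = 300):
    return _prefs_cached(user_id, int(time.time() // ttl))
```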


## Integration Examples

### AutoGen Integration

```python
from cookbooks.helper.mem0_teachability import Mem0Teachability
from mem0 import Memory

# Add memory capability to AutoGen agents
memory = Memory()
teachability = Mem0Teachability(
    verbosity=1,
    reset_db=False,
    recall_threshold=1.5,
    memory_client=memory
)

# Apply to agent
teachability.add_to_agent(your_autogen_agent)
```

### LangChain Integration

```python
from langchain.memory import ConversationBufferMemory
from mem0 import Memory

class Mem0LangChainMemory(ConversationBufferMemory):
    def __init__(self, user_id: str, **kwargs):
        super().__init__(**kwargs)
        self.mem0 = Memory()
        self.user_id = user_id
    
    def save_context(self, inputs, outputs):
        # Save to both LangChain and Mem0
        super().save_context(inputs, outputs)
        
        # Store in Mem0 for long-term memory
        self.mem0.add([
            {"role": "user", "content": str(inputs)},
            {"role": "assistant", "content": str(outputs)}
        ], user_id=self.user_id)
    
    def load_memory_variables(self, inputs):
        # Load from LangChain buffer
        variables = super().load_memory_variables(inputs)
        
        # Enhance with relevant long-term memories
        relevant_memories = self.mem0.search(
            str(inputs),
            user_id=self.user_id,
            limit=3
        )
        
        if relevant_memories['results']:
            long_term_context = "\n".join([
                f"- {m['memory']}" for m in relevant_memories['results']
            ])
            variables['history'] += f"\n\nRelevant past context:\n{long_term_context}"
        
        return variables
```

### Streamlit App

```python
import streamlit as st
from mem0 import Memory

# Initialize memory
if 'memory' not in st.session_state:
    st.session_state.memory = Memory()

# User input
user_id = st.text_input("User ID", value="user123")
user_message = st.text_input("Your message")

if st.button("Send"):
    # Get relevant memories
    memories = st.session_state.memory.search(
        user_message,
        user_id=user_id,
        limit=5
    )
    
    # Display memories
    if memories['results']:
        st.subheader("Relevant Memories:")
        for memory in memories['results']:
            st.write(f"- {memory['memory']} (Score: {memory['score']:.2f})")
    
    # Generate and display response
    response = generate_response(user_message, memories)
    st.write(f"Assistant: {response}")
    
    # Store conversation
    st.session_state.memory.add([
        {"role": "user", "content": user_message},
        {"role": "assistant", "content": response}
    ], user_id=user_id)

# Display all memories
if st.button("Show All Memories"):
    all_memories = st.session_state.memory.get_all(user_id=user_id)
    for memory in all_memories['results']:
        st.write(f"- {memory['memory']}")
```

### FastAPI Backend

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from mem0 import MemoryClient
from typing import List, Optional

app = FastAPI()
memory_client = MemoryClient(api_key="your-api-key")

class ChatMessage(BaseModel):
    role: str
    content: str

class ChatRequest(BaseModel):
    messages: List[ChatMessage]
    user_id: str
    metadata: Optional[dict] = None

class SearchRequest(BaseModel):
    query: str
    user_id: str
    limit: int = 10

@app.post("/chat")
async def chat(request: ChatRequest):
    try:
        # Add messages to memory
        result = memory_client.add(
            [msg.model_dump() for msg in request.messages],  # use msg.dict() on Pydantic v1
            user_id=request.user_id,
            metadata=request.metadata
        )
        return {"status": "success", "result": result}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

@app.post("/search")
async def search_memories(request: SearchRequest):
    try:
        results = memory_client.search(
            request.query,
            user_id=request.user_id,
            limit=request.limit
        )
        return {"results": results}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

@app.get("/memories/{user_id}")
async def get_user_memories(user_id: str, limit: int = 50):
    try:
        memories = memory_client.get_all(user_id=user_id, limit=limit)
        return {"memories": memories}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

@app.delete("/memories/{memory_id}")
async def delete_memory(memory_id: str):
    try:
        result = memory_client.delete(memory_id)
        return {"status": "deleted", "result": result}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))
```

## Troubleshooting

### Common Issues

1. **Memory Not Found**
   ```python
   # Check if memory exists before operations
   memory = memory_client.get(memory_id)
   if not memory:
       print(f"Memory {memory_id} not found")
   ```

2. **Search Returns No Results**
   ```python
   # Lower the similarity threshold
   results = memory.search(
       query,
       user_id=user_id,
       threshold=0.5  # Lower threshold
   )
   
   # Check if memories exist for user
   all_memories = memory.get_all(user_id=user_id)
   if not all_memories['results']:
       print("No memories found for user")
   ```

3. **Configuration Issues**
   ```python
   # Validate configuration
   try:
       memory = Memory(config)
       # Test with a simple operation
       memory.add("Test memory", user_id="test")
       print("Configuration valid")
   except Exception as e:
       print(f"Configuration error: {e}")
   ```

4. **API Rate Limits**
   ```python
   import time
   from functools import wraps
   
   def rate_limit_retry(max_retries=3, delay=1):
       def decorator(func):
           @wraps(func)
           def wrapper(*args, **kwargs):
               for attempt in range(max_retries):
                   try:
                       return func(*args, **kwargs)
                   except Exception as e:
                       if "rate limit" in str(e).lower() and attempt < max_retries - 1:
                           time.sleep(delay * (2 ** attempt))  # Exponential backoff
                           continue
                       raise e
           return wrapper
       return decorator
   
   @rate_limit_retry()
   def safe_memory_add(memory, content, user_id):
       return memory.add(content, user_id=user_id)
   ```

### Performance Tips

1. **Optimize Vector Store Configuration**
   ```python
   # For Qdrant
   config = MemoryConfig(
       vector_store={
           "provider": "qdrant",
           "config": {
               "host": "localhost",
               "port": 6333,
               "collection_name": "memories",
               "embedding_model_dims": 1536,
               "distance": "cosine"
           }
       }
   )
   ```

2. **Batch Processing**
   ```python
   # Process multiple memories efficiently
   import time

   def batch_add_memories(memory_client, conversations, user_id, batch_size=10):
       for i in range(0, len(conversations), batch_size):
           batch = conversations[i:i+batch_size]
           for conv in batch:
               memory_client.add(conv, user_id=user_id)
           time.sleep(0.1)  # Small delay between batches
   ```

3. **Memory Cleanup**
   ```python
   # Regular cleanup to maintain performance
   def cleanup_memories(memory_client, user_id, max_memories=1000):
       all_memories = memory_client.get_all(user_id=user_id)['results']
       if len(all_memories) > max_memories:
           # Keep most recent memories
           sorted_memories = sorted(
               all_memories,
               key=lambda x: x['created_at'],
               reverse=True
           )
           
           # Delete oldest memories
           for memory in sorted_memories[max_memories:]:
               memory_client.delete(memory['id'])
   ```

## Resources

- **Documentation**: https://docs.mem0.ai
- **GitHub Repository**: https://github.com/mem0ai/mem0
- **Discord Community**: https://mem0.dev/DiG
- **Platform**: https://app.mem0.ai
- **Research Paper**: https://mem0.ai/research
- **Examples**: https://github.com/mem0ai/mem0/tree/main/examples

## License

Mem0 is available under the Apache 2.0 License. See the [LICENSE](https://github.com/mem0ai/mem0/blob/main/LICENSE) file for more details.



================================================
FILE: MIGRATION_GUIDE_v1.0.md
================================================
# Migration Guide: Upgrading to mem0 1.0.0

## TL;DR

**What changed?** We simplified the API by removing confusing version parameters. Now everything returns a consistent format: `{"results": [...]}`.

**What you need to do:**
1. Upgrade: `pip install mem0ai==1.0.0`
2. Remove `version` and `output_format` parameters from your code
3. Update response handling to use `result["results"]` instead of treating responses as lists

**Time needed:** ~5-10 minutes for most projects

---

## Quick Migration Guide

### 1. Install the Update

```bash
pip install mem0ai==1.0.0
```

### 2. Update Your Code

**If you're using the Memory API:**

```python
# Before
memory = Memory(config=MemoryConfig(version="v1.1"))
result = memory.add("I like pizza")

# After
memory = Memory()  # That's it - version is automatic now
result = memory.add("I like pizza")
```

**If you're using the Client API:**

```python
# Before
client.add(messages, output_format="v1.1")
client.search(query, version="v2", output_format="v1.1")

# After
client.add(messages)  # Just remove those extra parameters
client.search(query)
```

### 3. Update How You Handle Responses

All responses now use the same format: a dictionary with `"results"` key.

```python
# Before - you might have done this
result = memory.add("I like pizza")
for item in result:  # Treating it as a list
    print(item)

# After - do this instead
result = memory.add("I like pizza")
for item in result["results"]:  # Access the results key
    print(item)

# Graph relations (if you use them)
if "relations" in result:
    for relation in result["relations"]:
        print(relation)
```

---

## Enhanced Message Handling

The platform client (MemoryClient) now supports the same flexible message formats as the OSS version:

```python
from mem0 import MemoryClient

client = MemoryClient(api_key="your-key")

# All three formats now work:

# 1. Single string (automatically converted to user message)
client.add("I like pizza", user_id="alice")

# 2. Single message dictionary
client.add({"role": "user", "content": "I like pizza"}, user_id="alice")

# 3. List of messages (conversation)
client.add([
    {"role": "user", "content": "I like pizza"},
    {"role": "assistant", "content": "I'll remember that!"}
], user_id="alice")
```
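Conceptually, all three shapes collapse into a uniform list of role/content dicts before anything is sent to the API. The sketch below is illustrative only; `normalize_messages` is a hypothetical helper, not part of the mem0 SDK:

```python
def normalize_messages(messages):
    """Illustrative sketch: collapse the three accepted input shapes
    into a uniform list of {"role", "content"} dicts."""
    if isinstance(messages, str):
        # A bare string becomes a single user message
        return [{"role": "user", "content": messages}]
    if isinstance(messages, dict):
        # A single message dict is wrapped in a list
        return [messages]
    # Already a list of messages: pass through unchanged
    return list(messages)
```

This is why all three call styles above behave identically once they reach the server.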

### Async Mode Configuration

The `async_mode` parameter now defaults to `True` but can be configured:

```python
# Default behavior (async_mode=True)
client.add(messages, user_id="alice")

# Explicitly set async mode
client.add(messages, user_id="alice", async_mode=True)

# Disable async mode if needed
client.add(messages, user_id="alice", async_mode=False)
```

**Note:** `async_mode=True` provides better performance for most use cases. Only set it to `False` if you have specific synchronous processing requirements.

---

## That's It!

For most users, that's all you need to know. The changes are:
- ✅ No more `version` or `output_format` parameters
- ✅ Consistent `{"results": [...]}` response format
- ✅ Cleaner, simpler API

---

## Common Issues

**Getting `KeyError: 'results'`?**

Your code is still treating the response as a list. Update it:
```python
# Change this:
for memory in response:

# To this:
for memory in response["results"]:
```
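If you need to support both formats during a staged rollout (for example, while some services are still pinned to 0.x), a small compatibility shim can paper over the difference. This helper is hypothetical and not part of mem0; it simply accepts either shape:

```python
def iter_results(response):
    """Return memory entries from either the legacy list format
    or the 1.0.0 {"results": [...]} dict format."""
    if isinstance(response, dict):
        # New format: entries live under the "results" key
        return response.get("results", [])
    # Legacy format: the response itself is the list
    return response
```

Once every caller is on 1.0.0, delete the shim and access `response["results"]` directly.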

**Getting `TypeError: unexpected keyword argument`?**

You're still passing old parameters. Remove them:
```python
# Change this:
client.add(messages, output_format="v1.1")

# To this:
client.add(messages)
```

**Seeing deprecation warnings?**

Remove any explicit `version="v1.0"` from your config:
```python
# Change this:
memory = Memory(config=MemoryConfig(version="v1.0"))

# To this:
memory = Memory()
```

---

## What's New in 1.0.0

- **Better vector stores:** Fixed OpenSearch and improved reliability across all stores
- **Cleaner API:** One way to do things, no more confusing options
- **Enhanced GCP support:** Better Vertex AI configuration options
- **Flexible message input:** Platform client now accepts strings, dicts, and lists (aligned with OSS)
- **Configurable async_mode:** Now defaults to `True` but users can override if needed

---

## Need Help?

- Check [GitHub Issues](https://github.com/mem0ai/mem0/issues)
- Read the [documentation](https://docs.mem0.ai/)
- Open a new issue if you're stuck

---

## Advanced: Configuration Changes

**If you configured vector stores with version:**

```python
# Before
config = MemoryConfig(
    version="v1.1",
    vector_store=VectorStoreConfig(...)
)

# After
config = MemoryConfig(
    vector_store=VectorStoreConfig(...)
)
```

---

## Testing Your Migration

Quick sanity check:

```python
from mem0 import Memory

memory = Memory()

# Add should return a dict with "results"
result = memory.add("I like pizza", user_id="test")
assert "results" in result

# Search should return a dict with "results"
search = memory.search("food", user_id="test")
assert "results" in search

# Get all should return a dict with "results"
all_memories = memory.get_all(user_id="test")
assert "results" in all_memories

print("✅ Migration successful!")
```


================================================
FILE: Makefile
================================================
.PHONY: all install install_all format sort lint docs build publish clean test test-py-3.9 test-py-3.10 test-py-3.11 test-py-3.12

# Variables
ISORT_OPTIONS = --profile black
PROJECT_NAME := mem0ai

# Default target
all: format sort lint

install:
	hatch env create

install_all:
	pip install ruff==0.6.9 groq together boto3 litellm ollama chromadb weaviate weaviate-client sentence_transformers vertexai \
	            google-generativeai elasticsearch opensearch-py vecs "pinecone<7.0.0" pinecone-text faiss-cpu langchain-community \
							upstash-vector azure-search-documents langchain-memgraph langchain-neo4j langchain-aws rank-bm25 pymochow pymongo psycopg kuzu databricks-sdk valkey

# Format code with ruff
format:
	hatch run format

# Sort imports with isort
sort:
	hatch run isort mem0/

# Lint code with ruff
lint:
	hatch run lint

docs:
	cd docs && mintlify dev

build:
	hatch build

publish:
	hatch publish

clean:
	rm -rf dist

test:
	hatch run test

test-py-3.9:
	hatch run dev_py_3_9:test

test-py-3.10:
	hatch run dev_py_3_10:test

test-py-3.11:
	hatch run dev_py_3_11:test

test-py-3.12:
	hatch run dev_py_3_12:test


================================================
FILE: README.md
================================================
<p align="center">
  <a href="https://github.com/mem0ai/mem0">
    <img src="docs/images/banner-sm.png" width="800px" alt="Mem0 - The Memory Layer for Personalized AI">
  </a>
</p>
<p align="center" style="display: flex; justify-content: center; gap: 20px; align-items: center;">
  <a href="https://trendshift.io/repositories/11194" target="blank">
    <img src="https://trendshift.io/api/badge/repositories/11194" alt="mem0ai%2Fmem0 | Trendshift" width="250" height="55"/>
  </a>
</p>

<p align="center">
  <a href="https://mem0.ai">Learn more</a>
  ·
  <a href="https://mem0.dev/DiG">Join Discord</a>
  ·
  <a href="https://mem0.dev/demo">Demo</a>
  ·
  <a href="https://mem0.dev/openmemory">OpenMemory</a>
</p>

<p align="center">
  <a href="https://mem0.dev/DiG">
    <img src="https://img.shields.io/badge/Discord-%235865F2.svg?&logo=discord&logoColor=white" alt="Mem0 Discord">
  </a>
  <a href="https://pepy.tech/project/mem0ai">
    <img src="https://img.shields.io/pypi/dm/mem0ai" alt="Mem0 PyPI - Downloads">
  </a>
  <a href="https://github.com/mem0ai/mem0">
    <img src="https://img.shields.io/github/commit-activity/m/mem0ai/mem0?style=flat-square" alt="GitHub commit activity">
  </a>
  <a href="https://pypi.org/project/mem0ai" target="blank">
    <img src="https://img.shields.io/pypi/v/mem0ai?color=%2334D058&label=pypi%20package" alt="Package version">
  </a>
  <a href="https://www.npmjs.com/package/mem0ai" target="blank">
    <img src="https://img.shields.io/npm/v/mem0ai" alt="Npm package">
  </a>
  <a href="https://www.ycombinator.com/companies/mem0">
    <img src="https://img.shields.io/badge/Y%20Combinator-S24-orange?style=flat-square" alt="Y Combinator S24">
  </a>
</p>

<p align="center">
  <a href="https://mem0.ai/research"><strong>📄 Building Production-Ready AI Agents with Scalable Long-Term Memory →</strong></a>
</p>
<p align="center">
  <strong>⚡ +26% Accuracy vs. OpenAI Memory • 🚀 91% Faster • 💰 90% Fewer Tokens</strong>
</p>

> **🎉 mem0ai v1.0.0 is now available!** This major release includes API modernization, improved vector store support, and enhanced GCP integration. [See migration guide →](MIGRATION_GUIDE_v1.0.md)

## 🔥 Research Highlights
- **+26% Accuracy** over OpenAI Memory on the LOCOMO benchmark
- **91% Faster Responses** than full-context, ensuring low-latency at scale
- **90% Lower Token Usage** than full-context, cutting costs without compromise
- [Read the full paper](https://mem0.ai/research)

# Introduction

[Mem0](https://mem0.ai) ("mem-zero") enhances AI assistants and agents with an intelligent memory layer, enabling personalized AI interactions. It remembers user preferences, adapts to individual needs, and continuously learns over time—ideal for customer support chatbots, AI assistants, and autonomous systems.

### Key Features & Use Cases

**Core Capabilities:**
- **Multi-Level Memory**: Seamlessly retains User, Session, and Agent state with adaptive personalization
- **Developer-Friendly**: Intuitive API, cross-platform SDKs, and a fully managed service option

**Applications:**
- **AI Assistants**: Consistent, context-rich conversations
- **Customer Support**: Recall past tickets and user history for tailored help
- **Healthcare**: Track patient preferences and history for personalized care
- **Productivity & Gaming**: Adaptive workflows and environments based on user behavior

## 🚀 Quickstart Guide <a name="quickstart"></a>

Choose between our hosted platform and the self-hosted package:

### Hosted Platform

Get up and running in minutes with automatic updates, analytics, and enterprise security.

1. Sign up on [Mem0 Platform](https://app.mem0.ai)
2. Embed the memory layer via SDK or API keys

### Self-Hosted (Open Source)

Install the SDK via pip:

```bash
pip install mem0ai
```

Or install the SDK via npm:
```bash
npm install mem0ai
```

### Basic Usage

Mem0 requires an LLM to function, with OpenAI's `gpt-4.1-nano-2025-04-14` as the default. It supports a variety of other LLMs as well; for details, refer to our [Supported LLMs documentation](https://docs.mem0.ai/components/llms/overview).

The first step is to instantiate the memory:

```python
from openai import OpenAI
from mem0 import Memory

openai_client = OpenAI()
memory = Memory()

def chat_with_memories(message: str, user_id: str = "default_user") -> str:
    # Retrieve relevant memories
    relevant_memories = memory.search(query=message, user_id=user_id, limit=3)
    memories_str = "\n".join(f"- {entry['memory']}" for entry in relevant_memories["results"])

    # Generate Assistant response
    system_prompt = f"You are a helpful AI. Answer the question based on query and memories.\nUser Memories:\n{memories_str}"
    messages = [{"role": "system", "content": system_prompt}, {"role": "user", "content": message}]
    response = openai_client.chat.completions.create(model="gpt-4.1-nano-2025-04-14", messages=messages)
    assistant_response = response.choices[0].message.content

    # Create new memories from the conversation
    messages.append({"role": "assistant", "content": assistant_response})
    memory.add(messages, user_id=user_id)

    return assistant_response

def main():
    print("Chat with AI (type 'exit' to quit)")
    while True:
        user_input = input("You: ").strip()
        if user_input.lower() == 'exit':
            print("Goodbye!")
            break
        print(f"AI: {chat_with_memories(user_input)}")

if __name__ == "__main__":
    main()
```

For detailed integration steps, see the [Quickstart](https://docs.mem0.ai/quickstart) and [API Reference](https://docs.mem0.ai/api-reference).

## 🔗 Integrations & Demos

- **ChatGPT with Memory**: Personalized chat powered by Mem0 ([Live Demo](https://mem0.dev/demo))
- **Browser Extension**: Store memories across ChatGPT, Perplexity, and Claude ([Chrome Extension](https://chromewebstore.google.com/detail/onihkkbipkfeijkadecaafbgagkhglop?utm_source=item-share-cb))
- **Langgraph Support**: Build a customer bot with Langgraph + Mem0 ([Guide](https://docs.mem0.ai/integrations/langgraph))
- **CrewAI Integration**: Tailor CrewAI outputs with Mem0 ([Example](https://docs.mem0.ai/integrations/crewai))

## 📚 Documentation & Support

- Full docs: https://docs.mem0.ai
- Community: [Discord](https://mem0.dev/DiG) · [Twitter](https://x.com/mem0ai)
- Contact: founders@mem0.ai

## Citation

We now have a paper you can cite:

```bibtex
@article{mem0,
  title={Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory},
  author={Chhikara, Prateek and Khant, Dev and Aryan, Saket and Singh, Taranjeet and Yadav, Deshraj},
  journal={arXiv preprint arXiv:2504.19413},
  year={2025}
}
```

## ⚖️ License

Apache 2.0 — see the [LICENSE](https://github.com/mem0ai/mem0/blob/main/LICENSE) file for details.

================================================
FILE: cookbooks/customer-support-chatbot.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "from typing import List, Dict\n",
    "from mem0 import Memory\n",
    "from datetime import datetime\n",
    "import anthropic\n",
    "\n",
    "# Set up environment variables\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"your_openai_api_key\"  # needed for embedding model\n",
    "os.environ[\"ANTHROPIC_API_KEY\"] = \"your_anthropic_api_key\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "class SupportChatbot:\n",
    "    def __init__(self):\n",
    "        # Initialize Mem0 with Anthropic's Claude\n",
    "        self.config = {\n",
    "            \"llm\": {\n",
    "                \"provider\": \"anthropic\",\n",
    "                \"config\": {\n",
    "                    \"model\": \"claude-3-5-sonnet-latest\",\n",
    "                    \"temperature\": 0.1,\n",
    "                    \"max_tokens\": 2000,\n",
    "                },\n",
    "            }\n",
    "        }\n",
    "        self.client = anthropic.Client(api_key=os.environ[\"ANTHROPIC_API_KEY\"])\n",
    "        self.memory = Memory.from_config(self.config)\n",
    "\n",
    "        # Define support context\n",
    "        self.system_context = \"\"\"\n",
    "        You are a helpful customer support agent. Use the following guidelines:\n",
    "        - Be polite and professional\n",
    "        - Show empathy for customer issues\n",
    "        - Reference past interactions when relevant\n",
    "        - Maintain consistent information across conversations\n",
    "        - If you're unsure about something, ask for clarification\n",
    "        - Keep track of open issues and follow-ups\n",
    "        \"\"\"\n",
    "\n",
    "    def store_customer_interaction(self, user_id: str, message: str, response: str, metadata: Dict = None):\n",
    "        \"\"\"Store customer interaction in memory.\"\"\"\n",
    "        if metadata is None:\n",
    "            metadata = {}\n",
    "\n",
    "        # Add timestamp to metadata\n",
    "        metadata[\"timestamp\"] = datetime.now().isoformat()\n",
    "\n",
    "        # Format conversation for storage\n",
    "        conversation = [{\"role\": \"user\", \"content\": message}, {\"role\": \"assistant\", \"content\": response}]\n",
    "\n",
    "        # Store in Mem0\n",
    "        self.memory.add(conversation, user_id=user_id, metadata=metadata)\n",
    "\n",
    "    def get_relevant_history(self, user_id: str, query: str) -> List[Dict]:\n",
    "        \"\"\"Retrieve relevant past interactions.\"\"\"\n",
    "        return self.memory.search(\n",
    "            query=query,\n",
    "            user_id=user_id,\n",
    "            limit=5,  # Adjust based on needs\n",
    "        )\n",
    "\n",
    "    def handle_customer_query(self, user_id: str, query: str) -> str:\n",
    "        \"\"\"Process customer query with context from past interactions.\"\"\"\n",
    "\n",
    "        # Get relevant past interactions\n",
    "        relevant_history = self.get_relevant_history(user_id, query)\n",
    "\n",
    "        # Build context from relevant history\n",
    "        context = \"Previous relevant interactions:\\n\"\n",
    "        for memory in relevant_history:\n",
    "            context += f\"Customer: {memory['memory']}\\n\"\n",
    "            context += f\"Support: {memory['memory']}\\n\"\n",
    "            context += \"---\\n\"\n",
    "\n",
    "        # Prepare prompt with context and current query\n",
    "        prompt = f\"\"\"\n",
    "        {self.system_context}\n",
    "\n",
    "        {context}\n",
    "\n",
    "        Current customer query: {query}\n",
    "\n",
    "        Provide a helpful response that takes into account any relevant past interactions.\n",
    "        \"\"\"\n",
    "\n",
    "        # Generate response using Claude\n",
    "        response = self.client.messages.create(\n",
    "            model=\"claude-3-5-sonnet-latest\",\n",
    "            messages=[{\"role\": \"user\", \"content\": prompt}],\n",
    "            max_tokens=2000,\n",
    "            temperature=0.1,\n",
    "        )\n",
    "\n",
    "        # Store interaction\n",
    "        self.store_customer_interaction(\n",
    "            user_id=user_id, message=query, response=response, metadata={\"type\": \"support_query\"}\n",
    "        )\n",
    "\n",
    "        return response.content[0].text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Welcome to Customer Support! Type 'exit' to end the conversation.\n",
      "Customer: Hi, I'm having trouble connecting my new smartwatch to the mobile app. It keeps showing a connection error.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/var/folders/5x/9kmqjfm947g5yh44m7fjk75r0000gn/T/ipykernel_99777/1076713094.py:55: DeprecationWarning: The current get_all API output format is deprecated. To use the latest format, set `api_version='v1.1'`. The current format will be removed in mem0ai 1.1.0 and later versions.\n",
      "  return self.memory.search(\n",
      "/var/folders/5x/9kmqjfm947g5yh44m7fjk75r0000gn/T/ipykernel_99777/1076713094.py:47: DeprecationWarning: The current add API output format is deprecated. To use the latest format, set `api_version='v1.1'`. The current format will be removed in mem0ai 1.1.0 and later versions.\n",
      "  self.memory.add(\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Support: Hello! Thank you for reaching out about the connection issue with your smartwatch. I understand how frustrating it can be when a new device won't connect properly. I'll be happy to help you resolve this.\n",
      "\n",
      "To better assist you, could you please provide me with:\n",
      "1. The model of your smartwatch\n",
      "2. The type of phone you're using (iOS or Android)\n",
      "3. Whether you've already installed the companion app on your phone\n",
      "4. If you've tried pairing the devices before\n",
      "\n",
      "These details will help me provide you with the most accurate troubleshooting steps. In the meantime, here are some general tips that might help:\n",
      "- Make sure Bluetooth is enabled on your phone\n",
      "- Keep your smartwatch and phone within close range (within 3 feet) during pairing\n",
      "- Ensure both devices have sufficient battery power\n",
      "- Check if your phone's operating system meets the minimum requirements for the smartwatch\n",
      "\n",
      "Please provide the requested information, and I'll guide you through the specific steps to resolve the connection error.\n",
      "\n",
      "Is there anything else you'd like to share about the issue? \n",
      "\n",
      "\n",
      "Customer: The connection issue is still happening even after trying the steps you suggested.\n",
      "Support: I apologize that you're still experiencing connection issues with your smartwatch. I understand how frustrating it must be to have this problem persist even after trying the initial troubleshooting steps. Let's try some additional solutions to resolve this.\n",
      "\n",
      "Before we proceed, could you please confirm:\n",
      "1. Which specific steps you've already attempted?\n",
      "2. Are you seeing any particular error message?\n",
      "3. What model of smartwatch and phone are you using?\n",
      "\n",
      "This information will help me provide more targeted solutions and avoid suggesting steps you've already tried. In the meantime, here are a few advanced troubleshooting steps we can consider:\n",
      "\n",
      "1. Completely resetting the Bluetooth connection\n",
      "2. Checking for any software updates for both the watch and phone\n",
      "3. Testing the connection with a different mobile device to isolate the issue\n",
      "\n",
      "Would you be able to provide those details so I can better assist you? I'll make sure to document this ongoing issue to help track its resolution. \n",
      "\n",
      "\n",
      "Customer: exit\n",
      "Thank you for using our support service. Goodbye!\n"
     ]
    }
   ],
   "source": [
    "chatbot = SupportChatbot()\n",
    "user_id = \"customer_bot\"\n",
    "print(\"Welcome to Customer Support! Type 'exit' to end the conversation.\")\n",
    "\n",
    "while True:\n",
    "    # Get user input\n",
    "    query = input()\n",
    "    print(\"Customer:\", query)\n",
    "\n",
    "    # Check if user wants to exit\n",
    "    if query.lower() == \"exit\":\n",
    "        print(\"Thank you for using our support service. Goodbye!\")\n",
    "        break\n",
    "\n",
    "    # Handle the query and print the response\n",
    "    response = chatbot.handle_customer_query(user_id, query)\n",
    "    print(\"Support:\", response, \"\\n\\n\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}


================================================
FILE: cookbooks/helper/__init__.py
================================================


================================================
FILE: cookbooks/helper/mem0_teachability.py
================================================
# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai
#
# SPDX-License-Identifier: Apache-2.0
#
# Portions derived from  https://github.com/microsoft/autogen are under the MIT License.
# SPDX-License-Identifier: MIT
# forked from autogen.agentchat.contrib.capabilities.teachability.Teachability

from typing import Dict, Optional, Union

from autogen.agentchat.assistant_agent import ConversableAgent
from autogen.agentchat.contrib.capabilities.agent_capability import AgentCapability
from autogen.agentchat.contrib.text_analyzer_agent import TextAnalyzerAgent
from termcolor import colored

from mem0 import Memory


class Mem0Teachability(AgentCapability):
    def __init__(
        self,
        verbosity: Optional[int] = 0,
        reset_db: Optional[bool] = False,
        recall_threshold: Optional[float] = 1.5,
        max_num_retrievals: Optional[int] = 10,
        llm_config: Optional[Union[Dict, bool]] = None,
        agent_id: Optional[str] = None,
        memory_client: Optional[Memory] = None,
    ):
        self.verbosity = verbosity
        self.recall_threshold = recall_threshold
        self.max_num_retrievals = max_num_retrievals
        self.llm_config = llm_config
        self.analyzer = None
        self.teachable_agent = None
        self.agent_id = agent_id
        self.memory = memory_client if memory_client else Memory()

        if reset_db:
            self.memory.reset()

    def add_to_agent(self, agent: ConversableAgent):
        self.teachable_agent = agent
        agent.register_hook(hookable_method="process_last_received_message", hook=self.process_last_received_message)

        if self.llm_config is None:
            self.llm_config = agent.llm_config
        assert self.llm_config, "Teachability requires a valid llm_config."

        self.analyzer = TextAnalyzerAgent(llm_config=self.llm_config)

        agent.update_system_message(
            agent.system_message
            + "\nYou've been given the special ability to remember user teachings from prior conversations."
        )

    def process_last_received_message(self, text: Union[Dict, str]):
        expanded_text = text
        if self.memory.get_all(agent_id=self.agent_id).get("results"):
            expanded_text = self._consider_memo_retrieval(text)
        self._consider_memo_storage(text)
        return expanded_text

    def _consider_memo_storage(self, comment: Union[Dict, str]):
        response = self._analyze(
            comment,
            "Does any part of the TEXT ask the agent to perform a task or solve a problem? Answer with just one word, yes or no.",
        )

        if "yes" in response.lower():
            advice = self._analyze(
                comment,
                "Briefly copy any advice from the TEXT that may be useful for a similar but different task in the future. But if no advice is present, just respond with 'none'.",
            )

            if "none" not in advice.lower():
                task = self._analyze(
                    comment,
                    "Briefly copy just the task from the TEXT, then stop. Don't solve it, and don't include any advice.",
                )

                general_task = self._analyze(
                    task,
                    "Summarize very briefly, in general terms, the type of task described in the TEXT. Leave out details that might not appear in a similar problem.",
                )

                if self.verbosity >= 1:
                    print(colored("\nREMEMBER THIS TASK-ADVICE PAIR", "light_yellow"))
                self.memory.add(
                    [{"role": "user", "content": f"Task: {general_task}\nAdvice: {advice}"}], agent_id=self.agent_id
                )

        response = self._analyze(
            comment,
            "Does the TEXT contain information that could be committed to memory? Answer with just one word, yes or no.",
        )

        if "yes" in response.lower():
            question = self._analyze(
                comment,
                "Imagine that the user forgot this information in the TEXT. How would they ask you for this information? Include no other text in your response.",
            )

            answer = self._analyze(
                comment, "Copy the information from the TEXT that should be committed to memory. Add no explanation."
            )

            if self.verbosity >= 1:
                print(colored("\nREMEMBER THIS QUESTION-ANSWER PAIR", "light_yellow"))
            self.memory.add(
                [{"role": "user", "content": f"Question: {question}\nAnswer: {answer}"}], agent_id=self.agent_id
            )

    def _consider_memo_retrieval(self, comment: Union[Dict, str]):
        if self.verbosity >= 1:
            print(colored("\nLOOK FOR RELEVANT MEMOS, AS QUESTION-ANSWER PAIRS", "light_yellow"))
        memo_list = self._retrieve_relevant_memos(comment)

        response = self._analyze(
            comment,
            "Does any part of the TEXT ask the agent to perform a task or solve a problem? Answer with just one word, yes or no.",
        )

        if "yes" in response.lower():
            if self.verbosity >= 1:
                print(colored("\nLOOK FOR RELEVANT MEMOS, AS TASK-ADVICE PAIRS", "light_yellow"))
            task = self._analyze(
                comment, "Copy just the task from the TEXT, then stop. Don't solve it, and don't include any advice."
            )

            general_task = self._analyze(
                task,
                "Summarize very briefly, in general terms, the type of task described in the TEXT. Leave out details that might not appear in a similar problem.",
            )

            memo_list.extend(self._retrieve_relevant_memos(general_task))

        memo_list = list(set(memo_list))
        return comment + self._concatenate_memo_texts(memo_list)

    def _retrieve_relevant_memos(self, input_text: str) -> list:
        search_results = self.memory.search(input_text, agent_id=self.agent_id, limit=self.max_num_retrievals)
        memo_list = [result["memory"] for result in search_results if result["score"] <= self.recall_threshold]

        if self.verbosity >= 1 and not memo_list:
            print(colored("\nTHE CLOSEST MEMO IS BEYOND THE THRESHOLD:", "light_yellow"))
            if search_results["results"]:
                print(search_results["results"][0])
            print()

        return memo_list

    def _concatenate_memo_texts(self, memo_list: list) -> str:
        memo_texts = ""
        if memo_list:
            info = "\n# Memories that might help\n"
            for memo in memo_list:
                info += f"- {memo}\n"
            if self.verbosity >= 1:
                print(colored(f"\nMEMOS APPENDED TO LAST MESSAGE...\n{info}\n", "light_yellow"))
            memo_texts += "\n" + info
        return memo_texts

    def _analyze(self, text_to_analyze: Union[Dict, str], analysis_instructions: Union[Dict, str]):
        self.analyzer.reset()
        self.teachable_agent.send(
            recipient=self.analyzer, message=text_to_analyze, request_reply=False, silent=(self.verbosity < 2)
        )
        self.teachable_agent.send(
            recipient=self.analyzer, message=analysis_instructions, request_reply=True, silent=(self.verbosity < 2)
        )
        return self.teachable_agent.last_message(self.analyzer)["content"]


================================================
FILE: cookbooks/mem0-autogen.ipynb
================================================
{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1e8a980a2e0b9a85",
   "metadata": {},
   "outputs": [],
   "source": [
    "%pip install --upgrade pip\n",
    "%pip install mem0ai pyautogen flaml"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "d437544fe259dd1b",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-25T20:29:52.443024Z",
     "start_time": "2024-09-25T20:29:52.440046Z"
    }
   },
   "outputs": [],
   "source": [
    "# Set up ENV Vars\n",
    "import os\n",
    "\n",
    "os.environ[\"OPENAI_API_KEY\"] = \"sk-xxx\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "initial_id",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-25T20:30:03.914245Z",
     "start_time": "2024-09-25T20:29:53.236601Z"
    },
    "collapsed": true
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:autogen.agentchat.contrib.gpt_assistant_agent:OpenAI client config of GPTAssistantAgent(assistant) - model: gpt-4o\n",
      "WARNING:autogen.agentchat.contrib.gpt_assistant_agent:Matching assistant found, using the first matching assistant: {'id': 'asst_PpOJ2mJC8QeysR54I6DEdi4E', 'created_at': 1726444855, 'description': None, 'instructions': 'You are a helpful AI assistant.\\nSolve tasks using your coding and language skills.\\nIn the following cases, suggest python code (in a python coding block) or shell script (in a sh coding block) for the user to execute.\\n    1. When you need to collect info, use the code to output the info you need, for example, browse or search the web, download/read a file, print the content of a webpage or a file, get the current date/time, check the operating system. After sufficient info is printed and the task is ready to be solved based on your language skill, you can solve the task by yourself.\\n    2. When you need to perform some task with code, use the code to perform the task and output the result. Finish the task smartly.\\nSolve the task step by step if you need to. If a plan is not provided, explain your plan first. Be clear which step uses code, and which step uses your language skill.\\nWhen using code, you must indicate the script type in the code block. The user cannot provide any other feedback or perform any other action beyond executing the code you suggest. The user can\\'t modify your code. So do not suggest incomplete code which requires users to modify. Don\\'t use a code block if it\\'s not intended to be executed by the user.\\nIf you want the user to save the code in a file before executing it, put # filename: <filename> inside the code block as the first line. Don\\'t include multiple code blocks in one response. Do not ask users to copy and paste the result. Instead, use \\'print\\' function for the output when relevant. Check the execution result returned by the user.\\nIf the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. 
If the error can\\'t be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try.\\nWhen you find an answer, verify the answer carefully. Include verifiable evidence in your response if possible.\\nReply \"TERMINATE\" in the end when everything is done.\\n    ', 'metadata': {}, 'model': 'gpt-4o', 'name': 'assistant', 'object': 'assistant', 'tools': [], 'response_format': 'auto', 'temperature': 1.0, 'tool_resources': ToolResources(code_interpreter=None, file_search=None), 'top_p': 1.0}\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[33muser_proxy\u001b[0m (to assistant):\n",
      "\n",
      "Write a Python function that reverses a string.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33massistant\u001b[0m (to user_proxy):\n",
      "\n",
      "Sure! Here is the Python code for a function that takes a string as input and returns the reversed string.\n",
      "\n",
      "```python\n",
      "def reverse_string(s):\n",
      "    return s[::-1]\n",
      "\n",
      "# Example usage\n",
      "if __name__ == \"__main__\":\n",
      "    example_string = \"Hello, world!\"\n",
      "    reversed_string = reverse_string(example_string)\n",
      "    print(f\"Original string: {example_string}\")\n",
      "    print(f\"Reversed string: {reversed_string}\")\n",
      "```\n",
      "\n",
      "When you run this code, it will print the original string and the reversed string. You can replace `example_string` with any string you want to reverse.\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[31m\n",
      ">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
      "\u001b[33muser_proxy\u001b[0m (to assistant):\n",
      "\n",
      "exitcode: 0 (execution succeeded)\n",
      "Code output: \n",
      "Original string: Hello, world!\n",
      "Reversed string: !dlrow ,olleH\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33massistant\u001b[0m (to user_proxy):\n",
      "\n",
      "Great, the function worked as expected! The original string \"Hello, world!\" was correctly reversed to \"!dlrow ,olleH\".\n",
      "\n",
      "If you have any other tasks or need further assistance, let me know! \n",
      "\n",
      "TERMINATE\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "ChatResult(chat_id=None, chat_history=[{'content': 'Write a Python function that reverses a string.', 'role': 'assistant', 'name': 'user_proxy'}, {'content': 'Sure! Here is the Python code for a function that takes a string as input and returns the reversed string.\\n\\n```python\\ndef reverse_string(s):\\n    return s[::-1]\\n\\n# Example usage\\nif __name__ == \"__main__\":\\n    example_string = \"Hello, world!\"\\n    reversed_string = reverse_string(example_string)\\n    print(f\"Original string: {example_string}\")\\n    print(f\"Reversed string: {reversed_string}\")\\n```\\n\\nWhen you run this code, it will print the original string and the reversed string. You can replace `example_string` with any string you want to reverse.\\n', 'role': 'user', 'name': 'assistant'}, {'content': 'exitcode: 0 (execution succeeded)\\nCode output: \\nOriginal string: Hello, world!\\nReversed string: !dlrow ,olleH\\n', 'role': 'assistant', 'name': 'user_proxy'}, {'content': 'Great, the function worked as expected! The original string \"Hello, world!\" was correctly reversed to \"!dlrow ,olleH\".\\n\\nIf you have any other tasks or need further assistance, let me know! \\n\\nTERMINATE\\n', 'role': 'user', 'name': 'assistant'}], summary='Great, the function worked as expected! The original string \"Hello, world!\" was correctly reversed to \"!dlrow ,olleH\".\\n\\nIf you have any other tasks or need further assistance, let me know! \\n\\n\\n', cost={'usage_including_cached_inference': {'total_cost': 0}, 'usage_excluding_cached_inference': {'total_cost': 0}}, human_input=[])"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# AutoGen GPTAssistantAgent Capabilities:\n",
    "# - Generates code based on user requirements and preferences.\n",
    "# - Analyzes, refactors, and debugs existing code efficiently.\n",
    "# - Maintains consistent coding standards across multiple sessions.\n",
    "# - Remembers project-specific conventions and architectural decisions.\n",
    "# - Learns from past interactions to improve future code suggestions.\n",
    "# - Reduces repetitive explanations of coding preferences, enhancing productivity.\n",
    "# - Adapts to team-specific practices for a more cohesive development process.\n",
    "\n",
    "import logging\n",
    "import os\n",
    "\n",
    "from autogen import AssistantAgent, UserProxyAgent\n",
    "from autogen.agentchat.contrib.gpt_assistant_agent import GPTAssistantAgent\n",
    "\n",
    "logger = logging.getLogger(__name__)\n",
    "logger.setLevel(logging.WARNING)\n",
    "\n",
    "assistant_id = os.environ.get(\"ASSISTANT_ID\", None)\n",
    "\n",
    "# LLM Configuration\n",
    "CACHE_SEED = 42  # choose your poison\n",
    "llm_config = {\n",
    "    \"config_list\": [{\"model\": \"gpt-4o\", \"api_key\": os.environ[\"OPENAI_API_KEY\"]}],\n",
    "    \"cache_seed\": CACHE_SEED,\n",
    "    \"timeout\": 120,\n",
    "    \"temperature\": 0.0,\n",
    "}\n",
    "\n",
    "assistant_config = {\"assistant_id\": assistant_id}\n",
    "\n",
    "gpt_assistant = GPTAssistantAgent(\n",
    "    name=\"assistant\",\n",
    "    instructions=AssistantAgent.DEFAULT_SYSTEM_MESSAGE,\n",
    "    llm_config=llm_config,\n",
    "    assistant_config=assistant_config,\n",
    ")\n",
    "\n",
    "user_proxy = UserProxyAgent(\n",
    "    name=\"user_proxy\",\n",
    "    code_execution_config={\n",
    "        \"work_dir\": \"coding\",\n",
    "        \"use_docker\": False,\n",
    "    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
    "    is_termination_msg=lambda msg: \"TERMINATE\" in msg[\"content\"],\n",
    "    human_input_mode=\"NEVER\",\n",
    "    max_consecutive_auto_reply=1,\n",
    "    llm_config=llm_config,\n",
    ")\n",
    "\n",
    "user_query = \"Write a Python function that reverses a string.\"\n",
    "# Initiate Chat w/o Memory\n",
    "user_proxy.initiate_chat(gpt_assistant, message=user_query)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "c2fe6fd02324be37",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-25T20:31:40.536369Z",
     "start_time": "2024-09-25T20:31:31.078911Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/var/folders/z6/3w4ng1lj3mn4vmhplgc4y0580000gn/T/ipykernel_77647/3850691550.py:28: DeprecationWarning: The current add API output format is deprecated. To use the latest format, set `api_version='v1.1'`. The current format will be removed in mem0ai 1.1.0 and later versions.\n",
      "  MEM0_MEMORY_CLIENT.add(MEMORY_DATA, user_id=USER_ID)\n",
      "/var/folders/z6/3w4ng1lj3mn4vmhplgc4y0580000gn/T/ipykernel_77647/3850691550.py:29: DeprecationWarning: The current add API output format is deprecated. To use the latest format, set `api_version='v1.1'`. The current format will be removed in mem0ai 1.1.0 and later versions.\n",
      "  MEM0_MEMORY_CLIENT.add(MEMORY_DATA, agent_id=AGENT_ID)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "{'message': 'ok'}"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Benefits of Preference Memory in AutoGen Agents:\n",
    "# - Personalization: Tailors responses to individual user or team preferences.\n",
    "# - Consistency: Maintains uniform coding style and standards across sessions.\n",
    "# - Efficiency: Reduces need to restate preferences, saving time in each interaction.\n",
    "# - Adaptability: Evolves understanding of user needs over multiple conversations.\n",
    "# - Context Retention: Keeps project-specific details accessible without repetition.\n",
    "# - Improved Recommendations: Suggests solutions aligned with past preferences.\n",
    "# - Long-term Learning: Accumulates knowledge to enhance future interactions.\n",
    "# - Reduced Cognitive Load: Users don't need to remember and restate all preferences.\n",
    "\n",
    "\n",
    "# Setting memory (preference) for the user\n",
    "from mem0 import Memory\n",
    "\n",
    "# Initialize Mem0\n",
    "MEM0_MEMORY_CLIENT = Memory()\n",
    "\n",
    "USER_ID = \"chicory.ai.user\"\n",
    "MEMORY_DATA = \"\"\"\n",
    "* Preference for readability: The user prefers code to be explicitly written with clear variable names.\n",
    "* Preference for comments: The user prefers comments explaining each step.\n",
    "* Naming convention: The user prefers camelCase for variable names.\n",
    "* Docstrings: The user prefers functions to have a descriptive docstring.\n",
    "\"\"\"\n",
    "AGENT_ID = \"chicory.ai\"\n",
    "\n",
    "# Add preference data to memory\n",
    "MEM0_MEMORY_CLIENT.add(MEMORY_DATA, user_id=USER_ID)\n",
    "MEM0_MEMORY_CLIENT.add(MEMORY_DATA, agent_id=AGENT_ID)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fb6d6a8f36aedfd6",
   "metadata": {},
   "source": [
    "## Option 1: \n",
    "Using Direct Prompt Injection:\n",
    "`user memory example`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "29be484c69093371",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-25T20:31:52.411604Z",
     "start_time": "2024-09-25T20:31:40.611497Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/var/folders/z6/3w4ng1lj3mn4vmhplgc4y0580000gn/T/ipykernel_77647/703598432.py:2: DeprecationWarning: The current get_all API output format is deprecated. To use the latest format, set `api_version='v1.1'`. The current format will be removed in mem0ai 1.1.0 and later versions.\n",
      "  relevant_memories = MEM0_MEMORY_CLIENT.search(user_query, user_id=USER_ID, limit=3)\n",
      "INFO:autogen.agentchat.contrib.gpt_assistant_agent:Clearing thread thread_BOgA5TdAOrYqSHLVpxc5ZifB\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Relevant memories:\n",
      "Prefers functions to have a descriptive docstring\n",
      "Prefers camelCase for variable names\n",
      "Prefers code to be explicitly written with clear variable names\n",
      "\u001b[33muser_proxy\u001b[0m (to assistant):\n",
      "\n",
      "Write a Python function that reverses a string.\n",
      " Coding Preferences: \n",
      "Prefers functions to have a descriptive docstring\n",
      "Prefers camelCase for variable names\n",
      "Prefers code to be explicitly written with clear variable names\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33massistant\u001b[0m (to user_proxy):\n",
      "\n",
      "Sure, I will write a Python function that reverses a given string with clear and descriptive variable names, along with a descriptive docstring.\n",
      "\n",
      "```python\n",
      "def reverseString(inputString):\n",
      "    \"\"\"\n",
      "    Reverses the given string.\n",
      "\n",
      "    Parameters:\n",
      "    inputString (str): The string to be reversed.\n",
      "\n",
      "    Returns:\n",
      "    str: The reversed string.\n",
      "    \"\"\"\n",
      "    # Initialize an empty string to store the reversed version\n",
      "    reversedString = \"\"\n",
      "\n",
      "    # Iterate through each character in the input string in reverse order\n",
      "    for char in inputString[::-1]:\n",
      "        reversedString += char\n",
      "\n",
      "    return reversedString\n",
      "\n",
      "# Example usage\n",
      "if __name__ == \"__main__\":\n",
      "    testString = \"Hello World!\"\n",
      "    print(\"Original String: \" + testString)\n",
      "    print(\"Reversed String: \" + reverseString(testString))\n",
      "```\n",
      "\n",
      "Please save this code in a Python file and execute it. It will print both the original and reversed strings. Let me know if you need further assistance or modifications.\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[31m\n",
      ">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
      "\u001b[33muser_proxy\u001b[0m (to assistant):\n",
      "\n",
      "exitcode: 0 (execution succeeded)\n",
      "Code output: \n",
      "Original String: Hello World!\n",
      "Reversed String: !dlroW olleH\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33massistant\u001b[0m (to user_proxy):\n",
      "\n",
      "Great! It looks like the code executed successfully and produced the correct output, reversing the string \"Hello World!\" to \"!dlroW olleH\".\n",
      "\n",
      "To summarize, the function `reverseString` works as expected:\n",
      "\n",
      "- It takes an input string and initializes an empty string called `reversedString`.\n",
      "- It iterates through the given string in reverse order and appends each character to `reversedString`.\n",
      "- Finally, it returns the reversed string.\n",
      "\n",
      "Since everything is working correctly and as intended, we can conclude that the task is successfully completed.\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n"
     ]
    }
   ],
   "source": [
    "# Retrieve the memory\n",
    "relevant_memories = MEM0_MEMORY_CLIENT.search(user_query, user_id=USER_ID, limit=3)\n",
    "relevant_memories_text = \"\\n\".join(mem[\"memory\"] for mem in relevant_memories)\n",
    "print(\"Relevant memories:\")\n",
    "print(relevant_memories_text)\n",
    "\n",
    "prompt = f\"{user_query}\\n Coding Preferences: \\n{relevant_memories_text}\"\n",
    "browse_result = user_proxy.initiate_chat(gpt_assistant, message=prompt)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fc0ae72d0ef7f6de",
   "metadata": {},
   "source": [
    "## Option 2:\n",
    "Using UserProxyAgent: \n",
    "`agent memory example`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "bfd9342cf2096ca5",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-25T20:31:52.421965Z",
     "start_time": "2024-09-25T20:31:52.418762Z"
    }
   },
   "outputs": [],
   "source": [
    "# UserProxyAgent in AutoGen:\n",
    "# - Acts as intermediary between humans and AI agents in the AutoGen framework.\n",
    "# - Simulates user behavior and interactions within multi-agent conversations.\n",
    "# - Can be configured to execute code blocks received in messages.\n",
    "# - Supports flexible human input modes (e.g., ALWAYS, TERMINATE, NEVER).\n",
    "# - Customizable for specific interaction patterns and behaviors.\n",
    "# - Can be integrated with memory systems like mem0 for enhanced functionality.\n",
    "# - Capable of fetching relevant memories before processing a query.\n",
    "# - Enables more context-aware and personalized agent responses.\n",
    "# - Bridges the gap between human input and AI processing in complex workflows.\n",
    "\n",
    "\n",
    "class Mem0ProxyCoderAgent(UserProxyAgent):\n",
    "    def __init__(self, *args, **kwargs):\n",
    "        super().__init__(*args, **kwargs)\n",
    "        self.memory = MEM0_MEMORY_CLIENT\n",
    "        self.agent_id = kwargs.get(\"name\")\n",
    "\n",
    "    def initiate_chat(self, assistant, message):\n",
    "        # Retrieve memory for the agent\n",
    "        agent_memories = self.memory.search(message, agent_id=self.agent_id, limit=3)\n",
    "        agent_memories_txt = \"\\n\".join(mem[\"memory\"] for mem in agent_memories)\n",
    "        prompt = f\"{message}\\n Coding Preferences: \\n{str(agent_memories_txt)}\"\n",
    "        response = super().initiate_chat(assistant, message=prompt)\n",
    "        # Add new memory after processing the message\n",
    "        response_dist = response.__dict__ if not isinstance(response, dict) else response\n",
    "        MEMORY_DATA = [{\"role\": \"user\", \"content\": message}, {\"role\": \"assistant\", \"content\": response_dist}]\n",
    "        self.memory.add(MEMORY_DATA, agent_id=self.agent_id)\n",
    "        return response"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "6d2a757d1cf65881",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-25T20:32:20.269222Z",
     "start_time": "2024-09-25T20:32:07.485051Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[33mchicory.ai\u001b[0m (to assistant):\n",
      "\n",
      "Write a Python function that reverses a string.\n",
      " Coding Preferences: \n",
      "Prefers functions to have a descriptive docstring\n",
      "Prefers camelCase for variable names\n",
      "Prefers code to be explicitly written with clear variable names\n",
      "\n",
      "--------------------------------------------------------------------------------\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/var/folders/z6/3w4ng1lj3mn4vmhplgc4y0580000gn/T/ipykernel_77647/1070513538.py:13: DeprecationWarning: The current get_all API output format is deprecated. To use the latest format, set `api_version='v1.1'`. The current format will be removed in mem0ai 1.1.0 and later versions.\n",
      "  agent_memories = self.memory.search(message, agent_id=self.agent_id, limit=3)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[33massistant\u001b[0m (to chicory.ai):\n",
      "\n",
      "Sure, I'll write a Python function that reverses a string following your coding preferences.\n",
      "\n",
      "```python\n",
      "def reverseString(inputString):\n",
      "    \"\"\"\n",
      "    Reverse the given string.\n",
      "\n",
      "    Parameters:\n",
      "    inputString (str): The string to be reversed.\n",
      "\n",
      "    Returns:\n",
      "    str: The reversed string.\n",
      "    \"\"\"\n",
      "    reversedString = inputString[::-1]\n",
      "    return reversedString\n",
      "\n",
      "# Example usage:\n",
      "inputString = \"hello\"\n",
      "print(reverseString(inputString))  # Output: \"olleh\"\n",
      "```\n",
      "\n",
      "This function `reverseString` takes an `inputString`, reverses it using slicing (`inputString[::-1]`), and returns the reversed string. The docstring provides a clear description of the function's purpose, parameters, and return value. The variable names are explicitly descriptive.\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[31m\n",
      ">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
      "\u001b[33mchicory.ai\u001b[0m (to assistant):\n",
      "\n",
      "exitcode: 0 (execution succeeded)\n",
      "Code output: \n",
      "olleh\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33massistant\u001b[0m (to chicory.ai):\n",
      "\n",
      "Great! The function has successfully reversed the string as expected.\n",
      "\n",
      "If you have any more tasks or need further assistance, feel free to ask.\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/var/folders/z6/3w4ng1lj3mn4vmhplgc4y0580000gn/T/ipykernel_77647/1070513538.py:20: DeprecationWarning: The current add API output format is deprecated. To use the latest format, set `api_version='v1.1'`. The current format will be removed in mem0ai 1.1.0 and later versions.\n",
      "  self.memory.add(MEMORY_DATA, agent_id=self.agent_id)\n"
     ]
    }
   ],
   "source": [
    "mem0_user_proxy = Mem0ProxyCoderAgent(\n",
    "    name=AGENT_ID,\n",
    "    code_execution_config={\n",
    "        \"work_dir\": \"coding\",\n",
    "        \"use_docker\": False,\n",
    "    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.\n",
    "    is_termination_msg=lambda msg: \"TERMINATE\" in msg[\"content\"],\n",
    "    human_input_mode=\"NEVER\",\n",
    "    max_consecutive_auto_reply=1,\n",
    ")\n",
    "code_result = mem0_user_proxy.initiate_chat(gpt_assistant, message=user_query)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7706c06216ca4374",
   "metadata": {},
   "source": [
    "# Option 3:\n",
    "Using Teachability:\n",
    "`agent memory example`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "ae6bb87061877645",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-25T20:33:17.737146Z",
     "start_time": "2024-09-25T20:33:17.713250Z"
    }
   },
   "outputs": [],
   "source": [
    "# building on top of existing Teachability package from autogen\n",
    "# from autogen.agentchat.contrib.capabilities.teachability import Teachability\n",
    "\n",
    "# AutoGen Teachability Feature:\n",
    "# - Enables agents to learn and remember across multiple chat sessions.\n",
    "# - Addresses the limitation of traditional LLMs forgetting after conversations end.\n",
    "# - Uses vector database to store \"memos\" of taught information.\n",
    "# - Can remember facts, preferences, and even complex skills.\n",
    "# - Allows for cumulative learning and knowledge retention over time.\n",
    "# - Enhances personalization and adaptability of AI assistants.\n",
    "# - Can be integrated with mem0 for improved memory management.\n",
    "# - Potential for more efficient and context-aware information retrieval.\n",
    "# - Enables creation of AI agents with long-term memory and learning abilities.\n",
    "# - Improves consistency and reduces repetition in user-agent interactions.\n",
    "\n",
    "from cookbooks.helper.mem0_teachability import Mem0Teachability\n",
    "\n",
    "teachability = Mem0Teachability(\n",
    "    verbosity=2,  # for visibility of what's happening\n",
    "    recall_threshold=0.5,\n",
    "    reset_db=False,  # Use True to force-reset the memo DB, and False to use an existing DB.\n",
    "    agent_id=AGENT_ID,\n",
    "    memory_client=MEM0_MEMORY_CLIENT,\n",
    ")\n",
    "teachability.add_to_agent(user_proxy)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "36c9bcbedcd406b4",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2024-09-25T20:33:46.616261Z",
     "start_time": "2024-09-25T20:33:19.719999Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "INFO:autogen.agentchat.contrib.gpt_assistant_agent:Clearing thread thread_dfnrEoXX4MoZesb0cerO9LKm\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\u001b[33muser_proxy\u001b[0m (to assistant):\n",
      "\n",
      "Write a Python function that reverses a string.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33massistant\u001b[0m (to user_proxy):\n",
      "\n",
      "Sure, I'll provide you with a Python function that takes a string as input and returns the reversed string. Here is the complete code:\n",
      "\n",
      "```python\n",
      "# filename: reverse_string.py\n",
      "\n",
      "def reverse_string(s: str) -> str:\n",
      "    \"\"\"\n",
      "    This function takes a string as input and returns the reversed string.\n",
      "    \n",
      "    :param s: Input string to be reversed\n",
      "    :return: Reversed string\n",
      "    \"\"\"\n",
      "    return s[::-1]\n",
      "\n",
      "# Example usage\n",
      "input_string = \"Hello, World!\"\n",
      "reversed_string = reverse_string(input_string)\n",
      "print(f\"Original string: {input_string}\")\n",
      "print(f\"Reversed string: {reversed_string}\")\n",
      "```\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[93m\n",
      "LOOK FOR RELEVANT MEMOS, AS QUESTION-ANSWER PAIRS\u001b[0m\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Sure, I'll provide you with a Python function that takes a string as input and returns the reversed string. Here is the complete code:\n",
      "\n",
      "```python\n",
      "# filename: reverse_string.py\n",
      "\n",
      "def reverse_string(s: str) -> str:\n",
      "    \"\"\"\n",
      "    This function takes a string as input and returns the reversed string.\n",
      "    \n",
      "    :param s: Input string to be reversed\n",
      "    :return: Reversed string\n",
      "    \"\"\"\n",
      "    return s[::-1]\n",
      "\n",
      "# Example usage\n",
      "input_string = \"Hello, World!\"\n",
      "reversed_string = reverse_string(input_string)\n",
      "print(f\"Original string: {input_string}\")\n",
      "print(f\"Reversed string: {reversed_string}\")\n",
      "```\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Does any part of the TEXT ask the agent to perform a task or solve a problem? Answer with just one word, yes or no.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "Yes\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[93m\n",
      "LOOK FOR RELEVANT MEMOS, AS TASK-ADVICE PAIRS\u001b[0m\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Sure, I'll provide you with a Python function that takes a string as input and returns the reversed string. Here is the complete code:\n",
      "\n",
      "```python\n",
      "# filename: reverse_string.py\n",
      "\n",
      "def reverse_string(s: str) -> str:\n",
      "    \"\"\"\n",
      "    This function takes a string as input and returns the reversed string.\n",
      "    \n",
      "    :param s: Input string to be reversed\n",
      "    :return: Reversed string\n",
      "    \"\"\"\n",
      "    return s[::-1]\n",
      "\n",
      "# Example usage\n",
      "input_string = \"Hello, World!\"\n",
      "reversed_string = reverse_string(input_string)\n",
      "print(f\"Original string: {input_string}\")\n",
      "print(f\"Reversed string: {reversed_string}\")\n",
      "```\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Copy just the task from the TEXT, then stop. Don't solve it, and don't include any advice.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Summarize very briefly, in general terms, the type of task described in the TEXT. Leave out details that might not appear in a similar problem.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "The task involves saving a script to a file, executing it, and demonstrating a function that reverses a string.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[93m\n",
      "MEMOS APPENDED TO LAST MESSAGE...\n",
      "\n",
      "# Memories that might help\n",
      "- Prefers functions to have a descriptive docstring\n",
      "- Prefers camelCase for variable names\n",
      "- Prefers comments explaining each step\n",
      "- Prefers code to be explicitly written with clear variable names\n",
      "\n",
      "\u001b[0m\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Sure, I'll provide you with a Python function that takes a string as input and returns the reversed string. Here is the complete code:\n",
      "\n",
      "```python\n",
      "# filename: reverse_string.py\n",
      "\n",
      "def reverse_string(s: str) -> str:\n",
      "    \"\"\"\n",
      "    This function takes a string as input and returns the reversed string.\n",
      "    \n",
      "    :param s: Input string to be reversed\n",
      "    :return: Reversed string\n",
      "    \"\"\"\n",
      "    return s[::-1]\n",
      "\n",
      "# Example usage\n",
      "input_string = \"Hello, World!\"\n",
      "reversed_string = reverse_string(input_string)\n",
      "print(f\"Original string: {input_string}\")\n",
      "print(f\"Reversed string: {reversed_string}\")\n",
      "```\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Does any part of the TEXT ask the agent to perform a task or solve a problem? Answer with just one word, yes or no.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "Yes\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Sure, I'll provide you with a Python function that takes a string as input and returns the reversed string. Here is the complete code:\n",
      "\n",
      "```python\n",
      "# filename: reverse_string.py\n",
      "\n",
      "def reverse_string(s: str) -> str:\n",
      "    \"\"\"\n",
      "    This function takes a string as input and returns the reversed string.\n",
      "    \n",
      "    :param s: Input string to be reversed\n",
      "    :return: Reversed string\n",
      "    \"\"\"\n",
      "    return s[::-1]\n",
      "\n",
      "# Example usage\n",
      "input_string = \"Hello, World!\"\n",
      "reversed_string = reverse_string(input_string)\n",
      "print(f\"Original string: {input_string}\")\n",
      "print(f\"Reversed string: {reversed_string}\")\n",
      "```\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Briefly copy any advice from the TEXT that may be useful for a similar but different task in the future. But if no advice is present, just respond with 'none'.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Sure, I'll provide you with a Python function that takes a string as input and returns the reversed string. Here is the complete code:\n",
      "\n",
      "```python\n",
      "# filename: reverse_string.py\n",
      "\n",
      "def reverse_string(s: str) -> str:\n",
      "    \"\"\"\n",
      "    This function takes a string as input and returns the reversed string.\n",
      "    \n",
      "    :param s: Input string to be reversed\n",
      "    :return: Reversed string\n",
      "    \"\"\"\n",
      "    return s[::-1]\n",
      "\n",
      "# Example usage\n",
      "input_string = \"Hello, World!\"\n",
      "reversed_string = reverse_string(input_string)\n",
      "print(f\"Original string: {input_string}\")\n",
      "print(f\"Reversed string: {reversed_string}\")\n",
      "```\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Briefly copy just the task from the TEXT, then stop. Don't solve it, and don't include any advice.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Summarize very briefly, in general terms, the type of task described in the TEXT. Leave out details that might not appear in a similar problem.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "The task involves saving a script to a file, executing it, and demonstrating a function that reverses a string.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[93m\n",
      "REMEMBER THIS TASK-ADVICE PAIR\u001b[0m\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Sure, I'll provide you with a Python function that takes a string as input and returns the reversed string. Here is the complete code:\n",
      "\n",
      "```python\n",
      "# filename: reverse_string.py\n",
      "\n",
      "def reverse_string(s: str) -> str:\n",
      "    \"\"\"\n",
      "    This function takes a string as input and returns the reversed string.\n",
      "    \n",
      "    :param s: Input string to be reversed\n",
      "    :return: Reversed string\n",
      "    \"\"\"\n",
      "    return s[::-1]\n",
      "\n",
      "# Example usage\n",
      "input_string = \"Hello, World!\"\n",
      "reversed_string = reverse_string(input_string)\n",
      "print(f\"Original string: {input_string}\")\n",
      "print(f\"Reversed string: {reversed_string}\")\n",
      "```\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Does the TEXT contain information that could be committed to memory? Answer with just one word, yes or no.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "Yes\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Sure, I'll provide you with a Python function that takes a string as input and returns the reversed string. Here is the complete code:\n",
      "\n",
      "```python\n",
      "# filename: reverse_string.py\n",
      "\n",
      "def reverse_string(s: str) -> str:\n",
      "    \"\"\"\n",
      "    This function takes a string as input and returns the reversed string.\n",
      "    \n",
      "    :param s: Input string to be reversed\n",
      "    :return: Reversed string\n",
      "    \"\"\"\n",
      "    return s[::-1]\n",
      "\n",
      "# Example usage\n",
      "input_string = \"Hello, World!\"\n",
      "reversed_string = reverse_string(input_string)\n",
      "print(f\"Original string: {input_string}\")\n",
      "print(f\"Reversed string: {reversed_string}\")\n",
      "```\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Imagine that the user forgot this information in the TEXT. How would they ask you for this information? Include no other text in your response.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "How do I reverse a string in Python?\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Sure, I'll provide you with a Python function that takes a string as input and returns the reversed string. Here is the complete code:\n",
      "\n",
      "```python\n",
      "# filename: reverse_string.py\n",
      "\n",
      "def reverse_string(s: str) -> str:\n",
      "    \"\"\"\n",
      "    This function takes a string as input and returns the reversed string.\n",
      "    \n",
      "    :param s: Input string to be reversed\n",
      "    :return: Reversed string\n",
      "    \"\"\"\n",
      "    return s[::-1]\n",
      "\n",
      "# Example usage\n",
      "input_string = \"Hello, World!\"\n",
      "reversed_string = reverse_string(input_string)\n",
      "print(f\"Original string: {input_string}\")\n",
      "print(f\"Reversed string: {reversed_string}\")\n",
      "```\n",
      "\n",
      "Save the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Copy the information from the TEXT that should be committed to memory. Add no explanation.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "```python\n",
      "# filename: reverse_string.py\n",
      "\n",
      "def reverse_string(s: str) -> str:\n",
      "    \"\"\"\n",
      "    This function takes a string as input and returns the reversed string.\n",
      "    \n",
      "    :param s: Input string to be reversed\n",
      "    :return: Reversed string\n",
      "    \"\"\"\n",
      "    return s[::-1]\n",
      "\n",
      "# Example usage\n",
      "input_string = \"Hello, World!\"\n",
      "reversed_string = reverse_string(input_string)\n",
      "print(f\"Original string: {input_string}\")\n",
      "print(f\"Reversed string: {reversed_string}\")\n",
      "```\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[93m\n",
      "REMEMBER THIS QUESTION-ANSWER PAIR\u001b[0m\n",
      "\u001b[31m\n",
      ">>>>>>>> EXECUTING CODE BLOCK 0 (inferred language is python)...\u001b[0m\n",
      "\u001b[33muser_proxy\u001b[0m (to assistant):\n",
      "\n",
      "exitcode: 0 (execution succeeded)\n",
      "Code output: \n",
      "Original string: Hello, World!\n",
      "Reversed string: !dlroW ,olleH\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33massistant\u001b[0m (to user_proxy):\n",
      "\n",
      "The code executed successfully, and the output is correct. The string \"Hello, World!\" was successfully reversed to \"!dlroW ,olleH\".\n",
      "\n",
      "If you have any other tasks or need further assistance, feel free to ask.\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[93m\n",
      "LOOK FOR RELEVANT MEMOS, AS QUESTION-ANSWER PAIRS\u001b[0m\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "The code executed successfully, and the output is correct. The string \"Hello, World!\" was successfully reversed to \"!dlroW ,olleH\".\n",
      "\n",
      "If you have any other tasks or need further assistance, feel free to ask.\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Does any part of the TEXT ask the agent to perform a task or solve a problem? Answer with just one word, yes or no.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "Yes\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[93m\n",
      "LOOK FOR RELEVANT MEMOS, AS TASK-ADVICE PAIRS\u001b[0m\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "The code executed successfully, and the output is correct. The string \"Hello, World!\" was successfully reversed to \"!dlroW ,olleH\".\n",
      "\n",
      "If you have any other tasks or need further assistance, feel free to ask.\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Copy just the task from the TEXT, then stop. Don't solve it, and don't include any advice.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "If you have any other tasks or need further assistance, feel free to ask.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "If you have any other tasks or need further assistance, feel free to ask.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Summarize very briefly, in general terms, the type of task described in the TEXT. Leave out details that might not appear in a similar problem.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "The task described in the TEXT involves offering help or assistance with various tasks.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[93m\n",
      "MEMOS APPENDED TO LAST MESSAGE...\n",
      "\n",
      "# Memories that might help\n",
      "- Prefers functions to have a descriptive docstring\n",
      "- Prefers comments explaining each step\n",
      "- Task involves saving a script to a file, executing it, and demonstrating a function that reverses a string\n",
      "- Prefers code to be explicitly written with clear variable names\n",
      "- Code should be saved in a file named 'reverse_string.py'\n",
      "- Prefers camelCase for variable names\n",
      "\n",
      "\u001b[0m\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "The code executed successfully, and the output is correct. The string \"Hello, World!\" was successfully reversed to \"!dlroW ,olleH\".\n",
      "\n",
      "If you have any other tasks or need further assistance, feel free to ask.\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Does any part of the TEXT ask the agent to perform a task or solve a problem? Answer with just one word, yes or no.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "Yes\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "The code executed successfully, and the output is correct. The string \"Hello, World!\" was successfully reversed to \"!dlroW ,olleH\".\n",
      "\n",
      "If you have any other tasks or need further assistance, feel free to ask.\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Briefly copy any advice from the TEXT that may be useful for a similar but different task in the future. But if no advice is present, just respond with 'none'.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "none\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "The code executed successfully, and the output is correct. The string \"Hello, World!\" was successfully reversed to \"!dlroW ,olleH\".\n",
      "\n",
      "If you have any other tasks or need further assistance, feel free to ask.\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Does the TEXT contain information that could be committed to memory? Answer with just one word, yes or no.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "Yes\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "The code executed successfully, and the output is correct. The string \"Hello, World!\" was successfully reversed to \"!dlroW ,olleH\".\n",
      "\n",
      "If you have any other tasks or need further assistance, feel free to ask.\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Imagine that the user forgot this information in the TEXT. How would they ask you for this information? Include no other text in your response.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "What was the original string that was reversed to \"!dlroW ,olleH\"?\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "The code executed successfully, and the output is correct. The string \"Hello, World!\" was successfully reversed to \"!dlroW ,olleH\".\n",
      "\n",
      "If you have any other tasks or need further assistance, feel free to ask.\n",
      "\n",
      "TERMINATE\n",
      "\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33muser_proxy\u001b[0m (to analyzer):\n",
      "\n",
      "Copy the information from the TEXT that should be committed to memory. Add no explanation.\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[33manalyzer\u001b[0m (to user_proxy):\n",
      "\n",
      "The string \"Hello, World!\" was successfully reversed to \"!dlroW ,olleH\".\n",
      "\n",
      "--------------------------------------------------------------------------------\n",
      "\u001b[93m\n",
      "REMEMBER THIS QUESTION-ANSWER PAIR\u001b[0m\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "ChatResult(chat_id=None, chat_history=[{'content': 'Write a Python function that reverses a string.', 'role': 'assistant', 'name': 'user_proxy'}, {'content': 'Sure, I\\'ll provide you with a Python function that takes a string as input and returns the reversed string. Here is the complete code:\\n\\n```python\\n# filename: reverse_string.py\\n\\ndef reverse_string(s: str) -> str:\\n    \"\"\"\\n    This function takes a string as input and returns the reversed string.\\n    \\n    :param s: Input string to be reversed\\n    :return: Reversed string\\n    \"\"\"\\n    return s[::-1]\\n\\n# Example usage\\ninput_string = \"Hello, World!\"\\nreversed_string = reverse_string(input_string)\\nprint(f\"Original string: {input_string}\")\\nprint(f\"Reversed string: {reversed_string}\")\\n```\\n\\nSave the above code in a file named `reverse_string.py`, then execute it. This script defines the `reverse_string` function and demonstrates its usage by reversing the string \"Hello, World!\". It will print both the original and reversed strings.\\n\\n\\n# Memories that might help\\n- Prefers functions to have a descriptive docstring\\n- Prefers camelCase for variable names\\n- Prefers comments explaining each step\\n- Prefers code to be explicitly written with clear variable names\\n', 'role': 'user', 'name': 'assistant'}, {'content': 'exitcode: 0 (execution succeeded)\\nCode output: \\nOriginal string: Hello, World!\\nReversed string: !dlroW ,olleH\\n', 'role': 'assistant', 'name': 'user_proxy'}, {'content': 'The code executed successfully, and the output is correct. The string \"Hello, World!\" was successfully reversed to \"!dlroW ,olleH\".\\n\\nIf you have any other tasks or need further assistance, feel free to ask.\\n\\nTERMINATE\\n\\n\\n# Memories that might help\\n- Prefers functions to have a descriptive docstring\\n- Prefers comments explaining each step\\n- Task involves saving a script to a file, executing it, and demonstrating a function that reverses a string\\n- Prefers code to be explicitly written with clear variable names\\n- Code should be saved in a file named \\'reverse_string.py\\'\\n- Prefers camelCase for variable names\\n', 'role': 'user', 'name': 'assistant'}], summary='The code executed successfully, and the output is correct. The string \"Hello, World!\" was successfully reversed to \"!dlroW ,olleH\".\\n\\nIf you have any other tasks or need further assistance, feel free to ask.\\n\\n\\n', cost={'usage_including_cached_inference': {'total_cost': 0}, 'usage_excluding_cached_inference': {'total_cost': 0}}, human_input=[])"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Initiate Chat w/ Teachability + Memory\n",
    "user_proxy.initiate_chat(gpt_assistant, message=user_query)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}


================================================
FILE: docs/README.md
================================================
# Mintlify Starter Kit

Click on `Use this template` to copy the Mintlify starter kit. The starter kit contains examples, including:

- Guide pages
- Navigation
- Customizations
- API Reference pages
- Use of popular components

### Development

Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview documentation changes locally. To install, use the following command:

```
npm i -g mintlify
```

Run the following command at the root of your documentation (where `mint.json` is located):

```
mintlify dev
```

### Publishing Changes

Install our GitHub App to automatically propagate changes from your repo to your deployment. Changes are deployed to production after pushing to the default branch. Find the install link on your dashboard.

#### Troubleshooting

- `mintlify dev` isn't running - Run `mintlify install` to re-install dependencies.
- Page loads as a 404 - Make sure you are running in a folder with `mint.json`.


================================================
FILE: docs/_snippets/async-memory-add.mdx
================================================
<Note type="info">
  📢 Heads up!
  We're moving to asynchronous memory add for a faster experience.
  If you signed up after July 1st, 2025, your add requests run in the background and return right away.
</Note> 

================================================
FILE: docs/_snippets/blank-notif.mdx
================================================


================================================
FILE: docs/_snippets/get-help.mdx
================================================
<CardGroup cols={3}>
  <Card title="Discord" icon="discord" href="https://mem0.dev/DiD" color="#7289DA">
    Join our community
  </Card>
  <Card title="GitHub" icon="github" href="https://github.com/mem0ai/mem0/discussions/new?category=q-a">
    Ask questions on GitHub
  </Card>
  <Card title="Support" icon="calendar" href="https://cal.com/taranjeetio/meet">
  Talk to founders
  </Card>
</CardGroup>


================================================
FILE: docs/_snippets/paper-release.mdx
================================================
<Note type="info">
  <strong>🎉 Mem0 1.0.0 is here!</strong> Enhanced filtering, reranking, and smarter memory management.
</Note>

================================================
FILE: docs/api-reference/entities/delete-user.mdx
================================================
---
title: 'Delete User'
openapi: delete /v2/entities/{entity_type}/{entity_id}/
---

================================================
FILE: docs/api-reference/entities/get-users.mdx
================================================
---
title: 'Get Users'
openapi: get /v1/entities/
---

================================================
FILE: docs/api-reference/events/get-event.mdx
================================================
---
title: 'Get Event'
openapi: get /v1/event/{event_id}/
---

Retrieve details about a specific event by passing its `event_id`. This endpoint is particularly helpful for tracking the status, payload, and completion details of asynchronous memory operations.


================================================
FILE: docs/api-reference/events/get-events.mdx
================================================
---
title: 'Get Events'
openapi: get /v1/events/
---

List recent events for your organization and project.

## Use Cases

- **Dashboards**: Summarize adds/searches over time by paging through events.
- **Alerting**: Poll for `FAILED` events and trigger follow-up workflows.
- **Audit**: Store the returned payload/metadata for compliance logs.
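As a sketch of the alerting use case above: fetch a page of events (the HTTP call itself is omitted here) and filter it for failures. The `event_id` and `status` field names and the sample data are illustrative assumptions; only the `FAILED` status value comes from this page, so inspect a real response to confirm the exact schema.

```python
# A minimal alerting sketch: filter a fetched page of events for failures.
# Field names ("event_id", "status") are assumptions for illustration.


def find_failed_events(events):
    """Return only the events whose status marks them as failed."""
    return [event for event in events if event.get("status") == "FAILED"]


# A hypothetical page of events, shaped like the alerting use case above.
sample = [
    {"event_id": "evt_1", "status": "COMPLETED"},
    {"event_id": "evt_2", "status": "FAILED"},
]

print(find_failed_events(sample))  # → [{'event_id': 'evt_2', 'status': 'FAILED'}]
```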



================================================
FILE: docs/api-reference/memory/add-memories.mdx
================================================
---
title: 'Add Memories'
openapi: post /v1/memories/
---

Add new facts, messages, or metadata to a user’s memory store. The Add Memories endpoint accepts either raw text or conversational turns and commits them asynchronously so the memory is ready for later search, retrieval, and graph queries.

## Endpoint

- **Method**: `POST`
- **URL**: `/v1/memories/`
- **Content-Type**: `application/json`

Memories are processed asynchronously by default. The response contains queued events you can track while the platform finalizes enrichment.

## Required headers

| Header | Required | Description |
| --- | --- | --- |
| `Authorization: Token <MEM0_API_KEY>` | Yes | API key scoped to your workspace. |
| `Accept: application/json` | Yes | Ensures a JSON response. |

## Request body

Provide at least one message or direct memory string. Most callers supply `messages` so Mem0 can infer structured memories as part of ingestion.

<CodeGroup>
```json Basic request
{
  "user_id": "alice",
  "messages": [
    { "role": "user", "content": "I moved to Austin last month." }
  ],
  "metadata": {
    "source": "onboarding_form"
  }
}
```
</CodeGroup>

### Common fields

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `user_id` | string | No* | Associates the memory with a user. Provide when you want the memory scoped to a specific identity. |
| `messages` | array | No* | Conversation turns for Mem0 to infer memories from. Each object should include `role` and `content`. |
| `metadata` | object | Optional | Custom key/value metadata (e.g., `{"topic": "preferences"}`). |
| `infer` | boolean (default `true`) | Optional | Set to `false` to skip inference and store the provided text as-is. |
| `async_mode` | boolean (default `true`) | Optional | Controls asynchronous processing. Most clients leave this enabled. |
| `output_format` | string (default `v1.1`) | Optional | Response format. `v1.1` wraps results in a `results` array. |

> \* Provide at least one `messages` entry to describe what you are storing. For scoped memories, include `user_id`. You can also attach `agent_id`, `app_id`, `run_id`, `project_id`, or `org_id` to refine ownership.

## Response

Successful requests return an array of events queued for processing. Each event includes the generated memory text and an identifier you can persist for auditing.

<CodeGroup>
```json 200 response
[
  {
    "id": "mem_01JF8ZS4Y0R0SPM13R5R6H32CJ",
    "event": "ADD",
    "data": {
      "memory": "The user moved to Austin in 2025."
    }
  }
]
```

```json 400 response
{
  "error": "400 Bad Request",
  "details": {
    "message": "Invalid input data. Please refer to the memory creation documentation at https://docs.mem0.ai/platform/quickstart#4-1-create-memories for correct formatting and required fields."
  }
}
```
</CodeGroup>
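As a minimal sketch, the request above can be assembled in Python. The path and headers follow the tables in this page; the base URL `https://api.mem0.ai` and the `MEM0_API_KEY` environment variable are assumptions for illustration, so confirm both against your dashboard.

```python
import json
import os

# Assumed base URL for illustration; check your Mem0 dashboard for the exact value.
BASE_URL = "https://api.mem0.ai"


def build_add_memories_request(user_id, messages, metadata=None):
    """Assemble the URL, headers, and JSON body for POST /v1/memories/."""
    url = f"{BASE_URL}/v1/memories/"
    headers = {
        "Authorization": f"Token {os.environ.get('MEM0_API_KEY', '')}",
        "Accept": "application/json",
        "Content-Type": "application/json",
    }
    payload = {"user_id": user_id, "messages": messages}
    if metadata:
        payload["metadata"] = metadata
    return url, headers, payload


url, headers, payload = build_add_memories_request(
    user_id="alice",
    messages=[{"role": "user", "content": "I moved to Austin last month."}],
    metadata={"source": "onboarding_form"},
)

# To actually send the request (requires the `requests` package and a valid key):
# import requests
# events = requests.post(url, headers=headers, json=payload).json()
print(json.dumps(payload, indent=2))
```

The returned array of queued events can then be persisted for auditing, as described in the response section above.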

## Graph relationships

Add Memories can enrich the knowledge graph on write. Set `enable_graph: true` to create entity nodes and relationships for the stored memory. Use this when you want downstream `get_all` or search calls to traverse connected entities.

<CodeGroup>
```json Graph-aware request
{
  "user_id": "alice",
  "messages": [
    { "role": "user", "content": "I met with Dr. Lee at General Hospital." }
  ],
  "enable_graph": true
}
```
</CodeGroup>

The response follows the same format, and related entities become available in [Graph Memory](/platform/features/graph-memory) queries.


================================================
FILE: docs/api-reference/memory/batch-delete.mdx
================================================
---
title: 'Batch Delete Memories'
openapi: delete /v1/batch/
---


================================================
FILE: docs/api-reference/memory/batch-update.mdx
================================================
---
title: 'Batch Update Memories'
openapi: put /v1/batch/
---

================================================
FILE: docs/api-reference/memory/create-memory-export.mdx
================================================
---
title: 'Create Memory Export'
openapi: post /v1/exports/
---

Submit a job to create a structured export of memories using a customizable Pydantic schema. This process may take some time to complete, especially if you're exporting a large number of memories. You can tailor the export by applying various filters (e.g., `user_id`, `agent_id`, `run_id`, or `session_id`) and by modifying the Pydantic schema to ensure the final data matches your exact needs.
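As a sketch, an export request pairs a schema with scoping filters. The schema is expressed here as a plain JSON Schema dict (a Pydantic model's `model_json_schema()` yields the same shape); the field names are hypothetical:

```python
import json

# Hypothetical JSON Schema describing the desired export shape. In practice
# you can generate this from a Pydantic model via `Model.model_json_schema()`.
export_schema = {
    "title": "UserProfileExport",
    "type": "object",
    "properties": {
        "favorite_foods": {"type": "array", "items": {"type": "string"}},
        "home_city": {"type": "string"},
    },
}

# The request combines the schema with filters that scope which memories
# are exported (request key names assumed for illustration).
export_request = {
    "schema": export_schema,
    "filters": {"user_id": "alice"},
}

body = json.dumps(export_request)
```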


================================================
FILE: docs/api-reference/memory/delete-memories.mdx
================================================
---
title: 'Delete Memories'
openapi: delete /v1/memories/
---


================================================
FILE: docs/api-reference/memory/delete-memory.mdx
================================================
---
title: 'Delete Memory'
openapi: delete /v1/memories/{memory_id}/
---

================================================
FILE: docs/api-reference/memory/feedback.mdx
================================================
---
title: 'Feedback'
openapi: post /v1/feedback/
---


================================================
FILE: docs/api-reference/memory/get-memories.mdx
================================================
---
title: "Get Memories"
openapi: post /v2/memories/
---

The v2 get memories API is powerful and flexible, allowing for more precise memory listing without the need for a search query. It supports complex logical operations (AND, OR, NOT) and comparison operators for advanced filtering capabilities. The comparison operators include:

- `in`: Matches any of the values specified
- `gte`: Greater than or equal to
- `lte`: Less than or equal to
- `gt`: Greater than
- `lt`: Less than
- `ne`: Not equal to
- `icontains`: Case-insensitive containment check
- `*`: Wildcard character that matches everything

<CodeGroup>
```python Code
memories = client.get_all(
    filters={
        "AND": [
            {
                "user_id": "alex"
            },
            {
                "created_at": {"gte": "2024-07-01", "lte": "2024-07-31"}
            }
        ]
    }
)
```

```python Output
{
    "results": [
        {
            "id": "f4cbdb08-7062-4f3e-8eb2-9f5c80dfe64c",
            "memory": "Alex is planning a trip to San Francisco from July 1st to July 10th",
            "created_at": "2024-07-01T12:00:00Z",
            "updated_at": "2024-07-01T12:00:00Z"
        },
        {
            "id": "a2b8c3d4-5e6f-7g8h-9i0j-1k2l3m4n5o6p",
            "memory": "Alex prefers vegetarian restaurants",
            "created_at": "2024-07-05T15:30:00Z",
            "updated_at": "2024-07-05T15:30:00Z"
        }
    ],
    "total": 2
}
```

</CodeGroup>
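The comparison operators nest inside the logical clauses shown above. A filter sketch combining `in` and `ne` (built as a plain dict, not exercising the API; IDs illustrative) would be passed as `filters=` to `client.get_all(...)`:

```python
# Memories belonging to either "alex" or "blake", excluding anything
# written by a hypothetical "test-agent".
filters = {
    "AND": [
        {"user_id": {"in": ["alex", "blake"]}},
        {"agent_id": {"ne": "test-agent"}},
    ]
}
```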

## Graph Memory

To retrieve graph memory relationships between entities, pass `output_format="v1.1"` in your request. This will return memories with entity and relationship information from the knowledge graph.

<CodeGroup>
```python Code
memories = client.get_all(
    filters={
        "user_id": "alex"
    },
    output_format="v1.1"
)
```

```python Output
{
    "results": [
        {
            "id": "f4cbdb08-7062-4f3e-8eb2-9f5c80dfe64c",
            "memory": "Alex is planning a trip to San Francisco",
            "entities": [
                {
                    "id": "entity-1",
                    "name": "Alex",
                    "type": "person"
                },
                {
                    "id": "entity-2",
                    "name": "San Francisco",
                    "type": "location"
                }
            ],
            "relations": [
                {
                    "source": "entity-1",
                    "target": "entity-2",
                    "relationship": "traveling_to"
                }
            ]
        }
    ]
}
```

</CodeGroup>


================================================
FILE: docs/api-reference/memory/get-memory-export.mdx
================================================
---
title: 'Get Memory Export'
openapi: post /v1/exports/get
---

Retrieve the most recent structured memory export created by an export job. Filter by `user_id`, `run_id`, `session_id`, or `app_id` to fetch the latest export matching those criteria.

================================================
FILE: docs/api-reference/memory/get-memory.mdx
================================================
---
title: 'Get Memory'
openapi: get /v1/memories/{memory_id}/
---

================================================
FILE: docs/api-reference/memory/history-memory.mdx
================================================
---
title: 'Memory History'
openapi: get /v1/memories/{memory_id}/history/
---

================================================
FILE: docs/api-reference/memory/search-memories.mdx
================================================
---
title: 'Search Memories'
openapi: post /v2/memories/search/
---

The v2 search API is powerful and flexible, allowing for more precise memory retrieval. It supports complex logical operations (AND, OR, NOT) and comparison operators for advanced filtering capabilities. The comparison operators include:
- `in`: Matches any of the values specified
- `gte`: Greater than or equal to
- `lte`: Less than or equal to
- `gt`: Greater than
- `lt`: Less than
- `ne`: Not equal to
- `icontains`: Case-insensitive containment check
- `*`: Wildcard character that matches everything

<CodeGroup>
```python Platform API Example
related_memories = client.search(
    query="What are Alice's hobbies?",
    filters={
        "OR": [
            {
              "user_id": "alice"
            },
            {
              "agent_id": {"in": ["travel-agent", "sports-agent"]}
            }
        ]
    },
)
```

```json Output
{
  "memories": [
    {
      "id": "ea925981-272f-40dd-b576-be64e4871429",
      "memory": "Likes to play cricket and plays cricket on weekends.",
      "metadata": {
        "category": "hobbies"
      },
      "score": 0.32116443111457704,
      "created_at": "2024-07-26T10:29:36.630547-07:00",
      "updated_at": null,
      "user_id": "alice",
      "agent_id": "sports-agent"
    }
  ]
}
```
</CodeGroup>

<CodeGroup>
```python Wildcard Example
# Using wildcard to match all run_ids for a specific user
all_memories = client.search(
    query="What are Alice's hobbies?",
    filters={
        "AND": [
            {
                "user_id": "alice"
            },
            {
                "run_id": "*"
            }
        ]
    },
)
```
</CodeGroup>

<CodeGroup>
```python Categories Filter Examples
# Example 1: Using 'contains' for partial matching
finance_memories = client.search(
    query="What are my financial goals?",
    filters={
        "AND": [
            { "user_id": "alice" },
            {
                "categories": {
                    "contains": "finance"
                }
            }
        ]
    },
)

# Example 2: Using 'in' for exact matching
personal_memories = client.search(
    query="What personal information do you have?",
    filters={
        "AND": [
            { "user_id": "alice" },
            {
                "categories": {
                    "in": ["personal_information"]
                }
            }
        ]
    },
)
```
</CodeGroup>
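`NOT` clauses are also supported and, as a sketch, follow the same list-of-clauses nesting as the `AND`/`OR` examples above (IDs illustrative; the dict would be passed as `filters=` to `client.search(...)`):

```python
# Alice's memories, excluding those attributed to a hypothetical
# "internal-notes" agent.
filters = {
    "AND": [
        {"user_id": "alice"},
        {"NOT": [{"agent_id": "internal-notes"}]},
    ]
}
```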


================================================
FILE: docs/api-reference/memory/update-memory.mdx
================================================
---
title: 'Update Memory'
openapi: put /v1/memories/{memory_id}/
---

================================================
FILE: docs/api-reference/organization/add-org-member.mdx
================================================
---
title: 'Add Member'
openapi: post /api/v1/orgs/organizations/{org_id}/members/
---

The API provides two roles for organization members:

- `READER`: Allows viewing of organization resources.
- `OWNER`: Grants full administrative access to manage the organization and its resources.


================================================
FILE: docs/api-reference/organization/create-org.mdx
================================================
---
title: 'Create Organization'
openapi: post /api/v1/orgs/organizations/
---

================================================
FILE: docs/api-reference/organization/delete-org.mdx
================================================
---
title: 'Delete Organization'
openapi: delete /api/v1/orgs/organizations/{org_id}/
---

================================================
FILE: docs/api-reference/organization/get-org-members.mdx
================================================
---
title: 'Get Members'
openapi: get /api/v1/orgs/organizations/{org_id}/members/
---

================================================
FILE: docs/api-reference/organization/get-org.mdx
================================================
---
title: 'Get Organization'
openapi: get /api/v1/orgs/organizations/{org_id}/
---

================================================
FILE: docs/api-reference/organization/get-orgs.mdx
================================================
---
title: 'Get Organizations'
openapi: get /api/v1/orgs/organizations/
---

================================================
FILE: docs/api-reference/organizations-projects.mdx
================================================
---
title: Organizations & Projects
icon: "building"
description: "Manage multi-tenant applications with organization and project APIs"
---

## Overview

Organizations and projects provide multi-tenant support, access control, and team collaboration capabilities for Mem0 Platform. Use these APIs to build applications that support multiple teams, customers, or isolated environments.

<Info>
Organizations and projects are **optional** features. You can use Mem0 without them for single-user or simple multi-user applications.
</Info>

## Key Capabilities

- **Multi-org/project Support**: Specify organization and project when initializing the Mem0 client to attribute API usage appropriately
- **Member Management**: Control access to data through organization and project membership
- **Access Control**: Only members can access memories and data within their organization/project scope
- **Team Isolation**: Maintain data separation between different teams and projects for secure collaboration

---

## Using Organizations & Projects

### Initialize with Org/Project Context

Example with the mem0 Python package:

<Tabs>
  <Tab title="Python">

```python
from mem0 import MemoryClient
client = MemoryClient(org_id='YOUR_ORG_ID', project_id='YOUR_PROJECT_ID')
```

  </Tab>

  <Tab title="Node.js">

```javascript
import { MemoryClient } from "mem0ai";
const client = new MemoryClient({
  organizationId: "YOUR_ORG_ID",
  projectId: "YOUR_PROJECT_ID"
});
```

  </Tab>
</Tabs>

---

## Project Management

The Mem0 client provides comprehensive project management through the `client.project` interface:

### Get Project Details

Retrieve information about the current project:

```python
# Get all project details
project_info = client.project.get()

# Get specific fields only
project_info = client.project.get(fields=["name", "description", "custom_categories"])
```

### Create a New Project

Create a new project within your organization:

```python
# Create a project with name and description
new_project = client.project.create(
    name="My New Project",
    description="A project for managing customer support memories"
)
```

### Update Project Settings

Modify project configuration including custom instructions, categories, and graph settings:

```python
# Update project with custom categories
client.project.update(
    custom_categories=[
        {"customer_preferences": "Customer likes, dislikes, and preferences"},
        {"support_history": "Previous support interactions and resolutions"}
    ]
)

# Update project with custom instructions
client.project.update(
    custom_instructions="..."
)

# Enable graph memory for the project
client.project.update(enable_graph=True)

# Update multiple settings at once
client.project.update(
    custom_instructions="...",
    custom_categories=[
        {"personal_info": "User personal information and preferences"},
        {"work_context": "Professional context and work-related information"}
    ],
    enable_graph=True
)
```

### Delete Project

<Warning>
This action will remove all memories, messages, and other related data in the project. **This operation is irreversible.**
</Warning>

Remove a project and all its associated data:

```python
# Delete the current project (irreversible)
result = client.project.delete()
```

---

## Member Management

Manage project members and their access levels:

```python
# Get all project members
members = client.project.get_members()

# Add a new member as a reader
client.project.add_member(
    email="colleague@company.com",
    role="READER"  # or "OWNER"
)

# Update a member's role
client.project.update_member(
    email="colleague@company.com",
    role="OWNER"
)

# Remove a member from the project
client.project.remove_member(email="colleague@company.com")
```

### Member Roles

| Role | Permissions |
|------|-------------|
| **READER** | Can view and search memories, but cannot modify project settings or manage members |
| **OWNER** | Full access including project modification, member management, and all reader permissions |

---

## Async Support

All project methods are available in async mode:

```python
from mem0 import AsyncMemoryClient

async def manage_project():
    client = AsyncMemoryClient(org_id='YOUR_ORG_ID', project_id='YOUR_PROJECT_ID')

    # All methods support async/await
    project_info = await client.project.get()
    await client.project.update(enable_graph=True)
    members = await client.project.get_members()

# To call the async function properly
import asyncio
asyncio.run(manage_project())
```

---

## API Reference

For complete API specifications and additional endpoints, see:

<CardGroup cols={2}>
  <Card title="Organizations APIs" icon="building" href="/api-reference/organization/create-org">
    Create, get, and manage organizations
  </Card>

  <Card title="Project APIs" icon="folder" href="/api-reference/project/create-project">
    Full project CRUD and member management endpoints
  </Card>
</CardGroup>


================================================
FILE: docs/api-reference/project/add-project-member.mdx
================================================
---
title: 'Add Member'
openapi: post /api/v1/orgs/organizations/{org_id}/projects/{project_id}/members/
---

The API provides two roles for project members:

- `READER`: Allows viewing of project resources.
- `OWNER`: Grants full administrative access to manage the project and its resources.


================================================
FILE: docs/api-reference/project/create-project.mdx
================================================
---
title: 'Create Project'
openapi: post /api/v1/orgs/organizations/{org_id}/projects/
---

================================================
FILE: docs/api-reference/project/delete-project.mdx
================================================
---
title: 'Delete Project'
openapi: delete /api/v1/orgs/organizations/{org_id}/projects/{project_id}/
---

================================================
FILE: docs/api-reference/project/get-project-members.mdx
================================================
---
title: 'Get Members'
openapi: get /api/v1/orgs/organizations/{org_id}/projects/{project_id}/members/
---

================================================
FILE: docs/api-reference/project/get-project.mdx
================================================
---
title: 'Get Project'
openapi: get /api/v1/orgs/organizations/{org_id}/projects/{project_id}/
---

================================================
FILE: docs/api-reference/project/get-projects.mdx
================================================
---
title: 'Get Projects'
openapi: get /api/v1/orgs/organizations/{org_id}/projects/
---

================================================
FILE: docs/api-reference/webhook/create-webhook.mdx
================================================
---
title: 'Create Webhook'
openapi: post /api/v1/webhooks/projects/{project_id}/
---



================================================
FILE: docs/api-reference/webhook/delete-webhook.mdx
================================================
---
title: 'Delete Webhook'
openapi: delete /api/v1/webhooks/{webhook_id}/
---


================================================
FILE: docs/api-reference/webhook/get-webhook.mdx
================================================
---
title: 'Get Webhook'
openapi: get /api/v1/webhooks/projects/{project_id}/
---



================================================
FILE: docs/api-reference/webhook/update-webhook.mdx
================================================
---
title: 'Update Webhook'
openapi: put /api/v1/webhooks/{webhook_id}/
---



================================================
FILE: docs/api-reference.mdx
================================================
---
title: "Overview"
icon: "terminal"
iconType: "solid"
description: "REST APIs for memory management, search, and entity operations"
---

## Mem0 REST API

Mem0 provides a comprehensive REST API for integrating advanced memory capabilities into your applications. Create, search, update, and manage memories across users, agents, and custom entities with simple HTTP requests.

<Info>
**Quick start:** Get your API key from the [Mem0 Dashboard](https://app.mem0.ai/dashboard/api-keys) and make your first memory operation in minutes.
</Info>

---

## Quick Start Guide

Get started with Mem0 API in three simple steps:

1. **[Add Memories](/api-reference/memory/add-memories)** - Store information and context from user conversations
2. **[Search Memories](/api-reference/memory/search-memories)** - Retrieve relevant memories using semantic search
3. **[Get Memories](/api-reference/memory/get-memories)** - Fetch all memories for a specific entity

---

## Core Operations

<CardGroup cols={2}>
  <Card title="Add Memories" icon="plus" href="/api-reference/memory/add-memories">
    Store new memories from conversations and interactions
  </Card>

  <Card title="Search Memories" icon="magnifying-glass" href="/api-reference/memory/search-memories">
    Find relevant memories using semantic search with filters
  </Card>

  <Card title="Update Memory" icon="pen" href="/api-reference/memory/update-memory">
    Modify existing memory content and metadata
  </Card>

  <Card title="Delete Memory" icon="trash" href="/api-reference/memory/delete-memory">
    Remove specific memories or batch delete operations
  </Card>
</CardGroup>

---

## API Categories

Explore the full API organized by functionality:

<CardGroup cols={2}>
  <Card title="Memory APIs" icon="microchip" href="/api-reference/memory/add-memories">
    Core and advanced operations: CRUD, search, batch updates, history, and exports
  </Card>

  <Card title="Events APIs" icon="clock" href="/api-reference/events/get-events">
    Track and monitor the status of asynchronous memory operations
  </Card>

  <Card title="Entities APIs" icon="users" href="/api-reference/entities/get-users">
    Manage users, agents, and their associated memory data
  </Card>

  <Card title="Organizations & Projects" icon="building" href="/api-reference/organizations-projects">
    Multi-tenant support, access control, and team collaboration
  </Card>

  <Card title="Webhooks" icon="webhook" href="/api-reference/webhook/create-webhook">
    Real-time notifications for memory events and updates
  </Card>
</CardGroup>

<Note>
**Building multi-tenant apps?** Learn about [Organizations & Projects](/api-reference/organizations-projects) for team isolation and access control.
</Note>

---

## Authentication

All API requests require token-based authentication. Include your API key in the `Authorization` header:

```bash
Authorization: Token <your-api-key>
```

Get your API key from the [Mem0 Dashboard](https://app.mem0.ai/dashboard/api-keys).
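As a server-side sketch using only Python's standard library (the endpoint path and payload are illustrative; read the key from an environment variable rather than hardcoding it):

```python
import json
import os
from urllib import request as urlrequest

# Attach the API key via the Authorization header. The fallback value is
# only so the sketch runs without credentials; never ship a hardcoded key.
api_key = os.environ.get("MEM0_API_KEY", "demo-key")
headers = {
    "Authorization": f"Token {api_key}",
    "Content-Type": "application/json",
}

req = urlrequest.Request(
    "https://api.mem0.ai/v1/memories/",
    data=json.dumps({"messages": [], "user_id": "alice"}).encode(),
    headers=headers,
    method="POST",
)
# Not sent here; call `urlrequest.urlopen(req)` in a real server-side context.
```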

<Warning>
**Keep your API key secure.** Never expose it in client-side code or public repositories. Use environment variables and server-side requests only.
</Warning>

---

## Next Steps

<CardGroup cols={2}>
  <Card title="Add Your First Memory" icon="rocket" href="/api-reference/memory/add-memories">
    Start storing memories via the REST API
  </Card>

  <Card title="Search with Filters" icon="filter" href="/api-reference/memory/search-memories">
    Learn advanced search and filtering techniques
  </Card>
</CardGroup>


================================================
FILE: docs/changelog.mdx
================================================
---
title: "Product Updates"
mode: "wide"
---

 
<Tabs>
<Tab title="Python">

<Update label="2026-03-19" description="v1.0.7">

**Bug Fixes:**
- **Core:** Fixed control characters in LLM JSON responses causing parse failures (#4420)
- **Core:** Replaced hardcoded US/Pacific timezone references with `timezone.utc` (#4404)
- **Core:** Preserved `http_auth` in `_safe_deepcopy_config` for OpenSearch (#4418)
- **Core:** Normalized malformed LLM fact output before embedding (#4224)
- **Embeddings:** Pass `encoding_format='float'` in OpenAI embeddings for proxy compatibility (#4058)
- **LLMs:** Fixed Ollama to pass tools to `client.chat` and parse `tool_calls` from response (#4176)
- **Reranker:** Support nested LLM config in `LLMReranker` for non-OpenAI providers (#4405)
- **Vector Stores:** Cast `vector_distance` to float in Redis search (#4377)

**Improvements:**
- **Embeddings:** Improved Ollama embedder with model name normalization and error handling (#4403)

</Update>

<Update label="2026-03-16" description="v1.0.6">

**Bug Fixes:**
- **Telemetry:** Fixed telemetry vector store initialization still running when `MEM0_TELEMETRY` is disabled (#4351)
- **Core:** Removed destructive `vector_store.reset()` call from `delete_all()` that was wiping the entire vector store instead of deleting only the target memories (#4349)
- **OSS:** `OllamaLLM` now respects the configured URL instead of always falling back to localhost (#4320)
- **Core:** Fixed `KeyError` when LLM omits the `entities` key in tool call response (#4313)
- **Prompts:** Ensured JSON instruction is included in prompts when using `json_object` response format (#4271)
- **Core:** Fixed incorrect database parameter handling (#3913)

**Dependencies:**
- Updated LangChain dependencies to v1.0.0 (#4353)
- Bumped protobuf dependency to 5.29.6 and extended upper bound to `<7.0.0` (#4326)

</Update>

<Update label="2026-03-03" description="v1.0.5">
- **Telemetry Fix**
  - Fixed an issue where the PostHog client was initialized even after telemetry was disabled. Although events were not captured, the client was unnecessarily initialized.
</Update>

<Update label="2026-02-17" description="v1.0.4">

**New Features & Updates:**
- **Memory Update:**
  - Added `timestamp` parameter to `update()` — accepts Unix epoch (int/float) or ISO 8601 string

</Update>

<Update label="2026-01-29" description="v1.0.3">

**New Features & Updates:**
- **Project Settings:**
  - Added inclusion prompt, exclusion prompt, memory depth, and usecase setting

</Update>

<Update label="2026-01-13" description="v1.0.2">

**New Features & Updates:**
- **Vector Stores:**
  - Added DriverInfo metadata to MongoDB vector store

</Update>

<Update label="2025-11-14" description="v1.0.1">

**New Features & Updates:**
- **Vector Stores:**
  - Added Apache Cassandra vector store support
- **Embeddings:**
  - Added FastEmbed embedding support for local embeddings
- **Graph Store:**
  - Added configurable embedding similarity threshold for graph store node matching

**Bug Fixes:**
- **Core:**
  - Fixed condition check for memories_result type in Memory class
  - Fixed list_memories endpoint Pydantic validation error
  - Fixed memory deletion not removing from vector store

</Update>

<Update label="2025-10-16" description="v1.0.0">

**New Features & Updates:**
- **Vector Stores:**
  - Added Azure MySQL support
  - Added Azure AI Search Vector Store support
- **LLMs:**
  - Added Tool Call support for LangchainLLM
  - Enabled custom model and parameters for Hugging Face with huggingface_base_url
  - Updated default LLM configuration
- **Rerankers:**
  - Added reranker support: Cohere, ZeroEntropy, Hugging Face, Sentence Transformers, and LLMs
- **Core:**
  - Added metadata filtering for OSS
  - Added Assistant memory retrieval
  - Enabled async mode as default

**Improvements:**
- **Prompts:**
  - Improved prompt for better memory retrieval
- **Dependencies:**
  - Updated dependency compatibility with OpenAI 2.x
- **Validation:**
  - Validated embedding_dims for Kuzu integration

**Bug Fixes:**
- **Vector Stores:**
  - Fixed Databricks Vector Store integration
  - Fixed Milvus DB bug and added test coverage
  - Fixed Weaviate search method
- **LLMs:**
  - Fixed bug with thinking LLM in vLLM

</Update>

<Update label="2025-09-25" description="v0.1.118">

**New Features & Updates:**
- **Vector Stores:**
  - Added Valkey vector store support
  - Added support for ChromaDB Cloud
  - Added Mem0 vector store backend integration for Neptune Analytics
- **Graph Store:**
  - Added Neptune-DB graph store with vector store
- **Core:**
  - Implemented structured exception classes with error codes and suggested actions

**Improvements:**
- **Dependencies:**
  - Updated OpenAI dependency and improved Ollama compatibility
- **Testing:**
  - Added Weaviate DB test
  - Added comprehensive test suite for SQLiteManager
- **Documentation:**
  - Updated category docs
  - Updated Search V2 / Get All V2 filters documentation
  - Refactored AWS example title
  - Fixed Quickstart cURL example

**Bug Fixes:**
- **Vector Stores:**
  - Databricks bug fixes
  - Fixed S3 Vectors memory initialization issue from configuration
- **Core:**
  - Fixed JSON parsing with new memories
  - Replaced hardcoded LLM provider with provider from configuration
- **LLMs:**
  - Fixed Bedrock Anthropic models to use system field

</Update>

<Update label="2025-09-03" description="v0.1.117">

**New Features & Updates:**
- **OpenMemory:**
  - Added memory export / import feature
  - Added vector store integrations: Weaviate, FAISS, PGVector, Chroma, Redis, Elasticsearch, Milvus
  - Added `export_openmemory.sh` migration script
- **Vector Stores:**
  - Added Amazon S3 Vectors support
  - Added Databricks Mosaic AI vector store support
  - Added support for OpenAI Store
- **Graph Memory:** Added support for graph memory using Kuzu
- **Azure:** Added Azure Identity for Azure OpenAI and Azure AI Search authentication
- **Elasticsearch:** Added headers configuration support

**Improvements:**
- Added custom connection client for Weaviate to enable connecting to local containers
- Updated AWS Bedrock configuration
- Fixed dependency issues and tests; updated docstrings
- **Documentation:**
- **Documentation:**
  - Fixed Graph Docs page missing in sidebar
  - Updated integration documentation
  - Added version param in Search V2 API documentation
  - Updated Databricks documentation and refactored docs
  - Updated favicon logo
  - Fixed typos and Typescript docs

**Bug Fixes:**
- Baidu: Added missing provider for Baidu vector DB
- MongoDB: Replaced `query_vector` args in search method
- Fixed new memory mistaken for current
- AsyncMemory._add_to_vector_store: handled edge case when no facts found
- Fixed missing commas in Kuzu graph INSERT queries
- Fixed inconsistent created and updated properties for Graph
- Fixed missing `app_id` on client for Neptune Analytics
- Correctly pick AWS region from environment variable
- Fixed Ollama model existence check

**Refactoring:**
- **PGVector:** Use internal connection pools and context managers

</Update>

<Update label="2025-08-14" description="v0.1.116">

**New Features & Updates:**
- **Pinecone:** Added namespace support and improved type safety
- **Milvus:** Added db_name field to MilvusDBConfig
- **Vector Stores:** Added multi-id filters support
- **Vercel AI SDK:** Migration to AI SDK V5.0
- **Python Support:** Added Python 3.12 support
- **Graph Memory:** Added sanitizer methods for nodes and relationships
- **LLM Monitoring:** Added monitoring callback support

**Improvements:**
- **Performance:**
  - Improved async handling in AsyncMemory class
- **Documentation:**
  - Added async add announcement
  - Added personalized search docs
  - Added Neptune examples
  - Added V5 migration docs
- **Configuration:**
  - Refactored base class config for LLMs
  - Added sslmode for pgvector
- **Dependencies:**
  - Updated psycopg to version 3
  - Updated Docker compose

**Bug Fixes:**
- **Tests:**
  - Fixed failing tests
  - Restricted package versions
- **Memgraph:**
  - Fixed async attribute errors
  - Fixed n_embeddings usage
  - Fixed indexing issues
- **Vector Stores:**
  - Fixed Qdrant cloud indexing
  - Fixed Neo4j Cypher syntax
  - Fixed LLM parameters
- **Graph Store:**
  - Fixed LM config prioritization
- **Dependencies:**
  - Fixed JSON import for psycopg

**Refactoring:**
- **Google AI:** Refactored from Gemini to Google AI
- **Base Classes:** Refactored LLM base class configuration

</Update>

<Update label="2025-07-24" description="v0.1.115">

**New Features & Updates:**
- Enhanced project management via `client.project` and `AsyncMemoryClient.project` interfaces
- Full support for project CRUD operations (create, read, update, delete)
- Project member management: add, update, remove, and list members
- Manage project settings including custom instructions, categories, retrieval criteria, and graph enablement
- Both sync and async support for all project management operations

**Improvements:**
- **Documentation:**
  - Added detailed API reference and usage examples for new project management methods.
  - Updated all docs to use `client.project.get()` and `client.project.update()` instead of deprecated methods.
  
- **Deprecation:**
  - Marked the existing `get_project()` and `update_project()` methods as deprecated, with warnings guiding users to the new `client.project` API.

**Bug Fixes:**
- **Tests:**
  - Fixed Gemini embedder and LLM test mocks for correct error handling and argument structure.
- **vLLM:**
  - Fixed duplicate import in vLLM module.

</Update>

<Update label="2025-07-05" description="v0.1.114">

**New Features:**
- **OpenAI Agents:** Added OpenAI agents SDK support
- **Amazon Neptune:** Added Amazon Neptune Analytics graph_store configuration and integration
- **vLLM:** Added vLLM support

**Improvements:**
- **Documentation:** 
  - Added SOC2 and HIPAA compliance documentation
  - Enhanced group chat feature documentation for platform
  - Added Google AI ADK Integration documentation
  - Fixed documentation images and links
- **Setup:** Fixed Mem0 setup, logging, and documentation issues

**Bug Fixes:**
- **MongoDB:** Fixed MongoDB Vector Store misaligned strings and classes
- **vLLM:** Fixed missing OpenAI import in vLLM module and call errors
- **Dependencies:** Fixed CI issues related to missing dependencies
- **In
│   │   │   ├── models/
│   │   │   │   ├── anthropic.mdx
│   │   │   │   ├── aws_bedrock.mdx
│   │   │   │   ├── azure_openai.mdx
│   │   │   │   ├── deepseek.mdx
│   │   │   │   ├── google_AI.mdx
│   │   │   │   ├── groq.mdx
│   │   │   │   ├── langchain.mdx
│   │   │   │   ├── litellm.mdx
│   │   │   │   ├── lmstudio.mdx
│   │   │   │   ├── mistral_AI.mdx
│   │   │   │   ├── ollama.mdx
│   │   │   │   ├── openai.mdx
│   │   │   │   ├── sarvam.mdx
│   │   │   │   ├── together.mdx
│   │   │   │   ├── vllm.mdx
│   │   │   │   └── xAI.mdx
│   │   │   └── overview.mdx
│   │   ├── rerankers/
│   │   │   ├── config.mdx
│   │   │   ├── custom-prompts.mdx
│   │   │   ├── models/
│   │   │   │   ├── cohere.mdx
│   │   │   │   ├── huggingface.mdx
│   │   │   │   ├── llm.mdx
│   │   │   │   ├── llm_reranker.mdx
│   │   │   │   ├── sentence_transformer.mdx
│   │   │   │   └── zero_entropy.mdx
│   │   │   ├── optimization.mdx
│   │   │   └── overview.mdx
│   │   └── vectordbs/
│   │       ├── config.mdx
│   │       ├── dbs/
│   │       │   ├── azure.mdx
│   │       │   ├── azure_mysql.mdx
│   │       │   ├── baidu.mdx
│   │       │   ├── cassandra.mdx
│   │       │   ├── chroma.mdx
│   │       │   ├── databricks.mdx
│   │       │   ├── elasticsearch.mdx
│   │       │   ├── faiss.mdx
│   │       │   ├── langchain.mdx
│   │       │   ├── milvus.mdx
│   │       │   ├── mongodb.mdx
│   │       │   ├── neptune_analytics.mdx
│   │       │   ├── opensearch.mdx
│   │       │   ├── pgvector.mdx
│   │       │   ├── pinecone.mdx
│   │       │   ├── qdrant.mdx
│   │       │   ├── redis.mdx
│   │       │   ├── s3_vectors.mdx
│   │       │   ├── supabase.mdx
│   │       │   ├── upstash-vector.mdx
│   │       │   ├── valkey.mdx
│   │       │   ├── vectorize.mdx
│   │       │   ├── vertex_ai.mdx
│   │       │   └── weaviate.mdx
│   │       └── overview.mdx
│   ├── contributing/
│   │   ├── development.mdx
│   │   └── documentation.mdx
│   ├── cookbooks/
│   │   ├── companions/
│   │   │   ├── ai-tutor.mdx
│   │   │   ├── local-companion-ollama.mdx
│   │   │   ├── nodejs-companion.mdx
│   │   │   ├── quickstart-demo.mdx
│   │   │   ├── travel-assistant.mdx
│   │   │   ├── voice-companion-openai.mdx
│   │   │   └── youtube-research.mdx
│   │   ├── essentials/
│   │   │   ├── building-ai-companion.mdx
│   │   │   ├── choosing-memory-architecture-vector-vs-graph.mdx
│   │   │   ├── controlling-memory-ingestion.mdx
│   │   │   ├── entity-partitioning-playbook.mdx
│   │   │   ├── exporting-memories.mdx
│   │   │   ├── memory-expiration-short-and-long-term.mdx
│   │   │   └── tagging-and-organizing-memories.mdx
│   │   ├── frameworks/
│   │   │   ├── chrome-extension.mdx
│   │   │   ├── eliza-os-character.mdx
│   │   │   ├── gemini-3-with-mem0-mcp.mdx
│   │   │   ├── llamaindex-multiagent.mdx
│   │   │   ├── llamaindex-react.mdx
│   │   │   ├── mirofish-swarm-memory.mdx
│   │   │   └── multimodal-retrieval.mdx
│   │   ├── integrations/
│   │   │   ├── agents-sdk-tool.mdx
│   │   │   ├── aws-bedrock.mdx
│   │   │   ├── healthcare-google-adk.mdx
│   │   │   ├── mastra-agent.mdx
│   │   │   ├── neptune-analytics.mdx
│   │   │   ├── openai-tool-calls.mdx
│   │   │   └── tavily-search.mdx
│   │   ├── operations/
│   │   │   ├── content-writing.mdx
│   │   │   ├── deep-research.mdx
│   │   │   ├── email-automation.mdx
│   │   │   ├── support-inbox.mdx
│   │   │   └── team-task-agent.mdx
│   │   └── overview.mdx
│   ├── core-concepts/
│   │   ├── memory-operations/
│   │   │   ├── add.mdx
│   │   │   ├── delete.mdx
│   │   │   ├── search.mdx
│   │   │   └── update.mdx
│   │   └── memory-types.mdx
│   ├── docs.json
│   ├── integrations/
│   │   ├── agentops.mdx
│   │   ├── agno.mdx
│   │   ├── autogen.mdx
│   │   ├── aws-bedrock.mdx
│   │   ├── camel-ai.mdx
│   │   ├── crewai.mdx
│   │   ├── dify.mdx
│   │   ├── elevenlabs.mdx
│   │   ├── flowise.mdx
│   │   ├── google-ai-adk.mdx
│   │   ├── keywords.mdx
│   │   ├── langchain-tools.mdx
│   │   ├── langchain.mdx
│   │   ├── langgraph.mdx
│   │   ├── livekit.mdx
│   │   ├── llama-index.mdx
│   │   ├── mastra.mdx
│   │   ├── openai-agents-sdk.mdx
│   │   ├── openclaw.mdx
│   │   ├── pipecat.mdx
│   │   ├── raycast.mdx
│   │   └── vercel-ai-sdk.mdx
│   ├── integrations.mdx
│   ├── introduction.mdx
│   ├── llms.txt
│   ├── migration/
│   │   ├── api-changes.mdx
│   │   ├── breaking-changes.mdx
│   │   ├── oss-to-platform.mdx
│   │   └── v0-to-v1.mdx
│   ├── open-source/
│   │   ├── configuration.mdx
│   │   ├── features/
│   │   │   ├── async-memory.mdx
│   │   │   ├── custom-fact-extraction-prompt.mdx
│   │   │   ├── custom-update-memory-prompt.mdx
│   │   │   ├── graph-memory.mdx
│   │   │   ├── metadata-filtering.mdx
│   │   │   ├── multimodal-support.mdx
│   │   │   ├── openai_compatibility.mdx
│   │   │   ├── overview.mdx
│   │   │   ├── reranker-search.mdx
│   │   │   ├── reranking.mdx
│   │   │   └── rest-api.mdx
│   │   ├── multimodal-support.mdx
│   │   ├── node-quickstart.mdx
│   │   ├── overview.mdx
│   │   └── python-quickstart.mdx
│   ├── openapi.json
│   ├── openmemory/
│   │   ├── integrations.mdx
│   │   ├── overview.mdx
│   │   └── quickstart.mdx
│   ├── platform/
│   │   ├── advanced-memory-operations.mdx
│   │   ├── contribute.mdx
│   │   ├── faqs.mdx
│   │   ├── features/
│   │   │   ├── advanced-retrieval.mdx
│   │   │   ├── async-client.mdx
│   │   │   ├── async-mode-default-change.mdx
│   │   │   ├── contextual-add.mdx
│   │   │   ├── criteria-retrieval.mdx
│   │   │   ├── custom-categories.mdx
│   │   │   ├── custom-instructions.mdx
│   │   │   ├── direct-import.mdx
│   │   │   ├── entity-scoped-memory.mdx
│   │   │   ├── expiration-date.mdx
│   │   │   ├── feedback-mechanism.mdx
│   │   │   ├── graph-memory.mdx
│   │   │   ├── graph-threshold.mdx
│   │   │   ├── group-chat.mdx
│   │   │   ├── mcp-integration.mdx
│   │   │   ├── memory-export.mdx
│   │   │   ├── multimodal-support.mdx
│   │   │   ├── platform-overview.mdx
│   │   │   ├── timestamp.mdx
│   │   │   ├── v2-memory-filters.mdx
│   │   │   └── webhooks.mdx
│   │   ├── mem0-mcp.mdx
│   │   ├── overview.mdx
│   │   ├── platform-vs-oss.mdx
│   │   └── quickstart.mdx
│   └── templates/
│       ├── api_reference_template.mdx
│       ├── concept_guide_template.mdx
│       ├── cookbook_template.mdx
│       ├── feature_guide_template.mdx
│       ├── integration_guide_template.mdx
│       ├── migration_guide_template.mdx
│       ├── operation_guide_template.mdx
│       ├── parameters_reference_template.mdx
│       ├── quickstart_template.mdx
│       ├── release_notes_template.mdx
│       ├── section_overview_template.mdx
│       └── troubleshooting_playbook_template.mdx
├── embedchain/
│   ├── CITATION.cff
│   ├── CONTRIBUTING.md
│   ├── LICENSE
│   ├── Makefile
│   ├── README.md
│   ├── configs/
│   │   ├── anthropic.yaml
│   │   ├── aws_bedrock.yaml
│   │   ├── azure_openai.yaml
│   │   ├── chroma.yaml
│   │   ├── chunker.yaml
│   │   ├── clarifai.yaml
│   │   ├── cohere.yaml
│   │   ├── full-stack.yaml
│   │   ├── google.yaml
│   │   ├── gpt4.yaml
│   │   ├── gpt4all.yaml
│   │   ├── huggingface.yaml
│   │   ├── jina.yaml
│   │   ├── llama2.yaml
│   │   ├── ollama.yaml
│   │   ├── opensearch.yaml
│   │   ├── opensource.yaml
│   │   ├── pinecone.yaml
│   │   ├── pipeline.yaml
│   │   ├── together.yaml
│   │   ├── vertexai.yaml
│   │   ├── vllm.yaml
│   │   └── weaviate.yaml
│   ├── docs/
│   │   ├── Makefile
│   │   ├── README.md
│   │   ├── _snippets/
│   │   │   ├── get-help.mdx
│   │   │   ├── missing-data-source-tip.mdx
│   │   │   ├── missing-llm-tip.mdx
│   │   │   └── missing-vector-db-tip.mdx
│   │   ├── api-reference/
│   │   │   ├── advanced/
│   │   │   │   └── configuration.mdx
│   │   │   ├── app/
│   │   │   │   ├── add.mdx
│   │   │   │   ├── chat.mdx
│   │   │   │   ├── delete.mdx
│   │   │   │   ├── deploy.mdx
│   │   │   │   ├── evaluate.mdx
│   │   │   │   ├── get.mdx
│   │   │   │   ├── overview.mdx
│   │   │   │   ├── query.mdx
│   │   │   │   ├── reset.mdx
│   │   │   │   └── search.mdx
│   │   │   ├── overview.mdx
│   │   │   └── store/
│   │   │       ├── ai-assistants.mdx
│   │   │       └── openai-assistant.mdx
│   │   ├── community/
│   │   │   └── connect-with-us.mdx
│   │   ├── components/
│   │   │   ├── data-sources/
│   │   │   │   ├── audio.mdx
│   │   │   │   ├── beehiiv.mdx
│   │   │   │   ├── csv.mdx
│   │   │   │   ├── custom.mdx
│   │   │   │   ├── data-type-handling.mdx
│   │   │   │   ├── directory.mdx
│   │   │   │   ├── discord.mdx
│   │   │   │   ├── discourse.mdx
│   │   │   │   ├── docs-site.mdx
│   │   │   │   ├── docx.mdx
│   │   │   │   ├── dropbox.mdx
│   │   │   │   ├── excel-file.mdx
│   │   │   │   ├── github.mdx
│   │   │   │   ├── gmail.mdx
│   │   │   │   ├── google-drive.mdx
│   │   │   │   ├── image.mdx
│   │   │   │   ├── json.mdx
│   │   │   │   ├── mdx.mdx
│   │   │   │   ├── mysql.mdx
│   │   │   │   ├── notion.mdx
│   │   │   │   ├── openapi.mdx
│   │   │   │   ├── overview.mdx
│   │   │   │   ├── pdf-file.mdx
│   │   │   │   ├── postgres.mdx
│   │   │   │   ├── qna.mdx
│   │   │   │   ├── sitemap.mdx
│   │   │   │   ├── slack.mdx
│   │   │   │   ├── substack.mdx
│   │   │   │   ├── text-file.mdx
│   │   │   │   ├── text.mdx
│   │   │   │   ├── web-page.mdx
│   │   │   │   ├── xml.mdx
│   │   │   │   ├── youtube-channel.mdx
│   │   │   │   └── youtube-video.mdx
│   │   │   ├── embedding-models.mdx
│   │   │   ├── evaluation.mdx
│   │   │   ├── introduction.mdx
│   │   │   ├── llms.mdx
│   │   │   ├── retrieval-methods.mdx
│   │   │   ├── vector-databases/
│   │   │   │   ├── chromadb.mdx
│   │   │   │   ├── elasticsearch.mdx
│   │   │   │   ├── lancedb.mdx
│   │   │   │   ├── opensearch.mdx
│   │   │   │   ├── pinecone.mdx
│   │   │   │   ├── qdrant.mdx
│   │   │   │   ├── weaviate.mdx
│   │   │   │   └── zilliz.mdx
│   │   │   └── vector-databases.mdx
│   │   ├── contribution/
│   │   │   ├── dev.mdx
│   │   │   ├── docs.mdx
│   │   │   ├── guidelines.mdx
│   │   │   └── python.mdx
│   │   ├── deployment/
│   │   │   ├── fly_io.mdx
│   │   │   ├── gradio_app.mdx
│   │   │   ├── huggingface_spaces.mdx
│   │   │   ├── modal_com.mdx
│   │   │   ├── railway.mdx
│   │   │   ├── render_com.mdx
│   │   │   └── streamlit_io.mdx
│   │   ├── development.mdx
│   │   ├── examples/
│   │   │   ├── chat-with-PDF.mdx
│   │   │   ├── community/
│   │   │   │   └── showcase.mdx
│   │   │   ├── discord_bot.mdx
│   │   │   ├── full_stack.mdx
│   │   │   ├── nextjs-assistant.mdx
│   │   │   ├── notebooks-and-replits.mdx
│   │   │   ├── openai-assistant.mdx
│   │   │   ├── opensource-assistant.mdx
│   │   │   ├── poe_bot.mdx
│   │   │   ├── rest-api/
│   │   │   │   ├── add-data.mdx
│   │   │   │   ├── chat.mdx
│   │   │   │   ├── check-status.mdx
│   │   │   │   ├── create.mdx
│   │   │   │   ├── delete.mdx
│   │   │   │   ├── deploy.mdx
│   │   │   │   ├── get-all-apps.mdx
│   │   │   │   ├── get-data.mdx
│   │   │   │   ├── getting-started.mdx
│   │   │   │   └── query.mdx
│   │   │   ├── showcase.mdx
│   │   │   ├── slack-AI.mdx
│   │   │   ├── slack_bot.mdx
│   │   │   ├── telegram_bot.mdx
│   │   │   └── whatsapp_bot.mdx
│   │   ├── get-started/
│   │   │   ├── deployment.mdx
│   │   │   ├── faq.mdx
│   │   │   ├── full-stack.mdx
│   │   │   ├── integrations.mdx
│   │   │   ├── introduction.mdx
│   │   │   └── quickstart.mdx
│   │   ├── integration/
│   │   │   ├── chainlit.mdx
│   │   │   ├── helicone.mdx
│   │   │   ├── langsmith.mdx
│   │   │   ├── openlit.mdx
│   │   │   └── streamlit-mistral.mdx
│   │   ├── mint.json
│   │   ├── product/
│   │   │   └── release-notes.mdx
│   │   ├── rest-api.json
│   │   ├── support/
│   │   │   └── get-help.mdx
│   │   └── use-cases/
│   │       ├── chatbots.mdx
│   │       ├── introduction.mdx
│   │       ├── question-answering.mdx
│   │       └── semantic-search.mdx
│   ├── embedchain/
│   │   ├── __init__.py
│   │   ├── alembic.ini
│   │   ├── app.py
│   │   ├── bots/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── discord.py
│   │   │   ├── poe.py
│   │   │   ├── slack.py
│   │   │   └── whatsapp.py
│   │   ├── cache.py
│   │   ├── chunkers/
│   │   │   ├── __init__.py
│   │   │   ├── audio.py
│   │   │   ├── base_chunker.py
│   │   │   ├── beehiiv.py
│   │   │   ├── common_chunker.py
│   │   │   ├── discourse.py
│   │   │   ├── docs_site.py
│   │   │   ├── docx_file.py
│   │   │   ├── excel_file.py
│   │   │   ├── gmail.py
│   │   │   ├── google_drive.py
│   │   │   ├── image.py
│   │   │   ├── json.py
│   │   │   ├── mdx.py
│   │   │   ├── mysql.py
│   │   │   ├── notion.py
│   │   │   ├── openapi.py
│   │   │   ├── pdf_file.py
│   │   │   ├── postgres.py
│   │   │   ├── qna_pair.py
│   │   │   ├── rss_feed.py
│   │   │   ├── sitemap.py
│   │   │   ├── slack.py
│   │   │   ├── substack.py
│   │   │   ├── table.py
│   │   │   ├── text.py
│   │   │   ├── unstructured_file.py
│   │   │   ├── web_page.py
│   │   │   ├── xml.py
│   │   │   └── youtube_video.py
│   │   ├── cli.py
│   │   ├── client.py
│   │   ├── config/
│   │   │   ├── __init__.py
│   │   │   ├── add_config.py
│   │   │   ├── app_config.py
│   │   │   ├── base_app_config.py
│   │   │   ├── base_config.py
│   │   │   ├── cache_config.py
│   │   │   ├── embedder/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── aws_bedrock.py
│   │   │   │   ├── base.py
│   │   │   │   ├── google.py
│   │   │   │   └── ollama.py
│   │   │   ├── evaluation/
│   │   │   │   ├── __init__.py
│   │   │   │   └── base.py
│   │   │   ├── llm/
│   │   │   │   ├── __init__.py
│   │   │   │   └── base.py
│   │   │   ├── mem0_config.py
│   │   │   ├── model_prices_and_context_window.json
│   │   │   ├── vector_db/
│   │   │   │   ├── base.py
│   │   │   │   ├── chroma.py
│   │   │   │   ├── elasticsearch.py
│   │   │   │   ├── lancedb.py
│   │   │   │   ├── opensearch.py
│   │   │   │   ├── pinecone.py
│   │   │   │   ├── qdrant.py
│   │   │   │   ├── weaviate.py
│   │   │   │   └── zilliz.py
│   │   │   └── vectordb/
│   │   │       └── __init__.py
│   │   ├── constants.py
│   │   ├── core/
│   │   │   └── __init__.py
│   │   ├── data_formatter/
│   │   │   ├── __init__.py
│   │   │   └── data_formatter.py
│   │   ├── deployment/
│   │   │   ├── fly.io/
│   │   │   │   ├── .dockerignore
│   │   │   │   ├── Dockerfile
│   │   │   │   ├── app.py
│   │   │   │   └── requirements.txt
│   │   │   ├── gradio.app/
│   │   │   │   ├── app.py
│   │   │   │   └── requirements.txt
│   │   │   ├── modal.com/
│   │   │   │   ├── .gitignore
│   │   │   │   ├── app.py
│   │   │   │   └── requirements.txt
│   │   │   ├── render.com/
│   │   │   │   ├── .gitignore
│   │   │   │   ├── app.py
│   │   │   │   ├── render.yaml
│   │   │   │   └── requirements.txt
│   │   │   └── streamlit.io/
│   │   │       ├── .streamlit/
│   │   │       │   └── secrets.toml
│   │   │       ├── app.py
│   │   │       └── requirements.txt
│   │   ├── embedchain.py
│   │   ├── embedder/
│   │   │   ├── __init__.py
│   │   │   ├── aws_bedrock.py
│   │   │   ├── azure_openai.py
│   │   │   ├── base.py
│   │   │   ├── clarifai.py
│   │   │   ├── cohere.py
│   │   │   ├── google.py
│   │   │   ├── gpt4all.py
│   │   │   ├── huggingface.py
│   │   │   ├── mistralai.py
│   │   │   ├── nvidia.py
│   │   │   ├── ollama.py
│   │   │   ├── openai.py
│   │   │   └── vertexai.py
│   │   ├── evaluation/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   └── metrics/
│   │   │       ├── __init__.py
│   │   │       ├── answer_relevancy.py
│   │   │       ├── context_relevancy.py
│   │   │       └── groundedness.py
│   │   ├── factory.py
│   │   ├── helpers/
│   │   │   ├── __init__.py
│   │   │   ├── callbacks.py
│   │   │   └── json_serializable.py
│   │   ├── llm/
│   │   │   ├── __init__.py
│   │   │   ├── anthropic.py
│   │   │   ├── aws_bedrock.py
│   │   │   ├── azure_openai.py
│   │   │   ├── base.py
│   │   │   ├── clarifai.py
│   │   │   ├── cohere.py
│   │   │   ├── google.py
│   │   │   ├── gpt4all.py
│   │   │   ├── groq.py
│   │   │   ├── huggingface.py
│   │   │   ├── jina.py
│   │   │   ├── llama2.py
│   │   │   ├── mistralai.py
│   │   │   ├── nvidia.py
│   │   │   ├── ollama.py
│   │   │   ├── openai.py
│   │   │   ├── together.py
│   │   │   ├── vertex_ai.py
│   │   │   └── vllm.py
│   │   ├── loaders/
│   │   │   ├── __init__.py
│   │   │   ├── audio.py
│   │   │   ├── base_loader.py
│   │   │   ├── beehiiv.py
│   │   │   ├── csv.py
│   │   │   ├── directory_loader.py
│   │   │   ├── discord.py
│   │   │   ├── discourse.py
│   │   │   ├── docs_site_loader.py
│   │   │   ├── docx_file.py
│   │   │   ├── dropbox.py
│   │   │   ├── excel_file.py
│   │   │   ├── github.py
│   │   │   ├── gmail.py
│   │   │   ├── google_drive.py
│   │   │   ├── image.py
│   │   │   ├── json.py
│   │   │   ├── local_qna_pair.py
│   │   │   ├── local_text.py
│   │   │   ├── mdx.py
│   │   │   ├── mysql.py
│   │   │   ├── notion.py
│   │   │   ├── openapi.py
│   │   │   ├── pdf_file.py
│   │   │   ├── postgres.py
│   │   │   ├── rss_feed.py
│   │   │   ├── sitemap.py
│   │   │   ├── slack.py
│   │   │   ├── substack.py
│   │   │   ├── text_file.py
│   │   │   ├── unstructured_file.py
│   │   │   ├── web_page.py
│   │   │   ├── xml.py
│   │   │   ├── youtube_channel.py
│   │   │   └── youtube_video.py
│   │   ├── memory/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── message.py
│   │   │   └── utils.py
│   │   ├── migrations/
│   │   │   ├── env.py
│   │   │   ├── script.py.mako
│   │   │   └── versions/
│   │   │       └── 40a327b3debd_create_initial_migrations.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── data_type.py
│   │   │   ├── embedding_functions.py
│   │   │   ├── providers.py
│   │   │   └── vector_dimensions.py
│   │   ├── pipeline.py
│   │   ├── store/
│   │   │   ├── __init__.py
│   │   │   └── assistants.py
│   │   ├── telemetry/
│   │   │   ├── __init__.py
│   │   │   └── posthog.py
│   │   ├── utils/
│   │   │   ├── __init__.py
│   │   │   ├── cli.py
│   │   │   ├── evaluation.py
│   │   │   └── misc.py
│   │   └── vectordb/
│   │       ├── __init__.py
│   │       ├── base.py
│   │       ├── chroma.py
│   │       ├── elasticsearch.py
│   │       ├── lancedb.py
│   │       ├── opensearch.py
│   │       ├── pinecone.py
│   │       ├── qdrant.py
│   │       ├── weaviate.py
│   │       └── zilliz.py
│   ├── examples/
│   │   ├── api_server/
│   │   │   ├── .dockerignore
│   │   │   ├── .gitignore
│   │   │   ├── Dockerfile
│   │   │   ├── README.md
│   │   │   ├── api_server.py
│   │   │   ├── docker-compose.yml
│   │   │   ├── requirements.txt
│   │   │   └── variables.env
│   │   ├── chainlit/
│   │   │   ├── .gitignore
│   │   │   ├── README.md
│   │   │   ├── app.py
│   │   │   ├── chainlit.md
│   │   │   └── requirements.txt
│   │   ├── chat-pdf/
│   │   │   ├── README.md
│   │   │   ├── app.py
│   │   │   ├── embedchain.json
│   │   │   └── requirements.txt
│   │   ├── discord_bot/
│   │   │   ├── .dockerignore
│   │   │   ├── .gitignore
│   │   │   ├── Dockerfile
│   │   │   ├── README.md
│   │   │   ├── discord_bot.py
│   │   │   ├── docker-compose.yml
│   │   │   ├── requirements.txt
│   │   │   └── variables.env
│   │   ├── full_stack/
│   │   │   ├── .dockerignore
│   │   │   ├── README.md
│   │   │   ├── backend/
│   │   │   │   ├── .dockerignore
│   │   │   │   ├── .gitignore
│   │   │   │   ├── Dockerfile
│   │   │   │   ├── models.py
│   │   │   │   ├── paths.py
│   │   │   │   ├── requirements.txt
│   │   │   │   ├── routes/
│   │   │   │   │   ├── chat_response.py
│   │   │   │   │   ├── dashboard.py
│   │   │   │   │   └── sources.py
│   │   │   │   └── server.py
│   │   │   ├── docker-compose.yml
│   │   │   └── frontend/
│   │   │       ├── .dockerignore
│   │   │       ├── .eslintrc.json
│   │   │       ├── .gitignore
│   │   │       ├── Dockerfile
│   │   │       ├── jsconfig.json
│   │   │       ├── next.config.js
│   │   │       ├── package.json
│   │   │       ├── postcss.config.js
│   │   │       ├── src/
│   │   │       │   ├── components/
│   │   │       │   │   ├── PageWrapper.js
│   │   │       │   │   ├── chat/
│   │   │       │   │   │   ├── BotWrapper.js
│   │   │       │   │   │   └── HumanWrapper.js
│   │   │       │   │   └── dashboard/
│   │   │       │   │       ├── CreateBot.js
│   │   │       │   │       ├── DeleteBot.js
│   │   │       │   │       ├── PurgeChats.js
│   │   │       │   │       └── SetOpenAIKey.js
│   │   │       │   ├── containers/
│   │   │       │   │   ├── ChatWindow.js
│   │   │       │   │   ├── SetSources.js
│   │   │       │   │   └── Sidebar.js
│   │   │       │   ├── pages/
│   │   │       │   │   ├── [bot_slug]/
│   │   │       │   │   │   └── app.js
│   │   │       │   │   ├── _app.js
│   │   │       │   │   ├── _document.js
│   │   │       │   │   └── index.js
│   │   │       │   └── styles/
│   │   │       │       └── globals.css
│   │   │       └── tailwind.config.js
│   │   ├── mistral-streamlit/
│   │   │   ├── README.md
│   │   │   ├── app.py
│   │   │   ├── config.yaml
│   │   │   └── requirements.txt
│   │   ├── nextjs/
│   │   │   ├── README.md
│   │   │   ├── ec_app/
│   │   │   │   ├── .dockerignore
│   │   │   │   ├── Dockerfile
│   │   │   │   ├── app.py
│   │   │   │   ├── embedchain.json
│   │   │   │   ├── fly.toml
│   │   │   │   └── requirements.txt
│   │   │   ├── nextjs_discord/
│   │   │   │   ├── .dockerignore
│   │   │   │   ├── Dockerfile
│   │   │   │   ├── app.py
│   │   │   │   ├── embedchain.json
│   │   │   │   ├── fly.toml
│   │   │   │   └── requirements.txt
│   │   │   ├── nextjs_slack/
│   │   │   │   ├── .dockerignore
│   │   │   │   ├── Dockerfile
│   │   │   │   ├── app.py
│   │   │   │   ├── embedchain.json
│   │   │   │   ├── fly.toml
│   │   │   │   └── requirements.txt
│   │   │   └── requirements.txt
│   │   ├── private-ai/
│   │   │   ├── README.md
│   │   │   ├── config.yaml
│   │   │   ├── privateai.py
│   │   │   └── requirements.txt
│   │   ├── rest-api/
│   │   │   ├── .dockerignore
│   │   │   ├── .gitignore
│   │   │   ├── Dockerfile
│   │   │   ├── README.md
│   │   │   ├── __init__.py
│   │   │   ├── bruno/
│   │   │   │   └── ec-rest-api/
│   │   │   │       ├── bruno.json
│   │   │   │       ├── default_add.bru
│   │   │   │       ├── default_chat.bru
│   │   │   │       ├── default_query.bru
│   │   │   │       └── ping.bru
│   │   │   ├── configs/
│   │   │   │   └── README.md
│   │   │   ├── database.py
│   │   │   ├── default.yaml
│   │   │   ├── main.py
│   │   │   ├── models.py
│   │   │   ├── requirements.txt
│   │   │   ├── sample-config.yaml
│   │   │   ├── services.py
│   │   │   └── utils.py
│   │   ├── sadhguru-ai/
│   │   │   ├── README.md
│   │   │   ├── app.py
│   │   │   └── requirements.txt
│   │   ├── slack_bot/
│   │   │   ├── Dockerfile
│   │   │   └── requirements.txt
│   │   ├── telegram_bot/
│   │   │   ├── .gitignore
│   │   │   ├── Dockerfile
│   │   │   ├── README.md
│   │   │   ├── requirements.txt
│   │   │   └── telegram_bot.py
│   │   ├── unacademy-ai/
│   │   │   ├── README.md
│   │   │   ├── app.py
│   │   │   └── requirements.txt
│   │   └── whatsapp_bot/
│   │       ├── .gitignore
│   │       ├── Dockerfile
│   │       ├── README.md
│   │       ├── requirements.txt
│   │       ├── run.py
│   │       └── whatsapp_bot.py
│   ├── notebooks/
│   │   ├── anthropic.ipynb
│   │   ├── aws-bedrock.ipynb
│   │   ├── azure-openai.ipynb
│   │   ├── azure_openai.yaml
│   │   ├── chromadb.ipynb
│   │   ├── clarifai.ipynb
│   │   ├── cohere.ipynb
│   │   ├── elasticsearch.ipynb
│   │   ├── embedchain-chromadb-server.ipynb
│   │   ├── embedchain-docs-site-example.ipynb
│   │   ├── gpt4all.ipynb
│   │   ├── hugging_face_hub.ipynb
│   │   ├── jina.ipynb
│   │   ├── lancedb.ipynb
│   │   ├── llama2.ipynb
│   │   ├── ollama.ipynb
│   │   ├── openai.ipynb
│   │   ├── openai_azure.yaml
│   │   ├── opensearch.ipynb
│   │   ├── pinecone.ipynb
│   │   ├── together.ipynb
│   │   └── vertex_ai.ipynb
│   ├── poetry.toml
│   ├── pyproject.toml
│   └── tests/
│       ├── __init__.py
│       ├── chunkers/
│       │   ├── test_base_chunker.py
│       │   ├── test_chunkers.py
│       │   └── test_text.py
│       ├── conftest.py
│       ├── embedchain/
│       │   ├── test_add.py
│       │   ├── test_embedchain.py
│       │   └── test_utils.py
│       ├── embedder/
│       │   ├── test_aws_bedrock_embedder.py
│       │   ├── test_azure_openai_embedder.py
│       │   ├── test_embedder.py
│       │   └── test_huggingface_embedder.py
│       ├── evaluation/
│       │   ├── test_answer_relevancy_metric.py
│       │   ├── test_context_relevancy_metric.py
│       │   └── test_groundedness_metric.py
│       ├── helper_classes/
│       │   └── test_json_serializable.py
│       ├── llm/
│       │   ├── conftest.py
│       │   ├── test_anthrophic.py
│       │   ├── test_aws_bedrock.py
│       │   ├── test_azure_openai.py
│       │   ├── test_base_llm.py
│       │   ├── test_chat.py
│       │   ├── test_clarifai.py
│       │   ├── test_cohere.py
│       │   ├── test_generate_prompt.py
│       │   ├── test_google.py
│       │   ├── test_gpt4all.py
│       │   ├── test_huggingface.py
│       │   ├── test_jina.py
│       │   ├── test_llama2.py
│       │   ├── test_mistralai.py
│       │   ├── test_ollama.py
│       │   ├── test_openai.py
│       │   ├── test_query.py
│       │   ├── test_together.py
│       │   └── test_vertex_ai.py
│       ├── loaders/
│       │   ├── test_audio.py
│       │   ├── test_csv.py
│       │   ├── test_discourse.py
│       │   ├── test_docs_site.py
│       │   ├── test_docs_site_loader.py
│       │   ├── test_docx_file.py
│       │   ├── test_dropbox.py
│       │   ├── test_excel_file.py
│       │   ├── test_github.py
│       │   ├── test_gmail.py
│       │   ├── test_google_drive.py
│       │   ├── test_json.py
│       │   ├── test_local_qna_pair.py
│       │   ├── test_local_text.py
│       │   ├── test_mdx.py
│       │   ├── test_mysql.py
│       │   ├── test_notion.py
│       │   ├── test_openapi.py
│       │   ├── test_pdf_file.py
│       │   ├── test_postgres.py
│       │   ├── test_slack.py
│       │   ├── test_web_page.py
│       │   ├── test_xml.py
│       │   └── test_youtube_video.py
│       ├── memory/
│       │   ├── test_chat_memory.py
│       │   └── test_memory_messages.py
│       ├── models/
│       │   └── test_data_type.py
│       ├── telemetry/
│       │   └── test_posthog.py
│       ├── test_app.py
│       ├── test_client.py
│       ├── test_factory.py
│       ├── test_utils.py
│       └── vectordb/
│           ├── test_chroma_db.py
│           ├── test_elasticsearch_db.py
│           ├── test_lancedb.py
│           ├── test_pinecone.py
│           ├── test_qdrant.py
│           ├── test_weaviate.py
│           └── test_zilliz_db.py
├── evaluation/
│   ├── Makefile
│   ├── README.md
│   ├── evals.py
│   ├── generate_scores.py
│   ├── metrics/
│   │   ├── llm_judge.py
│   │   └── utils.py
│   ├── prompts.py
│   ├── run_experiments.py
│   └── src/
│       ├── langmem.py
│       ├── memzero/
│       │   ├── add.py
│       │   └── search.py
│       ├── openai/
│       │   └── predict.py
│       ├── rag.py
│       ├── utils.py
│       └── zep/
│           ├── add.py
│           └── search.py
├── examples/
│   ├── graph-db-demo/
│   │   ├── kuzu-example.ipynb
│   │   ├── memgraph-example.ipynb
│   │   ├── neo4j-example.ipynb
│   │   ├── neptune-db-example.ipynb
│   │   └── neptune-example.ipynb
│   ├── mem0-demo/
│   │   ├── .gitignore
│   │   ├── app/
│   │   │   ├── api/
│   │   │   │   └── chat/
│   │   │   │       └── route.ts
│   │   │   ├── assistant.tsx
│   │   │   ├── globals.css
│   │   │   ├── layout.tsx
│   │   │   └── page.tsx
│   │   ├── components/
│   │   │   ├── assistant-ui/
│   │   │   │   ├── markdown-text.tsx
│   │   │   │   ├── memory-indicator.tsx
│   │   │   │   ├── memory-ui.tsx
│   │   │   │   ├── theme-aware-logo.tsx
│   │   │   │   ├── thread-list.tsx
│   │   │   │   ├── thread.tsx
│   │   │   │   └── tooltip-icon-button.tsx
│   │   │   ├── mem0/
│   │   │   │   ├── github-button.tsx
│   │   │   │   ├── markdown.css
│   │   │   │   ├── markdown.tsx
│   │   │   │   └── theme-aware-logo.tsx
│   │   │   └── ui/
│   │   │       ├── alert-dialog.tsx
│   │   │       ├── avatar.tsx
│   │   │       ├── badge.tsx
│   │   │       ├── button.tsx
│   │   │       ├── popover.tsx
│   │   │       ├── scroll-area.tsx
│   │   │       └── tooltip.tsx
│   │   ├── components.json
│   │   ├── eslint.config.mjs
│   │   ├── lib/
│   │   │   └── utils.ts
│   │   ├── next-env.d.ts
│   │   ├── next.config.ts
│   │   ├── package.json
│   │   ├── postcss.config.mjs
│   │   ├── tailwind.config.ts
│   │   └── tsconfig.json
│   ├── misc/
│   │   ├── diet_assistant_voice_cartesia.py
│   │   ├── fitness_checker.py
│   │   ├── healthcare_assistant_google_adk.py
│   │   ├── movie_recommendation_grok3.py
│   │   ├── multillm_memory.py
│   │   ├── personal_assistant_agno.py
│   │   ├── personalized_search.py
│   │   ├── strands_agent_aws_elasticache_neptune.py
│   │   ├── study_buddy.py
│   │   ├── test.py
│   │   ├── vllm_example.py
│   │   └── voice_assistant_elevenlabs.py
│   ├── multiagents/
│   │   └── llamaindex_learning_system.py
│   ├── multimodal-demo/
│   │   ├── .gitattributes
│   │   ├── .gitignore
│   │   ├── components.json
│   │   ├── eslint.config.js
│   │   ├── index.html
│   │   ├── package.json
│   │   ├── postcss.config.js
│   │   ├── src/
│   │   │   ├── App.tsx
│   │   │   ├── components/
│   │   │   │   ├── api-settings-popup.tsx
│   │   │   │   ├── chevron-toggle.tsx
│   │   │   │   ├── header.tsx
│   │   │   │   ├── input-area.tsx
│   │   │   │   ├── memories.tsx
│   │   │   │   ├── messages.tsx
│   │   │   │   └── ui/
│   │   │   │       ├── avatar.tsx
│   │   │   │       ├── badge.tsx
│   │   │   │       ├── button.tsx
│   │   │   │       ├── card.tsx
│   │   │   │       ├── dialog.tsx
│   │   │   │       ├── input.tsx
│   │   │   │       ├── label.tsx
│   │   │   │       ├── scroll-area.tsx
│   │   │   │       └── select.tsx
│   │   │   ├── constants/
│   │   │   │   └── messages.ts
│   │   │   ├── contexts/
│   │   │   │   └── GlobalContext.tsx
│   │   │   ├── hooks/
│   │   │   │   ├── useAuth.ts
│   │   │   │   ├── useChat.ts
│   │   │   │   └── useFileHandler.ts
│   │   │   ├── index.css
│   │   │   ├── libs/
│   │   │   │   └── utils.ts
│   │   │   ├── main.tsx
│   │   │   ├── page.tsx
│   │   │   ├── pages/
│   │   │   │   └── home.tsx
│   │   │   ├── types.ts
│   │   │   ├── utils/
│   │   │   │   └── fileUtils.ts
│   │   │   └── vite-env.d.ts
│   │   ├── tailwind.config.js
│   │   ├── tsconfig.app.json
│   │   ├── tsconfig.json
│   │   ├── tsconfig.node.json
│   │   ├── useChat.ts
│   │   └── vite.config.ts
│   ├── openai-inbuilt-tools/
│   │   ├── index.js
│   │   └── package.json
│   ├── vercel-ai-sdk-chat-app/
│   │   ├── .gitattributes
│   │   ├── .gitignore
│   │   ├── components.json
│   │   ├── eslint.config.js
│   │   ├── index.html
│   │   ├── package.json
│   │   ├── postcss.config.js
│   │   ├── src/
│   │   │   ├── App.tsx
│   │   │   ├── components/
│   │   │   │   ├── api-settings-popup.tsx
│   │   │   │   ├── chevron-toggle.tsx
│   │   │   │   ├── header.tsx
│   │   │   │   ├── input-area.tsx
│   │   │   │   ├── memories.tsx
│   │   │   │   ├── messages.tsx
│   │   │   │   └── ui/
│   │   │   │       ├── avatar.tsx
│   │   │   │       ├── badge.tsx
│   │   │   │       ├── button.tsx
│   │   │   │       ├── card.tsx
│   │   │   │       ├── dialog.tsx
│   │   │   │       ├── input.tsx
│   │   │   │       ├── label.tsx
│   │   │   │       ├── scroll-area.tsx
│   │   │   │       └── select.tsx
│   │   │   ├── constants/
│   │   │   │   └── messages.ts
│   │   │   ├── contexts/
│   │   │   │   └── GlobalContext.tsx
│   │   │   ├── hooks/
│   │   │   │   ├── useAuth.ts
│   │   │   │   ├── useChat.ts
│   │   │   │   └── useFileHandler.ts
│   │   │   ├── index.css
│   │   │   ├── libs/
│   │   │   │   └── utils.ts
│   │   │   ├── main.tsx
│   │   │   ├── page.tsx
│   │   │   ├── pages/
│   │   │   │   └── home.tsx
│   │   │   ├── types.ts
│   │   │   ├── utils/
│   │   │   │   └── fileUtils.ts
│   │   │   └── vite-env.d.ts
│   │   ├── tailwind.config.js
│   │   ├── tsconfig.app.json
│   │   ├── tsconfig.json
│   │   ├── tsconfig.node.json
│   │   └── vite.config.ts
│   └── yt-assistant-chrome/
│       ├── .gitignore
│       ├── README.md
│       ├── manifest.json
│       ├── package.json
│       ├── public/
│       │   ├── options.html
│       │   └── popup.html
│       ├── src/
│       │   ├── background.js
│       │   ├── content.js
│       │   ├── options.js
│       │   └── popup.js
│       ├── styles/
│       │   ├── content.css
│       │   ├── options.css
│       │   └── popup.css
│       └── webpack.config.js
├── mem0/
│   ├── __init__.py
│   ├── client/
│   │   ├── __init__.py
│   │   ├── main.py
│   │   ├── project.py
│   │   └── utils.py
│   ├── configs/
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── embeddings/
│   │   │   ├── __init__.py
│   │   │   └── base.py
│   │   ├── enums.py
│   │   ├── llms/
│   │   │   ├── __init__.py
│   │   │   ├── anthropic.py
│   │   │   ├── aws_bedrock.py
│   │   │   ├── azure.py
│   │   │   ├── base.py
│   │   │   ├── deepseek.py
│   │   │   ├── lmstudio.py
│   │   │   ├── ollama.py
│   │   │   ├── openai.py
│   │   │   └── vllm.py
│   │   ├── prompts.py
│   │   ├── rerankers/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── cohere.py
│   │   │   ├── config.py
│   │   │   ├── huggingface.py
│   │   │   ├── llm.py
│   │   │   ├── sentence_transformer.py
│   │   │   └── zero_entropy.py
│   │   └── vector_stores/
│   │       ├── __init__.py
│   │       ├── azure_ai_search.py
│   │       ├── azure_mysql.py
│   │       ├── baidu.py
│   │       ├── cassandra.py
│   │       ├── chroma.py
│   │       ├── databricks.py
│   │       ├── elasticsearch.py
│   │       ├── faiss.py
│   │       ├── langchain.py
│   │       ├── milvus.py
│   │       ├── mongodb.py
│   │       ├── neptune.py
│   │       ├── opensearch.py
│   │       ├── pgvector.py
│   │       ├── pinecone.py
│   │       ├── qdrant.py
│   │       ├── redis.py
│   │       ├── s3_vectors.py
│   │       ├── supabase.py
│   │       ├── upstash_vector.py
│   │       ├── valkey.py
│   │       ├── vertex_ai_vector_search.py
│   │       └── weaviate.py
│   ├── embeddings/
│   │   ├── __init__.py
│   │   ├── aws_bedrock.py
│   │   ├── azure_openai.py
│   │   ├── base.py
│   │   ├── configs.py
│   │   ├── fastembed.py
│   │   ├── gemini.py
│   │   ├── huggingface.py
│   │   ├── langchain.py
│   │   ├── lmstudio.py
│   │   ├── mock.py
│   │   ├── ollama.py
│   │   ├── openai.py
│   │   ├── together.py
│   │   └── vertexai.py
│   ├── exceptions.py
│   ├── graphs/
│   │   ├── __init__.py
│   │   ├── configs.py
│   │   ├── neptune/
│   │   │   ├── __init__.py
│   │   │   ├── base.py
│   │   │   ├── neptunedb.py
│   │   │   └── neptunegraph.py
│   │   ├── tools.py
│   │   └── utils.py
│   ├── llms/
│   │   ├── __init__.py
│   │   ├── anthropic.py
│   │   ├── aws_bedrock.py
│   │   ├── azure_openai.py
│   │   ├── azure_openai_structured.py
│   │   ├── base.py
│   │   ├── configs.py
│   │   ├── deepseek.py
│   │   ├── gemini.py
│   │   ├── groq.py
│   │   ├── langchain.py
│   │   ├── litellm.py
│   │   ├── lmstudio.py
│   │   ├── ollama.py
│   │   ├── openai.py
│   │   ├── openai_structured.py
│   │   ├── sarvam.py
│   │   ├── together.py
│   │   ├── vllm.py
│   │   └── xai.py
│   ├── memory/
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── graph_memory.py
│   │   ├── kuzu_memory.py
│   │   ├── main.py
│   │   ├── memgraph_memory.py
│   │   ├── setup.py
│   │   ├── storage.py
│   │   ├── telemetry.py
│   │   └── utils.py
│   ├── proxy/
│   │   ├── __init__.py
│   │   └── main.py
│   ├── reranker/
│   │   ├── __init__.py
│   │   ├── base.py
│   │   ├── cohere_reranker.py
│   │   ├── huggingface_reranker.py
│   │   ├── llm_reranker.py
│   │   ├── sentence_transformer_reranker.py
│   │   └── zero_entropy_reranker.py
│   ├── utils/
│   │   ├── factory.py
│   │   └── gcp_auth.py
│   └── vector_stores/
│       ├── __init__.py
│       ├── azure_ai_search.py
│       ├── azure_mysql.py
│       ├── baidu.py
│       ├── base.py
│       ├── cassandra.py
│       ├── chroma.py
│       ├── configs.py
│       ├── databricks.py
│       ├── elasticsearch.py
│       ├── faiss.py
│       ├── langchain.py
│       ├── milvus.py
│       ├── mongodb.py
│       ├── neptune_analytics.py
│       ├── opensearch.py
│       ├── pgvector.py
│       ├── pinecone.py
│       ├── qdrant.py
│       ├── redis.py
│       ├── s3_vectors.py
│       ├── supabase.py
│       ├── upstash_vector.py
│       ├── valkey.py
│       ├── vertex_ai_vector_search.py
│       └── weaviate.py
├── mem0-ts/
│   ├── .gitignore
│   ├── .prettierignore
│   ├── README.md
│   ├── jest.config.js
│   ├── jest.integration.config.js
│   ├── package.json
│   ├── src/
│   │   ├── client/
│   │   │   ├── index.ts
│   │   │   ├── mem0.ts
│   │   │   ├── mem0.types.ts
│   │   │   ├── telemetry.ts
│   │   │   ├── telemetry.types.ts
│   │   │   └── tests/
│   │   │       ├── helpers.ts
│   │   │       ├── integration/
│   │   │       │   ├── batch.test.ts
│   │   │       │   ├── crud.test.ts
│   │   │       │   ├── global-setup.ts
│   │   │       │   ├── global-teardown.ts
│   │   │       │   ├── helpers.ts
│   │   │       │   ├── initialization.test.ts
│   │   │       │   ├── management.test.ts
│   │   │       │   └── search.test.ts
│   │   │       ├── memoryClient.batch.test.ts
│   │   │       ├── memoryClient.crud.test.ts
│   │   │       ├── memoryClient.init.test.ts
│   │   │       ├── memoryClient.project.test.ts
│   │   │       ├── memoryClient.search.test.ts
│   │   │       ├── memoryClient.users.test.ts
│   │   │       ├── memoryClient.webhooks.test.ts
│   │   │       └── setup.ts
│   │   ├── common/
│   │   │   ├── exceptions.test.ts
│   │   │   └── exceptions.ts
│   │   ├── community/
│   │   │   ├── .prettierignore
│   │   │   ├── package.json
│   │   │   ├── src/
│   │   │   │   ├── index.ts
│   │   │   │   └── integrations/
│   │   │   │       └── langchain/
│   │   │   │           ├── index.ts
│   │   │   │           └── mem0.ts
│   │   │   └── tsconfig.json
│   │   └── oss/
│   │       ├── .gitignore
│   │       ├── README.md
│   │       ├── examples/
│   │       │   ├── basic.ts
│   │       │   ├── llms/
│   │       │   │   └── mistral-example.ts
│   │       │   ├── local-llms.ts
│   │       │   ├── utils/
│   │       │   │   └── test-utils.ts
│   │       │   └── vector-stores/
│   │       │       ├── azure-ai-search.ts
│   │       │       ├── index.ts
│   │       │       ├── memory.ts
│   │       │       ├── pgvector.ts
│   │       │       ├── qdrant.ts
│   │       │       ├── redis.ts
│   │       │       └── supabase.ts
│   │       ├── package.json
│   │       ├── src/
│   │       │   ├── config/
│   │       │   │   ├── defaults.ts
│   │       │   │   └── manager.ts
│   │       │   ├── embeddings/
│   │       │   │   ├── azure.ts
│   │       │   │   ├── base.ts
│   │       │   │   ├── google.ts
│   │       │   │   ├── langchain.ts
│   │       │   │   ├── lmstudio.ts
│   │       │   │   ├── ollama.ts
│   │       │   │   └── openai.ts
│   │       │   ├── graphs/
│   │       │   │   ├── configs.ts
│   │       │   │   ├── tools.ts
│   │       │   │   └── utils.ts
│   │       │   ├── index.ts
│   │       │   ├── llms/
│   │       │   │   ├── anthropic.ts
│   │       │   │   ├── azure.ts
│   │       │   │   ├── base.ts
│   │       │   │   ├── google.ts
│   │       │   │   ├── groq.ts
│   │       │   │   ├── langchain.ts
│   │       │   │   ├── lmstudio.ts
│   │       │   │   ├── mistral.ts
│   │       │   │   ├── ollama.ts
│   │       │   │   ├── openai.ts
│   │       │   │   └── openai_structured.ts
│   │       │   ├── memory/
│   │       │   │   ├── graph_memory.ts
│   │       │   │   ├── index.ts
│   │       │   │   └── memory.types.ts
│   │       │   ├── prompts/
│   │       │   │   └── index.ts
│   │       │   ├── storage/
│   │       │   │   ├── DummyHistoryManager.ts
│   │       │   │   ├── MemoryHistoryManager.ts
│   │       │   │   ├── SQLiteManager.ts
│   │       │   │   ├── SupabaseHistoryManager.ts
│   │       │   │   ├── base.ts
│   │       │   │   └── index.ts
│   │       │   ├── tests/
│   │       │   │   ├── better-sqlite3-migration.test.ts
│   │       │   │   ├── sqlite-backward-compat.test.ts
│   │       │   │   └── sqlite-path-resolution.test.ts
│   │       │   ├── types/
│   │       │   │   └── index.ts
│   │       │   ├── utils/
│   │       │   │   ├── bm25.ts
│   │       │   │   ├── factory.ts
│   │       │   │   ├── logger.ts
│   │       │   │   ├── memory.ts
│   │       │   │   ├── sqlite.ts
│   │       │   │   ├── telemetry.ts
│   │       │   │   └── telemetry.types.ts
│   │       │   └── vector_stores/
│   │       │       ├── azure_ai_search.ts
│   │       │       ├── base.ts
│   │       │       ├── langchain.ts
│   │       │       ├── memory.ts
│   │       │       ├── pgvector.ts
│   │       │       ├── qdrant.ts
│   │       │       ├── redis.ts
│   │       │       ├── supabase.ts
│   │       │       └── vectorize.ts
│   │       ├── tests/
│   │       │   ├── config-manager.test.ts
│   │       │   ├── dimension-autodetect.test.ts
│   │       │   ├── factory.unit.test.ts
│   │       │   ├── google-llm.test.ts
│   │       │   ├── graph-memory-parsing.test.ts
│   │       │   ├── graph-prompts.test.ts
│   │       │   ├── lmstudio-embedder.test.ts
│   │       │   ├── lmstudio-llm.test.ts
│   │       │   ├── memory.add.test.ts
│   │       │   ├── memory.crud.test.ts
│   │       │   ├── memory.init.test.ts
│   │       │   ├── ollama-embedder.test.ts
│   │       │   ├── remove-code-blocks.test.ts
│   │       │   ├── storage.unit.test.ts
│   │       │   ├── tsup-externals.test.ts
│   │       │   ├── vector-store.unit.test.ts
│   │       │   └── vector-stores-compat.test.ts
│   │       └── tsconfig.json
│   ├── tests/
│   │   └── .gitkeep
│   ├── tsconfig.json
│   ├── tsconfig.test.json
│   └── tsup.config.ts
├── openclaw/
│   ├── .gitignore
│   ├── .npmrc
│   ├── CHANGELOG.md
│   ├── README.md
│   ├── config.ts
│   ├── filtering.ts
│   ├── index.test.ts
│   ├── index.ts
│   ├── isolation.ts
│   ├── openclaw-plugin-sdk.d.ts
│   ├── openclaw.plugin.json
│   ├── package.json
│   ├── pnpm-workspace.yaml
│   ├── providers.ts
│   ├── sqlite-resilience.test.ts
│   ├── tsconfig.json
│   ├── tsup.config.ts
│   └── types.ts
├── openmemory/
│   ├── .gitignore
│   ├── CONTRIBUTING.md
│   ├── Makefile
│   ├── README.md
│   ├── api/
│   │   ├── .dockerignore
│   │   ├── .env.example
│   │   ├── .python-version
│   │   ├── Dockerfile
│   │   ├── README.md
│   │   ├── alembic/
│   │   │   ├── README
│   │   │   ├── env.py
│   │   │   ├── script.py.mako
│   │   │   └── versions/
│   │   │       ├── 0b53c747049a_initial_migration.py
│   │   │       ├── add_config_table.py
│   │   │       └── afd00efbd06b_add_unique_user_id_constraints.py
│   │   ├── alembic.ini
│   │   ├── app/
│   │   │   ├── __init__.py
│   │   │   ├── config.py
│   │   │   ├── database.py
│   │   │   ├── mcp_server.py
│   │   │   ├── models.py
│   │   │   ├── routers/
│   │   │   │   ├── __init__.py
│   │   │   │   ├── apps.py
│   │   │   │   ├── backup.py
│   │   │   │   ├── config.py
│   │   │   │   ├── memories.py
│   │   │   │   └── stats.py
│   │   │   ├── schemas.py
│   │   │   └── utils/
│   │   │       ├── __init__.py
│   │   │       ├── categorization.py
│   │   │       ├── db.py
│   │   │       ├── memory.py
│   │   │       ├── permissions.py
│   │   │       └── prompts.py
│   │   ├── config.json
│   │   ├── default_config.json
│   │   ├── main.py
│   │   └── requirements.txt
│   ├── backup-scripts/
│   │   └── export_openmemory.sh
│   ├── compose/
│   │   ├── chroma.yml
│   │   ├── elasticsearch.yml
│   │   ├── faiss.yml
│   │   ├── milvus.yml
│   │   ├── opensearch.yml
│   │   ├── pgvector.yml
│   │   ├── qdrant.yml
│   │   ├── redis.yml
│   │   └── weaviate.yml
│   ├── docker-compose.yml
│   ├── run.sh
│   └── ui/
│       ├── .dockerignore
│       ├── .env.example
│       ├── Dockerfile
│       ├── app/
│       │   ├── apps/
│       │   │   ├── [appId]/
│       │   │   │   ├── components/
│       │   │   │   │   ├── AppDetailCard.tsx
│       │   │   │   │   └── MemoryCard.tsx
│       │   │   │   └── page.tsx
│       │   │   ├── components/
│       │   │   │   ├── AppCard.tsx
│       │   │   │   ├── AppFilters.tsx
│       │   │   │   └── AppGrid.tsx
│       │   │   └── page.tsx
│       │   ├── globals.css
│       │   ├── layout.tsx
│       │   ├── loading.tsx
│       │   ├── memories/
│       │   │   ├── components/
│       │   │   │   ├── CreateMemoryDialog.tsx
│       │   │   │   ├── FilterComponent.tsx
│       │   │   │   ├── MemoriesSection.tsx
│       │   │   │   ├── MemoryFilters.tsx
│       │   │   │   ├── MemoryPagination.tsx
│       │   │   │   ├── MemoryTable.tsx
│       │   │   │   └── PageSizeSelector.tsx
│       │   │   └── page.tsx
│       │   ├── memory/
│       │   │   └── [id]/
│       │   │       ├── components/
│       │   │       │   ├── AccessLog.tsx
│       │   │       │   ├── MemoryActions.tsx
│       │   │       │   ├── MemoryDetails.tsx
│       │   │       │   └── RelatedMemories.tsx
│       │   │       └── page.tsx
│       │   ├── not-found.tsx
│       │   ├── page.tsx
│       │   ├── providers.tsx
│       │   └── settings/
│       │       └── page.tsx
│       ├── components/
│       │   ├── Navbar.tsx
│       │   ├── dashboard/
│       │   │   ├── Install.tsx
│       │   │   └── Stats.tsx
│       │   ├── form-view.tsx
│       │   ├── json-editor.tsx
│       │   ├── shared/
│       │   │   ├── categories.tsx
│       │   │   ├── source-app.tsx
│       │   │   └── update-memory.tsx
│       │   ├── theme-provider.tsx
│       │   ├── types.ts
│       │   └── ui/
│       │       ├── accordion.tsx
│       │       ├── alert-dialog.tsx
│       │       ├── alert.tsx
│       │       ├── aspect-ratio.tsx
│       │       ├── avatar.tsx
│       │       ├── badge.tsx
│       │       ├── breadcrumb.tsx
│       │       ├── button.tsx
│       │       ├── calendar.tsx
│       │       ├── card.tsx
│       │       ├── carousel.tsx
│       │       ├── chart.tsx
│       │       ├── checkbox.tsx
│       │       ├── collapsible.tsx
│       │       ├── command.tsx
│       │       ├── context-menu.tsx
│       │       ├── dialog.tsx
│       │       ├── drawer.tsx
│       │       ├── dropdown-menu.tsx
│       │       ├── form.tsx
│       │       ├── hover-card.tsx
│       │       ├── input-otp.tsx
│       │       ├── input.tsx
│       │       ├── label.tsx
│       │       ├── menubar.tsx
│       │       ├── navigation-menu.tsx
│       │       ├── pagination.tsx
│       │       ├── popover.tsx
│       │       ├── progress.tsx
│       │       ├── radio-group.tsx
│       │       ├── resizable.tsx
│       │       ├── scroll-area.tsx
│       │       ├── select.tsx
│       │       ├── separator.tsx
│       │       ├── sheet.tsx
│       │       ├── sidebar.tsx
│       │       ├── skeleton.tsx
│       │       ├── slider.tsx
│       │       ├── sonner.tsx
│       │       ├── switch.tsx
│       │       ├── table.tsx
│       │       ├── tabs.tsx
│       │       ├── textarea.tsx
│       │       ├── toast.tsx
│       │       ├── toaster.tsx
│       │       ├── toggle-group.tsx
│       │       ├── toggle.tsx
│       │       ├── tooltip.tsx
│       │       ├── use-mobile.tsx
│       │       └── use-toast.ts
│       ├── components.json
│       ├── entrypoint.sh
│       ├── hooks/
│       │   ├── use-mobile.tsx
│       │   ├── use-toast.ts
│       │   ├── useAppsApi.ts
│       │   ├── useConfig.ts
│       │   ├── useFiltersApi.ts
│       │   ├── useMemoriesApi.ts
│       │   ├── useStats.ts
│       │   └── useUI.ts
│       ├── next-env.d.ts
│       ├── next.config.dev.mjs
│       ├── next.config.mjs
│       ├── package.json
│       ├── postcss.config.mjs
│       ├── skeleton/
│       │   ├── AppCardSkeleton.tsx
│       │   ├── AppDetailCardSkeleton.tsx
│       │   ├── AppFiltersSkeleton.tsx
│       │   ├── MemoryCardSkeleton.tsx
│       │   ├── MemorySkeleton.tsx
│       │   └── MemoryTableSkeleton.tsx
│       ├── store/
│       │   ├── appsSlice.ts
│       │   ├── configSlice.ts
│       │   ├── filtersSlice.ts
│       │   ├── memoriesSlice.ts
│       │   ├── profileSlice.ts
│       │   ├── store.ts
│       │   └── uiSlice.ts
│       ├── styles/
│       │   ├── animation.css
│       │   ├── globals.css
│       │   └── notfound.scss
│       ├── tailwind.config.ts
│       └── tsconfig.json
├── pyproject.toml
├── server/
│   ├── Dockerfile
│   ├── Makefile
│   ├── README.md
│   ├── dev.Dockerfile
│   ├── docker-compose.yaml
│   ├── main.py
│   └── requirements.txt
├── skills/
│   └── mem0/
│       ├── LICENSE
│       ├── README.md
│       ├── SKILL.md
│       ├── references/
│       │   ├── api-reference.md
│       │   ├── architecture.md
│       │   ├── features.md
│       │   ├── integration-patterns.md
│       │   ├── quickstart.md
│       │   ├── sdk-guide.md
│       │   └── use-cases.md
│       └── scripts/
│           └── mem0_doc_search.py
├── tests/
│   ├── __init__.py
│   ├── configs/
│   │   └── test_prompts.py
│   ├── embeddings/
│   │   ├── test_azure_openai_embeddings.py
│   │   ├── test_fastembed_embeddings.py
│   │   ├── test_gemini_emeddings.py
│   │   ├── test_huggingface_embeddings.py
│   │   ├── test_lm_studio_embeddings.py
│   │   ├── test_ollama_embeddings.py
│   │   ├── test_openai_embeddings.py
│   │   └── test_vertexai_embeddings.py
│   ├── llms/
│   │   ├── test_azure_openai.py
│   │   ├── test_azure_openai_structured.py
│   │   ├── test_deepseek.py
│   │   ├── test_gemini.py
│   │   ├── test_groq.py
│   │   ├── test_langchain.py
│   │   ├── test_litellm.py
│   │   ├── test_lm_studio.py
│   │   ├── test_ollama.py
│   │   ├── test_openai.py
│   │   ├── test_together.py
│   │   └── test_vllm.py
│   ├── memory/
│   │   ├── test_json_prompt_fix.py
│   │   ├── test_kuzu.py
│   │   ├── test_main.py
│   │   ├── test_memgraph_memory.py
│   │   ├── test_neo4j_cypher_syntax.py
│   │   ├── test_neptune_analytics_memory.py
│   │   ├── test_neptune_memory.py
│   │   ├── test_safe_deepcopy_config.py
│   │   └── test_storage.py
│   ├── rerankers/
│   │   ├── conftest.py
│   │   ├── test_llm_reranker_config.py
│   │   ├── test_llm_reranker_nested_config.py
│   │   └── test_llm_reranker_rerank.py
│   ├── test_main.py
│   ├── test_memory.py
│   ├── test_memory_integration.py
│   ├── test_proxy.py
│   ├── test_telemetry.py
│   └── vector_stores/
│       ├── test_azure_ai_search.py
│       ├── test_azure_mysql.py
│       ├── test_baidu.py
│       ├── test_cassandra.py
│       ├── test_chroma.py
│       ├── test_databricks.py
│       ├── test_elasticsearch.py
│       ├── test_faiss.py
│       ├── test_langchain_vector_store.py
│       ├── test_milvus.py
│       ├── test_mongodb.py
│       ├── test_neptune_analytics.py
│       ├── test_opensearch.py
│       ├── test_pgvector.py
│       ├── test_pinecone.py
│       ├── test_qdrant.py
│       ├── test_s3_vectors.py
│       ├── test_supabase.py
│       ├── test_upstash_vector.py
│       ├── test_valkey.py
│       ├── test_vertex_ai_vector_search.py
│       └── test_weaviate.py
└── vercel-ai-sdk/
    ├── .gitattributes
    ├── .gitignore
    ├── README.md
    ├── config/
    │   └── test-config.ts
    ├── jest.config.js
    ├── nodemon.json
    ├── package.json
    ├── src/
    │   ├── index.ts
    │   ├── mem0-facade.ts
    │   ├── mem0-generic-language-model.ts
    │   ├── mem0-provider-selector.ts
    │   ├── mem0-provider.ts
    │   ├── mem0-types.ts
    │   ├── mem0-utils.ts
    │   ├── provider-response-provider.ts
    │   └── stream-utils.ts
    ├── teardown.ts
    ├── tests/
    │   ├── generate-output.test.ts
    │   ├── mem0-provider-tests/
    │   │   ├── mem0-cohere.test.ts
    │   │   ├── mem0-google.test.ts
    │   │   ├── mem0-groq.test.ts
    │   │   ├── mem0-openai-structured-ouput.test.ts
    │   │   ├── mem0-openai.test.ts
    │   │   └── mem0_anthropic.test.ts
    │   ├── mem0-toolcalls.test.ts
    │   ├── memory-core.test.ts
    │   ├── text-properties.test.ts
    │   └── utils-test/
    │       ├── anthropic-integration.test.ts
    │       ├── cohere-integration.test.ts
    │       ├── google-integration.test.ts
    │       ├── groq-integration.test.ts
    │       └── openai-integration.test.ts
    ├── tsconfig.json
    └── tsup.config.ts
SYMBOL INDEX (4378 symbols across 729 files)

FILE: cookbooks/helper/mem0_teachability.py
  class Mem0Teachability (line 19) | class Mem0Teachability(AgentCapability):
    method __init__ (line 20) | def __init__(
    method add_to_agent (line 42) | def add_to_agent(self, agent: ConversableAgent):
    method process_last_received_message (line 57) | def process_last_received_message(self, text: Union[Dict, str]):
    method _consider_memo_storage (line 64) | def _consider_memo_storage(self, comment: Union[Dict, str]):
    method _consider_memo_retrieval (line 114) | def _consider_memo_retrieval(self, comment: Union[Dict, str]):
    method _retrieve_relevant_memos (line 141) | def _retrieve_relevant_memos(self, input_text: str) -> list:
    method _concatenate_memo_texts (line 153) | def _concatenate_memo_texts(self, memo_list: list) -> str:
    method _analyze (line 164) | def _analyze(self, text_to_analyze: Union[Dict, str], analysis_instruc...

FILE: embedchain/embedchain/app.py
  class App (line 48) | class App(EmbedChain):
    method __init__ (line 55) | def __init__(
    method _init_db (line 147) | def _init_db(self):
    method _init_cache (line 155) | def _init_cache(self):
    method _init_client (line 172) | def _init_client(self):
    method _get_pipeline (line 185) | def _get_pipeline(self, id):
    method _create_pipeline (line 203) | def _create_pipeline(self):
    method _get_presigned_url (line 233) | def _get_presigned_url(self, data_type, data_value):
    method _upload_file_to_presigned_url (line 243) | def _upload_file_to_presigned_url(self, presigned_url, file_path):
    method _upload_data_to_pipeline (line 254) | def _upload_data_to_pipeline(self, data_type, data_value, metadata=None):
    method _send_api_request (line 268) | def _send_api_request(self, endpoint, payload):
    method _process_and_upload_data (line 275) | def _process_and_upload_data(self, data_hash, data_type, data_value):
    method _mark_data_as_uploaded (line 299) | def _mark_data_as_uploaded(self, data_hash):
    method get_data_sources (line 302) | def get_data_sources(self):
    method deploy (line 309) | def deploy(self):
    method from_config (line 327) | def from_config(
    method _eval (line 419) | def _eval(self, dataset: list[EvalData], metric: Union[BaseMetric, str]):
    method evaluate (line 439) | def evaluate(

FILE: embedchain/embedchain/bots/base.py
  class BaseBot (line 15) | class BaseBot(JSONSerializable):
    method __init__ (line 16) | def __init__(self):
    method add (line 19) | def add(self, data: Any, config: AddConfig = None):
    method query (line 32) | def query(self, query: str, config: BaseLlmConfig = None) -> str:
    method start (line 46) | def start(self):

FILE: embedchain/embedchain/bots/discord.py
  class DiscordBot (line 31) | class DiscordBot(BaseBot):
    method __init__ (line 32) | def __init__(self, *args, **kwargs):
    method add_data (line 35) | def add_data(self, message):
    method ask_bot (line 45) | def ask_bot(self, message):
    method start (line 53) | def start(self):
  function query_command (line 61) | async def query_command(interaction: discord.Interaction, question: str):
  function add_command (line 78) | async def add_command(interaction: discord.Interaction, url_or_text: str):
  function ping (line 91) | async def ping(interaction: discord.Interaction):
  function on_app_command_error (line 96) | async def on_app_command_error(interaction: discord.Interaction, error: ...
  function on_ready (line 104) | async def on_ready():
  function start_command (line 112) | def start_command():

FILE: embedchain/embedchain/bots/poe.py
  function start_command (line 18) | def start_command():
  class PoeBot (line 38) | class PoeBot(BaseBot, PoeBot):
    method __init__ (line 39) | def __init__(self):
    method get_response (line 43) | async def get_response(self, query):
    method handle_message (line 56) | def handle_message(self, message, history: Optional[list[str]] = None):
    method ask_bot (line 73) | def ask_bot(self, message, history: list[str]):
    method start (line 82) | def start(self):

FILE: embedchain/embedchain/bots/slack.py
  class SlackBot (line 28) | class SlackBot(BaseBot):
    method __init__ (line 29) | def __init__(self):
    method handle_message (line 35) | def handle_message(self, event_data):
    method send_slack_message (line 65) | def send_slack_message(self, channel, message):
    method start (line 69) | def start(self, host="0.0.0.0", port=5000, debug=True):
  function start_command (line 90) | def start_command():

FILE: embedchain/embedchain/bots/whatsapp.py
  class WhatsAppBot (line 15) | class WhatsAppBot(BaseBot):
    method __init__ (line 16) | def __init__(self):
    method handle_message (line 27) | def handle_message(self, message):
    method add_data (line 34) | def add_data(self, message):
    method ask_bot (line 44) | def ask_bot(self, message):
    method start (line 52) | def start(self, host="0.0.0.0", port=5000, debug=True):
  function start_command (line 72) | def start_command():

FILE: embedchain/embedchain/cache.py
  function gptcache_pre_function (line 22) | def gptcache_pre_function(data: dict[str, Any], **params: dict[str, Any]):
  function gptcache_data_manager (line 26) | def gptcache_data_manager(vector_dimension):
  function gptcache_data_convert (line 30) | def gptcache_data_convert(cache_data):
  function gptcache_update_cache_callback (line 35) | def gptcache_update_cache_callback(llm_data, update_cache_func, *args, *...
  function _gptcache_session_hit_func (line 41) | def _gptcache_session_hit_func(cur_session_id: str, cache_session_ids: l...
  function get_gptcache_session (line 45) | def get_gptcache_session(session_id: str):

FILE: embedchain/embedchain/chunkers/audio.py
  class AudioChunker (line 11) | class AudioChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/base_chunker.py
  class BaseChunker (line 12) | class BaseChunker(JSONSerializable):
    method __init__ (line 13) | def __init__(self, text_splitter):
    method create_chunks (line 18) | def create_chunks(
    method get_chunks (line 76) | def get_chunks(self, content):
    method set_data_type (line 84) | def set_data_type(self, data_type: DataType):
    method get_word_count (line 93) | def get_word_count(documents) -> int:

FILE: embedchain/embedchain/chunkers/beehiiv.py
  class BeehiivChunker (line 11) | class BeehiivChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/common_chunker.py
  class CommonChunker (line 11) | class CommonChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/discourse.py
  class DiscourseChunker (line 11) | class DiscourseChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/docs_site.py
  class DocsSiteChunker (line 11) | class DocsSiteChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/docx_file.py
  class DocxFileChunker (line 11) | class DocxFileChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/excel_file.py
  class ExcelFileChunker (line 11) | class ExcelFileChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/gmail.py
  class GmailChunker (line 11) | class GmailChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/google_drive.py
  class GoogleDriveChunker (line 11) | class GoogleDriveChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/image.py
  class ImageChunker (line 11) | class ImageChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/json.py
  class JSONChunker (line 11) | class JSONChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/mdx.py
  class MdxChunker (line 11) | class MdxChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/mysql.py
  class MySQLChunker (line 11) | class MySQLChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/notion.py
  class NotionChunker (line 11) | class NotionChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/openapi.py
  class OpenAPIChunker (line 9) | class OpenAPIChunker(BaseChunker):
    method __init__ (line 10) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/pdf_file.py
  class PdfFileChunker (line 11) | class PdfFileChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/postgres.py
  class PostgresChunker (line 11) | class PostgresChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/qna_pair.py
  class QnaPairChunker (line 11) | class QnaPairChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/rss_feed.py
  class RSSFeedChunker (line 11) | class RSSFeedChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/sitemap.py
  class SitemapChunker (line 11) | class SitemapChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/slack.py
  class SlackChunker (line 11) | class SlackChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/substack.py
  class SubstackChunker (line 11) | class SubstackChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/table.py
  class TableChunker (line 9) | class TableChunker(BaseChunker):
    method __init__ (line 12) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/text.py
  class TextChunker (line 11) | class TextChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/unstructured_file.py
  class UnstructuredFileChunker (line 11) | class UnstructuredFileChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/web_page.py
  class WebPageChunker (line 11) | class WebPageChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/xml.py
  class XmlChunker (line 11) | class XmlChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):

FILE: embedchain/embedchain/chunkers/youtube_video.py
  class YoutubeVideoChunker (line 11) | class YoutubeVideoChunker(BaseChunker):
    method __init__ (line 14) | def __init__(self, config: Optional[ChunkerConfig] = None):
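
The chunker modules above all share one shape: a thin subclass of BaseChunker whose `__init__` accepts an `Optional[ChunkerConfig]`. A minimal sketch of that pattern — the `chunk_size`/`chunk_overlap` fields and the windowing logic are illustrative assumptions, not read from the source:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChunkerConfig:
    # assumed fields; the real config lives in embedchain/config/add_config.py
    chunk_size: int = 2000
    chunk_overlap: int = 0

class BaseChunker:
    def __init__(self, config: Optional[ChunkerConfig] = None):
        self.config = config or ChunkerConfig()

    def chunk(self, text: str) -> list[str]:
        # fixed-size windows stepping by (size - overlap)
        step = self.config.chunk_size - self.config.chunk_overlap
        return [text[i:i + self.config.chunk_size] for i in range(0, len(text), step)]

class TextChunker(BaseChunker):
    """Analogue of embedchain's TextChunker: inherits behaviour, narrows the intent."""

chunks = TextChunker(ChunkerConfig(chunk_size=10, chunk_overlap=2)).chunk("abcdefghijklmnopqrst")
```

Each data-type chunker (PDF, Slack, sitemap, …) then only differs in which loader feeds it text.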

FILE: embedchain/embedchain/cli.py
  function signal_handler (line 40) | def signal_handler(sig, frame):
  function cli (line 54) | def cli():
  function create_app (line 62) | def create_app(ctx, app_name, docker):
  function install_reqs (line 117) | def install_reqs():
  function start (line 142) | def start(docker):
  function create (line 187) | def create(template, extra_args):
  function run_dev_fly_io (line 219) | def run_dev_fly_io(debug, host, port):
  function run_dev_modal_com (line 236) | def run_dev_modal_com():
  function run_dev_streamlit_io (line 247) | def run_dev_streamlit_io():
  function run_dev_render_com (line 258) | def run_dev_render_com(debug, host, port):
  function run_dev_gradio (line 275) | def run_dev_gradio():
  function dev (line 290) | def dev(debug, host, port):
  function deploy (line 312) | def deploy():

FILE: embedchain/embedchain/client.py
  class Client (line 13) | class Client:
    method __init__ (line 14) | def __init__(self, api_key=None, host="https://apiv2.embedchain.ai"):
    method setup (line 36) | def setup(cls):
    method load_config (line 57) | def load_config(cls):
    method save (line 64) | def save(self):
    method clear (line 71) | def clear(self):
    method update (line 81) | def update(self, api_key):
    method check (line 89) | def check(self, api_key):
    method get (line 99) | def get(self):
    method __str__ (line 102) | def __str__(self):
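
Client's method list (setup/load_config/save/clear/update/get) suggests an API-key holder persisted to a local config file. A sketch of that save/load cycle — the file location and JSON layout are invented here; the real class also validates keys against https://apiv2.embedchain.ai:

```python
import json
import tempfile
from pathlib import Path

class Client:
    """Sketch of the persistence cycle implied by Client's methods."""
    config_path = Path(tempfile.gettempdir()) / "ec_config_sketch.json"  # hypothetical location

    def __init__(self, api_key=None):
        self.api_key = api_key or self.load_config().get("api_key")

    @classmethod
    def load_config(cls) -> dict:
        if cls.config_path.exists():
            return json.loads(cls.config_path.read_text())
        return {}

    def save(self):
        self.config_path.write_text(json.dumps({"api_key": self.api_key}))

    def update(self, api_key):
        self.api_key = api_key
        self.save()

    def clear(self):
        self.api_key = None
        self.config_path.unlink(missing_ok=True)

    def get(self):
        return self.api_key

c = Client(api_key="ec-123")
c.save()
restored = Client().get()  # a fresh instance picks the key up from disk
c.clear()
```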

FILE: embedchain/embedchain/config/add_config.py
  class ChunkerConfig (line 12) | class ChunkerConfig(BaseConfig):
    method __init__ (line 17) | def __init__(
    method load_func (line 40) | def load_func(dotpath: str):
  class LoaderConfig (line 50) | class LoaderConfig(BaseConfig):
    method __init__ (line 55) | def __init__(self):
  class AddConfig (line 60) | class AddConfig(BaseConfig):
    method __init__ (line 65) | def __init__(
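
ChunkerConfig.load_func takes a dotted path string, which points at the standard importlib idiom for resolving user-supplied callables (e.g. a custom length function for chunking). A self-contained sketch of that idiom:

```python
import importlib

def load_func(dotpath: str):
    """Resolve 'package.module.attr' to the object it names."""
    module_path, _, attr = dotpath.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, attr)

# e.g. configure chunk length measurement by name rather than by object
length_fn = load_func("builtins.len")
```

This keeps YAML/JSON configs serializable: they carry the string, and the code rehydrates the callable at load time.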

FILE: embedchain/embedchain/config/app_config.py
  class AppConfig (line 9) | class AppConfig(BaseAppConfig):
    method __init__ (line 14) | def __init__(

FILE: embedchain/embedchain/config/base_app_config.py
  class BaseAppConfig (line 11) | class BaseAppConfig(BaseConfig, JSONSerializable):
    method __init__ (line 16) | def __init__(
    method _setup_logging (line 56) | def _setup_logging(self, log_level):

FILE: embedchain/embedchain/config/base_config.py
  class BaseConfig (line 6) | class BaseConfig(JSONSerializable):
    method __init__ (line 11) | def __init__(self):
    method as_dict (line 15) | def as_dict(self) -> dict[str, Any]:

FILE: embedchain/embedchain/config/cache_config.py
  class CacheSimilarityEvalConfig (line 8) | class CacheSimilarityEvalConfig(BaseConfig):
    method __init__ (line 22) | def __init__(
    method from_config (line 33) | def from_config(config: Optional[dict[str, Any]]):
  class CacheInitConfig (line 45) | class CacheInitConfig(BaseConfig):
    method __init__ (line 56) | def __init__(
    method from_config (line 68) | def from_config(config: Optional[dict[str, Any]]):
  class CacheConfig (line 79) | class CacheConfig(BaseConfig):
    method __init__ (line 80) | def __init__(
    method from_config (line 89) | def from_config(config: Optional[dict[str, Any]]):

FILE: embedchain/embedchain/config/embedder/aws_bedrock.py
  class AWSBedrockEmbedderConfig (line 8) | class AWSBedrockEmbedderConfig(BaseEmbedderConfig):
    method __init__ (line 9) | def __init__(

FILE: embedchain/embedchain/config/embedder/base.py
  class BaseEmbedderConfig (line 9) | class BaseEmbedderConfig:
    method __init__ (line 10) | def __init__(

FILE: embedchain/embedchain/config/embedder/google.py
  class GoogleAIEmbedderConfig (line 8) | class GoogleAIEmbedderConfig(BaseEmbedderConfig):
    method __init__ (line 9) | def __init__(

FILE: embedchain/embedchain/config/embedder/ollama.py
  class OllamaEmbedderConfig (line 8) | class OllamaEmbedderConfig(BaseEmbedderConfig):
    method __init__ (line 9) | def __init__(

FILE: embedchain/embedchain/config/evaluation/base.py
  class GroundednessConfig (line 51) | class GroundednessConfig(BaseConfig):
    method __init__ (line 52) | def __init__(
  class AnswerRelevanceConfig (line 65) | class AnswerRelevanceConfig(BaseConfig):
    method __init__ (line 66) | def __init__(
  class ContextRelevanceConfig (line 81) | class ContextRelevanceConfig(BaseConfig):
    method __init__ (line 82) | def __init__(

FILE: embedchain/embedchain/config/llm/base.py
  class BaseLlmConfig (line 111) | class BaseLlmConfig(BaseConfig):
    method __init__ (line 116) | def __init__(
    method validate_prompt (line 255) | def validate_prompt(prompt: Template) -> Optional[re.Match[str]]:
    method _validate_prompt_history (line 267) | def _validate_prompt_history(prompt: Template) -> Optional[re.Match[st...

FILE: embedchain/embedchain/config/mem0_config.py
  class Mem0Config (line 8) | class Mem0Config(BaseConfig):
    method __init__ (line 9) | def __init__(self, api_key: str, top_k: Optional[int] = 10):
    method from_config (line 14) | def from_config(config: Optional[dict[str, Any]]):

FILE: embedchain/embedchain/config/vector_db/base.py
  class BaseVectorDbConfig (line 6) | class BaseVectorDbConfig(BaseConfig):
    method __init__ (line 7) | def __init__(

FILE: embedchain/embedchain/config/vector_db/chroma.py
  class ChromaDbConfig (line 8) | class ChromaDbConfig(BaseVectorDbConfig):
    method __init__ (line 9) | def __init__(

FILE: embedchain/embedchain/config/vector_db/elasticsearch.py
  class ElasticsearchDBConfig (line 9) | class ElasticsearchDBConfig(BaseVectorDbConfig):
    method __init__ (line 10) | def __init__(

FILE: embedchain/embedchain/config/vector_db/lancedb.py
  class LanceDBConfig (line 8) | class LanceDBConfig(BaseVectorDbConfig):
    method __init__ (line 9) | def __init__(

FILE: embedchain/embedchain/config/vector_db/opensearch.py
  class OpenSearchDBConfig (line 8) | class OpenSearchDBConfig(BaseVectorDbConfig):
    method __init__ (line 9) | def __init__(

FILE: embedchain/embedchain/config/vector_db/pinecone.py
  class PineconeDBConfig (line 9) | class PineconeDBConfig(BaseVectorDbConfig):
    method __init__ (line 10) | def __init__(

FILE: embedchain/embedchain/config/vector_db/qdrant.py
  class QdrantDBConfig (line 8) | class QdrantDBConfig(BaseVectorDbConfig):
    method __init__ (line 14) | def __init__(

FILE: embedchain/embedchain/config/vector_db/weaviate.py
  class WeaviateDBConfig (line 8) | class WeaviateDBConfig(BaseVectorDbConfig):
    method __init__ (line 9) | def __init__(

FILE: embedchain/embedchain/config/vector_db/zilliz.py
  class ZillizDBConfig (line 9) | class ZillizDBConfig(BaseVectorDbConfig):
    method __init__ (line 10) | def __init__(

FILE: embedchain/embedchain/data_formatter/data_formatter.py
  class DataFormatter (line 12) | class DataFormatter(JSONSerializable):
    method __init__ (line 19) | def __init__(
    method _lazy_load (line 38) | def _lazy_load(module_path: str):
    method _get_loader (line 43) | def _get_loader(
    method _get_chunker (line 107) | def _get_chunker(self, data_type: DataType, config: ChunkerConfig, chu...

FILE: embedchain/embedchain/deployment/fly.io/app.py
  class SourceModel (line 13) | class SourceModel(BaseModel):
  class QuestionModel (line 17) | class QuestionModel(BaseModel):
  function add_source (line 22) | async def add_source(source_model: SourceModel):
  function handle_query (line 33) | async def handle_query(question_model: QuestionModel):
  function handle_chat (line 44) | async def handle_chat(question_model: QuestionModel):
  function root (line 55) | async def root():

FILE: embedchain/embedchain/deployment/gradio.app/app.py
  function query (line 12) | def query(message, history):

FILE: embedchain/embedchain/deployment/modal.com/app.py
  function add (line 36) | async def add(
  function query (line 55) | async def query(question: str = Body(..., description="Question to be an...
  function chat (line 67) | async def chat(question: str = Body(..., description="Question to be ans...
  function root (line 79) | async def root():
  function fastapi_app (line 85) | def fastapi_app():

FILE: embedchain/embedchain/deployment/render.com/app.py
  class SourceModel (line 10) | class SourceModel(BaseModel):
  class QuestionModel (line 14) | class QuestionModel(BaseModel):
  function add_source (line 19) | async def add_source(source_model: SourceModel):
  function handle_query (line 30) | async def handle_query(question_model: QuestionModel):
  function handle_chat (line 41) | async def handle_chat(question_model: QuestionModel):
  function root (line 52) | async def root():

FILE: embedchain/embedchain/deployment/streamlit.io/app.py
  function embedchain_bot (line 7) | def embedchain_bot():

FILE: embedchain/embedchain/embedchain.py
  class EmbedChain (line 38) | class EmbedChain(JSONSerializable):
    method __init__ (line 39) | def __init__(
    method collect_metrics (line 98) | def collect_metrics(self):
    method collect_metrics (line 102) | def collect_metrics(self, value):
    method online (line 108) | def online(self):
    method online (line 112) | def online(self, value):
    method add (line 117) | def add(
    method _get_existing_doc_id (line 239) | def _get_existing_doc_id(self, chunker: BaseChunker, src: Any):
    method _load_and_embed (line 297) | def _load_and_embed(
    method _format_result (line 428) | def _format_result(results):
    method _retrieve_from_database (line 438) | def _retrieve_from_database(
    method query (line 482) | def query(
    method chat (line 563) | def chat(
    method search (line 676) | def search(self, query, num_documents=3, where=None, raw_filter=None, ...
    method set_collection_name (line 714) | def set_collection_name(self, name: str):
    method reset (line 729) | def reset(self):
    method get_history (line 748) | def get_history(
    method delete_session_chat_history (line 764) | def delete_session_chat_history(self, session_id: str = "default"):
    method delete_all_chat_history (line 768) | def delete_all_chat_history(self, app_id: str):
    method delete (line 772) | def delete(self, source_id: str):

FILE: embedchain/embedchain/embedder/aws_bedrock.py
  class AWSBedrockEmbedder (line 15) | class AWSBedrockEmbedder(BaseEmbedder):
    method __init__ (line 16) | def __init__(self, config: Optional[AWSBedrockEmbedderConfig] = None):

FILE: embedchain/embedchain/embedder/azure_openai.py
  class AzureOpenAIEmbedder (line 10) | class AzureOpenAIEmbedder(BaseEmbedder):
    method __init__ (line 11) | def __init__(self, config: Optional[BaseEmbedderConfig] = None):

FILE: embedchain/embedchain/embedder/base.py
  class EmbeddingFunc (line 15) | class EmbeddingFunc(EmbeddingFunction):
    method __init__ (line 16) | def __init__(self, embedding_fn: Callable[[list[str]], list[str]]):
    method __call__ (line 19) | def __call__(self, input: Embeddable) -> Embeddings:
  class BaseEmbedder (line 23) | class BaseEmbedder:
    method __init__ (line 31) | def __init__(self, config: Optional[BaseEmbedderConfig] = None):
    method set_embedding_fn (line 44) | def set_embedding_fn(self, embedding_fn: Callable[[list[str]], list[st...
    method set_vector_dimension (line 56) | def set_vector_dimension(self, vector_dimension: int):
    method _langchain_default_concept (line 68) | def _langchain_default_concept(embeddings: Any):
    method to_embeddings (line 80) | def to_embeddings(self, data: str, **_):
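
embedder/base.py wraps a plain embedding callable in an EmbeddingFunc object so vector stores can invoke every provider uniformly. A minimal analogue of that adapter — the toy hash-based "embedding" is invented purely for illustration:

```python
from typing import Callable

class EmbeddingFunc:
    """Adapter: store a list[str] -> list[list[float]] callable and make
    the wrapper itself callable, mirroring the shape in embedder/base.py."""
    def __init__(self, embedding_fn: Callable[[list[str]], list[list[float]]]):
        self.embedding_fn = embedding_fn

    def __call__(self, input: list[str]) -> list[list[float]]:
        return self.embedding_fn(input)

def toy_embed(texts: list[str]) -> list[list[float]]:
    # deterministic stand-in for a real model: a 2-dim "embedding" per text
    return [[float(len(t)), float(sum(map(ord, t)) % 97)] for t in texts]

fn = EmbeddingFunc(toy_embed)
vectors = fn(["hello", "hi"])
```

Each concrete embedder (OpenAI, Ollama, Clarifai, …) supplies its own `embedding_fn`; downstream code only ever sees the callable wrapper.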

FILE: embedchain/embedchain/embedder/clarifai.py
  class ClarifaiEmbeddingFunction (line 10) | class ClarifaiEmbeddingFunction(EmbeddingFunction):
    method __init__ (line 11) | def __init__(self, config: BaseEmbedderConfig) -> None:
    method __call__ (line 27) | def __call__(self, input: Union[str, list[str]]) -> Embeddings:
  class ClarifaiEmbedder (line 47) | class ClarifaiEmbedder(BaseEmbedder):
    method __init__ (line 48) | def __init__(self, config: Optional[BaseEmbedderConfig] = None):

FILE: embedchain/embedchain/embedder/cohere.py
  class CohereEmbedder (line 10) | class CohereEmbedder(BaseEmbedder):
    method __init__ (line 11) | def __init__(self, config: Optional[BaseEmbedderConfig] = None):

FILE: embedchain/embedchain/embedder/google.py
  class GoogleAIEmbeddingFunction (line 11) | class GoogleAIEmbeddingFunction(EmbeddingFunction):
    method __init__ (line 12) | def __init__(self, config: Optional[GoogleAIEmbedderConfig] = None) ->...
    method __call__ (line 16) | def __call__(self, input: Union[list[str], str]) -> Embeddings:
  class GoogleAIEmbedder (line 31) | class GoogleAIEmbedder(BaseEmbedder):
    method __init__ (line 32) | def __init__(self, config: Optional[GoogleAIEmbedderConfig] = None):

FILE: embedchain/embedchain/embedder/gpt4all.py
  class GPT4AllEmbedder (line 8) | class GPT4AllEmbedder(BaseEmbedder):
    method __init__ (line 9) | def __init__(self, config: Optional[BaseEmbedderConfig] = None):

FILE: embedchain/embedchain/embedder/huggingface.py
  class HuggingFaceEmbedder (line 19) | class HuggingFaceEmbedder(BaseEmbedder):
    method __init__ (line 20) | def __init__(self, config: Optional[BaseEmbedderConfig] = None):

FILE: embedchain/embedchain/embedder/mistralai.py
  class MistralAIEmbeddingFunction (line 11) | class MistralAIEmbeddingFunction(EmbeddingFunction):
    method __init__ (line 12) | def __init__(self, config: BaseEmbedderConfig) -> None:
    method __call__ (line 26) | def __call__(self, input: Union[list[str], str]) -> Embeddings:
  class MistralAIEmbedder (line 35) | class MistralAIEmbedder(BaseEmbedder):
    method __init__ (line 36) | def __init__(self, config: Optional[BaseEmbedderConfig] = None):

FILE: embedchain/embedchain/embedder/nvidia.py
  class NvidiaEmbedder (line 14) | class NvidiaEmbedder(BaseEmbedder):
    method __init__ (line 15) | def __init__(self, config: Optional[BaseEmbedderConfig] = None):

FILE: embedchain/embedchain/embedder/ollama.py
  class OllamaEmbedder (line 18) | class OllamaEmbedder(BaseEmbedder):
    method __init__ (line 19) | def __init__(self, config: Optional[OllamaEmbedderConfig] = None):

FILE: embedchain/embedchain/embedder/openai.py
  class OpenAIEmbedder (line 12) | class OpenAIEmbedder(BaseEmbedder):
    method __init__ (line 13) | def __init__(self, config: Optional[BaseEmbedderConfig] = None):

FILE: embedchain/embedchain/embedder/vertexai.py
  class VertexAIEmbedder (line 10) | class VertexAIEmbedder(BaseEmbedder):
    method __init__ (line 11) | def __init__(self, config: Optional[BaseEmbedderConfig] = None):

FILE: embedchain/embedchain/evaluation/base.py
  class BaseMetric (line 6) | class BaseMetric(ABC):
    method __init__ (line 12) | def __init__(self, name: str = "base_metric"):
    method evaluate (line 19) | def evaluate(self, dataset: list[EvalData]):

FILE: embedchain/embedchain/evaluation/metrics/answer_relevancy.py
  class AnswerRelevance (line 18) | class AnswerRelevance(BaseMetric):
    method __init__ (line 23) | def __init__(self, config: Optional[AnswerRelevanceConfig] = AnswerRel...
    method _generate_prompt (line 31) | def _generate_prompt(self, data: EvalData) -> str:
    method _generate_questions (line 39) | def _generate_questions(self, prompt: str) -> list[str]:
    method _generate_embedding (line 49) | def _generate_embedding(self, question: str) -> np.ndarray:
    method _compute_similarity (line 59) | def _compute_similarity(self, original: np.ndarray, generated: np.ndar...
    method _compute_score (line 67) | def _compute_score(self, data: EvalData) -> float:
    method evaluate (line 78) | def evaluate(self, dataset: list[EvalData]) -> float:
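
AnswerRelevance scores an answer by generating questions from it, embedding them, and comparing each against the original question's embedding. A dependency-free sketch of the similarity/averaging step — embeddings here are plain lists standing in for model output:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def answer_relevance(original: list[float], generated: list[list[float]]) -> float:
    # mean cosine similarity between the original question and each generated one
    return sum(cosine(original, g) for g in generated) / len(generated)

# one generated question matches the original exactly, one is orthogonal
score = answer_relevance([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
```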

FILE: embedchain/embedchain/evaluation/metrics/context_relevancy.py
  class ContextRelevance (line 16) | class ContextRelevance(BaseMetric):
    method __init__ (line 21) | def __init__(self, config: Optional[ContextRelevanceConfig] = ContextR...
    method _sentence_segmenter (line 30) | def _sentence_segmenter(self, text: str) -> list[str]:
    method _compute_score (line 36) | def _compute_score(self, data: EvalData) -> float:
    method evaluate (line 53) | def evaluate(self, dataset: list[EvalData]) -> float:

FILE: embedchain/embedchain/evaluation/metrics/groundedness.py
  class Groundedness (line 18) | class Groundedness(BaseMetric):
    method __init__ (line 23) | def __init__(self, config: Optional[GroundednessConfig] = None):
    method _generate_answer_claim_prompt (line 31) | def _generate_answer_claim_prompt(self, data: EvalData) -> str:
    method _get_claim_statements (line 38) | def _get_claim_statements(self, prompt: str) -> np.ndarray:
    method _generate_claim_inference_prompt (line 50) | def _generate_claim_inference_prompt(self, data: EvalData, claim_state...
    method _get_claim_verdict_scores (line 59) | def _get_claim_verdict_scores(self, prompt: str) -> np.ndarray:
    method _compute_score (line 73) | def _compute_score(self, data: EvalData) -> float:
    method evaluate (line 84) | def evaluate(self, dataset: list[EvalData]):

FILE: embedchain/embedchain/factory.py
  function load_class (line 4) | def load_class(class_type):
  class LlmFactory (line 10) | class LlmFactory:
    method create (line 38) | def create(cls, provider_name, config_data):
  class EmbedderFactory (line 51) | class EmbedderFactory:
    method create (line 78) | def create(cls, provider_name, config_data):
  class VectorDBFactory (line 91) | class VectorDBFactory:
    method create (line 114) | def create(cls, provider_name, config_data):
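
factory.py pairs `load_class` with per-kind factories (LlmFactory, EmbedderFactory, VectorDBFactory) that map a provider name to a dotted class path and instantiate it with config data. A sketch of that registry pattern — the dotted paths below point at stdlib classes so the example is self-contained; the real maps point at `embedchain.llm.*` and friends:

```python
import importlib

def load_class(class_type: str):
    module_path, class_name = class_type.rsplit(".", 1)
    return getattr(importlib.import_module(module_path), class_name)

class LlmFactory:
    provider_to_class = {
        # illustrative stand-ins for entries like "openai": "embedchain.llm.openai.OpenAILlm"
        "ordered": "collections.OrderedDict",
        "counter": "collections.Counter",
    }

    @classmethod
    def create(cls, provider_name: str, config_data: dict):
        class_type = cls.provider_to_class.get(provider_name)
        if class_type is None:
            raise ValueError(f"Unsupported provider: {provider_name}")
        return load_class(class_type)(config_data)

obj = LlmFactory.create("counter", {"a": 2})
```

Deferring the import until `create` keeps optional provider dependencies out of the import path for users who never select them.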

FILE: embedchain/embedchain/helpers/callbacks.py
  class StreamingStdOutCallbackHandlerYield (line 13) | class StreamingStdOutCallbackHandlerYield(StreamingStdOutCallbackHandler):
    method __init__ (line 24) | def __init__(self, q: queue.Queue) -> None:
    method on_llm_start (line 32) | def on_llm_start(self, serialized: dict[str, Any], prompts: list[str],...
    method on_llm_new_token (line 37) | def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
    method on_llm_end (line 41) | def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
    method on_llm_error (line 45) | def on_llm_error(self, error: Union[Exception, KeyboardInterrupt], **k...
  function generate (line 51) | def generate(rq: queue.Queue):
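
helpers/callbacks.py pairs a callback handler that pushes LLM tokens onto a `queue.Queue` (on_llm_new_token) with a `generate()` function that drains it — letting a blocking LLM call stream tokens through, say, a web response. A minimal producer/consumer sketch of that shape; the sentinel-based stop signal is an assumption:

```python
import queue
import threading

STOP = object()  # sentinel marking end-of-stream (assumed convention)

def producer(q: "queue.Queue", tokens: list[str]) -> None:
    # plays the role of on_llm_new_token: push each token as it arrives
    for t in tokens:
        q.put(t)
    q.put(STOP)  # plays the role of on_llm_end

def generate(q: "queue.Queue"):
    # drain the queue until the producer signals completion
    while True:
        item = q.get()
        if item is STOP:
            break
        yield item

q: "queue.Queue" = queue.Queue()
threading.Thread(target=producer, args=(q, ["Hel", "lo", "!"])).start()
streamed = "".join(generate(q))
```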

FILE: embedchain/embedchain/helpers/json_serializable.py
  function register_deserializable (line 14) | def register_deserializable(cls: Type[T]) -> Type[T]:
  class JSONSerializable (line 42) | class JSONSerializable:
    method serialize (line 52) | def serialize(self) -> str:
    method deserialize (line 66) | def deserialize(cls, json_str: str) -> Any:
    method _auto_encoder (line 89) | def _auto_encoder(obj: Any) -> Union[dict[str, Any], None]:
    method _auto_decoder (line 130) | def _auto_decoder(cls, dct: dict[str, Any]) -> Any:
    method save_to_file (line 161) | def save_to_file(self, filename: str) -> None:
    method load_from_file (line 172) | def load_from_file(cls, filename: str) -> Any:
    method _register_class_as_deserializable (line 187) | def _register_class_as_deserializable(cls, target_class: Type[T]) -> N...
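
json_serializable.py combines a `@register_deserializable` class decorator with serialize/deserialize methods, so an object can round-trip through JSON and be rebuilt by class name. A compact sketch of that registry mechanism — the `__class__` key and `__new__`-based rebuild are assumptions about the wire format:

```python
import json

_registry: dict[str, type] = {}

def register_deserializable(cls: type) -> type:
    """Class decorator: record the class so deserialize can rebuild it by name."""
    _registry[cls.__name__] = cls
    return cls

class JSONSerializable:
    def serialize(self) -> str:
        return json.dumps({"__class__": type(self).__name__, **vars(self)})

    @classmethod
    def deserialize(cls, json_str: str):
        data = json.loads(json_str)
        target = _registry[data.pop("__class__")]
        obj = target.__new__(target)  # skip __init__; restore state directly
        obj.__dict__.update(data)
        return obj

@register_deserializable
class ChunkerConfig(JSONSerializable):
    def __init__(self, chunk_size: int = 2000):
        self.chunk_size = chunk_size

restored = JSONSerializable.deserialize(ChunkerConfig(chunk_size=500).serialize())
```

This is what lets a serialized app config name its own component classes and have them come back as the right types.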

FILE: embedchain/embedchain/llm/anthropic.py
  class AnthropicLlm (line 18) | class AnthropicLlm(BaseLlm):
    method __init__ (line 19) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 24) | def get_llm_model_answer(self, prompt) -> tuple[str, Optional[dict[str...
    method _get_answer (line 47) | def _get_answer(prompt: str, config: BaseLlmConfig) -> str:

FILE: embedchain/embedchain/llm/aws_bedrock.py
  class AWSBedrockLlm (line 17) | class AWSBedrockLlm(BaseLlm):
    method __init__ (line 18) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 21) | def get_llm_model_answer(self, prompt) -> str:
    method _get_answer (line 25) | def _get_answer(self, prompt: str, config: BaseLlmConfig) -> str:

FILE: embedchain/embedchain/llm/azure_openai.py
  class AzureOpenAILlm (line 12) | class AzureOpenAILlm(BaseLlm):
    method __init__ (line 13) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 16) | def get_llm_model_answer(self, prompt):
    method _get_answer (line 20) | def _get_answer(prompt: str, config: BaseLlmConfig) -> str:

FILE: embedchain/embedchain/llm/base.py
  class BaseLlm (line 24) | class BaseLlm(JSONSerializable):
    method __init__ (line 25) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 45) | def get_llm_model_answer(self):
    method set_history (line 51) | def set_history(self, history: Any):
    method update_history (line 61) | def update_history(self, app_id: str, session_id: str = "default"):
    method add_history (line 66) | def add_history(
    method _format_history (line 80) | def _format_history(self) -> str:
    method _format_memories (line 88) | def _format_memories(self, memories: list[dict]) -> str:
    method generate_prompt (line 98) | def generate_prompt(self, input_query: str, contexts: list[str], **kwa...
    method _append_search_and_context (line 153) | def _append_search_and_context(context: str, web_search_result: str) -...
    method get_answer_from_llm (line 165) | def get_answer_from_llm(self, prompt: str):
    method access_search_and_get_results (line 178) | def access_search_and_get_results(input_query: str):
    method _stream_response (line 198) | def _stream_response(answer: Any, token_info: Optional[dict[str, Any]]...
    method query (line 214) | def query(self, input_query: str, contexts: list[str], config: BaseLlm...
    method chat (line 274) | def chat(
    method _get_messages (line 333) | def _get_messages(prompt: str, system_prompt: Optional[str] = None) ->...
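
BaseLlm.query ties the listed pieces together as a template method: build a prompt from the retrieved contexts (generate_prompt), then hand it to the provider-specific get_llm_model_answer. A stripped-down sketch of that flow — the prompt wording and the EchoLlm stand-in are invented:

```python
from abc import ABC, abstractmethod

class BaseLlm(ABC):
    def generate_prompt(self, input_query: str, contexts: list[str]) -> str:
        # join retrieved chunks, then append the user's question (wording assumed)
        context_block = "\n".join(contexts)
        return f"Use the context to answer.\n{context_block}\nQuery: {input_query}"

    @abstractmethod
    def get_llm_model_answer(self, prompt: str) -> str: ...

    def query(self, input_query: str, contexts: list[str]) -> str:
        return self.get_llm_model_answer(self.generate_prompt(input_query, contexts))

class EchoLlm(BaseLlm):
    """Stand-in provider: a real subclass (OpenAILlm, OllamaLlm, ...) calls an API here."""
    def get_llm_model_answer(self, prompt: str) -> str:
        return prompt.splitlines()[-1]

answer = EchoLlm().query("What is mem0?", ["mem0 adds memory to LLM apps."])
```

Each `embedchain/llm/*.py` module above overrides only the provider call; retrieval, prompt assembly, and history handling stay in the base class.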

FILE: embedchain/embedchain/llm/clarifai.py
  class ClarifaiLlm (line 11) | class ClarifaiLlm(BaseLlm):
    method __init__ (line 12) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 17) | def get_llm_model_answer(self, prompt):
    method _get_answer (line 21) | def _get_answer(prompt: str, config: BaseLlmConfig) -> str:

FILE: embedchain/embedchain/llm/cohere.py
  class CohereLlm (line 13) | class CohereLlm(BaseLlm):
    method __init__ (line 14) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 27) | def get_llm_model_answer(self, prompt) -> tuple[str, Optional[dict[str...
    method _get_answer (line 53) | def _get_answer(prompt: str, config: BaseLlmConfig) -> str:

FILE: embedchain/embedchain/llm/google.py
  class GoogleLlm (line 19) | class GoogleLlm(BaseLlm):
    method __init__ (line 20) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 28) | def get_llm_model_answer(self, prompt):
    method _get_answer (line 34) | def _get_answer(self, prompt: str) -> Union[str, Generator[Any, Any, N...

FILE: embedchain/embedchain/llm/gpt4all.py
  class GPT4ALLLlm (line 15) | class GPT4ALLLlm(BaseLlm):
    method __init__ (line 16) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 23) | def get_llm_model_answer(self, prompt):
    method _get_instance (line 27) | def _get_instance(model):
    method _get_answer (line 44) | def _get_answer(self, prompt: str, config: BaseLlmConfig) -> Union[str...

FILE: embedchain/embedchain/llm/groq.py
  class GroqLlm (line 19) | class GroqLlm(BaseLlm):
    method __init__ (line 20) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 25) | def get_llm_model_answer(self, prompt) -> tuple[str, Optional[dict[str...
    method _get_answer (line 47) | def _get_answer(self, prompt: str, config: BaseLlmConfig) -> str:

FILE: embedchain/embedchain/llm/huggingface.py
  class HuggingFaceLlm (line 18) | class HuggingFaceLlm(BaseLlm):
    method __init__ (line 19) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 32) | def get_llm_model_answer(self, prompt):
    method _get_answer (line 38) | def _get_answer(prompt: str, config: BaseLlmConfig) -> str:
    method _from_model (line 50) | def _from_model(prompt: str, config: BaseLlmConfig) -> str:
    method _from_endpoint (line 72) | def _from_endpoint(prompt: str, config: BaseLlmConfig) -> str:
    method _from_pipeline (line 83) | def _from_pipeline(prompt: str, config: BaseLlmConfig) -> str:

FILE: embedchain/embedchain/llm/jina.py
  class JinaLlm (line 13) | class JinaLlm(BaseLlm):
    method __init__ (line 14) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 19) | def get_llm_model_answer(self, prompt):
    method _get_answer (line 24) | def _get_answer(prompt: str, config: BaseLlmConfig) -> str:

FILE: embedchain/embedchain/llm/llama2.py
  class Llama2Llm (line 13) | class Llama2Llm(BaseLlm):
    method __init__ (line 14) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 39) | def get_llm_model_answer(self, prompt):

FILE: embedchain/embedchain/llm/mistralai.py
  class MistralAILlm (line 10) | class MistralAILlm(BaseLlm):
    method __init__ (line 11) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 16) | def get_llm_model_answer(self, prompt) -> tuple[str, Optional[dict[str...
    method _get_answer (line 39) | def _get_answer(prompt: str, config: BaseLlmConfig):

FILE: embedchain/embedchain/llm/nvidia.py
  class NvidiaLlm (line 22) | class NvidiaLlm(BaseLlm):
    method __init__ (line 23) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 28) | def get_llm_model_answer(self, prompt) -> tuple[str, Optional[dict[str...
    method _get_answer (line 51) | def _get_answer(prompt: str, config: BaseLlmConfig) -> Union[str, Iter...

FILE: embedchain/embedchain/llm/ollama.py
  class OllamaLlm (line 23) | class OllamaLlm(BaseLlm):
    method __init__ (line 24) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 35) | def get_llm_model_answer(self, prompt):
    method _get_answer (line 39) | def _get_answer(prompt: str, config: BaseLlmConfig) -> Union[str, Iter...

FILE: embedchain/embedchain/llm/openai.py
  class OpenAILlm (line 18) | class OpenAILlm(BaseLlm):
    method __init__ (line 19) | def __init__(
    method get_llm_model_answer (line 27) | def get_llm_model_answer(self, prompt) -> tuple[str, Optional[dict[str...
    method _get_answer (line 50) | def _get_answer(self, prompt: str, config: BaseLlmConfig) -> str:
    method _query_function_call (line 106) | def _query_function_call(

FILE: embedchain/embedchain/llm/together.py
  class TogetherLlm (line 18) | class TogetherLlm(BaseLlm):
    method __init__ (line 19) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 32) | def get_llm_model_answer(self, prompt) -> tuple[str, Optional[dict[str...
    method _get_answer (line 58) | def _get_answer(prompt: str, config: BaseLlmConfig) -> str:

FILE: embedchain/embedchain/llm/vertex_ai.py
  class VertexAILlm (line 16) | class VertexAILlm(BaseLlm):
    method __init__ (line 17) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 27) | def get_llm_model_answer(self, prompt) -> tuple[str, Optional[dict[str...
    method _get_answer (line 52) | def _get_answer(prompt: str, config: BaseLlmConfig) -> str:

FILE: embedchain/embedchain/llm/vllm.py
  class VLLM (line 14) | class VLLM(BaseLlm):
    method __init__ (line 15) | def __init__(self, config: Optional[BaseLlmConfig] = None):
    method get_llm_model_answer (line 20) | def get_llm_model_answer(self, prompt):
    method _get_answer (line 24) | def _get_answer(prompt: str, config: BaseLlmConfig) -> Union[str, Iter...

FILE: embedchain/embedchain/loaders/audio.py
  class AudioLoader (line 18) | class AudioLoader(BaseLoader):
    method __init__ (line 19) | def __init__(self):
    method load_data (line 26) | def load_data(self, url: str):

FILE: embedchain/embedchain/loaders/base_loader.py
  class BaseLoader (line 6) | class BaseLoader(JSONSerializable):
    method __init__ (line 7) | def __init__(self):
    method load_data (line 10) | def load_data(self, url, **kwargs: Optional[dict[str, Any]]):

FILE: embedchain/embedchain/loaders/beehiiv.py
  class BeehiivLoader (line 16) | class BeehiivLoader(BaseLoader):
    method load_data (line 21) | def load_data(self, url: str):

FILE: embedchain/embedchain/loaders/csv.py
  class CsvLoader (line 11) | class CsvLoader(BaseLoader):
    method _detect_delimiter (line 13) | def _detect_delimiter(first_line):
    method _get_file_content (line 19) | def _get_file_content(content):
    method load_data (line 35) | def load_data(content):
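
CsvLoader._detect_delimiter inspects the first line to guess the separator before parsing. One plausible heuristic with that signature — the candidate set and most-frequent-wins rule are assumptions, not the repo's exact logic:

```python
def detect_delimiter(first_line: str) -> str:
    """Pick the candidate delimiter occurring most often in the header row."""
    candidates = [",", ";", "\t", "|"]
    return max(candidates, key=first_line.count)

delim = detect_delimiter("name;age;city")
```

For messier inputs, the stdlib's `csv.Sniffer().sniff()` does the same job over a larger sample.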

FILE: embedchain/embedchain/loaders/directory_loader.py
  class DirectoryLoader (line 17) | class DirectoryLoader(BaseLoader):
    method __init__ (line 20) | def __init__(self, config: Optional[dict[str, Any]] = None):
    method load_data (line 27) | def load_data(self, path: str):
    method _process_directory (line 41) | def _process_directory(self, directory_path: Path):
    method _predict_loader (line 54) | def _predict_loader(self, file_path: Path) -> BaseLoader:

FILE: embedchain/embedchain/loaders/discord.py
  class DiscordLoader (line 12) | class DiscordLoader(BaseLoader):
    method __init__ (line 17) | def __init__(self):
    method _format_message (line 24) | def _format_message(message):
    method load_data (line 99) | def load_data(self, channel_id: str):

FILE: embedchain/embedchain/loaders/discourse.py
  class DiscourseLoader (line 14) | class DiscourseLoader(BaseLoader):
    method __init__ (line 15) | def __init__(self, config: Optional[dict[str, Any]] = None):
    method _check_query (line 28) | def _check_query(self, query):
    method _load_post (line 34) | def _load_post(self, post_id):
    method load_data (line 57) | def load_data(self, query):

FILE: embedchain/embedchain/loaders/docs_site_loader.py
  class DocsSiteLoader (line 22) | class DocsSiteLoader(BaseLoader):
    method __init__ (line 23) | def __init__(self):
    method _get_child_links_recursive (line 26) | def _get_child_links_recursive(self, url):
    method _get_all_urls (line 50) | def _get_all_urls(self, url):
    method _load_data_from_url (line 57) | def _load_data_from_url(url: str) -> list:
    method load_data (line 110) | def load_data(self, url):

FILE: embedchain/embedchain/loaders/docx_file.py
  class DocxFileLoader (line 12) | class DocxFileLoader(BaseLoader):
    method load_data (line 13) | def load_data(self, url):

FILE: embedchain/embedchain/loaders/dropbox.py
  class DropboxLoader (line 12) | class DropboxLoader(BaseLoader):
    method __init__ (line 13) | def __init__(self):
    method _download_folder (line 29) | def _download_folder(self, path: str, local_root: str) -> list[FileMet...
    method _generate_dir_id_from_all_paths (line 41) | def _generate_dir_id_from_all_paths(self, path: str) -> str:
    method load_data (line 47) | def load_data(self, path: str):
    method _clean_directory (line 71) | def _clean_directory(self, dir_path):

FILE: embedchain/embedchain/loaders/excel_file.py
  class ExcelFileLoader (line 21) | class ExcelFileLoader(BaseLoader):
    method load_data (line 22) | def load_data(self, excel_url):

FILE: embedchain/embedchain/loaders/github.py
  class GithubLoader (line 19) | class GithubLoader(BaseLoader):
    method __init__ (line 22) | def __init__(self, config: Optional[dict[str, Any]] = None):
    method _github_search_code (line 50) | def _github_search_code(self, query: str):
    method _get_github_repo_data (line 69) | def _get_github_repo_data(self, repo_name: str, branch_name: str = Non...
    method _github_search_repo (line 113) | def _github_search_repo(self, query: str) -> list[dict]:
    method _github_search_issues_and_pr (line 121) | def _github_search_issues_and_pr(self, query: str, type: str) -> list[...
    method _github_search_discussions (line 160) | def _github_search_discussions(self, query: str):
    method _get_github_repo_branch (line 196) | def _get_github_repo_branch(self, query: str, type: str) -> list[dict]:
    method _get_github_repo_file (line 215) | def _get_github_repo_file(self, query: str, type: str) -> list[dict]:
    method _search_github_data (line 233) | def _search_github_data(self, search_type: str, query: str):
    method _get_valid_github_query (line 255) | def _get_valid_github_query(query: str):
    method load_data (line 291) | def load_data(self, search_query: str, max_results: int = 1000):

FILE: embedchain/embedchain/loaders/gmail.py
  class GmailReader (line 28) | class GmailReader:
    method __init__ (line 31) | def __init__(self, query: str, service=None, results_per_page: int = 10):
    method _initialize_service (line 37) | def _initialize_service():
    method _get_credentials (line 42) | def _get_credentials():
    method load_emails (line 62) | def load_emails(self) -> list[dict]:
    method _get_email (line 68) | def _get_email(self, message_id: str):
    method _parse_email (line 72) | def _parse_email(self, raw_email) -> dict:
    method _get_header (line 83) | def _get_header(mime_msg, header_name: str) -> str:
    method _format_date (line 87) | def _format_date(mime_msg) -> Optional[str]:
    method _get_body (line 92) | def _get_body(mime_msg) -> str:
  class GmailLoader (line 115) | class GmailLoader(BaseLoader):
    method load_data (line 116) | def load_data(self, query: str):
    method _process_email (line 129) | def _process_email(email: dict) -> str:
    method _generate_doc_id (line 142) | def _generate_doc_id(query: str, data: list[dict]) -> str:

FILE: embedchain/embedchain/loaders/google_drive.py
  class GoogleDriveLoader (line 26) | class GoogleDriveLoader(BaseLoader):
    method _get_drive_id_from_url (line 28) | def _get_drive_id_from_url(url: str):
    method load_data (line 37) | def load_data(self, url: str):

FILE: embedchain/embedchain/loaders/image.py
  class ImageLoader (line 15) | class ImageLoader(BaseLoader):
    method __init__ (line 16) | def __init__(self, max_tokens: int = 500, api_key: str = None, prompt:...
    method _encode_image (line 24) | def _encode_image(image_path: str):
    method _create_completion_request (line 28) | def _create_completion_request(self, content: str):
    method _process_url (line 33) | def _process_url(self, url: str):
    method load_data (line 44) | def load_data(self, url: str):

FILE: embedchain/embedchain/loaders/json.py
  class JSONReader (line 13) | class JSONReader:
    method __init__ (line 14) | def __init__(self) -> None:
    method load_data (line 19) | def load_data(json_data: Union[dict, str]) -> list[str]:
  class JSONLoader (line 44) | class JSONLoader(BaseLoader):
    method _check_content (line 46) | def _check_content(content):
    method load_data (line 56) | def load_data(content):

FILE: embedchain/embedchain/loaders/local_qna_pair.py
  class LocalQnaPairLoader (line 8) | class LocalQnaPairLoader(BaseLoader):
    method load_data (line 9) | def load_data(self, content):

FILE: embedchain/embedchain/loaders/local_text.py
  class LocalTextLoader (line 8) | class LocalTextLoader(BaseLoader):
    method load_data (line 9) | def load_data(self, content):

FILE: embedchain/embedchain/loaders/mdx.py
  class MdxLoader (line 8) | class MdxLoader(BaseLoader):
    method load_data (line 9) | def load_data(self, url):

FILE: embedchain/embedchain/loaders/mysql.py
  class MySQLLoader (line 11) | class MySQLLoader(BaseLoader):
    method __init__ (line 12) | def __init__(self, config: Optional[dict[str, Any]]):
    method _setup_loader (line 25) | def _setup_loader(self, config: dict[str, Any]):
    method _check_query (line 45) | def _check_query(query):
    method load_data (line 53) | def load_data(self, query):

FILE: embedchain/embedchain/loaders/notion.py
  class NotionDocument (line 15) | class NotionDocument:
    method __init__ (line 20) | def __init__(self, text: str, extra_info: dict[str, Any]):
  class NotionPageLoader (line 25) | class NotionPageLoader:
    method __init__ (line 33) | def __init__(self, integration_token: Optional[str] = None) -> None:
    method _read_block (line 48) | def _read_block(self, block_id: str, num_tabs: int = 0) -> str:
    method load_data (line 87) | def load_data(self, page_ids: list[str]) -> list[NotionDocument]:
  class NotionLoader (line 97) | class NotionLoader(BaseLoader):
    method load_data (line 98) | def load_data(self, source):

FILE: embedchain/embedchain/loaders/openapi.py
  class OpenAPILoader (line 11) | class OpenAPILoader(BaseLoader):
    method _get_file_content (line 13) | def _get_file_content(content):
    method load_data (line 29) | def load_data(content):

FILE: embedchain/embedchain/loaders/pdf_file.py
  class PdfFileLoader (line 11) | class PdfFileLoader(BaseLoader):
    method load_data (line 12) | def load_data(self, url):

FILE: embedchain/embedchain/loaders/postgres.py
  class PostgresLoader (line 10) | class PostgresLoader(BaseLoader):
    method __init__ (line 11) | def __init__(self, config: Optional[dict[str, Any]] = None):
    method _setup_loader (line 20) | def _setup_loader(self, config: dict[str, Any]):
    method _check_query (line 42) | def _check_query(query):
    method load_data (line 48) | def load_data(self, query):
    method close_connection (line 67) | def close_connection(self):

FILE: embedchain/embedchain/loaders/rss_feed.py
  class RSSFeedLoader (line 8) | class RSSFeedLoader(BaseLoader):
    method load_data (line 11) | def load_data(self, url):
    method serialize_metadata (line 21) | def serialize_metadata(metadata):
    method get_rss_content (line 29) | def get_rss_content(url: str):

FILE: embedchain/embedchain/loaders/sitemap.py
  class SitemapLoader (line 26) | class SitemapLoader(BaseLoader):
    method load_data (line 33) | def load_data(self, sitemap_source):

FILE: embedchain/embedchain/loaders/slack.py
  class SlackLoader (line 17) | class SlackLoader(BaseLoader):
    method __init__ (line 18) | def __init__(self, config: Optional[dict[str, Any]] = None):
    method _setup_loader (line 29) | def _setup_loader(self, config: dict[str, Any]):
    method _check_query (line 62) | def _check_query(query):
    method load_data (line 68) | def load_data(self, query):

FILE: embedchain/embedchain/loaders/substack.py
  class SubstackLoader (line 16) | class SubstackLoader(BaseLoader):
    method load_data (line 21) | def load_data(self, url: str):

FILE: embedchain/embedchain/loaders/text_file.py
  class TextFileLoader (line 9) | class TextFileLoader(BaseLoader):
    method load_data (line 10) | def load_data(self, url: str):

FILE: embedchain/embedchain/loaders/unstructured_file.py
  class UnstructuredLoader (line 9) | class UnstructuredLoader(BaseLoader):
    method load_data (line 10) | def load_data(self, url):

FILE: embedchain/embedchain/loaders/web_page.py
  class WebPageLoader (line 22) | class WebPageLoader(BaseLoader):
    method load_data (line 26) | def load_data(self, url, **kwargs: Optional[dict[str, Any]]):
    method _get_clean_content (line 65) | def _get_clean_content(html, url) -> str:
    method close_session (line 115) | def close_session(cls):
    method fetch_reference_links (line 118) | def fetch_reference_links(self, response):

FILE: embedchain/embedchain/loaders/xml.py
  class XmlLoader (line 16) | class XmlLoader(BaseLoader):
    method load_data (line 17) | def load_data(self, xml_url):

FILE: embedchain/embedchain/loaders/youtube_channel.py
  class YoutubeChannelLoader (line 13) | class YoutubeChannelLoader(BaseLoader):
    method load_data (line 16) | def load_data(self, channel_name):

FILE: embedchain/embedchain/loaders/youtube_video.py
  class YoutubeVideoLoader (line 20) | class YoutubeVideoLoader(BaseLoader):
    method load_data (line 21) | def load_data(self, url):

FILE: embedchain/embedchain/memory/base.py
  class ChatHistory (line 14) | class ChatHistory:
    method __init__ (line 15) | def __init__(self) -> None:
    method add (line 18) | def add(self, app_id, session_id, chat_message: ChatMessage) -> Option...
    method delete (line 43) | def delete(self, app_id: str, session_id: Optional[str] = None):
    method get (line 63) | def get(
    method count (line 103) | def count(self, app_id: str, session_id: Optional[str] = None):
    method _serialize_json (line 119) | def _serialize_json(metadata: dict[str, Any]):
    method _deserialize_json (line 123) | def _deserialize_json(metadata: str):
    method close_connection (line 126) | def close_connection(self):

FILE: embedchain/embedchain/memory/message.py
  class BaseMessage (line 9) | class BaseMessage(JSONSerializable):
    method __init__ (line 25) | def __init__(self, content: str, created_by: str, metadata: Optional[d...
    method type (line 32) | def type(self) -> str:
    method is_lc_serializable (line 36) | def is_lc_serializable(cls) -> bool:
    method __str__ (line 40) | def __str__(self) -> str:
  class ChatMessage (line 44) | class ChatMessage(JSONSerializable):
    method add_user_message (line 55) | def add_user_message(self, message: str, metadata: Optional[dict] = No...
    method add_ai_message (line 64) | def add_ai_message(self, message: str, metadata: Optional[dict] = None):
    method __str__ (line 73) | def __str__(self) -> str:

FILE: embedchain/embedchain/memory/utils.py
  function merge_metadata_dict (line 4) | def merge_metadata_dict(left: Optional[dict[str, Any]], right: Optional[...

FILE: embedchain/embedchain/migrations/env.py
  function run_migrations_offline (line 21) | def run_migrations_offline() -> None:
  function run_migrations_online (line 45) | def run_migrations_online() -> None:

FILE: embedchain/embedchain/migrations/versions/40a327b3debd_create_initial_migrations.py
  function upgrade (line 21) | def upgrade() -> None:
  function downgrade (line 53) | def downgrade() -> None:

FILE: embedchain/embedchain/models/data_type.py
  class DirectDataType (line 4) | class DirectDataType(Enum):
  class IndirectDataType (line 12) | class IndirectDataType(Enum):
  class SpecialDataType (line 47) | class SpecialDataType(Enum):
  class DataType (line 55) | class DataType(Enum):

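The enums in `data_type.py` split sources into direct data (passed inline) and indirect data (fetched from a location). A reduced hypothetical sketch of that split, with member names assumed for illustration:

```python
from enum import Enum

class DirectDataType(Enum):
    """Data supplied inline by the caller (assumed member)."""
    TEXT = "text"

class IndirectDataType(Enum):
    """Data fetched from a location (assumed members)."""
    PDF_FILE = "pdf_file"
    WEB_PAGE = "web_page"

def is_direct(value: str) -> bool:
    """Return True if `value` names a direct data type."""
    return value in {member.value for member in DirectDataType}

print(is_direct("text"))      # True
print(is_direct("web_page"))  # False
```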
FILE: embedchain/embedchain/models/embedding_functions.py
  class EmbeddingFunctions (line 4) | class EmbeddingFunctions(Enum):

FILE: embedchain/embedchain/models/providers.py
  class Providers (line 4) | class Providers(Enum):

FILE: embedchain/embedchain/models/vector_dimensions.py
  class VectorDimensions (line 5) | class VectorDimensions(Enum):

FILE: embedchain/embedchain/pipeline.py
  class Pipeline (line 4) | class Pipeline(App):

FILE: embedchain/embedchain/store/assistants.py
  class OpenAIAssistant (line 25) | class OpenAIAssistant:
    method __init__ (line 26) | def __init__(
    method add (line 51) | def add(self, source, data_type=None):
    method chat (line 62) | def chat(self, message):
    method delete_thread (line 67) | def delete_thread(self):
    method _initialize_assistant (line 72) | def _initialize_assistant(self, assistant_id):
    method _create_thread (line 82) | def _create_thread(self):
    method _prepare_source_path (line 86) | def _prepare_source_path(self, source, data_type=None):
    method _add_file_to_assistant (line 94) | def _add_file_to_assistant(self, file_path):
    method _generate_file_ids (line 98) | def _generate_file_ids(self, data_sources):
    method _send_message (line 104) | def _send_message(self, message):
    method _wait_for_completion (line 108) | def _wait_for_completion(self):
    method _get_latest_response (line 124) | def _get_latest_response(self):
    method _get_history (line 128) | def _get_history(self):
    method _format_message (line 133) | def _format_message(thread_message):
    method _save_temp_data (line 139) | def _save_temp_data(data, source):
  class AIAssistant (line 149) | class AIAssistant:
    method __init__ (line 150) | def __init__(
    method add (line 187) | def add(self, source, data_type=None):
    method chat (line 196) | def chat(self, query):
    method delete (line 205) | def delete(self):

FILE: embedchain/embedchain/telemetry/posthog.py
  class AnonymousTelemetry (line 12) | class AnonymousTelemetry:
    method __init__ (line 13) | def __init__(self, host="https://app.posthog.com", enabled=True):
    method _get_user_id (line 36) | def _get_user_id():
    method capture (line 49) | def capture(self, event_name, properties=None):

FILE: embedchain/embedchain/utils/cli.py
  function get_pkg_path_from_name (line 12) | def get_pkg_path_from_name(template: str):
  function setup_fly_io_app (line 30) | def setup_fly_io_app(extra_args):
  function setup_modal_com_app (line 45) | def setup_modal_com_app(extra_args):
  function setup_render_com_app (line 67) | def setup_render_com_app():
  function setup_streamlit_io_app (line 89) | def setup_streamlit_io_app():
  function setup_gradio_app (line 94) | def setup_gradio_app():
  function setup_hf_app (line 99) | def setup_hf_app():
  function run_dev_fly_io (line 118) | def run_dev_fly_io(debug, host, port):
  function run_dev_modal_com (line 135) | def run_dev_modal_com():
  function run_dev_streamlit_io (line 146) | def run_dev_streamlit_io():
  function run_dev_render_com (line 157) | def run_dev_render_com(debug, host, port):
  function run_dev_gradio (line 174) | def run_dev_gradio():
  function read_env_file (line 185) | def read_env_file(env_file_path):
  function deploy_fly (line 211) | def deploy_fly():
  function deploy_modal (line 244) | def deploy_modal():
  function deploy_streamlit (line 258) | def deploy_streamlit():
  function deploy_render (line 279) | def deploy_render():
  function deploy_gradio_app (line 294) | def deploy_gradio_app():
  function deploy_hf_spaces (line 309) | def deploy_hf_spaces(ec_app_name):

FILE: embedchain/embedchain/utils/evaluation.py
  class EvalMetric (line 7) | class EvalMetric(Enum):
  class EvalData (line 13) | class EvalData(BaseModel):

FILE: embedchain/embedchain/utils/misc.py
  function parse_content (line 18) | def parse_content(content, type):
  function clean_string (line 74) | def clean_string(text):
  function is_readable (line 105) | def is_readable(s):
  function use_pysqlite3 (line 120) | def use_pysqlite3():
  function format_source (line 159) | def format_source(source: str, limit: int = 20) -> str:
  function detect_datatype (line 170) | def detect_datatype(source: Any) -> DataType:
  function is_valid_json_string (line 381) | def is_valid_json_string(source: str):
  function validate_config (line 389) | def validate_config(config_data):
  function chunks (line 536) | def chunks(iterable, batch_size=100, desc="Processing chunks"):

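The `chunks(iterable, batch_size=100, desc=...)` helper in `utils/misc.py` suggests lazy batching of an iterable. A sketch of the usual pattern with `itertools.islice` (the progress-bar `desc` argument is omitted; this is an assumed reimplementation, not the library's code):

```python
from itertools import islice

def chunks(iterable, batch_size=100):
    """Yield successive lists of at most `batch_size` items from `iterable`."""
    it = iter(iterable)
    # islice consumes up to batch_size items per pass; an empty list ends the loop.
    while batch := list(islice(it, batch_size)):
        yield batch

batches = list(chunks(range(7), batch_size=3))
# batches == [[0, 1, 2], [3, 4, 5], [6]]
```

Because the generator pulls items on demand, it works on unbounded iterators, which matters when batching embeddings for large document sets.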
FILE: embedchain/embedchain/vectordb/base.py
  class BaseVectorDB (line 6) | class BaseVectorDB(JSONSerializable):
    method __init__ (line 9) | def __init__(self, config: BaseVectorDbConfig):
    method _initialize (line 18) | def _initialize(self):
    method _get_or_create_db (line 26) | def _get_or_create_db(self):
    method _get_or_create_collection (line 30) | def _get_or_create_collection(self):
    method _set_embedder (line 34) | def _set_embedder(self, embedder: BaseEmbedder):
    method get (line 43) | def get(self):
    method add (line 47) | def add(self):
    method query (line 51) | def query(self):
    method count (line 55) | def count(self) -> int:
    method reset (line 64) | def reset(self):
    method set_collection_name (line 70) | def set_collection_name(self, name: str):
    method delete (line 79) | def delete(self):

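The `BaseVectorDB` block above shows a template-method design: the base class wires up config and connection during `__init__`, while backends (ChromaDB, ElasticsearchDB, PineconeDB, ...) override `get`/`add`/`query`/`count`/`reset`/`delete`. A reduced hypothetical sketch of that pattern, with a toy dict-backed store standing in for a real backend:

```python
class BaseVectorDB:
    """Stand-in base: holds config and defers storage to subclasses."""
    def __init__(self, config=None):
        self.config = config or {}
        self.client = self._get_or_create_db()  # template hook

    def _get_or_create_db(self):
        raise NotImplementedError

    def add(self, documents, ids):
        raise NotImplementedError

    def count(self) -> int:
        raise NotImplementedError

class InMemoryDB(BaseVectorDB):
    """Toy backend: a dict keyed by id, standing in for a vector store."""
    def _get_or_create_db(self):
        return {}

    def add(self, documents, ids):
        self.client.update(zip(ids, documents))

    def count(self) -> int:
        return len(self.client)

db = InMemoryDB()
db.add(["doc one", "doc two"], ids=["a", "b"])
```

Each concrete file below (chroma.py, elasticsearch.py, lancedb.py, and so on) fills in the same hooks against its own client library.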
FILE: embedchain/embedchain/vectordb/chroma.py
  class ChromaDB (line 29) | class ChromaDB(BaseVectorDB):
    method __init__ (line 32) | def __init__(self, config: Optional[ChromaDbConfig] = None):
    method _initialize (line 66) | def _initialize(self):
    method _get_or_create_db (line 76) | def _get_or_create_db(self):
    method _generate_where_clause (line 81) | def _generate_where_clause(where: dict[str, any]) -> dict[str, any]:
    method _get_or_create_collection (line 94) | def _get_or_create_collection(self, name: str) -> Collection:
    method get (line 112) | def get(self, ids: Optional[list[str]] = None, where: Optional[dict[st...
    method add (line 134) | def add(
    method _format_result (line 167) | def _format_result(results: QueryResult) -> list[tuple[Document, float]]:
    method query (line 185) | def query(
    method set_collection_name (line 246) | def set_collection_name(self, name: str):
    method count (line 258) | def count(self) -> int:
    method delete (line 267) | def delete(self, where):
    method reset (line 270) | def reset(self):

FILE: embedchain/embedchain/vectordb/elasticsearch.py
  class ElasticsearchDB (line 21) | class ElasticsearchDB(BaseVectorDB):
    method __init__ (line 26) | def __init__(
    method _initialize (line 62) | def _initialize(self):
    method _get_or_create_db (line 81) | def _get_or_create_db(self):
    method _get_or_create_collection (line 85) | def _get_or_create_collection(self, name):
    method get (line 88) | def get(self, ids: Optional[list[str]] = None, where: Optional[dict[st...
    method add (line 122) | def add(
    method query (line 165) | def query(
    method set_collection_name (line 221) | def set_collection_name(self, name: str):
    method count (line 232) | def count(self) -> int:
    method reset (line 244) | def reset(self):
    method _get_index (line 253) | def _get_index(self) -> str:
    method delete (line 263) | def delete(self, where):

FILE: embedchain/embedchain/vectordb/lancedb.py
  class LanceDB (line 16) | class LanceDB(BaseVectorDB):
    method __init__ (line 21) | def __init__(
    method _initialize (line 40) | def _initialize(self):
    method _get_or_create_db (line 57) | def _get_or_create_db(self):
    method _generate_where_clause (line 63) | def _generate_where_clause(self, where: Dict[str, any]) -> str:
    method _get_or_create_collection (line 87) | def _get_or_create_collection(self, table_name: str, reset=False):
    method get (line 127) | def get(self, ids: Optional[List[str]] = None, where: Optional[Dict[st...
    method add (line 169) | def add(
    method _format_result (line 206) | def _format_result(self, results) -> list:
    method query (line 217) | def query(
    method set_collection_name (line 263) | def set_collection_name(self, name: str):
    method count (line 275) | def count(self) -> int:
    method delete (line 284) | def delete(self, where):
    method reset (line 287) | def reset(self):

FILE: embedchain/embedchain/vectordb/opensearch.py
  class OpenSearchDB (line 26) | class OpenSearchDB(BaseVectorDB):
    method __init__ (line 31) | def __init__(self, config: OpenSearchDBConfig):
    method _initialize (line 51) | def _initialize(self):
    method _get_or_create_db (line 74) | def _get_or_create_db(self):
    method _get_or_create_collection (line 78) | def _get_or_create_collection(self, name):
    method get (line 81) | def get(
    method add (line 118) | def add(self, documents: list[str], metadatas: list[object], ids: list...
    method query (line 146) | def query(
    method set_collection_name (line 208) | def set_collection_name(self, name: str):
    method count (line 219) | def count(self) -> int:
    method reset (line 231) | def reset(self):
    method delete (line 240) | def delete(self, where):
    method _get_index (line 247) | def _get_index(self) -> str:

FILE: embedchain/embedchain/vectordb/pinecone.py
  class PineconeDB (line 23) | class PineconeDB(BaseVectorDB):
    method __init__ (line 28) | def __init__(
    method _initialize (line 59) | def _initialize(self):
    method _setup_pinecone_index (line 66) | def _setup_pinecone_index(self):
    method get (line 91) | def get(self, ids: Optional[list[str]] = None, where: Optional[dict[st...
    method add (line 114) | def add(
    method query (line 149) | def query(
    method set_collection_name (line 197) | def set_collection_name(self, name: str):
    method count (line 208) | def count(self) -> int:
    method _get_or_create_db (line 218) | def _get_or_create_db(self):
    method reset (line 222) | def reset(self):
    method _generate_filter (line 231) | def _generate_filter(where: dict):
    method delete (line 240) | def delete(self, where: dict):

FILE: embedchain/embedchain/vectordb/qdrant.py
  class QdrantDB (line 19) | class QdrantDB(BaseVectorDB):
    method __init__ (line 24) | def __init__(self, config: QdrantDBConfig = None):
    method _initialize (line 43) | def _initialize(self):
    method _get_or_create_db (line 65) | def _get_or_create_db(self):
    method _get_or_create_collection (line 68) | def _get_or_create_collection(self):
    method get (line 71) | def get(self, ids: Optional[list[str]] = None, where: Optional[dict[st...
    method add (line 126) | def add(
    method query (line 161) | def query(
    method count (line 217) | def count(self) -> int:
    method reset (line 221) | def reset(self):
    method set_collection_name (line 225) | def set_collection_name(self, name: str):
    method _generate_query (line 238) | def _generate_query(where: dict):
    method delete (line 251) | def delete(self, where: dict):

FILE: embedchain/embedchain/vectordb/weaviate.py
  class WeaviateDB (line 18) | class WeaviateDB(BaseVectorDB):
    method __init__ (line 23) | def __init__(
    method _initialize (line 54) | def _initialize(self):
    method get (line 121) | def get(self, ids: Optional[list[str]] = None, where: Optional[dict[st...
    method add (line 190) | def add(self, documents: list[str], metadatas: list[object], ids: list...
    method query (line 220) | def query(
    method set_collection_name (line 292) | def set_collection_name(self, name: str):
    method count (line 302) | def count(self) -> int:
    method _get_or_create_db (line 311) | def _get_or_create_db(self):
    method reset (line 315) | def reset(self):
    method _get_index_name (line 325) | def _get_index_name(self) -> str:
    method _query_with_offset (line 333) | def _query_with_offset(query, offset):
    method _generate_query (line 339) | def _generate_query(self, where: dict):
    method delete (line 357) | def delete(self, where: dict):

FILE: embedchain/embedchain/vectordb/zilliz.py
  class ZillizVectorDB (line 27) | class ZillizVectorDB(BaseVectorDB):
    method __init__ (line 30) | def __init__(self, config: ZillizDBConfig = None):
    method _initialize (line 54) | def _initialize(self):
    method _get_or_create_db (line 62) | def _get_or_create_db(self):
    method _get_or_create_collection (line 66) | def _get_or_create_collection(self, name):
    method get (line 94) | def get(self, ids: Optional[list[str]] = None, where: Optional[dict[st...
    method add (line 128) | def add(
    method query (line 146) | def query(
    method count (line 202) | def count(self) -> int:
    method reset (line 211) | def reset(self, collection_names: list[str] = None):
    method set_collection_name (line 224) | def set_collection_name(self, name: str):
    method _generate_zilliz_filter (line 235) | def _generate_zilliz_filter(self, where: dict[str, str]):
    method delete (line 241) | def delete(self, where: dict[str, Any]):

FILE: embedchain/examples/api_server/api_server.py
  function add (line 14) | def add():
  function query (line 29) | def query():
  function chat (line 43) | def chat():

FILE: embedchain/examples/chainlit/app.py
  function on_chat_start (line 11) | async def on_chat_start():
  function on_message (line 29) | async def on_message(message: cl.Message):

FILE: embedchain/examples/chat-pdf/app.py
  function embedchain_bot (line 14) | def embedchain_bot(db_path, api_key):
  function get_db_path (line 38) | def get_db_path():
  function get_ec_app (line 43) | def get_ec_app(api_key):
  function app_response (line 126) | def app_response(result):

FILE: embedchain/examples/discord_bot/discord_bot.py
  function initialize_chat_bot (line 17) | def initialize_chat_bot():
  function on_ready (line 23) | async def on_ready():
  function on_command_error (line 29) | async def on_command_error(ctx, error):
  function add (line 37) | async def add(ctx, data_type: str, *, url_or_text: str):
  function query (line 48) | async def query(ctx, *, question: str):
  function chat (line 59) | async def chat(ctx, *, question: str):
  function send_response (line 69) | async def send_response(ctx, message):

FILE: embedchain/examples/full_stack/backend/models.py
  class APIKey (line 6) | class APIKey(db.Model):
  class BotList (line 11) | class BotList(db.Model):

FILE: embedchain/examples/full_stack/backend/routes/chat_response.py
  function get_answer (line 14) | def get_answer():

FILE: embedchain/examples/full_stack/backend/routes/dashboard.py
  function set_key (line 9) | def set_key():
  function check_key (line 24) | def check_key():
  function create_bot (line 34) | def create_bot():
  function delete_bot (line 49) | def delete_bot():
  function get_bots (line 62) | def get_bots():

FILE: embedchain/examples/full_stack/backend/routes/sources.py
  function add_sources (line 14) | def add_sources():

FILE: embedchain/examples/full_stack/backend/server.py
  function load_app (line 18) | def load_app():

FILE: embedchain/examples/full_stack/frontend/next.config.js
  method rewrites (line 3) | async rewrites() {
  method webpack (line 15) | webpack(config) {

FILE: embedchain/examples/full_stack/frontend/src/components/PageWrapper.js
  function PageWrapper (line 1) | function PageWrapper({ children }) {

FILE: embedchain/examples/full_stack/frontend/src/components/chat/BotWrapper.js
  function BotWrapper (line 1) | function BotWrapper({ children }) {

FILE: embedchain/examples/full_stack/frontend/src/components/chat/HumanWrapper.js
  function HumanWrapper (line 1) | function HumanWrapper({ children }) {

FILE: embedchain/examples/full_stack/frontend/src/components/dashboard/CreateBot.js
  function CreateBot (line 4) | function CreateBot() {

FILE: embedchain/examples/full_stack/frontend/src/components/dashboard/DeleteBot.js
  function DeleteBot (line 4) | function DeleteBot() {

FILE: embedchain/examples/full_stack/frontend/src/components/dashboard/PurgeChats.js
  function PurgeChats (line 3) | function PurgeChats() {

FILE: embedchain/examples/full_stack/frontend/src/components/dashboard/SetOpenAIKey.js
  function SetOpenAIKey (line 3) | function SetOpenAIKey({ setIsKeyPresent }) {

FILE: embedchain/examples/full_stack/frontend/src/containers/ChatWindow.js
  function ChatWindow (line 7) | function ChatWindow({ embedding_model, app_type, setBotTitle }) {

FILE: embedchain/examples/full_stack/frontend/src/containers/SetSources.js
  function SetSources (line 11) | function SetSources({

FILE: embedchain/examples/full_stack/frontend/src/containers/Sidebar.js
  function Sidebar (line 13) | function Sidebar() {

FILE: embedchain/examples/full_stack/frontend/src/pages/[bot_slug]/app.js
  function App (line 7) | function App() {

FILE: embedchain/examples/full_stack/frontend/src/pages/_app.js
  function App (line 4) | function App({ Component, pageProps }) {

FILE: embedchain/examples/full_stack/frontend/src/pages/_document.js
  function Document (line 3) | function Document() {

FILE: embedchain/examples/full_stack/frontend/src/pages/index.js
  function Home (line 9) | function Home() {

FILE: embedchain/examples/mistral-streamlit/app.py
  function ec_app (line 9) | def ec_app():

FILE: embedchain/examples/nextjs/ec_app/app.py
  class SourceModel (line 13) | class SourceModel(BaseModel):
  class QuestionModel (line 17) | class QuestionModel(BaseModel):
  function add_source (line 22) | async def add_source(source_model: SourceModel):
  function handle_query (line 33) | async def handle_query(question_model: QuestionModel):
  function handle_chat (line 44) | async def handle_chat(question_model: QuestionModel):
  function root (line 55) | async def root():

FILE: embedchain/examples/nextjs/nextjs_discord/app.py
  class NextJSBot (line 18) | class NextJSBot:
    method __init__ (line 19) | def __init__(self) -> None:
    method add (line 22) | def add(self, _):
    method query (line 25) | def query(self, message, citations: bool = False):
    method start (line 44) | def start(self):
  function on_ready (line 53) | async def on_ready():
  function _get_question (line 57) | def _get_question(message):
  function answer_query (line 66) | async def answer_query(message):
  function on_message (line 100) | async def on_message(message):
  function start_bot (line 106) | def start_bot():

FILE: embedchain/examples/nextjs/nextjs_slack/app.py
  function remove_mentions (line 15) | def remove_mentions(message):
  class SlackBotApp (line 22) | class SlackBotApp:
    method __init__ (line 23) | def __init__(self) -> None:
    method add (line 26) | def add(self, _):
    method query (line 29) | def query(self, query, citations: bool = False):
  function app_message_handler (line 57) | def app_message_handler(message, say):
  function app_mention_handler (line 62) | def app_mention_handler(body, say, client):
  function start_bot (line 118) | def start_bot():

FILE: embedchain/examples/rest-api/main.py
  function get_db (line 21) | def get_db():
  function check_status (line 41) | def check_status():
  function get_all_apps (line 49) | async def get_all_apps(db: Session = Depends(get_db)):
  function create_app_using_default_config (line 58) | async def create_app_using_default_config(app_id: str, config: UploadFil...
  function get_datasources_associated_with_app_id (line 97) | async def get_datasources_associated_with_app_id(app_id: str, db: Sessio...
  function add_datasource_to_an_app (line 134) | async def add_datasource_to_an_app(body: SourceApp, app_id: str, db: Ses...
  function query_an_app (line 173) | async def query_an_app(body: QueryApp, app_id: str, db: Session = Depend...
  function deploy_app (line 251) | async def deploy_app(body: DeployAppRequest, app_id: str, db: Session = ...
  function delete_app (line 294) | async def delete_app(app_id: str, db: Session = Depends(get_db)):

FILE: embedchain/examples/rest-api/models.py
  class QueryApp (line 8) | class QueryApp(BaseModel):
  class SourceApp (line 20) | class SourceApp(BaseModel):
  class DeployAppRequest (line 27) | class DeployAppRequest(BaseModel):
  class MessageApp (line 33) | class MessageApp(BaseModel):
  class DefaultResponse (line 37) | class DefaultResponse(BaseModel):
  class AppModel (line 41) | class AppModel(Base):

FILE: embedchain/examples/rest-api/services.py
  function get_app (line 5) | def get_app(db: Session, app_id: str):
  function get_apps (line 9) | def get_apps(db: Session, skip: int = 0, limit: int = 100):
  function save_app (line 13) | def save_app(db: Session, app_id: str, config: str):
  function remove_app (line 21) | def remove_app(db: Session, app_id: str):

FILE: embedchain/examples/rest-api/utils.py
  function generate_error_message_for_api_keys (line 1) | def generate_error_message_for_api_keys(error: ValueError) -> str:

FILE: embedchain/examples/sadhguru-ai/app.py
  function sadhguru_ai (line 15) | def sadhguru_ai():
  function read_csv_row_by_row (line 21) | def read_csv_row_by_row(file_path):
  function add_data_to_app (line 29) | def add_data_to_app():
  function app_response (line 77) | def app_response(result):

FILE: embedchain/examples/telegram_bot/telegram_bot.py
  function telegram_webhook (line 16) | def telegram_webhook():
  function add_to_chat_bot (line 39) | def add_to_chat_bot(data_type, url_or_text):
  function query_chat_bot (line 49) | def query_chat_bot(question):
  function send_message (line 59) | def send_message(chat_id, text):

FILE: embedchain/examples/unacademy-ai/app.py
  function unacademy_ai (line 11) | def unacademy_ai():
  function app_response (line 81) | def app_response(result):

FILE: embedchain/examples/whatsapp_bot/run.py
  function main (line 4) | def main():

FILE: embedchain/examples/whatsapp_bot/whatsapp_bot.py
  function chat (line 11) | def chat():
  function handle_message (line 19) | def handle_message(message):
  function add_sources (line 27) | def add_sources(message):
  function query (line 42) | def query(message):

FILE: embedchain/tests/chunkers/test_base_chunker.py
  function text_splitter_mock (line 12) | def text_splitter_mock():
  function loader_mock (line 17) | def loader_mock():
  function app_id (line 22) | def app_id():
  function data_type (line 27) | def data_type():
  function chunker (line 32) | def chunker(text_splitter_mock, data_type):
  function test_create_chunks_with_config (line 39) | def test_create_chunks_with_config(chunker, text_splitter_mock, loader_m...
  function test_create_chunks (line 51) | def test_create_chunks(chunker, text_splitter_mock, loader_mock, app_id,...
  function test_get_chunks (line 81) | def test_get_chunks(chunker, text_splitter_mock):
  function test_set_data_type (line 91) | def test_set_data_type(chunker):
  function test_get_word_count (line 96) | def test_get_word_count(chunker):

FILE: embedchain/tests/chunkers/test_chunkers.py
  function test_default_config_values (line 53) | def test_default_config_values():
  function test_custom_config_values (line 61) | def test_custom_config_values():

FILE: embedchain/tests/chunkers/test_text.py
  class TestTextChunker (line 8) | class TestTextChunker:
    method test_chunks_without_app_id (line 9) | def test_chunks_without_app_id(self):
    method test_chunks_with_app_id (line 22) | def test_chunks_with_app_id(self):
    method test_big_chunksize (line 34) | def test_big_chunksize(self):
    method test_small_chunksize (line 47) | def test_small_chunksize(self):
    method test_word_count (line 61) | def test_word_count(self):
  class MockLoader (line 71) | class MockLoader:
    method load_data (line 73) | def load_data(src) -> dict:

FILE: embedchain/tests/conftest.py
  function clean_db (line 9) | def clean_db():
  function disable_telemetry (line 32) | def disable_telemetry():

FILE: embedchain/tests/embedchain/test_add.py
  function app (line 13) | def app(mocker):
  function test_add (line 18) | def test_add(app):
  function test_add_forced_type (line 29) | def test_add_forced_type(app):
  function test_dry_run (line 35) | def test_dry_run(app):

FILE: embedchain/tests/embedchain/test_embedchain.py
  function app_instance (line 17) | def app_instance():
  function test_whole_app (line 22) | def test_whole_app(app_instance, mocker):
  function test_add_after_reset (line 41) | def test_add_after_reset(app_instance, mocker):
  function test_add_with_incorrect_content (line 71) | def test_add_with_incorrect_content(app_instance, mocker):

FILE: embedchain/tests/embedchain/test_utils.py
  class TestApp (line 9) | class TestApp(unittest.TestCase):
    method test_detect_datatype_youtube (line 12) | def test_detect_datatype_youtube(self):
    method test_detect_datatype_local_file (line 21) | def test_detect_datatype_local_file(self):
    method test_detect_datatype_pdf (line 24) | def test_detect_datatype_pdf(self):
    method test_detect_datatype_local_pdf (line 27) | def test_detect_datatype_local_pdf(self):
    method test_detect_datatype_xml (line 30) | def test_detect_datatype_xml(self):
    method test_detect_datatype_local_xml (line 33) | def test_detect_datatype_local_xml(self):
    method test_detect_datatype_docx (line 36) | def test_detect_datatype_docx(self):
    method test_detect_datatype_local_docx (line 39) | def test_detect_datatype_local_docx(self):
    method test_detect_data_type_json (line 42) | def test_detect_data_type_json(self):
    method test_detect_data_type_local_json (line 45) | def test_detect_data_type_local_json(self):
    method test_detect_datatype_regular_filesystem_docx (line 49) | def test_detect_datatype_regular_filesystem_docx(self, mock_isfile):
    method test_detect_datatype_docs_site (line 54) | def test_detect_datatype_docs_site(self):
    method test_detect_datatype_docs_sitein_path (line 57) | def test_detect_datatype_docs_sitein_path(self):
    method test_detect_datatype_web_page (line 61) | def test_detect_datatype_web_page(self):
    method test_detect_datatype_invalid_url (line 64) | def test_detect_datatype_invalid_url(self):
    method test_detect_datatype_qna_pair (line 67) | def test_detect_datatype_qna_pair(self):
    method test_detect_datatype_qna_pair_types (line 72) | def test_detect_datatype_qna_pair_types(self):
    method test_detect_datatype_text (line 79) | def test_detect_datatype_text(self):
    method test_detect_datatype_non_string_error (line 82) | def test_detect_datatype_non_string_error(self):
    method test_detect_datatype_regular_filesystem_file_txt (line 88) | def test_detect_datatype_regular_filesystem_file_txt(self, mock_isfile):
    method test_detect_datatype_regular_filesystem_no_file (line 93) | def test_detect_datatype_regular_filesystem_no_file(self):
    method test_doc_examples_quickstart (line 97) | def test_doc_examples_quickstart(self):
    method test_doc_examples_introduction (line 102) | def test_doc_examples_introduction(self):
    method test_doc_examples_app_types (line 113) | def test_doc_examples_app_types(self):
    method test_doc_examples_configuration (line 118) | def test_doc_examples_configuration(self):

FILE: embedchain/tests/embedder/test_aws_bedrock_embedder.py
  function test_aws_bedrock_embedder_with_model (line 7) | def test_aws_bedrock_embedder_with_model():

FILE: embedchain/tests/embedder/test_azure_openai_embedder.py
  function test_azure_openai_embedder_with_http_client (line 9) | def test_azure_openai_embedder_with_http_client(monkeypatch):
  function test_azure_openai_embedder_with_http_async_client (line 32) | def test_azure_openai_embedder_with_http_async_client(monkeypatch):

FILE: embedchain/tests/embedder/test_embedder.py
  function base_embedder (line 9) | def base_embedder():
  function test_initialization (line 13) | def test_initialization(base_embedder):
  function test_set_embedding_fn (line 20) | def test_set_embedding_fn(base_embedder):
  function test_set_embedding_fn_when_not_a_function (line 31) | def test_set_embedding_fn_when_not_a_function(base_embedder):
  function test_set_vector_dimension (line 36) | def test_set_vector_dimension(base_embedder):
  function test_set_vector_dimension_type_error (line 42) | def test_set_vector_dimension_type_error(base_embedder):
  function test_embedder_with_config (line 47) | def test_embedder_with_config():

FILE: embedchain/tests/embedder/test_huggingface_embedder.py
  function test_huggingface_embedder_with_model (line 8) | def test_huggingface_embedder_with_model(monkeypatch):

FILE: embedchain/tests/evaluation/test_answer_relevancy_metric.py
  function mock_data (line 10) | def mock_data():
  function mock_answer_relevance_metric (line 31) | def mock_answer_relevance_metric(monkeypatch):
  function test_answer_relevance_init (line 38) | def test_answer_relevance_init(monkeypatch):
  function test_answer_relevance_init_with_config (line 49) | def test_answer_relevance_init_with_config():
  function test_answer_relevance_init_without_api_key (line 58) | def test_answer_relevance_init_without_api_key(monkeypatch):
  function test_generate_prompt (line 64) | def test_generate_prompt(mock_answer_relevance_metric, mock_data):
  function test_generate_questions (line 72) | def test_generate_questions(mock_answer_relevance_metric, mock_data, mon...
  function test_generate_embedding (line 112) | def test_generate_embedding(mock_answer_relevance_metric, mock_data, mon...
  function test_compute_similarity (line 122) | def test_compute_similarity(mock_answer_relevance_metric, mock_data):
  function test_compute_score (line 131) | def test_compute_score(mock_answer_relevance_metric, mock_data, monkeypa...
  function test_evaluate (line 179) | def test_evaluate(mock_answer_relevance_metric, mock_data, monkeypatch):

FILE: embedchain/tests/evaluation/test_context_relevancy_metric.py
  function mock_data (line 9) | def mock_data():
  function mock_context_relevance_metric (line 30) | def mock_context_relevance_metric(monkeypatch):
  function test_context_relevance_init (line 36) | def test_context_relevance_init(monkeypatch):
  function test_context_relevance_init_with_config (line 46) | def test_context_relevance_init_with_config():
  function test_context_relevance_init_without_api_key (line 54) | def test_context_relevance_init_without_api_key(monkeypatch):
  function test_sentence_segmenter (line 60) | def test_sentence_segmenter(mock_context_relevance_metric):
  function test_compute_score (line 68) | def test_compute_score(mock_context_relevance_metric, mock_data, monkeyp...
  function test_evaluate (line 86) | def test_evaluate(mock_context_relevance_metric, mock_data, monkeypatch):

FILE: embedchain/tests/evaluation/test_groundedness_metric.py
  function mock_data (line 10) | def mock_data():
  function mock_groundedness_metric (line 31) | def mock_groundedness_metric(monkeypatch):
  function test_groundedness_init (line 37) | def test_groundedness_init(monkeypatch):
  function test_groundedness_init_with_config (line 46) | def test_groundedness_init_with_config():
  function test_groundedness_init_without_api_key (line 53) | def test_groundedness_init_without_api_key(monkeypatch):
  function test_generate_answer_claim_prompt (line 59) | def test_generate_answer_claim_prompt(mock_groundedness_metric, mock_data):
  function test_get_claim_statements (line 65) | def test_get_claim_statements(mock_groundedness_metric, mock_data, monke...
  function test_generate_claim_inference_prompt (line 99) | def test_generate_claim_inference_prompt(mock_groundedness_metric, mock_...
  function test_get_claim_verdict_scores (line 112) | def test_get_claim_verdict_scores(mock_groundedness_metric, mock_data, m...
  function test_compute_score (line 133) | def test_compute_score(mock_groundedness_metric, mock_data, monkeypatch):
  function test_evaluate (line 149) | def test_evaluate(mock_groundedness_metric, mock_data, monkeypatch):

FILE: embedchain/tests/helper_classes/test_json_serializable.py
  class TestJsonSerializable (line 13) | class TestJsonSerializable(unittest.TestCase):
    method test_base_function (line 16) | def test_base_function(self):
    method test_registration_required (line 42) | def test_registration_required(self):
    method test_recursive (line 62) | def test_recursive(self):
    method test_special_subclasses (line 75) | def test_special_subclasses(self):

FILE: embedchain/tests/llm/conftest.py
  function mock_alembic_command_upgrade (line 8) | def mock_alembic_command_upgrade():

FILE: embedchain/tests/llm/test_anthrophic.py
  function anthropic_llm (line 12) | def anthropic_llm():
  function test_get_llm_model_answer (line 18) | def test_get_llm_model_answer(anthropic_llm):
  function test_get_messages (line 26) | def test_get_messages(anthropic_llm):
  function test_get_llm_model_answer_with_token_usage (line 36) | def test_get_llm_model_answer_with_token_usage(anthropic_llm):

FILE: embedchain/tests/llm/test_aws_bedrock.py
  function config (line 9) | def config(monkeypatch):
  function test_get_llm_model_answer (line 25) | def test_get_llm_model_answer(config, mocker):
  function test_get_llm_model_answer_empty_prompt (line 35) | def test_get_llm_model_answer_empty_prompt(config, mocker):
  function test_get_llm_model_answer_with_streaming (line 45) | def test_get_llm_model_answer_with_streaming(config, mocker):

FILE: embedchain/tests/llm/test_azure_openai.py
  function azure_openai_llm (line 12) | def azure_openai_llm():
  function test_get_llm_model_answer (line 23) | def test_get_llm_model_answer(azure_openai_llm):
  function test_get_answer (line 31) | def test_get_answer(azure_openai_llm):
  function test_get_messages (line 52) | def test_get_messages(azure_openai_llm):
  function test_when_no_deployment_name_provided (line 62) | def test_when_no_deployment_name_provided():
  function test_with_api_version (line 69) | def test_with_api_version():
  function test_get_llm_model_answer_with_http_client_proxies (line 95) | def test_get_llm_model_answer_with_http_client_proxies():
  function test_get_llm_model_answer_with_http_async_client_proxies (line 131) | def test_get_llm_model_answer_with_http_async_client_proxies():

FILE: embedchain/tests/llm/test_base_llm.py
  function base_llm (line 9) | def base_llm():
  function test_is_get_llm_model_answer_not_implemented (line 14) | def test_is_get_llm_model_answer_not_implemented(base_llm):
  function test_is_stream_bool (line 19) | def test_is_stream_bool():
  function test_template_string_gets_converted_to_Template_instance (line 25) | def test_template_string_gets_converted_to_Template_instance():
  function test_is_get_llm_model_answer_implemented (line 31) | def test_is_get_llm_model_answer_implemented():
  function test_stream_response (line 41) | def test_stream_response(base_llm):
  function test_append_search_and_context (line 47) | def test_append_search_and_context(base_llm):
  function test_access_search_and_get_results (line 55) | def test_access_search_and_get_results(base_llm, mocker):

FILE: embedchain/tests/llm/test_chat.py
  class TestApp (line 12) | class TestApp(unittest.TestCase):
    method setUp (line 13) | def setUp(self):
    method test_chat_with_memory (line 19) | def test_chat_with_memory(self, mock_get_answer, mock_retrieve):
    method test_template_replacement (line 47) | def test_template_replacement(self, mock_get_answer, mock_retrieve):
    method test_chat_with_where_in_params (line 72) | def test_chat_with_where_in_params(self):
    method test_chat_with_where_in_chat_config (line 89) | def test_chat_with_where_in_chat_config(self):

FILE: embedchain/tests/llm/test_clarifai.py
  function clarifai_llm_config (line 9) | def clarifai_llm_config(monkeypatch):
  function test_clarifai__llm_get_llm_model_answer (line 18) | def test_clarifai__llm_get_llm_model_answer(clarifai_llm_config, mocker):

FILE: embedchain/tests/llm/test_cohere.py
  function cohere_llm_config (line 10) | def cohere_llm_config():
  function test_init_raises_value_error_without_api_key (line 17) | def test_init_raises_value_error_without_api_key(mocker):
  function test_get_llm_model_answer_raises_value_error_for_system_prompt (line 23) | def test_get_llm_model_answer_raises_value_error_for_system_prompt(coher...
  function test_get_llm_model_answer (line 30) | def test_get_llm_model_answer(cohere_llm_config, mocker):
  function test_get_llm_model_answer_with_token_usage (line 39) | def test_get_llm_model_answer_with_token_usage(cohere_llm_config, mocker):
  function test_get_answer_mocked_cohere (line 65) | def test_get_answer_mocked_cohere(cohere_llm_config, mocker):

FILE: embedchain/tests/llm/test_generate_prompt.py
  class TestGeneratePrompt (line 8) | class TestGeneratePrompt(unittest.TestCase):
    method setUp (line 9) | def setUp(self):
    method test_generate_prompt_with_template (line 12) | def test_generate_prompt_with_template(self):
    method test_generate_prompt_with_contexts_list (line 38) | def test_generate_prompt_with_contexts_list(self):
    method test_generate_prompt_with_history (line 59) | def test_generate_prompt_with_history(self):

FILE: embedchain/tests/llm/test_google.py
  function google_llm_config (line 8) | def google_llm_config():
  function test_google_llm_init_missing_api_key (line 12) | def test_google_llm_init_missing_api_key(monkeypatch):
  function test_google_llm_init (line 18) | def test_google_llm_init(monkeypatch):
  function test_google_llm_get_llm_model_answer_with_system_prompt (line 26) | def test_google_llm_get_llm_model_answer_with_system_prompt(monkeypatch):
  function test_google_llm_get_llm_model_answer (line 34) | def test_google_llm_get_llm_model_answer(monkeypatch, google_llm_config):

FILE: embedchain/tests/llm/test_gpt4all.py
  function config (line 9) | def config():
  function gpt4all_with_config (line 22) | def gpt4all_with_config(config):
  function gpt4all_without_config (line 27) | def gpt4all_without_config():
  function test_gpt4all_init_with_config (line 31) | def test_gpt4all_init_with_config(config, gpt4all_with_config):
  function test_gpt4all_init_without_config (line 42) | def test_gpt4all_init_without_config(gpt4all_without_config):
  function test_get_llm_model_answer (line 47) | def test_get_llm_model_answer(mocker, gpt4all_with_config):
  function test_gpt4all_model_switching (line 58) | def test_gpt4all_model_switching(gpt4all_with_config):

FILE: embedchain/tests/llm/test_huggingface.py
  function huggingface_llm_config (line 11) | def huggingface_llm_config():
  function huggingface_endpoint_config (line 19) | def huggingface_endpoint_config():
  function test_init_raises_value_error_without_api_key (line 26) | def test_init_raises_value_error_without_api_key(mocker):
  function test_get_llm_model_answer_raises_value_error_for_system_prompt (line 32) | def test_get_llm_model_answer_raises_value_error_for_system_prompt(huggi...
  function test_top_p_value_within_range (line 39) | def test_top_p_value_within_range():
  function test_dependency_is_imported (line 45) | def test_dependency_is_imported():
  function test_get_llm_model_answer (line 54) | def test_get_llm_model_answer(huggingface_llm_config, mocker):
  function test_hugging_face_mock (line 63) | def test_hugging_face_mock(huggingface_llm_config, mocker):
  function test_custom_endpoint (line 74) | def test_custom_endpoint(huggingface_endpoint_config, mocker):

FILE: embedchain/tests/llm/test_jina.py
  function config (line 11) | def config():
  function test_init_raises_value_error_without_api_key (line 18) | def test_init_raises_value_error_without_api_key(mocker):
  function test_get_llm_model_answer (line 24) | def test_get_llm_model_answer(config, mocker):
  function test_get_llm_model_answer_with_system_prompt (line 34) | def test_get_llm_model_answer_with_system_prompt(config, mocker):
  function test_get_llm_model_answer_empty_prompt (line 45) | def test_get_llm_model_answer_empty_prompt(config, mocker):
  function test_get_llm_model_answer_with_streaming (line 55) | def test_get_llm_model_answer_with_streaming(config, mocker):
  function test_get_llm_model_answer_without_system_prompt (line 67) | def test_get_llm_model_answer_without_system_prompt(config, mocker):

FILE: embedchain/tests/llm/test_llama2.py
  function llama2_llm (line 9) | def llama2_llm():
  function test_init_raises_value_error_without_api_key (line 15) | def test_init_raises_value_error_without_api_key(mocker):
  function test_get_llm_model_answer_raises_value_error_for_system_prompt (line 21) | def test_get_llm_model_answer_raises_value_error_for_system_prompt(llama...
  function test_get_llm_model_answer (line 27) | def test_get_llm_model_answer(llama2_llm, mocker):

FILE: embedchain/tests/llm/test_mistralai.py
  function mistralai_llm_config (line 8) | def mistralai_llm_config(monkeypatch):
  function test_mistralai_llm_init_missing_api_key (line 14) | def test_mistralai_llm_init_missing_api_key(monkeypatch):
  function test_mistralai_llm_init (line 20) | def test_mistralai_llm_init(monkeypatch):
  function test_get_llm_model_answer (line 26) | def test_get_llm_model_answer(monkeypatch, mistralai_llm_config):
  function test_get_llm_model_answer_with_system_prompt (line 37) | def test_get_llm_model_answer_with_system_prompt(monkeypatch, mistralai_...
  function test_get_llm_model_answer_empty_prompt (line 46) | def test_get_llm_model_answer_empty_prompt(monkeypatch, mistralai_llm_co...
  function test_get_llm_model_answer_without_system_prompt (line 54) | def test_get_llm_model_answer_without_system_prompt(monkeypatch, mistral...
  function test_get_llm_model_answer_with_token_usage (line 63) | def test_get_llm_model_answer_with_token_usage(monkeypatch, mistralai_ll...

FILE: embedchain/tests/llm/test_ollama.py
  function ollama_llm_config (line 9) | def ollama_llm_config():
  function test_get_llm_model_answer (line 14) | def test_get_llm_model_answer(ollama_llm_config, mocker):
  function test_get_answer_mocked_ollama (line 24) | def test_get_answer_mocked_ollama(ollama_llm_config, mocker):
  function test_get_llm_model_answer_with_streaming (line 37) | def test_get_llm_model_answer_with_streaming(ollama_llm_config, mocker):

FILE: embedchain/tests/llm/test_openai.py
  function env_config (line 12) | def env_config():
  function config (line 20) | def config(env_config):
  function test_get_llm_model_answer (line 34) | def test_get_llm_model_answer(config, mocker):
  function test_get_llm_model_answer_with_system_prompt (line 44) | def test_get_llm_model_answer_with_system_prompt(config, mocker):
  function test_get_llm_model_answer_empty_prompt (line 55) | def test_get_llm_model_answer_empty_prompt(config, mocker):
  function test_get_llm_model_answer_with_token_usage (line 65) | def test_get_llm_model_answer_with_token_usage(config, mocker):
  function test_get_llm_model_answer_with_streaming (line 94) | def test_get_llm_model_answer_with_streaming(config, mocker):
  function test_get_llm_model_answer_without_system_prompt (line 106) | def test_get_llm_model_answer_without_system_prompt(config, mocker):
  function test_get_llm_model_answer_with_special_headers (line 126) | def test_get_llm_model_answer_with_special_headers(config, mocker):
  function test_get_llm_model_answer_with_model_kwargs (line 147) | def test_get_llm_model_answer_with_model_kwargs(config, mocker):
  function test_get_llm_model_answer_with_tools (line 174) | def test_get_llm_model_answer_with_tools(config, mocker, mock_return, ex...
  function test_get_llm_model_answer_with_http_client_proxies (line 200) | def test_get_llm_model_answer_with_http_client_proxies(env_config, mocker):
  function test_get_llm_model_answer_with_http_async_client_proxies (line 235) | def test_get_llm_model_answer_with_http_async_client_proxies(env_config,...

FILE: embedchain/tests/llm/test_query.py
  function app (line 12) | def app():
  function test_query (line 19) | def test_query(app):
  function test_query_config_app_passing (line 35) | def test_query_config_app_passing(mock_get_answer):
  function test_query_with_where_in_params (line 50) | def test_query_with_where_in_params(app):
  function test_query_with_where_in_query_config (line 65) | def test_query_with_where_in_query_config(app):

FILE: embedchain/tests/llm/test_together.py
  function together_llm_config (line 10) | def together_llm_config():
  function test_init_raises_value_error_without_api_key (line 17) | def test_init_raises_value_error_without_api_key(mocker):
  function test_get_llm_model_answer_raises_value_error_for_system_prompt (line 23) | def test_get_llm_model_answer_raises_value_error_for_system_prompt(toget...
  function test_get_llm_model_answer (line 30) | def test_get_llm_model_answer(together_llm_config, mocker):
  function test_get_llm_model_answer_with_token_usage (line 39) | def test_get_llm_model_answer_with_token_usage(together_llm_config, mock...
  function test_get_answer_mocked_together (line 65) | def test_get_answer_mocked_together(together_llm_config, mocker):

FILE: embedchain/tests/llm/test_vertex_ai.py
  function setup_database (line 12) | def setup_database():
  function vertexai_llm (line 17) | def vertexai_llm():
  function test_get_llm_model_answer (line 22) | def test_get_llm_model_answer(vertexai_llm):
  function test_get_llm_model_answer_with_token_usage (line 30) | def test_get_llm_model_answer_with_token_usage(vertexai_llm):
  function test_get_answer (line 56) | def test_get_answer(mock_chat_vertexai, vertexai_llm, caplog):
  function test_get_messages (line 69) | def test_get_messages(vertexai_llm):

FILE: embedchain/tests/loaders/test_audio.py
  function setup_audio_loader (line 15) | def setup_audio_loader(mocker):
  function test_initialization (line 33) | def test_initialization(setup_audio_loader):
  function test_load_data_from_url (line 42) | def test_load_data_from_url(setup_audio_loader):
  function test_load_data_from_file (line 73) | def test_load_data_from_file(setup_audio_loader):

FILE: embedchain/tests/loaders/test_csv.py
  function test_load_data (line 13) | def test_load_data(delimiter):
  function test_load_data_with_file_uri (line 52) | def test_load_data_with_file_uri(delimiter):
  function test_get_file_content (line 91) | def test_get_file_content(content):
  function test_get_file_content_http (line 98) | def test_get_file_content_http(content):

FILE: embedchain/tests/loaders/test_discourse.py
  function discourse_loader_config (line 8) | def discourse_loader_config():
  function discourse_loader (line 15) | def discourse_loader(discourse_loader_config):
  function test_discourse_loader_init_with_valid_config (line 19) | def test_discourse_loader_init_with_valid_config():
  function test_discourse_loader_init_with_missing_config (line 25) | def test_discourse_loader_init_with_missing_config():
  function test_discourse_loader_init_with_missing_domain (line 30) | def test_discourse_loader_init_with_missing_domain():
  function test_discourse_loader_check_query_with_valid_query (line 36) | def test_discourse_loader_check_query_with_valid_query(discourse_loader):
  function test_discourse_loader_check_query_with_empty_query (line 40) | def test_discourse_loader_check_query_with_empty_query(discourse_loader):
  function test_discourse_loader_check_query_with_invalid_query_type (line 45) | def test_discourse_loader_check_query_with_invalid_query_type(discourse_...
  function test_discourse_loader_load_post_with_valid_post_id (line 50) | def test_discourse_loader_load_post_with_valid_post_id(discourse_loader,...
  function test_discourse_loader_load_data_with_valid_query (line 69) | def test_discourse_loader_load_data_with_valid_query(discourse_loader, m...

FILE: embedchain/tests/loaders/test_docs_site.py
  function mock_requests_get (line 11) | def mock_requests_get():
  function docs_site_loader (line 17) | def docs_site_loader():
  function test_get_child_links_recursive (line 21) | def test_get_child_links_recursive(mock_requests_get, docs_site_loader):
  function test_get_child_links_recursive_status_not_200 (line 39) | def test_get_child_links_recursive_status_not_200(mock_requests_get, doc...
  function test_get_all_urls (line 49) | def test_get_all_urls(mock_requests_get, docs_site_loader):
  function test_load_data_from_url (line 69) | def test_load_data_from_url(mock_requests_get, docs_site_loader):
  function test_load_data_from_url_status_not_200 (line 91) | def test_load_data_from_url_status_not_200(mock_requests_get, docs_site_...
  function test_load_data (line 102) | def test_load_data(mock_requests_get, docs_site_loader):
  function test_if_response_status_not_200 (line 120) | def test_if_response_status_not_200(mock_requests_get, docs_site_loader):

FILE: embedchain/tests/loaders/test_docs_site_loader.py
  function test_load_data_gets_by_selectors_and_ignored_tags (line 93) | def test_load_data_gets_by_selectors_and_ignored_tags(selectee, ignored_...
  function test_load_data_gets_child_links_recursively (line 134) | def test_load_data_gets_child_links_recursively(loader, mocked_responses...
  function test_load_data_fails_to_fetch_website (line 184) | def test_load_data_fails_to_fetch_website(loader, mocked_responses, mock...
  function loader (line 209) | def loader():
  function mocked_responses (line 216) | def mocked_responses():

FILE: embedchain/tests/loaders/test_docx_file.py
  function mock_docx2txt_loader (line 10) | def mock_docx2txt_loader():
  function docx_file_loader (line 16) | def docx_file_loader():
  function test_load_data (line 20) | def test_load_data(mock_docx2txt_loader, docx_file_loader):

FILE: embedchain/tests/loaders/test_dropbox.py
  function setup_dropbox_loader (line 11) | def setup_dropbox_loader(mocker):
  function test_initialization (line 25) | def test_initialization(setup_dropbox_loader):
  function test_download_folder (line 31) | def test_download_folder(setup_dropbox_loader, mocker):
  function test_generate_dir_id_from_all_paths (line 44) | def test_generate_dir_id_from_all_paths(setup_dropbox_loader, mocker):
  function test_clean_directory (line 55) | def test_clean_directory(setup_dropbox_loader, mocker):
  function test_load_data (line 65) | def test_load_data(mocker, setup_dropbox_loader, tmp_path):

FILE: embedchain/tests/loaders/test_excel_file.py
  function excel_file_loader (line 10) | def excel_file_loader():
  function test_load_data (line 14) | def test_load_data(excel_file_loader):

FILE: embedchain/tests/loaders/test_github.py
  function mock_github_loader_config (line 7) | def mock_github_loader_config():
  function mock_github_loader (line 14) | def mock_github_loader(mocker, mock_github_loader_config):
  function test_github_loader_init (line 20) | def test_github_loader_init(mocker, mock_github_loader_config):
  function test_github_loader_init_empty_config (line 26) | def test_github_loader_init_empty_config(mocker):
  function test_github_loader_init_missing_token (line 31) | def test_github_loader_init_missing_token():

FILE: embedchain/tests/loaders/test_gmail.py
  function mock_beautifulsoup (line 7) | def mock_beautifulsoup(mocker):
  function gmail_loader (line 12) | def gmail_loader(mock_beautifulsoup):
  function test_load_data_file_not_found (line 16) | def test_load_data_file_not_found(gmail_loader, mocker):
  function test_load_data (line 23) | def test_load_data(gmail_loader, mocker):

FILE: embedchain/tests/loaders/test_google_drive.py
  function google_drive_folder_loader (line 7) | def google_drive_folder_loader():
  function test_load_data_invalid_drive_url (line 11) | def test_load_data_invalid_drive_url(google_drive_folder_loader):
  function test_load_data_incorrect_drive_url (line 22) | def test_load_data_incorrect_drive_url(google_drive_folder_loader):
  function test_load_data (line 31) | def test_load_data(google_drive_folder_loader):

FILE: embedchain/tests/loaders/test_json.py
  function test_load_data (line 8) | def test_load_data(mocker):
  function test_load_data_url (line 39) | def test_load_data_url(mocker):
  function test_load_data_invalid_string_content (line 77) | def test_load_data_invalid_string_content(mocker):
  function test_load_data_invalid_url (line 87) | def test_load_data_invalid_url(mocker):
  function test_load_data_from_json_string (line 100) | def test_load_data_from_json_string(mocker):

FILE: embedchain/tests/loaders/test_local_qna_pair.py
  function qna_pair_loader (line 9) | def qna_pair_loader():
  function test_load_data (line 13) | def test_load_data(qna_pair_loader):

FILE: embedchain/tests/loaders/test_local_text.py
  function text_loader (line 9) | def text_loader():
  function test_load_data (line 13) | def test_load_data(text_loader):

FILE: embedchain/tests/loaders/test_mdx.py
  function mdx_loader (line 10) | def mdx_loader():
  function test_load_data (line 14) | def test_load_data(mdx_loader):

FILE: embedchain/tests/loaders/test_mysql.py
  function mysql_loader (line 10) | def mysql_loader(mocker):
  function test_mysql_loader_initialization (line 23) | def test_mysql_loader_initialization(mysql_loader):
  function test_mysql_loader_invalid_config (line 29) | def test_mysql_loader_invalid_config():
  function test_mysql_loader_setup_loader_successful (line 34) | def test_mysql_loader_setup_loader_successful(mysql_loader):
  function test_mysql_loader_setup_loader_connection_error (line 39) | def test_mysql_loader_setup_loader_connection_error(mysql_loader, mocker):
  function test_mysql_loader_check_query_successful (line 45) | def test_mysql_loader_check_query_successful(mysql_loader):
  function test_mysql_loader_check_query_invalid (line 50) | def test_mysql_loader_check_query_invalid(mysql_loader):
  function test_mysql_loader_load_data_successful (line 55) | def test_mysql_loader_load_data_successful(mysql_loader, mocker):
  function test_mysql_loader_load_data_invalid_query (line 75) | def test_mysql_loader_load_data_invalid_query(mysql_loader):

FILE: embedchain/tests/loaders/test_notion.py
  function notion_loader (line 11) | def notion_loader():
  function test_load_data (line 16) | def test_load_data(notion_loader):

FILE: embedchain/tests/loaders/test_openapi.py
  function openapi_loader (line 7) | def openapi_loader():
  function test_load_data (line 11) | def test_load_data(openapi_loader, mocker):

FILE: embedchain/tests/loaders/test_pdf_file.py
  function test_load_data (line 5) | def test_load_data(loader, mocker):
  function test_load_data_fails_to_find_data (line 24) | def test_load_data_fails_to_find_data(loader, mocker):
  function loader (line 33) | def loader():

FILE: embedchain/tests/loaders/test_postgres.py
  function postgres_loader (line 10) | def postgres_loader(mocker):
  function test_postgres_loader_initialization (line 17) | def test_postgres_loader_initialization(postgres_loader):
  function test_postgres_loader_invalid_config (line 22) | def test_postgres_loader_invalid_config():
  function test_load_data (line 27) | def test_load_data(postgres_loader, monkeypatch):
  function test_load_data_exception (line 44) | def test_load_data_exception(postgres_loader, monkeypatch):
  function test_close_connection (line 57) | def test_close_connection(postgres_loader):

FILE: embedchain/tests/loaders/test_slack.py
  function slack_loader (line 7) | def slack_loader(mocker, monkeypatch):
  function test_slack_loader_initialization (line 18) | def test_slack_loader_initialization(slack_loader):
  function test_slack_loader_setup_loader (line 23) | def test_slack_loader_setup_loader(slack_loader):
  function test_slack_loader_check_query (line 29) | def test_slack_loader_check_query(slack_loader):
  function test_slack_loader_load_data (line 39) | def test_slack_loader_load_data(slack_loader, mocker):

FILE: embedchain/tests/loaders/test_web_page.py
  function web_page_loader (line 11) | def web_page_loader():
  function test_load_data (line 15) | def test_load_data(web_page_loader):
  function test_get_clean_content_excludes_unnecessary_info (line 50) | def test_get_clean_content_excludes_unnecessary_info(web_page_loader):
  function test_fetch_reference_links_success (line 121) | def test_fetch_reference_links_success(web_page_loader):
  function test_fetch_reference_links_failure (line 140) | def test_fetch_reference_links_failure(web_page_loader):

FILE: embedchain/tests/loaders/test_xml.py
  function test_load_data (line 38) | def test_load_data(xml: str):

FILE: embedchain/tests/loaders/test_youtube_video.py
  function youtube_video_loader (line 10) | def youtube_video_loader():
  function test_load_data (line 14) | def test_load_data(youtube_video_loader):
  function test_load_data_with_empty_doc (line 46) | def test_load_data_with_empty_doc(youtube_video_loader):

FILE: embedchain/tests/memory/test_chat_memory.py
  function chat_memory_instance (line 9) | def chat_memory_instance():
  function test_add_chat_memory (line 13) | def test_add_chat_memory(chat_memory_instance):
  function test_get (line 29) | def test_get(chat_memory_instance):
  function test_delete_chat_history (line 52) | def test_delete_chat_history(chat_memory_instance):
  function close_connection (line 89) | def close_connection(chat_memory_instance):

FILE: embedchain/tests/memory/test_memory_messages.py
  function test_ec_base_message (line 4) | def test_ec_base_message():
  function test_ec_base_chat_message (line 19) | def test_ec_base_chat_message():

FILE: embedchain/tests/models/test_data_type.py
  function test_subclass_types_in_data_type (line 9) | def test_subclass_types_in_data_type():
  function test_data_type_in_subclasses (line 24) | def test_data_type_in_subclasses():

FILE: embedchain/tests/telemetry/test_posthog.py
  class TestAnonymousTelemetry (line 7) | class TestAnonymousTelemetry:
    method test_init (line 8) | def test_init(self, mocker):
    method test_init_with_disabled_telemetry (line 19) | def test_init_with_disabled_telemetry(self, mocker):
    method test_get_user_id (line 25) | def test_get_user_id(self, mocker, tmpdir):
    method test_capture (line 36) | def test_capture(self, mocker):
    method test_capture_with_exception (line 55) | def test_capture_with_exception(self, mocker, caplog):

FILE: embedchain/tests/test_app.py
  function app (line 15) | def app():
  function test_app (line 21) | def test_app(app):
  class TestConfigForAppComponents (line 27) | class TestConfigForAppComponents:
    method test_constructor_config (line 28) | def test_constructor_config(self):
    method test_component_config (line 34) | def test_component_config(self):
  class TestAppFromConfig (line 41) | class TestAppFromConfig:
    method load_config_data (line 42) | def load_config_data(self, yaml_path):
    method test_from_chroma_config (line 46) | def test_from_chroma_config(self, mocker):
    method test_from_opensource_config (line 80) | def test_from_opensource_config(self, mocker):

FILE: embedchain/tests/test_client.py
  class TestClient (line 6) | class TestClient:
    method mock_requests_post (line 8) | def mock_requests_post(self, mocker):
    method test_valid_api_key (line 11) | def test_valid_api_key(self, mock_requests_post):
    method test_invalid_api_key (line 16) | def test_invalid_api_key(self, mock_requests_post):
    method test_update_valid_api_key (line 21) | def test_update_valid_api_key(self, mock_requests_post):
    method test_clear_api_key (line 27) | def test_clear_api_key(self, mock_requests_post):
    method test_save_api_key (line 33) | def test_save_api_key(self, mock_requests_post):
    method test_load_api_key_from_config (line 40) | def test_load_api_key_from_config(self, mocker):
    method test_load_invalid_api_key_from_config (line 45) | def test_load_invalid_api_key_from_config(self, mocker):
    method test_load_missing_api_key_from_config (line 50) | def test_load_missing_api_key_from_config(self, mocker):

FILE: embedchain/tests/test_factory.py
  class TestFactories (line 18) | class TestFactories:
    method test_llm_factory_create (line 26) | def test_llm_factory_create(self, provider_name, config_data, expected...
    method test_embedder_factory_create (line 46) | def test_embedder_factory_create(self, mocker, provider_name, config_d...
    method test_vectordb_factory_create (line 63) | def test_vectordb_factory_create(self, mocker, provider_name, config_d...

FILE: embedchain/tests/test_utils.py
  function test_all_config_yamls (line 27) | def test_all_config_yamls():

FILE: embedchain/tests/vectordb/test_chroma_db.py
  function chroma_db (line 16) | def chroma_db():
  function app_with_settings (line 21) | def app_with_settings():
  function cleanup_db (line 29) | def cleanup_db():
  function test_chroma_db_init_with_host_and_port (line 38) | def test_chroma_db_init_with_host_and_port(mock_client):
  function test_chroma_db_init_with_basic_auth (line 46) | def test_chroma_db_init_with_basic_auth(mock_client):
  function test_app_init_with_host_and_port (line 70) | def test_app_init_with_host_and_port(mock_client):
  function test_app_init_with_host_and_port_none (line 84) | def test_app_init_with_host_and_port_none(mock_client):
  function test_chroma_db_duplicates_throw_warning (line 93) | def test_chroma_db_duplicates_throw_warning(caplog):
  function test_chroma_db_duplicates_collections_no_warning (line 103) | def test_chroma_db_duplicates_collections_no_warning(caplog):
  function test_chroma_db_collection_init_with_default_collection (line 117) | def test_chroma_db_collection_init_with_default_collection():
  function test_chroma_db_collection_init_with_custom_collection (line 123) | def test_chroma_db_collection_init_with_custom_collection():
  function test_chroma_db_collection_set_collection_name (line 130) | def test_chroma_db_collection_set_collection_name():
  function test_chroma_db_collection_changes_encapsulated (line 137) | def test_chroma_db_collection_changes_encapsulated():
  function test_chroma_db_collection_collections_are_persistent (line 157) | def test_chroma_db_collection_collections_are_persistent():
  function test_chroma_db_collection_parallel_collections (line 172) | def test_chroma_db_collection_parallel_collections():
  function test_chroma_db_collection_ids_share_collections (line 205) | def test_chroma_db_collection_ids_share_collections():
  function test_chroma_db_collection_reset (line 224) | def test_chroma_db_collection_reset():

FILE: embedchain/tests/vectordb/test_elasticsearch_db.py
  class TestEsDB (line 11) | class TestEsDB(unittest.TestCase):
    method test_setUp (line 13) | def test_setUp(self, mock_client):
    method test_query (line 23) | def test_query(self, mock_client):
    method test_init_without_url (line 73) | def test_init_without_url(self):
    method test_init_with_invalid_es_config (line 83) | def test_init_with_invalid_es_config(self):

FILE: embedchain/tests/vectordb/test_lancedb.py
  function lancedb (line 15) | def lancedb():
  function app_with_settings (line 20) | def app_with_settings():
  function cleanup_db (line 28) | def cleanup_db():
  function test_lancedb_duplicates_throw_warning (line 37) | def test_lancedb_duplicates_throw_warning(caplog):
  function test_lancedb_duplicates_collections_no_warning (line 47) | def test_lancedb_duplicates_collections_no_warning(caplog):
  function test_lancedb_collection_init_with_default_collection (line 61) | def test_lancedb_collection_init_with_default_collection():
  function test_lancedb_collection_init_with_custom_collection (line 67) | def test_lancedb_collection_init_with_custom_collection():
  function test_lancedb_collection_set_collection_name (line 74) | def test_lancedb_collection_set_collection_name():
  function test_lancedb_collection_changes_encapsulated (line 81) | def test_lancedb_collection_changes_encapsulated():
  function test_lancedb_collection_collections_are_persistent (line 100) | def test_lancedb_collection_collections_are_persistent():
  function test_lancedb_collection_parallel_collections (line 115) | def test_lancedb_collection_parallel_collections():
  function test_lancedb_collection_ids_share_collections (line 149) | def test_lancedb_collection_ids_share_collections():
  function test_lancedb_collection_reset (line 172) | def test_lancedb_collection_reset():
  function generate_embeddings (line 210) | def generate_embeddings(dummy_embed, embed_size):

FILE: embedchain/tests/vectordb/test_pinecone.py
  function pinecone_pod_config (line 8) | def pinecone_pod_config():
  function pinecone_serverless_config (line 18) | def pinecone_serverless_config():
  function test_pinecone_init_without_config (line 30) | def test_pinecone_init_without_config(monkeypatch):
  function test_pinecone_init_with_config (line 42) | def test_pinecone_init_with_config(pinecone_pod_config, monkeypatch):
  class MockListIndexes (line 60) | class MockListIndexes:
    method names (line 61) | def names(self):
  class MockPineconeIndex (line 65) | class MockPineconeIndex:
    method __init__ (line 68) | def __init__(*args, **kwargs):
    method upsert (line 71) | def upsert(self, chunk, **kwargs):
    method delete (line 75) | def delete(self, *args, **kwargs):
    method query (line 78) | def query(self, *args, **kwargs):
    method fetch (line 98) | def fetch(self, *args, **kwargs):
    method describe_index_stats (line 114) | def describe_index_stats(self, *args, **kwargs):
  class MockPineconeClient (line 118) | class MockPineconeClient:
    method __init__ (line 119) | def __init__(*args, **kwargs):
    method list_indexes (line 122) | def list_indexes(self):
    method create_index (line 125) | def create_index(self, *args, **kwargs):
    method Index (line 128) | def Index(self, *args, **kwargs):
    method delete_index (line 131) | def delete_index(self, *args, **kwargs):
  class MockPinecone (line 135) | class MockPinecone:
    method __init__ (line 136) | def __init__(*args, **kwargs):
    method Pinecone (line 139) | def Pinecone(*args, **kwargs):
    method PodSpec (line 142) | def PodSpec(*args, **kwargs):
    method ServerlessSpec (line 145) | def ServerlessSpec(*args, **kwargs):
  class MockEmbedder (line 149) | class MockEmbedder:
    method embedding_fn (line 150) | def embedding_fn(self, documents):
  function test_setup_pinecone_index (line 154) | def test_setup_pinecone_index(pinecone_pod_config, pinecone_serverless_c...
  function test_get (line 174) | def test_get(monkeypatch):
  function test_add (line 188) | def test_add(monkeypatch):
  function test_query (line 206) | def test_query(monkeypatch):

FILE: embedchain/tests/vectordb/test_qdrant.py
  function mock_embedding_fn (line 15) | def mock_embedding_fn(texts: list[str]) -> list[list[float]]:
  class TestQdrantDB (line 20) | class TestQdrantDB(unittest.TestCase):
    method test_incorrect_config_throws_error (line 23) | def test_incorrect_config_throws_error(self):
    method test_initialize (line 29) | def test_initialize(self, qdrant_client_mock):
    method test_get (line 45) | def test_get(self, qdrant_client_mock):
    method test_add (line 65) | def test_add(self, uuid_mock, qdrant_client_mock):
    method test_query (line 103) | def test_query(self, qdrant_client_mock):
    method test_count (line 134) | def test_count(self, qdrant_client_mock):
    method test_reset (line 149) | def test_reset(self, qdrant_client_mock):

FILE: embedchain/tests/vectordb/test_weaviate.py
  function mock_embedding_fn (line 11) | def mock_embedding_fn(texts: list[str]) -> list[list[float]]:
  class TestWeaviateDb (line 16) | class TestWeaviateDb(unittest.TestCase):
    method test_incorrect_config_throws_error (line 17) | def test_incorrect_config_throws_error(self):
    method test_initialize (line 23) | def test_initialize(self, weaviate_mock):
    method test_get_or_create_db (line 95) | def test_get_or_create_db(self, weaviate_mock):
    method test_add (line 112) | def test_add(self, weaviate_mock):
    method test_query_without_where (line 146) | def test_query_without_where(self, weaviate_mock):
    method test_query_with_where (line 169) | def test_query_with_where(self, weaviate_mock):
    method test_reset (line 196) | def test_reset(self, weaviate_mock):
    method test_count (line 219) | def test_count(self, weaviate_mock):

FILE: embedchain/tests/vectordb/test_zilliz_db.py
  class TestZillizVectorDBConfig (line 14) | class TestZillizVectorDBConfig:
    method test_init_with_uri_and_token (line 16) | def test_init_with_uri_and_token(self):
    method test_init_without_uri (line 30) | def test_init_without_uri(self):
    method test_init_without_token (line 43) | def test_init_without_token(self):
  class TestZillizVectorDB (line 56) | class TestZillizVectorDB:
    method mock_config (line 59) | def mock_config(self, mocker):
    method test_zilliz_vector_db_setup (line 64) | def test_zilliz_vector_db_setup(self, mock_connect, mock_client, mock_...
  class TestZillizDBCollection (line 77) | class TestZillizDBCollection:
    method mock_config (line 80) | def mock_config(self, mocker):
    method mock_embedder (line 84) | def mock_embedder(self, mocker):
    method test_init_with_default_collection (line 88) | def test_init_with_default_collection(self):
    method test_init_with_custom_collection (line 98) | def test_init_with_custom_collection(self):
    method test_query (line 111) | def test_query(self, mock_connect, mock_client, mock_embedder, mock_co...

FILE: evaluation/evals.py
  function process_item (line 12) | def process_item(item_data):
  function main (line 45) | def main():

FILE: evaluation/metrics/llm_judge.py
  function evaluate_llm_judge (line 39) | def evaluate_llm_judge(question, gold_answer, generated_answer):
  function main (line 58) | def main():

FILE: evaluation/metrics/utils.py
  function simple_tokenize (line 42) | def simple_tokenize(text):
  function calculate_rouge_scores (line 49) | def calculate_rouge_scores(prediction: str, reference: str) -> Dict[str,...
  function calculate_bleu_scores (line 60) | def calculate_bleu_scores(prediction: str, reference: str) -> Dict[str, ...
  function calculate_bert_scores (line 80) | def calculate_bert_scores(prediction: str, reference: str) -> Dict[str, ...
  function calculate_meteor_score (line 90) | def calculate_meteor_score(prediction: str, reference: str) -> float:
  function calculate_sentence_similarity (line 99) | def calculate_sentence_similarity(prediction: str, reference: str) -> fl...
  function calculate_metrics (line 116) | def calculate_metrics(prediction: str, reference: str) -> Dict[str, float]:
  function aggregate_metrics (line 167) | def aggregate_metrics(

FILE: evaluation/run_experiments.py
  class Experiment (line 14) | class Experiment:
    method __init__ (line 15) | def __init__(self, technique_type, chunk_size):
    method run (line 19) | def run(self):
  function main (line 23) | def main():

FILE: evaluation/src/langmem.py
  function get_answer (line 25) | def get_answer(question, speaker_1_user_id, speaker_1_memories, speaker_...
  function prompt (line 42) | def prompt(state):
  class LangMem (line 59) | class LangMem:
    method __init__ (line 60) | def __init__(
    method add_memory (line 82) | def add_memory(self, message, config):
    method search_memory (line 85) | def search_memory(self, query, config):
  class LangMemManager (line 96) | class LangMemManager:
    method __init__ (line 97) | def __init__(self, dataset_path):
    method process_all_conversations (line 102) | def process_all_conversations(self, output_file_path):

FILE: evaluation/src/memzero/add.py
  class MemoryADD (line 45) | class MemoryADD:
    method __init__ (line 46) | def __init__(self, data_path=None, batch_size=2, is_graph=False):
    method load_data (line 61) | def load_data(self):
    method add_memory (line 66) | def add_memory(self, user_id, message, metadata, retries=3):
    method add_memories_for_speaker (line 80) | def add_memories_for_speaker(self, speaker, messages, timestamp, desc):
    method process_conversation (line 85) | def process_conversation(self, item, idx):
    method process_all_conversations (line 134) | def process_all_conversations(self, max_workers=10):

FILE: evaluation/src/memzero/search.py
  class MemorySearch (line 18) | class MemorySearch:
    method __init__ (line 19) | def __init__(self, output_path="results.json", top_k=10, filter_memori...
    method search_memory (line 37) | def search_memory(self, user_id, query, max_retries=3, retry_delay=1):
    method answer_question (line 90) | def answer_question(self, speaker_1_user_id, speaker_2_user_id, questi...
    method process_question (line 129) | def process_question(self, val, speaker_a_user_id, speaker_b_user_id):
    method process_data_file (line 171) | def process_data_file(self, file_path):
    method process_questions_parallel (line 198) | def process_questions_parallel(self, qa_list, speaker_a_user_id, speak...

FILE: evaluation/src/openai/predict.py
  class OpenAIPredict (line 55) | class OpenAIPredict:
    method __init__ (line 56) | def __init__(self, model="gpt-4o-mini"):
    method search_memory (line 61) | def search_memory(self, idx):
    method process_question (line 67) | def process_question(self, val, idx):
    method answer_question (line 90) | def answer_question(self, idx, question):
    method process_data_file (line 104) | def process_data_file(self, file_path, output_file_path):

FILE: evaluation/src/rag.py
  class RAGManager (line 26) | class RAGManager:
    method __init__ (line 27) | def __init__(self, data_path="dataset/locomo10_rag.json", chunk_size=5...
    method generate_response (line 34) | def generate_response(self, question, context):
    method clean_chat_history (line 68) | def clean_chat_history(self, chat_history):
    method calculate_embedding (line 75) | def calculate_embedding(self, document):
    method calculate_similarity (line 79) | def calculate_similarity(self, embedding1, embedding2):
    method search (line 82) | def search(self, query, chunks, embeddings, k=1):
    method create_chunks (line 114) | def create_chunks(self, chat_history, chunk_size=500):
    method process_all_conversations (line 144) | def process_all_conversations(self, output_file_path):

FILE: evaluation/src/zep/add.py
  class ZepAdd (line 13) | class ZepAdd:
    method __init__ (line 14) | def __init__(self, data_path=None):
    method load_data (line 21) | def load_data(self):
    method process_conversation (line 26) | def process_conversation(self, run_id, item, idx):
    method process_all_conversations (line 63) | def process_all_conversations(self, run_id):

FILE: evaluation/src/zep/search.py
  class ZepSearch (line 34) | class ZepSearch:
    method __init__ (line 35) | def __init__(self):
    method format_edge_date_range (line 40) | def format_edge_date_range(self, edge: EntityEdge) -> str:
    method compose_search_context (line 44) | def compose_search_context(self, edges: list[EntityEdge], nodes: list[...
    method search_memory (line 49) | def search_memory(self, run_id, idx, query, max_retries=3, retry_delay...
    method process_question (line 76) | def process_question(self, run_id, val, idx):
    method answer_question (line 99) | def answer_question(self, run_id, idx, question):
    method process_data_file (line 113) | def process_data_file(self, file_path, run_id, output_file_path):

FILE: examples/mem0-demo/app/api/chat/route.ts
  constant SYSTEM_HIGHLIGHT_PROMPT (line 10) | const SYSTEM_HIGHLIGHT_PROMPT = `
  function POST (line 70) | async function POST(req: Request) {

FILE: examples/mem0-demo/app/layout.tsx
  function RootLayout (line 20) | function RootLayout({

FILE: examples/mem0-demo/app/page.tsx
  function Page (line 3) | function Page() {

FILE: examples/mem0-demo/components/assistant-ui/memory-indicator.tsx
  type Memory (line 14) | type Memory = {
  type MemoryIndicatorProps (line 21) | interface MemoryIndicatorProps {
  function MemoryIndicator (line 25) | function MemoryIndicator({ memories }: MemoryIndicatorProps) {

FILE: examples/mem0-demo/components/assistant-ui/memory-ui.tsx
  type RetrievedMemory (line 5) | type RetrievedMemory = {
  type NewMemory (line 17) | type NewMemory = {
  type NewMemoryAnnotation (line 25) | type NewMemoryAnnotation = {
  type GetMemoryAnnotation (line 30) | type GetMemoryAnnotation = {
  type MemoryAnnotation (line 35) | type MemoryAnnotation = NewMemoryAnnotation | GetMemoryAnnotation;

FILE: examples/mem0-demo/components/assistant-ui/theme-aware-logo.tsx
  function ThemeAwareLogo (line 7) | function ThemeAwareLogo({

FILE: examples/mem0-demo/components/assistant-ui/thread-list.tsx
  type ThreadListProps (line 24) | interface ThreadListProps {

FILE: examples/mem0-demo/components/assistant-ui/thread.tsx
  type ThreadProps (line 50) | interface ThreadProps {
  type ThreadWelcomeProps (line 248) | interface ThreadWelcomeProps {
  type ThreadWelcomeSuggestionsProps (line 278) | interface ThreadWelcomeSuggestionsProps {
  type ComposerProps (line 325) | interface ComposerProps {

FILE: examples/mem0-demo/components/assistant-ui/tooltip-icon-button.tsx
  type TooltipIconButtonProps (line 14) | type TooltipIconButtonProps = ButtonProps & {

FILE: examples/mem0-demo/components/mem0/markdown.tsx
  type MarkdownRendererProps (line 15) | interface MarkdownRendererProps {

FILE: examples/mem0-demo/components/mem0/theme-aware-logo.tsx
  function ThemeAwareLogo (line 8) | function ThemeAwareLogo({

FILE: examples/mem0-demo/components/ui/badge.tsx
  type BadgeProps (line 26) | interface BadgeProps
  function Badge (line 30) | function Badge({ className, variant, ...props }: BadgeProps) {

FILE: examples/mem0-demo/components/ui/button.tsx
  type ButtonProps (line 37) | interface ButtonProps

FILE: examples/mem0-demo/lib/utils.ts
  function cn (line 4) | function cn(...inputs: ClassValue[]) {

FILE: examples/misc/diet_assistant_voice_cartesia.py
  function get_food_recommendation (line 46) | def get_food_recommendation(user_query: str, user_id):
  function initialize_food_memory (line 84) | def initialize_food_memory(user_id):

FILE: examples/misc/fitness_checker.py
  function store_user_preferences (line 28) | def store_user_preferences(conversation: list, user_id: str = USER_ID):
  function fitness_coach (line 34) | def fitness_coach(user_input: str, user_id: str = USER_ID):

FILE: examples/misc/healthcare_assistant_google_adk.py
  function save_patient_info (line 19) | def save_patient_info(information: str) -> dict:
  function retrieve_patient_info (line 37) | def retrieve_patient_info(query: str) -> str:
  function schedule_appointment (line 61) | def schedule_appointment(date: str, time: str, reason: str) -> dict:
  function call_agent_async (line 113) | async def call_agent_async(query, runner, user_id, session_id):
  function run_conversation (line 136) | async def run_conversation():
  function interactive_mode (line 168) | async def interactive_mode():

FILE: examples/misc/movie_recommendation_grok3.py
  function recommend_movie_with_memory (line 46) | def recommend_movie_with_memory(user_id: str, user_query: str):

FILE: examples/misc/multillm_memory.py
  function get_team_knowledge (line 51) | def get_team_knowledge(topic: str, project_id: str) -> str:
  function research_with_specialist (line 67) | def research_with_specialist(task: str, specialist: str, project_id: str...
  function show_team_knowledge (line 107) | def show_team_knowledge(project_id: str):
  function demo_research_team (line 132) | def demo_research_team():

FILE: examples/misc/personal_assistant_agno.py
  function chat_user (line 32) | def chat_user(user_input: str = None, user_id: str = "user_123", image_p...

FILE: examples/misc/personalized_search.py
  function setup_user_history (line 41) | def setup_user_history(user_id):
  function get_user_context (line 72) | def get_user_context(user_id, query):
  function create_personalized_search_agent (line 100) | def create_personalized_search_agent(user_context):
  function conduct_personalized_search (line 155) | def conduct_personalized_search(user_id, query):
  function store_search_interaction (line 204) | def store_search_interaction(user_id, original_query, agent_response):
  function personalized_search_agent (line 220) | def personalized_search_agent():

FILE: examples/misc/strands_agent_aws_elasticache_neptune.py
  function get_assistant_response (line 90) | def get_assistant_response(messages):
  function store_memory_tool (line 117) | def store_memory_tool(information: str, user_id: str = "user", category:...
  function store_graph_memory_tool (line 147) | def store_graph_memory_tool(information: str, user_id: str = "user", cat...
  function search_memory_tool (line 179) | def search_memory_tool(query: str, user_id: str = "user") -> str:
  function search_graph_memory_tool (line 225) | def search_graph_memory_tool(query: str, user_id: str = "user") -> str:
  function get_all_memories_tool (line 280) | def get_all_memories_tool(user_id: str = "user") -> str:
  function store_memory (line 318) | def store_memory(messages, user_id="alice", category="conversation"):
  function get_agent_metrics (line 338) | def get_agent_metrics(result):

FILE: examples/misc/study_buddy.py
  function upload_pdf (line 32) | def upload_pdf(pdf_url: str, user_id: str):
  function study_buddy (line 39) | async def study_buddy(user_id: str, topic: str, user_input: str):
  function main (line 64) | async def main():

FILE: examples/misc/test.py
  function search_memory (line 16) | def search_memory(query: str, user_id: str) -> str:
  function save_memory (line 25) | def save_memory(content: str, user_id: str) -> str:
  function chat_with_handoffs (line 62) | def chat_with_handoffs(user_input: str, user_id: str) -> str:

FILE: examples/misc/vllm_example.py
  function main (line 47) | def main():

FILE: examples/misc/voice_assistant_elevenlabs.py
  function store_user_preferences (line 41) | def store_user_preferences(user_id: str, conversation: list):
  function initialize_memory (line 47) | def initialize_memory():
  function record_audio (line 119) | def record_audio(filename="input.wav", record_seconds=5):
  function transcribe_whisper (line 146) | def transcribe_whisper(audio_path):
  function get_agent_response (line 159) | def get_agent_response(user_input):
  function speak_response (line 196) | def speak_response(text):
  function run_voice_agent (line 205) | def run_voice_agent():

FILE: examples/multiagents/llamaindex_learning_system.py
  class MultiAgentLearningSystem (line 35) | class MultiAgentLearningSystem:
    method __init__ (line 43) | def __init__(self, student_id: str):
    method _setup_agents (line 53) | def _setup_agents(self):
    method start_learning_session (line 148) | async def start_learning_session(self, topic: str, student_message: st...
    method get_learning_history (line 163) | async def get_learning_history(self) -> str:
  function run_learning_agent (line 179) | async def run_learning_agent():
  function main (line 205) | async def main():

FILE: examples/multimodal-demo/src/App.tsx
  function App (line 4) | function App() {

FILE: examples/multimodal-demo/src/components/api-settings-popup.tsx
  function ApiSettingsPopup (line 9) | function ApiSettingsPopup(props: { isOpen: boolean, setIsOpen: Dispatch<...

FILE: examples/multimodal-demo/src/components/ui/badge.tsx
  type BadgeProps (line 26) | interface BadgeProps
  function Badge (line 30) | function Badge({ className, variant, ...props }: BadgeProps) {

FILE: examples/multimodal-demo/src/components/ui/button.tsx
  type ButtonProps (line 37) | interface ButtonProps

FILE: examples/multimodal-demo/src/components/ui/input.tsx
  type InputProps (line 5) | interface InputProps

FILE: examples/multimodal-demo/src/constants/messages.ts
  constant WELCOME_MESSAGE (line 3) | const WELCOME_MESSAGE: Message = {
  constant INVALID_CONFIG_MESSAGE (line 10) | const INVALID_CONFIG_MESSAGE: Message = {
  constant ERROR_MESSAGE (line 17) | const ERROR_MESSAGE: Message = {
  constant AI_MODELS (line 24) | const AI_MODELS = {
  type Provider (line 31) | type Provider = keyof typeof AI_MODELS;

FILE: examples/multimodal-demo/src/contexts/GlobalContext.tsx
  type GlobalContextType (line 9) | interface GlobalContextType {

FILE: examples/multimodal-demo/src/hooks/useAuth.ts
  type UseAuthReturn (line 4) | interface UseAuthReturn {

FILE: examples/multimodal-demo/src/hooks/useChat.ts
  type UseChatProps (line 7) | interface UseChatProps {
  type UseChatReturn (line 14) | interface UseChatReturn {
  type MessageContent (line 21) | type MessageContent = string | {
  type PromptMessage (line 28) | interface PromptMessage {

FILE: examples/multimodal-demo/src/hooks/useFileHandler.ts
  type UseFileHandlerReturn (line 5) | interface UseFileHandlerReturn {

FILE: examples/multimodal-demo/src/libs/utils.ts
  function cn (line 4) | function cn(...inputs: ClassValue[]) {

FILE: examples/multimodal-demo/src/page.tsx
  function Home (line 6) | function Home() {

FILE: examples/multimodal-demo/src/pages/home.tsx
  function Home (line 10) | function Home() {

FILE: examples/multimodal-demo/src/types.ts
  type Memory (line 2) | interface Memory {
  type Message (line 9) | interface Message {
  type FileInfo (line 18) | interface FileInfo {

FILE: examples/multimodal-demo/useChat.ts
  type UseChatProps (line 7) | interface UseChatProps {
  type UseChatReturn (line 14) | interface UseChatReturn {
  type MessageContent (line 21) | type MessageContent = string | {
  type PromptMessage (line 28) | interface PromptMessage {

FILE: examples/openai-inbuilt-tools/index.js
  function run (line 11) | async function run() {
  function main (line 37) | async function main(memory = false) {
  function addSampleMemories (line 66) | async function addSampleMemories() {

FILE: examples/vercel-ai-sdk-chat-app/src/App.tsx
  function App (line 4) | function App() {

FILE: examples/vercel-ai-sdk-chat-app/src/components/api-settings-popup.tsx
  function ApiSettingsPopup (line 9) | function ApiSettingsPopup(props: { isOpen: boolean, setIsOpen: Dispatch<...

FILE: examples/vercel-ai-sdk-chat-app/src/components/ui/badge.tsx
  type BadgeProps (line 26) | interface BadgeProps
  function Badge (line 30) | function Badge({ className, variant, ...props }: BadgeProps) {

FILE: examples/vercel-ai-sdk-chat-app/src/components/ui/button.tsx
  type ButtonProps (line 37) | interface ButtonProps

FILE: examples/vercel-ai-sdk-chat-app/src/components/ui/input.tsx
  type InputProps (line 5) | interface InputProps

FILE: examples/vercel-ai-sdk-chat-app/src/constants/messages.ts
  constant WELCOME_MESSAGE (line 3) | const WELCOME_MESSAGE: Message = {
  constant INVALID_CONFIG_MESSAGE (line 10) | const INVALID_CONFIG_MESSAGE: Message = {
  constant ERROR_MESSAGE (line 17) | const ERROR_MESSAGE: Message = {
  constant AI_MODELS (line 24) | const AI_MODELS = {
  type Provider (line 31) | type Provider = keyof typeof AI_MODELS;

FILE: examples/vercel-ai-sdk-chat-app/src/contexts/GlobalContext.tsx
  type GlobalContextType (line 9) | interface GlobalContextType {

FILE: examples/vercel-ai-sdk-chat-app/src/hooks/useAuth.ts
  type UseAuthReturn (line 4) | interface UseAuthReturn {

FILE: examples/vercel-ai-sdk-chat-app/src/hooks/useChat.ts
  type UseChatProps (line 7) | interface UseChatProps {
  type UseChatReturn (line 14) | interface UseChatReturn {
  type MemoryResponse (line 21) | interface MemoryResponse {
  type MessageContent (line 28) | type MessageContent =
  type PromptMessage (line 33) | interface PromptMessage {

FILE: examples/vercel-ai-sdk-chat-app/src/hooks/useFileHandler.ts
  type UseFileHandlerReturn (line 5) | interface UseFileHandlerReturn {

FILE: examples/vercel-ai-sdk-chat-app/src/libs/utils.ts
  function cn (line 4) | function cn(...inputs: ClassValue[]) {

FILE: examples/vercel-ai-sdk-chat-app/src/page.tsx
  function Home (line 6) | function Home() {

FILE: examples/vercel-ai-sdk-chat-app/src/pages/home.tsx
  function Home (line 10) | function Home() {

FILE: examples/vercel-ai-sdk-chat-app/src/types.ts
  type Memory (line 2) | interface Memory {
  type Message (line 9) | interface Message {
  type FileInfo (line 18) | interface FileInfo {

FILE: examples/yt-assistant-chrome/src/background.js
  function saveConfig (line 114) | async function saveConfig(newConfig) {
  function validateApiKey (line 135) | async function validateApiKey(apiKey) {
  function sendChatRequest (line 157) | async function sendChatRequest(messages, model) {
  function mem0Integration (line 245) | function mem0Integration() {

FILE: examples/yt-assistant-chrome/src/content.js
  function initializeMem0AI (line 19) | async function initializeMem0AI() {
  function getYouTubeVideoId (line 99) | function getYouTubeVideoId(url) {
  function fetchAndLogTranscript (line 106) | async function fetchAndLogTranscript() {
  function init (line 163) | function init() {
  function extractVideoContext (line 184) | function extractVideoContext() {
  function injectChatInterface (line 220) | function injectChatInterface() {
  function setupEventListeners (line 293) | function setupEventListeners() {
  function addMessage (line 375) | function addMessage(role, text, isStreaming = false) {
  function formatStreamingText (line 415) | function formatStreamingText(text) {
  function sendMessage (line 435) | async function sendMessage() {
  function prepareMessagesWithContext (line 525) | function prepareMessagesWithContext() {
  function loadMemories (line 613) | async function loadMemories() {

FILE: examples/yt-assistant-chrome/src/options.js
  function init (line 19) | async function init() {
  function initializeMem0AI (line 60) | async function initializeMem0AI() {
  function loadConfig (line 85) | async function loadConfig() {
  function saveOptions (line 111) | async function saveOptions() {
  function resetToDefaults (line 157) | function resetToDefaults() {
  function fetchMemories (line 179) | async function fetchMemories() {
  function displayMemories (line 198) | function displayMemories(memories) {
  function showMemoriesError (line 241) | function showMemoriesError(message) {
  function deleteAllMemories (line 250) | async function deleteAllMemories() {
  function editMemory (line 276) | function editMemory(memory) {
  function closeEditModal (line 284) | function closeEditModal() {
  function saveMemory (line 290) | async function saveMemory() {
  function deleteMemory (line 318) | async function deleteMemory(memoryId) {
  function showStatus (line 343) | function showStatus(message, type = "info") {
  function addMemory (line 371) | async function addMemory() {
  function showMemoryResult (line 436) | function showMemoryResult(message, type) {

FILE: examples/yt-assistant-chrome/src/popup.js
  function init (line 6) | async function init() {
  function toggleChat (line 41) | function toggleChat() {
  function openOptions (line 83) | function openOptions() {
  function togglePasswordVisibility (line 102) | function togglePasswordVisibility(inputId) {
  function saveApiKey (line 120) | async function saveApiKey() {
  function saveMem0ApiKey (line 146) | async function saveMem0ApiKey() {
  function loadConfig (line 172) | async function loadConfig() {
  function showStatus (line 216) | function showStatus(message, type = "info") {

FILE: mem0-ts/src/client/mem0.ts
  class APIError (line 23) | class APIError extends Error {
    method constructor (line 24) | constructor(message: string) {
  type ClientOptions (line 30) | interface ClientOptions {
  class MemoryClient (line 39) | class MemoryClient {
    method _validateApiKey (line 50) | _validateApiKey(): any {
    method _validateOrgProject (line 62) | _validateOrgProject(): void {
    method constructor (line 84) | constructor(options: ClientOptions) {
    method _initializeClient (line 112) | private async _initializeClient() {
    method _captureEvent (line 139) | private _captureEvent(methodName: string, args: any[]) {
    method _fetchWithErrorHandling (line 149) | async _fetchWithErrorHandling(url: string, options: any): Promise<any> {
    method _preparePayload (line 166) | _preparePayload(messages: Array<Message>, options: MemoryOptions): obj...
    method _prepareParams (line 172) | _prepareParams(options: MemoryOptions): object {
    method ping (line 178) | async ping(): Promise<void> {
    method add (line 216) | async add(
    method update (line 256) | async update(
    method get (line 299) | async get(memoryId: string): Promise<Memory> {
    method getAll (line 310) | async getAll(options?: SearchOptions): Promise<Array<Memory>> {
    method search (line 358) | async search(
    method delete (line 393) | async delete(memoryId: string): Promise<{ message: string }> {
    method deleteAll (line 405) | async deleteAll(options: MemoryOptions = {}): Promise<{ message: strin...
    method history (line 434) | async history(memoryId: string): Promise<Array<MemoryHistory>> {
    method users (line 446) | async users(): Promise<AllUsers> {
    method deleteUser (line 477) | async deleteUser(data: {
    method deleteUsers (line 496) | async deleteUsers(
    method batchUpdate (line 578) | async batchUpdate(memories: Array<MemoryUpdateBody>): Promise<string> {
    method batchDelete (line 596) | async batchDelete(memories: Array<string>): Promise<string> {
    method getProject (line 613) | async getProject(options: ProjectOptions): Promise<ProjectResponse> {
    method updateProject (line 638) | async updateProject(
    method getWebhooks (line 662) | async getWebhooks(data?: { projectId?: string }): Promise<Array<Webhoo...
    method createWebhook (line 675) | async createWebhook(webhook: WebhookCreatePayload): Promise<Webhook> {
    method updateWebhook (line 694) | async updateWebhook(
    method deleteWebhook (line 714) | async deleteWebhook(data: {
    method feedback (line 730) | async feedback(data: FeedbackPayload): Promise<{ message: string }> {
    method createMemoryExport (line 745) | async createMemoryExport(
    method getMemoryExport (line 772) | async getMemoryExport(
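The `MemoryClient` surface above pairs message payloads with scoping options (`_preparePayload(messages, options)`, `_prepareParams(options)`). A minimal sketch of that payload assembly; the field names (`user_id`, `agent_id`) and the merge strategy are assumptions for illustration, not the client's actual wire format:

```typescript
// Mirrors the Message shape from mem0.types.ts (role + content).
interface Message {
  role: "user" | "assistant";
  content: string;
}

// Illustrative subset of MemoryOptions; real option names may differ.
interface MemoryOptions {
  user_id?: string;
  agent_id?: string;
}

// Plausible sketch of what _preparePayload does: merge the conversation
// with the scoping options into a single request body.
function preparePayload(messages: Message[], options: MemoryOptions): object {
  return { messages, ...options };
}

const payload = preparePayload(
  [
    { role: "user", content: "I prefer dark roast coffee." },
    { role: "assistant", content: "Noted!" },
  ],
  { user_id: "alice" },
);
console.log(JSON.stringify(payload));
```

The same shape would back `add` and, flattened into query parameters, `search`/`getAll`.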

FILE: mem0-ts/src/client/mem0.types.ts
  type Common (line 1) | interface Common {
  type MemoryOptions (line 6) | interface MemoryOptions {
  type ProjectOptions (line 37) | interface ProjectOptions {
  type OutputFormat (line 41) | enum OutputFormat {
  type API_VERSION (line 46) | enum API_VERSION {
  type Feedback (line 51) | enum Feedback {
  type MultiModalMessages (line 57) | interface MultiModalMessages {
  type Messages (line 64) | interface Messages {
  type Message (line 69) | interface Message extends Messages {}
  type MemoryHistory (line 71) | interface MemoryHistory {
  type SearchOptions (line 84) | interface SearchOptions extends MemoryOptions {
  type Event (line 97) | enum Event {
  type MemoryData (line 104) | interface MemoryData {
  type Memory (line 108) | interface Memory {
  type MemoryUpdateBody (line 128) | interface MemoryUpdateBody {
  type User (line 133) | interface User {
  type AllUsers (line 143) | interface AllUsers {
  type ProjectResponse (line 150) | interface ProjectResponse {
  type custom_categories (line 156) | interface custom_categories {
  type PromptUpdatePayload (line 160) | interface PromptUpdatePayload {
  type WebhookEvent (line 173) | enum WebhookEvent {
  type Webhook (line 180) | interface Webhook {
  type WebhookCreatePayload (line 191) | interface WebhookCreatePayload {
  type WebhookUpdatePayload (line 197) | interface WebhookUpdatePayload {
  type FeedbackPayload (line 204) | interface FeedbackPayload {
  type CreateMemoryExportPayload (line 210) | interface CreateMemoryExportPayload extends Common {
  type GetMemoryExportPayload (line 216) | interface GetMemoryExportPayload extends Common {

FILE: mem0-ts/src/client/telemetry.ts
  constant MEM0_TELEMETRY (line 7) | let MEM0_TELEMETRY = true;
  constant POSTHOG_API_KEY (line 11) | const POSTHOG_API_KEY = "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX";
  constant POSTHOG_HOST (line 12) | const POSTHOG_HOST = "https://us.i.posthog.com/i/v0/e/";
  function generateHash (line 15) | function generateHash(input: string): string {
  class UnifiedTelemetry (line 22) | class UnifiedTelemetry implements TelemetryClient {
    method constructor (line 26) | constructor(projectApiKey: string, host: string) {
    method captureEvent (line 31) | async captureEvent(distinctId: string, eventName: string, properties =...
    method shutdown (line 66) | async shutdown() {
  function captureClientEvent (line 73) | async function captureClientEvent(

FILE: mem0-ts/src/client/telemetry.types.ts
  type TelemetryClient (line 1) | interface TelemetryClient {
  type TelemetryInstance (line 10) | interface TelemetryInstance {
  type TelemetryEventData (line 19) | interface TelemetryEventData {
  type TelemetryOptions (line 29) | interface TelemetryOptions {

FILE: mem0-ts/src/client/tests/helpers.ts
  type MockResponse (line 8) | interface MockResponse {
  function createMockFetch (line 19) | function createMockFetch(
  type MockMemory (line 64) | interface MockMemory {
  function createMockMemory (line 82) | function createMockMemory(
  type MockMemoryHistory (line 97) | interface MockMemoryHistory {
  function createMockMemoryHistory (line 110) | function createMockMemoryHistory(
  type MockUser (line 128) | interface MockUser {
  function createMockUser (line 138) | function createMockUser(overrides: Partial<MockUser> = {}): MockUser {
  type MockAllUsers (line 151) | interface MockAllUsers {
  function createMockAllUsers (line 158) | function createMockAllUsers(users: MockUser[] = []): MockAllUsers {
  constant TEST_API_KEY (line 169) | const TEST_API_KEY = "test-api-key-12345";
  constant TEST_HOST (line 170) | const TEST_HOST = "https://api.test.mem0.ai";
  constant TEST_ORG_ID (line 171) | const TEST_ORG_ID = "org_test_123";
  constant TEST_PROJECT_ID (line 172) | const TEST_PROJECT_ID = "proj_test_456";
  constant MOCK_PING_RESPONSE (line 174) | const MOCK_PING_RESPONSE = {
  function createStandardMockResponses (line 185) | function createStandardMockResponses(): Map<string, MockResponse> {

FILE: mem0-ts/src/client/tests/integration/batch.test.ts
  constant TEST_USER_ID (line 20) | const TEST_USER_ID = `integration-batch-${randomUUID()}`;

FILE: mem0-ts/src/client/tests/integration/crud.test.ts
  constant TEST_USER_ID (line 22) | const TEST_USER_ID = `integration-crud-${randomUUID()}`;

FILE: mem0-ts/src/client/tests/integration/global-setup.ts
  function globalSetup (line 9) | async function globalSetup() {

FILE: mem0-ts/src/client/tests/integration/global-teardown.ts
  function globalTeardown (line 9) | async function globalTeardown() {

FILE: mem0-ts/src/client/tests/integration/helpers.ts
  constant API_KEY (line 14) | const API_KEY = process.env.MEM0_API_KEY;
  function createTestClient (line 22) | function createTestClient(): MemoryClient {
  function withRetry (line 30) | async function withRetry<T>(
  function waitForMemories (line 59) | async function waitForMemories(
  function waitForSearchResults (line 86) | async function waitForSearchResults(
  function suppressTelemetryNoise (line 110) | function suppressTelemetryNoise(): () => void {
  function seedTestMemories (line 138) | async function seedTestMemories(
  function cleanupTestUser (line 182) | async function cleanupTestUser(
  function fullProjectCleanup (line 206) | async function fullProjectCleanup(client: MemoryClient): Promise<void> {

FILE: mem0-ts/src/client/tests/integration/management.test.ts
  constant TEST_USER_ID (line 22) | const TEST_USER_ID = `integration-mgmt-${randomUUID()}`;

FILE: mem0-ts/src/client/tests/integration/search.test.ts
  constant TEST_USER_ID (line 21) | const TEST_USER_ID = `integration-search-${randomUUID()}`;

FILE: mem0-ts/src/client/tests/memoryClient.users.test.ts
  function createClientWithMockedAxios (line 46) | function createClientWithMockedAxios() {

FILE: mem0-ts/src/client/tests/memoryClient.webhooks.test.ts
  function webhookMock (line 19) | function webhookMock(extra?: Map<string, { status: number; body: unknown...
  function createClient (line 23) | function createClient() {
  function callCreate (line 50) | async function callCreate() {
  function callUpdate (line 122) | async function callUpdate() {

FILE: mem0-ts/src/client/tests/setup.ts
  function setupMockFetch (line 15) | function setupMockFetch(
  function installConsoleSuppression (line 32) | function installConsoleSuppression(): void {
  function findFetchCall (line 65) | function findFetchCall(
  function getFetchBody (line 77) | function getFetchBody(

FILE: mem0-ts/src/common/exceptions.ts
  type MemoryErrorOptions (line 23) | interface MemoryErrorOptions {
  class MemoryError (line 35) | class MemoryError extends Error {
    method constructor (line 41) | constructor(
  class AuthenticationError (line 59) | class AuthenticationError extends MemoryError {
    method constructor (line 60) | constructor(
  class RateLimitError (line 71) | class RateLimitError extends MemoryError {
    method constructor (line 72) | constructor(
  class ValidationError (line 83) | class ValidationError extends MemoryError {
    method constructor (line 84) | constructor(
  class MemoryNotFoundError (line 95) | class MemoryNotFoundError extends MemoryError {
    method constructor (line 96) | constructor(
  class NetworkError (line 107) | class NetworkError extends MemoryError {
    method constructor (line 108) | constructor(
  class ConfigurationError (line 119) | class ConfigurationError extends MemoryError {
    method constructor (line 120) | constructor(
  class MemoryQuotaExceededError (line 131) | class MemoryQuotaExceededError extends MemoryError {
    method constructor (line 132) | constructor(
  type MemoryErrorConstructor (line 144) | type MemoryErrorConstructor = new (
  constant HTTP_STATUS_TO_EXCEPTION (line 150) | const HTTP_STATUS_TO_EXCEPTION: Record<number, MemoryErrorConstructor> =
  constant HTTP_SUGGESTIONS (line 167) | const HTTP_SUGGESTIONS: Record<number, string> = {
  function createExceptionFromResponse (line 191) | function createExceptionFromResponse(
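`exceptions.ts` pairs an error hierarchy with an `HTTP_STATUS_TO_EXCEPTION` table so `createExceptionFromResponse` can select a subclass from a status code. A reduced sketch of that pattern; the specific status-to-class choices here are illustrative guesses, not the module's exact table:

```typescript
class MemoryError extends Error {}
class AuthenticationError extends MemoryError {}
class RateLimitError extends MemoryError {}
class NetworkError extends MemoryError {}

type MemoryErrorConstructor = new (message: string) => MemoryError;

// Guessed mapping for illustration; the real table lives in exceptions.ts.
const HTTP_STATUS_TO_EXCEPTION: Record<number, MemoryErrorConstructor> = {
  401: AuthenticationError,
  429: RateLimitError,
};

// Pick the mapped subclass, falling back to a generic error.
function createExceptionFromResponse(status: number, message: string): MemoryError {
  const Ctor = HTTP_STATUS_TO_EXCEPTION[status] ?? NetworkError;
  return new Ctor(message);
}

console.log(createExceptionFromResponse(429, "slow down").constructor.name);
```

Callers can then branch with `instanceof` instead of re-inspecting raw status codes.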

FILE: mem0-ts/src/community/src/integrations/langchain/mem0.ts
  type ClientOptions (line 102) | interface ClientOptions {
  type Mem0MemoryInput (line 115) | interface Mem0MemoryInput extends BaseChatMemoryInput {
  class Mem0Memory (line 150) | class Mem0Memory extends BaseChatMemory implements Mem0MemoryInput {
    method constructor (line 171) | constructor(fields: Mem0MemoryInput) {
    method memoryKeys (line 207) | get memoryKeys(): string[] {
    method loadMemoryVariables (line 216) | async loadMemoryVariables(values: InputValues): Promise<MemoryVariable...
    method saveContext (line 264) | async saveContext(
    method clear (line 304) | async clear(): Promise<void> {

FILE: mem0-ts/src/oss/examples/basic.ts
  function demoDefaultConfig (line 7) | async function demoDefaultConfig() {
  function run_examples (line 14) | async function run_examples() {
  function runTests (line 21) | async function runTests(memory: Memory) {
  function demoLocalMemory (line 119) | async function demoLocalMemory() {
  function demoMemoryStore (line 149) | async function demoMemoryStore() {
  function demoPGVector (line 181) | async function demoPGVector() {
  function demoQdrant (line 220) | async function demoQdrant() {
  function demoRedis (line 260) | async function demoRedis() {
  function demoGraphMemory (line 295) | async function demoGraphMemory() {
  function main (line 375) | async function main() {

FILE: mem0-ts/src/oss/examples/llms/mistral-example.ts
  function testMistral (line 7) | async function testMistral() {

FILE: mem0-ts/src/oss/examples/local-llms.ts
  function chatWithMemories (line 28) | async function chatWithMemories(message: string, userId = "default_user") {
  function main (line 58) | async function main() {

FILE: mem0-ts/src/oss/examples/utils/test-utils.ts
  function runTests (line 3) | async function runTests(memory: Memory) {

FILE: mem0-ts/src/oss/examples/vector-stores/azure-ai-search.ts
  function demoAzureAISearch (line 4) | async function demoAzureAISearch() {

FILE: mem0-ts/src/oss/examples/vector-stores/index.ts
  function main (line 12) | async function main() {

FILE: mem0-ts/src/oss/examples/vector-stores/memory.ts
  function demoMemoryStore (line 4) | async function demoMemoryStore() {

FILE: mem0-ts/src/oss/examples/vector-stores/pgvector.ts
  function demoPGVector (line 4) | async function demoPGVector() {

FILE: mem0-ts/src/oss/examples/vector-stores/qdrant.ts
  function demoQdrant (line 4) | async function demoQdrant() {

FILE: mem0-ts/src/oss/examples/vector-stores/redis.ts
  function demoRedis (line 4) | async function demoRedis() {

FILE: mem0-ts/src/oss/examples/vector-stores/supabase.ts
  function demoSupabase (line 8) | async function demoSupabase() {

FILE: mem0-ts/src/oss/src/config/defaults.ts
  constant DEFAULT_MEMORY_CONFIG (line 3) | const DEFAULT_MEMORY_CONFIG: MemoryConfig = {

FILE: mem0-ts/src/oss/src/config/manager.ts
  class ConfigManager (line 4) | class ConfigManager {
    method mergeConfig (line 5) | static mergeConfig(userConfig: Partial<MemoryConfig> = {}): MemoryConf...

FILE: mem0-ts/src/oss/src/embeddings/azure.ts
  class AzureOpenAIEmbedder (line 5) | class AzureOpenAIEmbedder implements Embedder {
    method constructor (line 10) | constructor(config: EmbeddingConfig) {
    method embed (line 26) | async embed(text: string): Promise<number[]> {
    method embedBatch (line 34) | async embedBatch(texts: string[]): Promise<number[][]> {

FILE: mem0-ts/src/oss/src/embeddings/base.ts
  type Embedder (line 1) | interface Embedder {
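Every embedder file below exposes the same two methods, so the `Embedder` interface in `base.ts` presumably reduces to `embed` and `embedBatch`. A toy deterministic implementation of that contract; the character-code "embedding" is purely illustrative, where real implementations (`OpenAIEmbedder`, `OllamaEmbedder`, ...) call a model API:

```typescript
interface Embedder {
  embed(text: string): Promise<number[]>;
  embedBatch(texts: string[]): Promise<number[][]>;
}

// Toy embedder: folds character codes into a fixed-size vector.
class ToyEmbedder implements Embedder {
  constructor(private dims: number = 3) {}

  async embed(text: string): Promise<number[]> {
    const vec: number[] = new Array(this.dims).fill(0);
    for (let i = 0; i < text.length; i++) {
      vec[i % this.dims] += text.charCodeAt(i);
    }
    return vec;
  }

  async embedBatch(texts: string[]): Promise<number[][]> {
    return Promise.all(texts.map((t) => this.embed(t)));
  }
}

new ToyEmbedder().embed("hello").then((v) => console.log(v.length)); // 3
```

Anything satisfying this pair of methods can be dropped into the memory pipeline in place of a hosted embedding model.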

FILE: mem0-ts/src/oss/src/embeddings/google.ts
  class GoogleEmbedder (line 5) | class GoogleEmbedder implements Embedder {
    method constructor (line 10) | constructor(config: EmbeddingConfig) {
    method embed (line 18) | async embed(text: string): Promise<number[]> {
    method embedBatch (line 27) | async embedBatch(texts: string[]): Promise<number[][]> {

FILE: mem0-ts/src/oss/src/embeddings/langchain.ts
  class LangchainEmbedder (line 5) | class LangchainEmbedder implements Embedder {
    method constructor (line 9) | constructor(config: EmbeddingConfig) {
    method embed (line 30) | async embed(text: string): Promise<number[]> {
    method embedBatch (line 40) | async embedBatch(texts: string[]): Promise<number[][]> {

FILE: mem0-ts/src/oss/src/embeddings/lmstudio.ts
  constant DEFAULT_BASE_URL (line 5) | const DEFAULT_BASE_URL = "http://localhost:1234/v1";
  constant DEFAULT_MODEL (line 6) | const DEFAULT_MODEL =
  constant DEFAULT_LMSTUDIO_API_KEY (line 8) | const DEFAULT_LMSTUDIO_API_KEY = "lm-studio";
  class LMStudioEmbedder (line 10) | class LMStudioEmbedder implements Embedder {
    method constructor (line 14) | constructor(config: EmbeddingConfig) {
    method embed (line 21) | async embed(text: string): Promise<number[]> {
    method embedBatch (line 37) | async embedBatch(texts: string[]): Promise<number[][]> {

FILE: mem0-ts/src/oss/src/embeddings/ollama.ts
  class OllamaEmbedder (line 6) | class OllamaEmbedder implements Embedder {
    method constructor (line 13) | constructor(config: EmbeddingConfig) {
    method embed (line 24) | async embed(text: string): Promise<number[]> {
    method embedBatch (line 44) | async embedBatch(texts: string[]): Promise<number[][]> {
    method normalizeModelName (line 49) | private static normalizeModelName(name: string): string {
    method ensureModelExists (line 53) | private async ensureModelExists(): Promise<boolean> {

FILE: mem0-ts/src/oss/src/embeddings/openai.ts
  class OpenAIEmbedder (line 5) | class OpenAIEmbedder implements Embedder {
    method constructor (line 10) | constructor(config: EmbeddingConfig) {
    method embed (line 19) | async embed(text: string): Promise<number[]> {
    method embedBatch (line 27) | async embedBatch(texts: string[]): Promise<number[][]> {

FILE: mem0-ts/src/oss/src/graphs/configs.ts
  type Neo4jConfig (line 3) | interface Neo4jConfig {
  type GraphStoreConfig (line 9) | interface GraphStoreConfig {
  function validateNeo4jConfig (line 16) | function validateNeo4jConfig(config: Neo4jConfig): void {
  function validateGraphStoreConfig (line 23) | function validateGraphStoreConfig(config: GraphStoreConfig): void {

FILE: mem0-ts/src/oss/src/graphs/tools.ts
  type GraphToolParameters (line 3) | interface GraphToolParameters {
  type GraphEntitiesParameters (line 11) | interface GraphEntitiesParameters {
  type GraphRelationsParameters (line 18) | interface GraphRelationsParameters {
  constant UPDATE_MEMORY_TOOL_GRAPH (line 78) | const UPDATE_MEMORY_TOOL_GRAPH = {
  constant ADD_MEMORY_TOOL_GRAPH (line 109) | const ADD_MEMORY_TOOL_GRAPH = {
  constant NOOP_TOOL (line 153) | const NOOP_TOOL = {
  constant RELATIONS_TOOL (line 167) | const RELATIONS_TOOL = {
  constant EXTRACT_ENTITIES_TOOL (line 206) | const EXTRACT_ENTITIES_TOOL = {
  constant DELETE_MEMORY_TOOL_GRAPH (line 240) | const DELETE_MEMORY_TOOL_GRAPH = {

FILE: mem0-ts/src/oss/src/graphs/utils.ts
  constant UPDATE_GRAPH_PROMPT (line 1) | const UPDATE_GRAPH_PROMPT = `
  constant EXTRACT_RELATIONS_PROMPT (line 35) | const EXTRACT_RELATIONS_PROMPT = `
  constant DELETE_RELATIONS_SYSTEM_PROMPT (line 57) | const DELETE_RELATIONS_SYSTEM_PROMPT = `
  function getDeleteMessages (line 95) | function getDeleteMessages(
  function formatEntities (line 106) | function formatEntities(

FILE: mem0-ts/src/oss/src/llms/anthropic.ts
  class AnthropicLLM (line 5) | class AnthropicLLM implements LLM {
    method constructor (line 9) | constructor(config: LLMConfig) {
    method generateResponse (line 18) | async generateResponse(
    method generateChat (line 50) | async generateChat(messages: Message[]): Promise<LLMResponse> {

FILE: mem0-ts/src/oss/src/llms/azure.ts
  class AzureOpenAILLM (line 5) | class AzureOpenAILLM implements LLM {
    method constructor (line 9) | constructor(config: LLMConfig) {
    method generateResponse (line 24) | async generateResponse(
    method generateChat (line 61) | async generateChat(messages: Message[]): Promise<LLMResponse> {

FILE: mem0-ts/src/oss/src/llms/base.ts
  type LLMResponse (line 3) | interface LLMResponse {
  type LLM (line 12) | interface LLM {
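The `LLM` interface in `llms/base.ts` is implemented by every provider class below with the same pair of methods, `generateResponse` and `generateChat`, the latter returning an `LLMResponse`. A stub sketching that contract; the argument list and any `LLMResponse` fields beyond `content` are assumptions:

```typescript
interface Message {
  role: string;
  content: string;
}

// Assumed minimal LLMResponse shape; the real interface may carry more fields.
interface LLMResponse {
  content: string;
}

interface LLM {
  generateResponse(messages: Message[]): Promise<string>;
  generateChat(messages: Message[]): Promise<LLMResponse>;
}

// Echo stub standing in for a provider such as OpenAILLM or OllamaLLM.
class EchoLLM implements LLM {
  async generateResponse(messages: Message[]): Promise<string> {
    return messages[messages.length - 1]?.content ?? "";
  }

  async generateChat(messages: Message[]): Promise<LLMResponse> {
    return { content: await this.generateResponse(messages) };
  }
}

new EchoLLM()
  .generateChat([{ role: "user", content: "ping" }])
  .then((r) => console.log(r.content)); // "ping"
```

A stub like this is also handy in tests, where the memory pipeline needs an `LLM` but no real provider call.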

FILE: mem0-ts/src/oss/src/llms/google.ts
  class GoogleLLM (line 5) | class GoogleLLM implements LLM {
    method constructor (line 9) | constructor(config: LLMConfig) {
    method generateResponse (line 14) | async generateResponse(
    method generateChat (line 70) | async generateChat(messages: Message[]): Promise<LLMResponse> {

FILE: mem0-ts/src/oss/src/llms/groq.ts
  class GroqLLM (line 5) | class GroqLLM implements LLM {
    method constructor (line 9) | constructor(config: LLMConfig) {
    method generateResponse (line 18) | async generateResponse(
    method generateChat (line 37) | async generateChat(messages: Message[]): Promise<LLMResponse> {

FILE: mem0-ts/src/oss/src/llms/langchain.ts
  class LangchainLLM (line 44) | class LangchainLLM implements LLM {
    method constructor (line 48) | constructor(config: LLMConfig) {
    method generateResponse (line 66) | async generateResponse(
    method generateChat (line 228) | async generateChat(messages: Message[]): Promise<LLMResponse> {

FILE: mem0-ts/src/oss/src/llms/lmstudio.ts
  constant DEFAULT_BASE_URL (line 5) | const DEFAULT_BASE_URL = "http://localhost:1234/v1";
  constant DEFAULT_MODEL (line 6) | const DEFAULT_MODEL =
  constant DEFAULT_LMSTUDIO_API_KEY (line 8) | const DEFAULT_LMSTUDIO_API_KEY = "lm-studio";
  class LMStudioLLM (line 10) | class LMStudioLLM extends OpenAILLM {
    method constructor (line 11) | constructor(config: LLMConfig) {
    method generateResponse (line 20) | async generateResponse(
    method generateChat (line 33) | async generateChat(messages: Message[]): Promise<LLMResponse> {

FILE: mem0-ts/src/oss/src/llms/mistral.ts
  class MistralLLM (line 5) | class MistralLLM implements LLM {
    method constructor (line 9) | constructor(config: LLMConfig) {
    method contentToString (line 20) | private contentToString(content: any): string {
    method generateResponse (line 39) | async generateResponse(
    method generateChat (line 84) | async generateChat(messages: Message[]): Promise<LLMResponse> {

FILE: mem0-ts/src/oss/src/llms/ollama.ts
  class OllamaLLM (line 6) | class OllamaLLM implements LLM {
    method constructor (line 12) | constructor(config: LLMConfig) {
    method generateResponse (line 22) | async generateResponse(
    method generateChat (line 65) | async generateChat(messages: Message[]): Promise<LLMResponse> {
    method ensureModelExists (line 92) | private async ensureModelExists(): Promise<boolean> {

FILE: mem0-ts/src/oss/src/llms/openai.ts
  class OpenAILLM (line 5) | class OpenAILLM implements LLM {
    method constructor (line 9) | constructor(config: LLMConfig) {
    method generateResponse (line 17) | async generateResponse(
    method generateChat (line 54) | async generateChat(messages: Message[]): Promise<LLMResponse> {

FILE: mem0-ts/src/oss/src/llms/openai_structured.ts
  class OpenAIStructuredLLM (line 5) | class OpenAIStructuredLLM implements LLM {
    method constructor (line 9) | constructor(config: LLMConfig) {
    method generateResponse (line 14) | async generateResponse(
    method generateChat (line 65) | async generateChat(messages: Message[]): Promise<LLMResponse> {

FILE: mem0-ts/src/oss/src/memory/graph_memory.ts
  type SearchOutput (line 16) | interface SearchOutput {
  type ToolCall (line 26) | interface ToolCall {
  type LLMResponse (line 31) | interface LLMResponse {
  type Tool (line 35) | interface Tool {
  type GraphMemoryResult (line 44) | interface GraphMemoryResult {
  class MemoryGraph (line 50) | class MemoryGraph {
    method constructor (line 59) | constructor(config: MemoryConfig) {
    method add (line 98) | async add(
    method search (line 139) | async search(query: string, filters: Record<string, any>, limit = 100) {
    method deleteAll (line 170) | async deleteAll(filters: Record<string, any>) {
    method getAll (line 181) | async getAll(filters: Record<string, any>, limit = 100) {
    method _retrieveNodesFromData (line 206) | private async _retrieveNodesFromData(
    method _establishNodesRelationsFromData (line 250) | private async _establishNodesRelationsFromData(
    method _searchGraphDb (line 307) | private async _searchGraphDb(
    method _getDeleteEntitiesFromSearchOutput (line 369) | private async _getDeleteEntitiesFromSearchOutput(
    method _deleteEntities (line 413) | private async _deleteEntities(toBeDeleted: any[], userId: string) {
    method _addEntities (line 447) | private async _addEntities(
    method _removeSpacesFromEntities (line 575) | private _removeSpacesFromEntities(entityList: any[]) {
    method _searchSourceNode (line 584) | private async _searchSourceNode(
    method _searchDestinationNode (line 630) | private async _searchDestinationNode(
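
The MemoryGraph listing above centers on extracting entities and storing relations between them. A toy sketch of the (source, relationship, destination) triple bookkeeping those signatures imply — the real class persists triples in a graph database and uses an LLM for extraction; names here are illustrative only:

```typescript
// Illustrative sketch of graph-memory triples: a per-user store of
// (source, relationship, destination) entries. The real MemoryGraph
// persists these in Neo4j; this only shows the data shape.
interface Triple {
  source: string;
  relationship: string;
  destination: string;
}

class TripleStore {
  private byUser = new Map<string, Triple[]>();

  add(userId: string, triple: Triple): void {
    const list = this.byUser.get(userId) ?? [];
    list.push(triple);
    this.byUser.set(userId, list);
  }

  // In the spirit of _searchGraphDb: return triples touching an entity.
  search(userId: string, entity: string): Triple[] {
    return (this.byUser.get(userId) ?? []).filter(
      (t) => t.source === entity || t.destination === entity,
    );
  }

  // In the spirit of deleteAll: drop every triple for a user.
  deleteAll(userId: string): void {
    this.byUser.delete(userId);
  }
}
```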

FILE: mem0-ts/src/oss/src/memory/index.ts
  class Memory (line 40) | class Memory {
    method constructor (line 55) | constructor(config: Partial<MemoryConfig> = {}) {
    method _autoInitialize (line 103) | private async _autoInitialize(): Promise<void> {
    method _ensureInitialized (line 135) | private async _ensureInitialized(): Promise<void> {
    method _initializeTelemetry (line 153) | private async _initializeTelemetry() {
    method _getTelemetryId (line 167) | private async _getTelemetryId() {
    method _captureEvent (line 183) | private async _captureEvent(methodName: string, additionalData = {}) {
    method fromConfig (line 196) | static fromConfig(configDict: Record<string, any>): Memory {
    method add (line 206) | async add(
    method addToVectorStore (line 269) | private async addToVectorStore(
    method get (line 435) | async get(memoryId: string): Promise<MemoryItem | null> {
    method search (line 474) | async search(
    method update (line 544) | async update(memoryId: string, data: string): Promise<{ message: strin...
    method delete (line 552) | async delete(memoryId: string): Promise<{ message: string }> {
    method deleteAll (line 559) | async deleteAll(
    method history (line 589) | async history(memoryId: string): Promise<any[]> {
    method reset (line 594) | async reset(): Promise<void> {
    method getAll (line 642) | async getAll(config: GetAllMemoryOptions): Promise<SearchResult> {
    method createMemory (line 685) | private async createMemory(
    method updateMemory (line 713) | private async updateMemory(
    method deleteMemory (line 758) | private async deleteMemory(memoryId: string): Promise<string> {
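
The Memory class index above suggests that `add()` ultimately funnels into private `createMemory`/`updateMemory`/`deleteMemory` operations against a vector store. A toy in-memory sketch of that CRUD core — the real class embeds text and records history; a `Map` stands in for both here:

```typescript
// Toy sketch of the createMemory/updateMemory/deleteMemory core implied
// by the Memory class listing. Not the real implementation: the SDK
// embeds each memory and writes it to a configurable vector store.
interface ToyMemoryItem {
  id: string;
  memory: string;
}

class ToyMemory {
  private items = new Map<string, ToyMemoryItem>();
  private nextId = 0;

  createMemory(text: string): string {
    const id = String(this.nextId++);
    this.items.set(id, { id, memory: text });
    return id;
  }

  updateMemory(id: string, text: string): void {
    const item = this.items.get(id);
    if (!item) throw new Error(`memory ${id} not found`);
    item.memory = text;
  }

  deleteMemory(id: string): void {
    this.items.delete(id);
  }

  get(id: string): ToyMemoryItem | null {
    return this.items.get(id) ?? null;
  }
}
```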

FILE: mem0-ts/src/oss/src/memory/memory.types.ts
  type Entity (line 4) | interface Entity {
  type AddMemoryOptions (line 10) | interface AddMemoryOptions extends Entity {
  type SearchMemoryOptions (line 16) | interface SearchMemoryOptions extends Entity {
  type GetAllMemoryOptions (line 21) | interface GetAllMemoryOptions extends Entity {
  type DeleteAllMemoryOptions (line 25) | interface DeleteAllMemoryOptions extends Entity {}

FILE: mem0-ts/src/oss/src/prompts/index.ts
  function getFactRetrievalMessages (line 44) | function getFactRetrievalMessages(
  function getUpdateMemoryMessages (line 104) | function getUpdateMemoryMessages(
  function parseMessages (line 276) | function parseMessages(messages: string[]): string {
  function removeCodeBlocks (line 280) | function removeCodeBlocks(text: string): string {
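
`removeCodeBlocks` in the prompts module hints at stripping markdown fences from LLM output before parsing it as JSON. One common way to do that — the SDK's actual regex may differ:

```typescript
// Strip a surrounding markdown code fence from LLM output so the body
// can be JSON.parse'd. A common approach, not necessarily the SDK's
// exact implementation.
function stripCodeFence(text: string): string {
  const match = text.trim().match(/^```(?:\w*)\n?([\s\S]*?)\n?```$/);
  return match ? match[1].trim() : text.trim();
}
```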

FILE: mem0-ts/src/oss/src/storage/DummyHistoryManager.ts
  class DummyHistoryManager (line 1) | class DummyHistoryManager {
    method constructor (line 2) | constructor() {}
    method addHistory (line 4) | async addHistory(
    method getHistory (line 16) | async getHistory(memoryId: string): Promise<any[]> {
    method reset (line 20) | async reset(): Promise<void> {
    method close (line 24) | close(): void {

FILE: mem0-ts/src/oss/src/storage/MemoryHistoryManager.ts
  type HistoryEntry (line 3) | interface HistoryEntry {
  class MemoryHistoryManager (line 14) | class MemoryHistoryManager implements HistoryManager {
    method addHistory (line 17) | async addHistory(
    method getHistory (line 40) | async getHistory(memoryId: string): Promise<any[]> {
    method reset (line 50) | async reset(): Promise<void> {
    method close (line 54) | close(): void {

FILE: mem0-ts/src/oss/src/storage/SQLiteManager.ts
  class SQLiteManager (line 5) | class SQLiteManager implements HistoryManager {
    method constructor (line 10) | constructor(dbPath: string) {
    method init (line 16) | private init(): void {
    method addHistory (line 39) | async addHistory(
    method getHistory (line 59) | async getHistory(memoryId: string): Promise<any[]> {
    method reset (line 63) | async reset(): Promise<void> {
    method close (line 68) | close(): void {

FILE: mem0-ts/src/oss/src/storage/SupabaseHistoryManager.ts
  type HistoryEntry (line 5) | interface HistoryEntry {
  type SupabaseHistoryConfig (line 16) | interface SupabaseHistoryConfig {
  class SupabaseHistoryManager (line 22) | class SupabaseHistoryManager implements HistoryManager {
    method constructor (line 26) | constructor(config: SupabaseHistoryConfig) {
    method initializeSupabase (line 32) | private async initializeSupabase(): Promise<void> {
    method addHistory (line 59) | async addHistory(
    method getHistory (line 89) | async getHistory(memoryId: string): Promise<any[]> {
    method reset (line 105) | async reset(): Promise<void> {
    method close (line 117) | close(): void {

FILE: mem0-ts/src/oss/src/storage/base.ts
  type HistoryManager (line 1) | interface HistoryManager {
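
`base.ts` defines the `HistoryManager` contract that the Dummy, in-memory, SQLite, and Supabase managers above all implement. A minimal in-memory sketch — the exact `addHistory` parameters are inferred from the listings, so treat them as assumptions:

```typescript
// Minimal in-memory history manager following the HistoryManager shape
// suggested by the listings above (addHistory's parameter list is an
// assumption, not the SDK's exact signature).
interface ToyHistoryEntry {
  memoryId: string;
  previousValue: string | null;
  newValue: string | null;
  action: string;
  createdAt: string;
}

class InMemoryHistory {
  private entries: ToyHistoryEntry[] = [];

  async addHistory(
    memoryId: string,
    previousValue: string | null,
    newValue: string | null,
    action: string,
  ): Promise<void> {
    this.entries.push({
      memoryId,
      previousValue,
      newValue,
      action,
      createdAt: new Date().toISOString(),
    });
  }

  async getHistory(memoryId: string): Promise<ToyHistoryEntry[]> {
    return this.entries.filter((e) => e.memoryId === memoryId);
  }

  async reset(): Promise<void> {
    this.entries = [];
  }

  close(): void {
    // nothing to release for the in-memory variant
  }
}
```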

FILE: mem0-ts/src/oss/src/tests/better-sqlite3-migration.test.ts
  function normalize (line 178) | function normalize(v: number[]): number[] {

FILE: mem0-ts/src/oss/src/tests/sqlite-backward-compat.test.ts
  function normalize (line 18) | function normalize(vector: number[]): number[] {

FILE: mem0-ts/src/oss/src/tests/sqlite-path-resolution.test.ts
  function normalize (line 12) | function normalize(vector: number[]): number[] {

FILE: mem0-ts/src/oss/src/types/index.ts
  type MultiModalMessages (line 3) | interface MultiModalMessages {
  type Message (line 10) | interface Message {
  type EmbeddingConfig (line 15) | interface EmbeddingConfig {
  type VectorStoreConfig (line 24) | interface VectorStoreConfig {
  type HistoryStoreConfig (line 33) | interface HistoryStoreConfig {
  type LLMConfig (line 43) | interface LLMConfig {
  type Neo4jConfig (line 53) | interface Neo4jConfig {
  type GraphStoreConfig (line 59) | interface GraphStoreConfig {
  type MemoryConfig (line 66) | interface MemoryConfig {
  type MemoryItem (line 88) | interface MemoryItem {
  type SearchFilters (line 98) | interface SearchFilters {
  type SearchResult (line 105) | interface SearchResult {
  type VectorStoreResult (line 110) | interface VectorStoreResult {

FILE: mem0-ts/src/oss/src/utils/bm25.ts
  class BM25 (line 1) | class BM25 {
    method constructor (line 10) | constructor(documents: string[][], k1 = 1.5, b = 0.75) {
    method computeIdf (line 22) | private computeIdf() {
    method score (line 39) | private score(query: string[], doc: string[], index: number): number {
    method search (line 56) | search(query: string[]): string[][] {
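
The `BM25` utility above follows the standard Okapi BM25 shape: a per-term inverse document frequency computed once, then a term-frequency score normalized by document length. A self-contained sketch with the same constructor defaults (`k1 = 1.5`, `b = 0.75`) as the index shows — the real class's internals may differ:

```typescript
// Self-contained BM25 ranking sketch matching the shape of utils/bm25.ts
// above. Standard BM25 formulas; the SDK's exact smoothing may differ.
class SimpleBM25 {
  private docs: string[][];
  private avgLen: number;
  private idf = new Map<string, number>();

  constructor(docs: string[][], private k1 = 1.5, private b = 0.75) {
    this.docs = docs;
    this.avgLen = docs.reduce((s, d) => s + d.length, 0) / docs.length;
    const n = docs.length;
    const df = new Map<string, number>();
    for (const doc of docs) {
      for (const term of Array.from(new Set(doc))) {
        df.set(term, (df.get(term) ?? 0) + 1);
      }
    }
    // BM25 IDF with +1 inside the log so scores stay non-negative.
    df.forEach((f, term) => {
      this.idf.set(term, Math.log(1 + (n - f + 0.5) / (f + 0.5)));
    });
  }

  score(query: string[], doc: string[]): number {
    let s = 0;
    for (const term of query) {
      const tf = doc.filter((t) => t === term).length;
      const idf = this.idf.get(term) ?? 0;
      s +=
        (idf * tf * (this.k1 + 1)) /
        (tf + this.k1 * (1 - this.b + (this.b * doc.length) / this.avgLen));
    }
    return s;
  }

  // Return documents sorted by descending BM25 score, like search() above.
  search(query: string[]): string[][] {
    return [...this.docs].sort(
      (a, b) => this.score(query, b) - this.score(query, a),
    );
  }
}
```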

FILE: mem0-ts/src/oss/src/utils/factory.ts
  class EmbedderFactory (line 38) | class EmbedderFactory {
    method create (line 39) | static create(provider: string, config: EmbeddingConfig): Embedder {
  class LLMFactory (line 60) | class LLMFactory {
    method create (line 61) | static create(provider: string, config: LLMConfig): LLM {
  class VectorStoreFactory (line 90) | class VectorStoreFactory {
    method create (line 91) | static create(provider: string, config: VectorStoreConfig): VectorStore {
  class HistoryManagerFactory (line 113) | class HistoryManagerFactory {
    method create (line 114) | static create(provider: string, config: HistoryStoreConfig): HistoryMa...
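
The factories above all share one pattern: a string provider name selects which concrete class to construct behind a common interface. An illustrative registry-based sketch of that pattern — provider names and the `Embedder` shape here are placeholders, not the SDK's actual registry:

```typescript
// Illustrative provider-factory sketch in the style of utils/factory.ts:
// a string key maps to a constructor function. Names are placeholders.
interface ToyEmbedder {
  embed(text: string): number[];
}

class FakeEmbedder implements ToyEmbedder {
  embed(text: string): number[] {
    return [text.length]; // trivial stand-in "embedding"
  }
}

class EmbedderRegistry {
  private static providers = new Map<string, () => ToyEmbedder>();

  static register(name: string, make: () => ToyEmbedder): void {
    this.providers.set(name, make);
  }

  static create(name: string): ToyEmbedder {
    const make = this.providers.get(name);
    if (!make) throw new Error(`Unknown embedder provider: ${name}`);
    return make();
  }
}

EmbedderRegistry.register("fake", () => new FakeEmbedder());
```

The advantage over a hardcoded `switch` (which the real factories may well use) is that callers can register new providers without touching the factory itself.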

FILE: mem0-ts/src/oss/src/utils/logger.ts
  type Logger (line 1) | interface Logger {

FILE: mem0-ts/src/oss/src/utils/sqlite.ts
  function getDefaultVectorStoreDbPath (line 5) | function getDefaultVectorStoreDbPath(): string {
  function ensureSQLiteDirectory (line 9) | function ensureSQLiteDirectory(dbPath: string): void {

FILE: mem0-ts/src/oss/src/utils/telemetry.ts
  constant MEM0_TELEMETRY (line 10) | let MEM0_TELEMETRY = true;
  constant POSTHOG_API_KEY (line 14) | const POSTHOG_API_KEY = "phc_hgJkUVJFYtmaJqrvf6CYN67TIQ8yhXAkWzUn9AMU4yX";
  constant POSTHOG_HOST (line 15) | const POSTHOG_HOST = "https://us.i.posthog.com/i/v0/e/";
  class UnifiedTelemetry (line 17) | class UnifiedTelemetry implements TelemetryClient {
    method constructor (line 21) | constructor(projectApiKey: string, host: string) {
    method captureEvent (line 26) | async captureEvent(distinctId: string, eventName: string, properties =...
    method shutdown (line 64) | async shutdown() {
  function captureClientEvent (line 71) | async function captureClientEvent(

FILE: mem0-ts/src/oss/src/utils/telemetry.types.ts
  type TelemetryClient (line 1) | interface TelemetryClient {
  type TelemetryInstance (line 10) | interface TelemetryInstance {
  type TelemetryEventData (line 19) | interface TelemetryEventData {
  type TelemetryOptions (line 29) | interface TelemetryOptions {

FILE: mem0-ts/src/oss/src/vector_stores/azure_ai_search.ts
  type AzureAISearchConfig (line 23) | interface AzureAISearchConfig extends VectorStoreConfig {
  class AzureAISearch (line 73) | class AzureAISearch implements VectorStore {
    method constructor (line 86) | constructor(config: AzureAISearchConfig) {
    method initialize (line 120) | async initialize(): Promise<void> {
    method _doInitialize (line 127) | private async _doInitialize(): Promise<void> {
    method createCol (line 142) | private async createCol(): Promise<void> {
    method generateDocument (line 236) | private generateDocument(
    method insert (line 260) | async insert(
    method sanitizeKey (line 288) | private sanitizeKey(key: string): string {
    method buildFilterExpression (line 295) | private buildFilterExpression(filters: SearchFilters): string {
    method extractJson (line 317) | private extractJson(payload: string): string {
    method search (line 332) | async search(
    method delete (line 392) | async delete(vectorId: string): Promise<void> {
    method update (line 413) | async update(
    method get (line 449) | async get(vectorId: string): Promise<VectorStoreResult | null> {
    method listCols (line 471) | private async listCols(): Promise<string[]> {
    method deleteCol (line 484) | async deleteCol(): Promise<void> {
    method colInfo (line 491) | private async colInfo(): Promise<{ name: string; fields: SearchField[]...
    method list (line 502) | async list(
    method generateUUID (line 534) | private generateUUID(): string {
    method getUserId (line 549) | async getUserId(): Promise<string> {
    method setUserId (line 611) | async setUserId(userId: string): Promise<void> {
    method reset (line 640) | async reset(): Promise<void> {

FILE: mem0-ts/src/oss/src/vector_stores/base.ts
  type VectorStore (line 3) | interface VectorStore {

FILE: mem0-ts/src/oss/src/vector_stores/langchain.ts
  type LangchainStoreConfig (line 7) | interface LangchainStoreConfig extends VectorStoreConfig {
  class LangchainVectorStore (line 12) | class LangchainVectorStore implements VectorStore {
    method constructor (line 17) | constructor(config: LangchainStoreConfig) {
    method insert (line 59) | async insert(
    method search (line 102) | async search(
    method get (line 136) | async get(vectorId: string): Promise<VectorStoreResult | null> {
    method update (line 149) | async update(
    method delete (line 164) | async delete(vectorId: string): Promise<void> {
    method list (line 193) | async list(
    method deleteCol (line 207) | async deleteCol(): Promise<void> {
    method getUserId (line 218) | async getUserId(): Promise<string> {
    method setUserId (line 222) | async setUserId(userId: string): Promise<void> {
    method initialize (line 226) | async initialize(): Promise<void> {

FILE: mem0-ts/src/oss/src/vector_stores/memory.ts
  type MemoryVector (line 11) | interface MemoryVector {
  class MemoryVectorStore (line 17) | class MemoryVectorStore implements VectorStore {
    method constructor (line 22) | constructor(config: VectorStoreConfig) {
    method init (line 41) | private init(): void {
    method cosineSimilarity (line 58) | private cosineSimilarity(a: number[], b: number[]): number {
    method filterVector (line 70) | private filterVector(vector: MemoryVector, filters?: SearchFilters): b...
    method insert (line 77) | async insert(
    method search (line 101) | async search(
    method get (line 142) | async get(vectorId: string): Promise<VectorStoreResult | null> {
    method update (line 155) | async update(
    method delete (line 171) | async delete(vectorId: string): Promise<void> {
    method deleteCol (line 175) | async deleteCol(): Promise<void> {
    method list (line 180) | async list(
    method getUserId (line 212) | async getUserId(): Promise<string> {
    method setUserId (line 230) | async setUserId(userId: string): Promise<void> {
    method initialize (line 237) | async initialize(): Promise<void> {
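
`MemoryVectorStore` above exposes a private `cosineSimilarity` and does search without any external index, which implies a brute-force scan over stored vectors. A self-contained sketch of that core — result shapes here are illustrative, not the SDK's `VectorStoreResult`:

```typescript
// Brute-force cosine-similarity search like MemoryVectorStore implies:
// score every stored vector against the query, sort, take the top k.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let na = 0;
  let nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function topK(
  query: number[],
  vectors: { id: string; vector: number[] }[],
  k: number,
): { id: string; score: number }[] {
  return vectors
    .map((v) => ({ id: v.id, score: cosineSimilarity(query, v.vector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

This is O(n) per query, which is fine for an in-memory dev store; the other backends in this directory (Qdrant, pgvector, Redis, etc.) delegate to real vector indexes instead.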

FILE: mem0-ts/src/oss/src/vector_stores/pgvector.ts
  type PGVectorConfig (line 5) | interface PGVectorConfig extends VectorStoreConfig {
  class PGVector (line 16) | class PGVector implements VectorStore {
    method constructor (line 24) | constructor(config: PGVectorConfig) {
    method initialize (line 40) | async initialize(): Promise<void> {
    method checkDatabaseExists (line 85) | private async checkDatabaseExists(dbName: string): Promise<boolean> {
    method createDatabase (line 93) | private async createDatabase(dbName: string): Promise<void> {
    method createCol (line 98) | private async createCol(embeddingModelDims: number): Promise<void> {
    method insert (line 138) | async insert(
    method search (line 162) | async search(
    method get (line 202) | async get(vectorId: string): Promise<VectorStoreResult | null> {
    method update (line 216) | async update(
    method delete (line 232) | async delete(vectorId: string): Promise<void> {
    method deleteCol (line 239) | async deleteCol(): Promise<void> {
    method listCols (line 243) | private async listCols(): Promise<string[]> {
    method list (line 252) | async list(
    method close (line 301) | async close(): Promise<void> {
    method getUserId (line 305) | async getUserId(): Promise<string> {
    method setUserId (line 325) | async setUserId(userId: string): Promise<void> {

FILE: mem0-ts/src/oss/src/vector_stores/qdrant.ts
  type QdrantConfig (line 6) | interface QdrantConfig extends VectorStoreConfig {
  type QdrantFilter (line 19) | interface QdrantFilter {
  type QdrantCondition (line 25) | interface QdrantCondition {
  class Qdrant (line 31) | class Qdrant implements VectorStore {
    method constructor (line 37) | constructor(config: QdrantConfig) {
    method createFilter (line 72) | private createFilter(filters?: SearchFilters): QdrantFilter | undefined {
    method insert (line 103) | async insert(
    method search (line 119) | async search(
    method get (line 138) | async get(vectorId: string): Promise<VectorStoreResult | null> {
    method update (line 152) | async update(
    method delete (line 168) | async delete(vectorId: string): Promise<void> {
    method deleteCol (line 174) | async deleteCol(): Promise<void> {
    method list (line 178) | async list(
    method generateUUID (line 202) | private generateUUID(): string {
    method getUserId (line 213) | async getUserId(): Promise<string> {
    method setUserId (line 250) | async setUserId(userId: string): Promise<void> {
    method ensureCollection (line 276) | private async ensureCollection(name: string, size: number): Promise<vo...
    method initialize (line 321) | async initialize(): Promise<void> {
    method _doInitialize (line 328) | private async _doInitialize(): Promise<void> {

FILE: mem0-ts/src/oss/src/vector_stores/redis.ts
  type RedisConfig (line 12) | interface RedisConfig extends VectorStoreConfig {
  type RedisField (line 20) | interface RedisField {
  type RedisSchema (line 31) | interface RedisSchema {
  type RedisEntry (line 39) | interface RedisEntry {
  type RedisDocument (line 53) | interface RedisDocument {
  type RedisSearchResult (line 69) | interface RedisSearchResult {
  type RedisModule (line 74) | interface RedisModule {
  constant DEFAULT_FIELDS (line 79) | const DEFAULT_FIELDS: RedisField[] = [
  constant EXCLUDED_KEYS (line 101) | const EXCLUDED_KEYS = new Set([
  function toSnakeCase (line 112) | function toSnakeCase(obj: Record<string, any>): Record<string, any> {
  function toCamelCase (line 124) | function toCamelCase(obj: Record<string, any>): Record<string, any> {
  class RedisDB (line 135) | class RedisDB implements VectorStore {
    method constructor (line 144) | constructor(config: RedisConfig) {
    method createIndex (line 191) | private async createIndex(): Promise<void> {
    method initialize (line 243) | async initialize(): Promise<void> {
    method _doInitialize (line 250) | private async _doInitialize(): Promise<void> {
    method insert (line 309) | async insert(
    method search (line 360) | async search(
    method get (line 431) | async get(vectorId: string): Promise<VectorStoreResult | null> {
    method update (line 533) | async update(
    method delete (line 570) | async delete(vectorId: string): Promise<void> {
    method deleteCol (line 595) | async deleteCol(): Promise<void> {
    method list (line 599) | async list(
    method close (line 645) | async close(): Promise<void> {
    method getUserId (line 649) | async getUserId(): Promise<string> {
    method setUserId (line 671) | async setUserId(userId: string): Promise<void> {
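
The Redis store above ships `toSnakeCase`/`toCamelCase` helpers, presumably to translate between the SDK's camelCase payload keys and snake_case keys stored in Redis. A sketch of that key mapping — edge-case handling (acronyms, leading underscores) in the real helpers may differ:

```typescript
// Sketch of the key-case conversion implied by the redis store's
// toSnakeCase/toCamelCase helpers; exact edge-case handling may differ.
function camelToSnake(key: string): string {
  return key.replace(/([A-Z])/g, (m) => `_${m.toLowerCase()}`);
}

function snakeToCamel(key: string): string {
  return key.replace(/_([a-z])/g, (_, c: string) => c.toUpperCase());
}

function mapKeys(
  obj: Record<string, unknown>,
  fn: (k: string) => string,
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(obj).map(([k, v]) => [fn(k), v]),
  );
}
```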

FILE: mem0-ts/src/oss/src/vector_stores/supabase.ts
  type VectorData (line 5) | interface VectorData {
  type VectorQueryParams (line 12) | interface VectorQueryParams {
  type VectorSearchResult (line 18) | interface VectorSearchResult {
  type SupabaseConfig (line 25) | interface SupabaseConfig extends VectorStoreConfig {
  class SupabaseDB (line 84) | class SupabaseDB implements VectorStore {
    method constructor (line 91) | constructor(config: SupabaseConfig) {
    method initialize (line 103) | async initialize(): Promise<void> {
    method _doInitialize (line 110) | private async _doInitialize(): Promise<void> {
    method insert (line 208) | async insert(
    method search (line 232) | async search(
    method get (line 264) | async get(vectorId: string): Promise<VectorStoreResult | null> {
    method update (line 285) | async update(
    method delete (line 309) | async delete(vectorId: string): Promise<void> {
    method deleteCol (line 323) | async deleteCol(): Promise<void> {
    method list (line 337) | async list(
    method getUserId (line 369) | async getUserId(): Promise<string> {
    method setUserId (line 420) | async setUserId(userId: string): Promise<void> {

FILE: mem0-ts/src/oss/src/vector_stores/vectorize.ts
  type VectorizeConfig (line 6) | interface VectorizeConfig extends VectorStoreConfig {
  type CloudflareVector (line 12) | interface CloudflareVector {
  class VectorizeDB (line 18) | class VectorizeDB implements VectorStore {
    method constructor (line 25) | constructor(config: VectorizeConfig) {
    method insert (line 33) | async insert(
    method search (line 77) | async search(
    method get (line 109) | async get(vectorId: string): Promise<VectorStoreResult | null> {
    method update (line 133) | async update(
    method delete (line 171) | async delete(vectorId: string): Promise<void> {
    method deleteCol (line 185) | async deleteCol(): Promise<void> {
    method list (line 198) | async list(
    method generateUUID (line 230) | private generateUUID(): string {
    method getUserId (line 241) | async getUserId(): Promise<string> {
    method setUserId (line 307) | async setUserId(userId: string): Promise<void> {
    method initialize (line 346) | async initialize(): Promise<void> {
    method _doInitialize (line 353) | private async _doInitialize(): Promise<void> {

FILE: mem0-ts/src/oss/tests/dimension-autodetect.test.ts
  function createMockEmbedder (line 241) | function createMockEmbedder(dims: number) {
  function createMockVectorStore (line 248) | function createMockVectorStore() {

FILE: mem0-ts/src/oss/tests/graph-memory-parsing.test.ts
  function makeConfig (line 59) | function makeConfig(overrides: Record<string, any> = {}) {
  function graph (line 75) | function graph(overrides: Record<string, any> = {}): any {
  constant FILTERS (line 79) | const FILTERS = { userId: "test-user" };

FILE: mem0-ts/src/oss/tests/memory.add.test.ts
  function createMemory (line 52) | function createMemory(overrides: Partial<MemoryConfig> = {}): Memory {

FILE: mem0-ts/src/oss/tests/memory.crud.test.ts
  function createMemory (line 52) | function createMemory(): Memory {

FILE: mem0-ts/src/oss/tests/memory.init.test.ts
  function createMemory (line 54) | function createMemory(overrides: Partial<MemoryConfig> = {}): Memory {

FILE: mem0-ts/src/oss/tests/vector-store.unit.test.ts
  constant DIM (line 9) | const DIM = 4;
  function createStore (line 11) | function createStore(): MemoryVectorStore {
  function vec (line 19) | function vec(values: number[]): number[] {

FILE: mem0-ts/src/oss/tests/vector-stores-compat.test.ts
  function createMockQdrantClient (line 209) | function createMockQdrantClient() {
  function createMockEmbedder (line 872) | function createMockEmbedder(dims: number) {
  function createMockVectorStore (line 879) | function createMockVectorStore() {

FILE: mem0/client/main.py
  class MemoryClient (line 24) | class MemoryClient:
    method __init__ (line 39) | def __init__(
    method _validate_api_key (line 107) | def _validate_api_key(self):
    method add (line 131) | def add(self, messages, **kwargs) -> Dict[str, Any]:
    method get (line 179) | def get(self, memory_id: str) -> Dict[str, Any]:
    method get_all (line 203) | def get_all(self, **kwargs) -> Dict[str, Any]:
    method search (line 252) | def search(self, query: str, **kwargs) -> Dict[str, Any]:
    method update (line 298) | def update(
    method delete (line 339) | def delete(self, memory_id: str) -> Dict[str, Any]:
    method delete_all (line 363) | def delete_all(self, **kwargs) -> Dict[str, str]:
    method history (line 392) | def history(self, memory_id: str) -> List[Dict[str, Any]]:
    method users (line 416) | def users(self) -> Dict[str, Any]:
    method delete_users (line 425) | def delete_users(
    method reset (line 492) | def reset(self) -> Dict[str, str]:
    method batch_update (line 515) | def batch_update(self, memories: List[Dict[str, Any]]) -> Dict[str, Any]:
    method batch_delete (line 542) | def batch_delete(self, memories: List[Dict[str, Any]]) -> Dict[str, Any]:
    method create_memory_export (line 568) | def create_memory_export(self, schema: str, **kwargs) -> Dict[str, Any]:
    method get_memory_export (line 595) | def get_memory_export(self, **kwargs) -> Dict[str, Any]:
    method get_summary (line 614) | def get_summary(self, filters: Optional[Dict[str, Any]] = None) -> Dic...
    method get_project (line 630) | def get_project(self, fields: Optional[List[str]] = None) -> Dict[str,...
    method update_project (line 668) | def update_project(
    method chat (line 764) | def chat(self):
    method get_webhooks (line 773) | def get_webhooks(self, project_id: str) -> Dict[str, Any]:
    method create_webhook (line 798) | def create_webhook(self, url: str, name: str, project_id: str, event_t...
    method update_webhook (line 826) | def update_webhook(
    method delete_webhook (line 860) | def delete_webhook(self, webhook_id: int) -> Dict[str, str]:
    method feedback (line 888) | def feedback(
    method _prepare_payload (line 911) | def _prepare_payload(self, messages: List[Dict[str, str]], kwargs: Dic...
    method _prepare_params (line 927) | def _prepare_params(self, kwargs: Optional[Dict[str, Any]] = None) -> ...
  class AsyncMemoryClient (line 953) | class AsyncMemoryClient:
    method __init__ (line 960) | def __init__(
    method _validate_api_key (line 1029) | def _validate_api_key(self):
    method _prepare_payload (line 1059) | def _prepare_payload(self, messages: List[Dict[str, str]], kwargs: Dic...
    method _prepare_params (line 1075) | def _prepare_params(self, kwargs: Optional[Dict[str, Any]] = None) -> ...
    method __aenter__ (line 1100) | async def __aenter__(self):
    method __aexit__ (line 1103) | async def __aexit__(self, exc_type, exc_val, exc_tb):
    method add (line 1107) | async def add(self, messages, **kwargs) -> Dict[str, Any]:
    method get (line 1135) | async def get(self, memory_id: str) -> Dict[str, Any]:
    method get_all (line 1143) | async def get_all(self, **kwargs) -> Dict[str, Any]:
    method search (line 1175) | async def search(self, query: str, **kwargs) -> Dict[str, Any]:
    method update (line 1203) | async def update(
    method delete (line 1244) | async def delete(self, memory_id: str) -> Dict[str, Any]:
    method delete_all (line 1268) | async def delete_all(self, **kwargs) -> Dict[str, str]:
    method history (line 1292) | async def history(self, memory_id: str) -> List[Dict[str, Any]]:
    method users (line 1316) | async def users(self) -> Dict[str, Any]:
    method delete_users (line 1325) | async def delete_users(
    method reset (line 1392) | async def reset(self) -> Dict[str, str]:
    method batch_update (line 1414) | async def batch_update(self, memories: List[Dict[str, Any]]) -> Dict[s...
    method batch_delete (line 1441) | async def batch_delete(self, memories: List[Dict[str, Any]]) -> Dict[s...
    method create_memory_export (line 1467) | async def create_memory_export(self, schema: str, **kwargs) -> Dict[st...
    method get_memory_export (line 1485) | async def get_memory_export(self, **kwargs) -> Dict[str, Any]:
    method get_summary (line 1500) | async def get_summary(self, filters: Optional[Dict[str, Any]] = None) ...
    method get_project (line 1516) | async def get_project(self, fields: Optional[List[str]] = None) -> Dic...
    method update_project (line 1550) | async def update_project(
    method chat (line 1624) | async def chat(self):
    method get_webhooks (line 1633) | async def get_webhooks(self, project_id: str) -> Dict[str, Any]:
    method create_webhook (line 1658) | async def create_webhook(self, url: str, name: str, project_id: str, e...
    method update_webhook (line 1686) | async def update_webhook(
    method delete_webhook (line 1720) | async def delete_webhook(self, webhook_id: int) -> Dict[str, str]:
    method feedback (line 1744) | async def feedback(

FILE: mem0/client/project.py
  class ProjectConfig (line 15) | class ProjectConfig(BaseModel):
  class BaseProject (line 27) | class BaseProject(ABC):
    method __init__ (line 32) | def __init__(
    method org_id (line 60) | def org_id(self) -> Optional[str]:
    method project_id (line 65) | def project_id(self) -> Optional[str]:
    method user_email (line 70) | def user_email(self) -> Optional[str]:
    method _validate_org_project (line 74) | def _validate_org_project(self) -> None:
    method _prepare_params (line 84) | def _prepare_params(self, kwargs: Optional[Dict[str, Any]] = None) -> ...
    method _prepare_org_params (line 109) | def _prepare_org_params(self, kwargs: Optional[Dict[str, Any]] = None)...
    method get (line 134) | def get(self, fields: Optional[List[str]] = None) -> Dict[str, Any]:
    method create (line 154) | def create(self, name: str, description: Optional[str] = None) -> Dict...
    method update (line 175) | def update(
    method delete (line 204) | def delete(self) -> Dict[str, Any]:
    method get_members (line 221) | def get_members(self) -> Dict[str, Any]:
    method add_member (line 238) | def add_member(self, email: str, role: str = "READER") -> Dict[str, Any]:
    method update_member (line 259) | def update_member(self, email: str, role: str) -> Dict[str, Any]:
    method remove_member (line 280) | def remove_member(self, email: str) -> Dict[str, Any]:
  class Project (line 300) | class Project(BaseProject):
    method __init__ (line 305) | def __init__(
    method get (line 327) | def get(self, fields: Optional[List[str]] = None) -> Dict[str, Any]:
    method create (line 358) | def create(self, name: str, description: Optional[str] = None) -> Dict...
    method update (line 396) | def update(
    method delete (line 461) | def delete(self) -> Dict[str, Any]:
    method get_members (line 487) | def get_members(self) -> Dict[str, Any]:
    method add_member (line 513) | def add_member(self, email: str, role: str = "READER") -> Dict[str, Any]:
    method update_member (line 549) | def update_member(self, email: str, role: str) -> Dict[str, Any]:
    method remove_member (line 585) | def remove_member(self, email: str) -> Dict[str, Any]:
  class AsyncProject (line 617) | class AsyncProject(BaseProject):
    method __init__ (line 622) | def __init__(
    method get (line 644) | async def get(self, fields: Optional[List[str]] = None) -> Dict[str, A...
    method create (line 675) | async def create(self, name: str, description: Optional[str] = None) -...
    method update (line 713) | async def update(
    method delete (line 778) | async def delete(self) -> Dict[str, Any]:
    method get_members (line 804) | async def get_members(self) -> Dict[str, Any]:
    method add_member (line 830) | async def add_member(self, email: str, role: str = "READER") -> Dict[s...
    method update_member (line 866) | async def update_member(self, email: str, role: str) -> Dict[str, Any]:
    method remove_member (line 902) | async def remove_member(self, email: str) -> Dict[str, Any]:

FILE: mem0/client/utils.py
  class APIError (line 13) | class APIError(Exception):
  function api_error_handler (line 23) | def api_error_handler(func):

FILE: mem0/configs/base.py
  class MemoryItem (line 17) | class MemoryItem(BaseModel):
  class MemoryConfig (line 30) | class MemoryConfig(BaseModel):
  class AzureConfig (line 69) | class AzureConfig(BaseModel):

FILE: mem0/configs/embeddings/base.py
  class BaseEmbedderConfig (line 10) | class BaseEmbedderConfig(ABC):
    method __init__ (line 15) | def __init__(

FILE: mem0/configs/enums.py
  class MemoryType (line 4) | class MemoryType(Enum):

FILE: mem0/configs/llms/anthropic.py
  class AnthropicConfig (line 6) | class AnthropicConfig(BaseLlmConfig):
    method __init__ (line 12) | def __init__(

FILE: mem0/configs/llms/aws_bedrock.py
  class AWSBedrockConfig (line 7) | class AWSBedrockConfig(BaseLlmConfig):
    method __init__ (line 14) | def __init__(
    method provider (line 63) | def provider(self) -> str:
    method model_name (line 70) | def model_name(self) -> str:
    method get_model_config (line 76) | def get_model_config(self) -> Dict[str, Any]:
    method get_aws_config (line 
Condensed preview — 1580 files, each showing path, character count, and a content snippet (7,038K chars of structured content total).
[
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.yml",
    "chars": 1696,
    "preview": "name: 🐛 Bug Report\ndescription: Create a report to help us reproduce and fix the bug\n\nbody:\n- type: markdown\n  attribute"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/config.yml",
    "chars": 334,
    "preview": "blank_issues_enabled: true\ncontact_links:\n  - name: 1-on-1 Session\n    url: https://cal.com/taranjeetio/ec\n    about: Sp"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/documentation_issue.yml",
    "chars": 353,
    "preview": "name: Documentation\ndescription: Report an issue related to the Embedchain docs.\ntitle: \"DOC: <Please write a comprehens"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.yml",
    "chars": 717,
    "preview": "name: 🚀 Feature request\ndescription: Submit a proposal/request for a new Embedchain feature\n\nbody:\n- type: textarea\n  id"
  },
  {
    "path": ".github/PULL_REQUEST_TEMPLATE.md",
    "chars": 1619,
    "preview": "## Description\n\nPlease include a summary of the change and which issue is fixed. Please also include relevant motivation"
  },
  {
    "path": ".github/workflows/cd.yml",
    "chars": 1133,
    "preview": "name: Publish Python 🐍 distributions 📦 to PyPI and TestPyPI\n\non:\n  release:\n    types: [published]\n\njobs:\n  build-n-publ"
  },
  {
    "path": ".github/workflows/ci.yml",
    "chars": 3592,
    "preview": "name: ci\n\non:\n  push:\n    branches: [main]\n    paths:\n      - 'mem0/**'\n      - 'tests/**'\n      - 'embedchain/**'\n     "
  },
  {
    "path": ".github/workflows/openclaw-checks.yml",
    "chars": 2515,
    "preview": "name: openclaw checks\n\non:\n  workflow_dispatch:\n  push:\n    branches: [main]\n    paths:\n      - 'openclaw/**'\n      - '."
  },
  {
    "path": ".github/workflows/ts-sdk-ci.yml",
    "chars": 2709,
    "preview": "name: TypeScript SDK CI\n\non:\n  push:\n    branches: [main]\n    paths:\n      - 'mem0-ts/**'\n      - '.github/workflows/ts-"
  },
  {
    "path": ".gitignore",
    "chars": 3338,
    "preview": "# Byte-compiled / optimized / DLL files\n__pycache__/\n*.py[cod]\n*$py.class\n**/node_modules/\n\n# C extensions\n*.so\n\n# Distr"
  },
  {
    "path": ".pre-commit-config.yaml",
    "chars": 314,
    "preview": "repos:\n  - repo: local\n    hooks:\n      - id: ruff\n        name: Ruff\n        entry: ruff check\n        language: system"
  },
  {
    "path": "CONTRIBUTING.md",
    "chars": 1949,
    "preview": "# Contributing to mem0\n\nLet us make contribution easy, collaborative and fun.\n\n## Submit your Contribution through PR\n\nT"
  },
  {
    "path": "LICENSE",
    "chars": 11349,
    "preview": "                                 Apache License\n                           Version 2.0, January 2004\n                   "
  },
  {
    "path": "LLM.md",
    "chars": 37200,
    "preview": "# Mem0 - The Memory Layer for Personalized AI\n\n## Overview\n\nMem0 (\"mem-zero\") is an intelligent memory layer that enhanc"
  },
  {
    "path": "MIGRATION_GUIDE_v1.0.md",
    "chars": 5049,
    "preview": "# Migration Guide: Upgrading to mem0 1.0.0\n\n## TL;DR\n\n**What changed?** We simplified the API by removing confusing vers"
  },
  {
    "path": "Makefile",
    "chars": 1031,
    "preview": ".PHONY: format sort lint\n\n# Variables\nISORT_OPTIONS = --profile black\nPROJECT_NAME := mem0ai\n\n# Default target\nall: form"
  },
  {
    "path": "README.md",
    "chars": 6767,
    "preview": "<p align=\"center\">\n  <a href=\"https://github.com/mem0ai/mem0\">\n    <img src=\"docs/images/banner-sm.png\" width=\"800px\" al"
  },
  {
    "path": "cookbooks/customer-support-chatbot.ipynb",
    "chars": 9586,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": 1,\n   \"metadata\": {},\n   \"outputs\": [],\n   \"source\": [\n "
  },
  {
    "path": "cookbooks/helper/__init__.py",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "cookbooks/helper/mem0_teachability.py",
    "chars": 7367,
    "preview": "# Copyright (c) 2023 - 2024, Owners of https://github.com/autogen-ai\n#\n# SPDX-License-Identifier: Apache-2.0\n#\n# Portion"
  },
  {
    "path": "cookbooks/mem0-autogen.ipynb",
    "chars": 59586,
    "preview": "{\n \"cells\": [\n  {\n   \"cell_type\": \"code\",\n   \"execution_count\": null,\n   \"id\": \"1e8a980a2e0b9a85\",\n   \"metadata\": {},\n  "
  },
  {
    "path": "docs/README.md",
    "chars": 957,
    "preview": "# Mintlify Starter Kit\n\nClick on `Use this template` to copy the Mintlify starter kit. The starter kit contains examples"
  },
  {
    "path": "docs/_snippets/async-memory-add.mdx",
    "chars": 211,
    "preview": "<Note type=\"info\">\n  📢 Heads up!\n  We're moving to async memory add for a faster experience.\n  If you signed up after Ju"
  },
  {
    "path": "docs/_snippets/blank-notif.mdx",
    "chars": 0,
    "preview": ""
  },
  {
    "path": "docs/_snippets/get-help.mdx",
    "chars": 404,
    "preview": "<CardGroup cols={3}>\n  <Card title=\"Discord\" icon=\"discord\" href=\"https://mem0.dev/DiD\" color=\"#7289DA\">\n    Join our co"
  },
  {
    "path": "docs/_snippets/paper-release.mdx",
    "chars": 129,
    "preview": "<Note type=\"info\">\n  <strong>🎉 Mem0 1.0.0 is here!</strong> Enhanced filtering, reranking, and smarter memory management"
  },
  {
    "path": "docs/api-reference/entities/delete-user.mdx",
    "chars": 84,
    "preview": "---\ntitle: 'Delete User'\nopenapi: delete /v2/entities/{entity_type}/{entity_id}/\n---"
  },
  {
    "path": "docs/api-reference/entities/get-users.mdx",
    "chars": 53,
    "preview": "---\ntitle: 'Get Users'\nopenapi: get /v1/entities/\n---"
  },
  {
    "path": "docs/api-reference/events/get-event.mdx",
    "chars": 260,
    "preview": "---\ntitle: 'Get Event'\nopenapi: get /v1/event/{event_id}/\n---\n\nRetrieve details about a specific event by passing its `e"
  },
  {
    "path": "docs/api-reference/events/get-events.mdx",
    "chars": 346,
    "preview": "---\ntitle: 'Get Events'\nopenapi: get /v1/events/\n---\n\nList recent events for your organization and project.\n\n## Use Case"
  },
  {
    "path": "docs/api-reference/memory/add-memories.mdx",
    "chars": 3430,
    "preview": "---\ntitle: 'Add Memories'\nopenapi: post /v1/memories/\n---\n\nAdd new facts, messages, or metadata to a user’s memory store"
  },
  {
    "path": "docs/api-reference/memory/batch-delete.mdx",
    "chars": 66,
    "preview": "---\ntitle: 'Batch Delete Memories'\nopenapi: delete /v1/batch/\n---\n"
  },
  {
    "path": "docs/api-reference/memory/batch-update.mdx",
    "chars": 62,
    "preview": "---\ntitle: 'Batch Update Memories'\nopenapi: put /v1/batch/\n---"
  },
  {
    "path": "docs/api-reference/memory/create-memory-export.mdx",
    "chars": 462,
    "preview": "---\ntitle: 'Create Memory Export'\nopenapi: post /v1/exports/\n---\n\nSubmit a job to create a structured export of memories"
  },
  {
    "path": "docs/api-reference/memory/delete-memories.mdx",
    "chars": 63,
    "preview": "---\ntitle: 'Delete Memories'\nopenapi: delete /v1/memories/\n---\n"
  },
  {
    "path": "docs/api-reference/memory/delete-memory.mdx",
    "chars": 72,
    "preview": "---\ntitle: 'Delete Memory'\nopenapi: delete /v1/memories/{memory_id}/\n---"
  },
  {
    "path": "docs/api-reference/memory/feedback.mdx",
    "chars": 54,
    "preview": "---\ntitle: 'Feedback'\nopenapi: post /v1/feedback/\n---\n"
  },
  {
    "path": "docs/api-reference/memory/get-memories.mdx",
    "chars": 2579,
    "preview": "---\ntitle: \"Get Memories\"\nopenapi: post /v2/memories/\n---\n\nThe v2 get memories API is powerful and flexible, allowing fo"
  },
  {
    "path": "docs/api-reference/memory/get-memory-export.mdx",
    "chars": 271,
    "preview": "---\ntitle: 'Get Memory Export'\nopenapi: post /v1/exports/get\n---\n\nRetrieve the latest structured memory export after sub"
  },
  {
    "path": "docs/api-reference/memory/get-memory.mdx",
    "chars": 66,
    "preview": "---\ntitle: 'Get Memory'\nopenapi: get /v1/memories/{memory_id}/\n---"
  },
  {
    "path": "docs/api-reference/memory/history-memory.mdx",
    "chars": 78,
    "preview": "---\ntitle: 'Memory History'\nopenapi: get /v1/memories/{memory_id}/history/\n---"
  },
  {
    "path": "docs/api-reference/memory/search-memories.mdx",
    "chars": 2435,
    "preview": "---\ntitle: 'Search Memories'\nopenapi: post /v2/memories/search/\n---\n\nThe v2 search API is powerful and flexible, allowin"
  },
  {
    "path": "docs/api-reference/memory/update-memory.mdx",
    "chars": 69,
    "preview": "---\ntitle: 'Update Memory'\nopenapi: put /v1/memories/{memory_id}/\n---"
  },
  {
    "path": "docs/api-reference/organization/add-org-member.mdx",
    "chars": 287,
    "preview": "---\ntitle: 'Add Member'\nopenapi: post /api/v1/orgs/organizations/{org_id}/members/\n---\n\nThe API provides two roles for o"
  },
  {
    "path": "docs/api-reference/organization/create-org.mdx",
    "chars": 78,
    "preview": "---\ntitle: 'Create Organization'\nopenapi: post /api/v1/orgs/organizations/\n---"
  },
  {
    "path": "docs/api-reference/organization/delete-org.mdx",
    "chars": 89,
    "preview": "---\ntitle: 'Delete Organization'\nopenapi: delete /api/v1/orgs/organizations/{org_id}/\n---"
  },
  {
    "path": "docs/api-reference/organization/get-org-members.mdx",
    "chars": 86,
    "preview": "---\ntitle: 'Get Members'\nopenapi: get /api/v1/orgs/organizations/{org_id}/members/\n---"
  },
  {
    "path": "docs/api-reference/organization/get-org.mdx",
    "chars": 83,
    "preview": "---\ntitle: 'Get Organization'\nopenapi: get /api/v1/orgs/organizations/{org_id}/\n---"
  },
  {
    "path": "docs/api-reference/organization/get-orgs.mdx",
    "chars": 75,
    "preview": "---\ntitle: 'Get Organizations'\nopenapi: get /api/v1/orgs/organizations/\n---"
  },
  {
    "path": "docs/api-reference/organizations-projects.mdx",
    "chars": 4999,
    "preview": "---\ntitle: Organizations & Projects\nicon: \"building\"\ndescription: \"Manage multi-tenant applications with organization an"
  },
  {
    "path": "docs/api-reference/project/add-project-member.mdx",
    "chars": 294,
    "preview": "---\ntitle: 'Add Member'\nopenapi: post /api/v1/orgs/organizations/{org_id}/projects/{project_id}/members/\n---\n\nThe API pr"
  },
  {
    "path": "docs/api-reference/project/create-project.mdx",
    "chars": 91,
    "preview": "---\ntitle: 'Create Project'\nopenapi: post /api/v1/orgs/organizations/{org_id}/projects/\n---"
  },
  {
    "path": "docs/api-reference/project/delete-project.mdx",
    "chars": 106,
    "preview": "---\ntitle: 'Delete Project'\nopenapi: delete /api/v1/orgs/organizations/{org_id}/projects/{project_id}/\n---"
  },
  {
    "path": "docs/api-reference/project/get-project-members.mdx",
    "chars": 108,
    "preview": "---\ntitle: 'Get Members'\nopenapi: get /api/v1/orgs/organizations/{org_id}/projects/{project_id}/members/\n---"
  },
  {
    "path": "docs/api-reference/project/get-project.mdx",
    "chars": 100,
    "preview": "---\ntitle: 'Get Project'\nopenapi: get /api/v1/orgs/organizations/{org_id}/projects/{project_id}/\n---"
  },
  {
    "path": "docs/api-reference/project/get-projects.mdx",
    "chars": 88,
    "preview": "---\ntitle: 'Get Projects'\nopenapi: get /api/v1/orgs/organizations/{org_id}/projects/\n---"
  },
  {
    "path": "docs/api-reference/webhook/create-webhook.mdx",
    "chars": 87,
    "preview": "---\ntitle: 'Create Webhook'\nopenapi: post /api/v1/webhooks/projects/{project_id}/\n---\n\n"
  },
  {
    "path": "docs/api-reference/webhook/delete-webhook.mdx",
    "chars": 79,
    "preview": "---\ntitle: 'Delete Webhook'\nopenapi: delete /api/v1/webhooks/{webhook_id}/\n---\n"
  },
  {
    "path": "docs/api-reference/webhook/get-webhook.mdx",
    "chars": 83,
    "preview": "---\ntitle: 'Get Webhook'\nopenapi: get /api/v1/webhooks/projects/{project_id}/\n---\n\n"
  },
  {
    "path": "docs/api-reference/webhook/update-webhook.mdx",
    "chars": 77,
    "preview": "---\ntitle: 'Update Webhook'\nopenapi: put /api/v1/webhooks/{webhook_id}/\n---\n\n"
  },
  {
    "path": "docs/api-reference.mdx",
    "chars": 3557,
    "preview": "---\ntitle: \"Overview\"\nicon: \"terminal\"\niconType: \"solid\"\ndescription: \"REST APIs for memory management, search, and enti"
  },
  {
    "path": "docs/changelog.mdx",
    "chars": 41207,
    "preview": "---\ntitle: \"Product Updates\"\nmode: \"wide\"\n---\n\n \n<Tabs>\n<Tab title=\"Python\">\n\n<Update label=\"2026-03-19\" description=\"v1"
  },
  {
    "path": "docs/components/embedders/config.mdx",
    "chars": 3660,
    "preview": "---\ntitle: Configurations\n---\n\n\nConfig in mem0 is a dictionary that specifies the settings for your embedding models. It"
  },
  {
    "path": "docs/components/embedders/models/aws_bedrock.mdx",
    "chars": 1961,
    "preview": "---\ntitle: AWS Bedrock\n---\n\nTo use AWS Bedrock embedding models, you need to have the appropriate AWS credentials and pe"
  },
  {
    "path": "docs/components/embedders/models/azure_openai.mdx",
    "chars": 5143,
    "preview": "---\ntitle: Azure OpenAI\n---\n\nTo use Azure OpenAI embedding models, set the `EMBEDDING_AZURE_OPENAI_API_KEY`, `EMBEDDING_"
  },
  {
    "path": "docs/components/embedders/models/google_AI.mdx",
    "chars": 2907,
    "preview": "---\ntitle: Google AI\n---\n\nTo use Google AI embedding models, set the `GOOGLE_API_KEY` environment variables. You can obt"
  },
  {
    "path": "docs/components/embedders/models/huggingface.mdx",
    "chars": 2137,
    "preview": "---\ntitle: Hugging Face\n---\n\nYou can use embedding models from Huggingface to run Mem0 locally.\n\n### Usage\n\n```python\nim"
  },
  {
    "path": "docs/components/embedders/models/langchain.mdx",
    "chars": 5571,
    "preview": "---\ntitle: LangChain\n---\n\nMem0 supports LangChain as a provider to access a wide range of embedding models. LangChain is"
  },
  {
    "path": "docs/components/embedders/models/lmstudio.mdx",
    "chars": 1275,
    "preview": "You can use embedding models from LM Studio to run Mem0 locally.\n\n### Usage\n\n```python\nimport os\nfrom mem0 import Memory"
  },
  {
    "path": "docs/components/embedders/models/ollama.mdx",
    "chars": 2342,
    "preview": "You can use embedding models from Ollama to run Mem0 locally.\n\n### Usage\n\n<CodeGroup>\n```python Python\nimport os\nfrom me"
  },
  {
    "path": "docs/components/embedders/models/openai.mdx",
    "chars": 1974,
    "preview": "---\ntitle: OpenAI\n---\n\nTo use OpenAI embedding models, set the `OPENAI_API_KEY` environment variable. You can obtain the"
  },
  {
    "path": "docs/components/embedders/models/together.mdx",
    "chars": 1518,
    "preview": "---\ntitle: Together\n---\n\nTo use Together embedding models, set the `TOGETHER_API_KEY` environment variable. You can obta"
  },
  {
    "path": "docs/components/embedders/models/vertexai.mdx",
    "chars": 2709,
    "preview": "### Vertex AI\n\nTo use Google Cloud's Vertex AI for text embedding models, set the `GOOGLE_APPLICATION_CREDENTIALS` envir"
  },
  {
    "path": "docs/components/embedders/overview.mdx",
    "chars": 1531,
    "preview": "---\ntitle: Overview\n---\n\nMem0 offers support for various embedding models, allowing users to choose the one that best su"
  },
  {
    "path": "docs/components/llms/config.mdx",
    "chars": 6422,
    "preview": "---\ntitle: Configurations\n---\n\n## How to define configurations?\n\n<Tabs>\n  <Tab title=\"Python\">\n    The `config` is defin"
  },
  {
    "path": "docs/components/llms/models/anthropic.mdx",
    "chars": 2107,
    "preview": "---\ntitle: Anthropic\n---\n\n\nTo use Anthropic's models, please set the `ANTHROPIC_API_KEY` which you find on their [Accoun"
  },
  {
    "path": "docs/components/llms/models/aws_bedrock.mdx",
    "chars": 1621,
    "preview": "---\ntitle: AWS Bedrock\n---\n\n### Setup\n- Before using the AWS Bedrock LLM, make sure you have the appropriate model acces"
  },
  {
    "path": "docs/components/llms/models/azure_openai.mdx",
    "chars": 6110,
    "preview": "---\ntitle: Azure OpenAI\n---\n\n<Note> Mem0 Now Supports Azure OpenAI Models in TypeScript SDK </Note>\n\nTo use Azure OpenAI"
  },
  {
    "path": "docs/components/llms/models/deepseek.mdx",
    "chars": 1675,
    "preview": "---\ntitle: DeepSeek\n---\n\nTo use DeepSeek LLM models, you have to set the `DEEPSEEK_API_KEY` environment variable. You ca"
  },
  {
    "path": "docs/components/llms/models/google_AI.mdx",
    "chars": 2588,
    "preview": "---\ntitle: Google AI\n---\n\nTo use the Gemini model, set the `GOOGLE_API_KEY` environment variable. You can obtain the Goo"
  },
  {
    "path": "docs/components/llms/models/groq.mdx",
    "chars": 2310,
    "preview": "---\ntitle: Groq\n---\n\n[Groq](https://groq.com/) is the creator of the world's first Language Processing Unit (LPU), provi"
  },
  {
    "path": "docs/components/llms/models/langchain.mdx",
    "chars": 3624,
    "preview": "---\ntitle: LangChain\n---\n\n\nMem0 supports LangChain as a provider to access a wide range of LLM models. LangChain is a fr"
  },
  {
    "path": "docs/components/llms/models/litellm.mdx",
    "chars": 1280,
    "preview": "[Litellm](https://litellm.vercel.app/docs/) is compatible with over 100 large language models (LLMs), all using a standa"
  },
  {
    "path": "docs/components/llms/models/lmstudio.mdx",
    "chars": 2859,
    "preview": "---\ntitle: LM Studio\n---\n\nTo use LM Studio with Mem0, you'll need to have LM Studio running locally with its server enab"
  },
  {
    "path": "docs/components/llms/models/mistral_AI.mdx",
    "chars": 2198,
    "preview": "---\ntitle: Mistral AI\n---\n\nTo use mistral's models, please obtain the Mistral AI api key from their [console](https://co"
  },
  {
    "path": "docs/components/llms/models/ollama.mdx",
    "chars": 1988,
    "preview": "---\ntitle: Ollama\n---\n\nYou can use LLMs from Ollama to run Mem0 locally. These [models](https://ollama.com/search?c=tool"
  },
  {
    "path": "docs/components/llms/models/openai.mdx",
    "chars": 2966,
    "preview": "---\ntitle: OpenAI\n---\n\nTo use OpenAI LLM models, you have to set the `OPENAI_API_KEY` environment variable. You can obta"
  },
  {
    "path": "docs/components/llms/models/sarvam.mdx",
    "chars": 2382,
    "preview": "---\ntitle: Sarvam AI\n---\n\n**Sarvam AI** is an Indian AI company developing language models with a focus on Indian langua"
  },
  {
    "path": "docs/components/llms/models/together.mdx",
    "chars": 1287,
    "preview": "---\ntitle: Together\n---\n\nTo use Together LLM models, you have to set the `TOGETHER_API_KEY` environment variable. You ca"
  },
  {
    "path": "docs/components/llms/models/vllm.mdx",
    "chars": 3513,
    "preview": "---\ntitle: vLLM\n---\n\n[vLLM](https://docs.vllm.ai/) is a high-performance inference engine for large language models that"
  },
  {
    "path": "docs/components/llms/models/xAI.mdx",
    "chars": 1494,
    "preview": "---\ntitle: xAI\n---\n\n[xAI](https://x.ai/) is a new AI company founded by Elon Musk that develops large language models, i"
  },
  {
    "path": "docs/components/llms/overview.mdx",
    "chars": 2789,
    "preview": "---\ntitle: Overview\n---\n\nMem0 includes built-in support for various popular large language models. Memory can utilize th"
  },
  {
    "path": "docs/components/rerankers/config.mdx",
    "chars": 4978,
    "preview": "---\ntitle: Config\ndescription: \"Configuration options for rerankers in Mem0\"\n---\n\n## Common Configuration Parameters\n\nAl"
  },
  {
    "path": "docs/components/rerankers/custom-prompts.mdx",
    "chars": 5398,
    "preview": "---\ntitle: Custom Prompts\n---\n\nWhen using LLM rerankers, you can customize the prompts used for ranking to better suit y"
  },
  {
    "path": "docs/components/rerankers/models/cohere.mdx",
    "chars": 3920,
    "preview": "---\ntitle: Cohere\ndescription: \"Reranking with Cohere\"\n---\n\nCohere provides enterprise-grade reranking models with excel"
  },
  {
    "path": "docs/components/rerankers/models/huggingface.mdx",
    "chars": 7514,
    "preview": "---\ntitle: Hugging Face Reranker\ndescription: 'Access thousands of reranking models from Hugging Face Hub'\n---\n\n## Overv"
  },
  {
    "path": "docs/components/rerankers/models/llm.mdx",
    "chars": 6027,
    "preview": "---\ntitle: LLM as Reranker\ndescription: 'Flexible reranking using LLMs'\n---\n\n<Warning>\n**This page has been superseded.*"
  },
  {
    "path": "docs/components/rerankers/models/llm_reranker.mdx",
    "chars": 12059,
    "preview": "---\ntitle: LLM Reranker\ndescription: 'Use any language model as a reranker with custom prompts'\n---\n\n## Overview\n\nThe LL"
  },
  {
    "path": "docs/components/rerankers/models/sentence_transformer.mdx",
    "chars": 4549,
    "preview": "---\ntitle: Sentence Transformer\ndescription: 'Local reranking with HuggingFace cross-encoder models'\n---\n\nSentence Trans"
  },
  {
    "path": "docs/components/rerankers/models/zero_entropy.mdx",
    "chars": 3120,
    "preview": "---\ntitle: Zero Entropy\ndescription: 'Neural reranking with Zero Entropy'\n---\n\n[Zero Entropy](https://www.zeroentropy.de"
  },
  {
    "path": "docs/components/rerankers/optimization.mdx",
    "chars": 7743,
    "preview": "---\ntitle: Performance Optimization\n---\n\nOptimizing reranker performance is crucial for maintaining fast search response"
  },
  {
    "path": "docs/components/rerankers/overview.mdx",
    "chars": 2698,
    "preview": "---\ntitle: Overview\ndescription: 'Pick the right reranker path to boost Mem0 search relevance.'\n---\n\nMem0 rerankers resc"
  },
  {
    "path": "docs/components/vectordbs/config.mdx",
    "chars": 4498,
    "preview": "---\ntitle: Configurations\n---\n\n## How to define configurations?\n\nThe `config` is defined as an object with two main keys"
  },
  {
    "path": "docs/components/vectordbs/dbs/azure.mdx",
    "chars": 8325,
    "preview": "---\ntitle: Azure AI Search\n---\n\n[Azure AI Search](https://learn.microsoft.com/azure/search/search-what-is-azure-search/)"
  },
  {
    "path": "docs/components/vectordbs/dbs/azure_mysql.mdx",
    "chars": 4070,
    "preview": "---\ntitle: Azure MySQL\n---\n\n[Azure Database for MySQL](https://azure.microsoft.com/products/mysql) is a fully managed re"
  },
  {
    "path": "docs/components/vectordbs/dbs/baidu.mdx",
    "chars": 2390,
    "preview": "---\ntitle: Baidu VectorDB (Mochow)\n---\n\n[Baidu VectorDB](https://cloud.baidu.com/doc/VDB/index.html) is an enterprise-le"
  },
  {
    "path": "docs/components/vectordbs/dbs/cassandra.mdx",
    "chars": 5397,
    "preview": "---\ntitle: Apache Cassandra\n---\n\n[Apache Cassandra](https://cassandra.apache.org/) is a highly scalable, distributed NoS"
  },
  {
    "path": "docs/components/vectordbs/dbs/chroma.mdx",
    "chars": 1838,
    "preview": "[Chroma](https://www.trychroma.com/) is an AI-native open-source vector database that simplifies building LLM apps by pr"
  },
  {
    "path": "docs/components/vectordbs/dbs/databricks.mdx",
    "chars": 5415,
    "preview": "[Databricks Vector Search](https://docs.databricks.com/en/generative-ai/vector-search.html) is a serverless similarity s"
  },
  {
    "path": "docs/components/vectordbs/dbs/elasticsearch.mdx",
    "chars": 4222,
    "preview": "[Elasticsearch](https://www.elastic.co/) is a distributed, RESTful search and analytics engine that can efficiently stor"
  },
  {
    "path": "docs/components/vectordbs/dbs/faiss.mdx",
    "chars": 2954,
    "preview": "[FAISS](https://github.com/facebookresearch/faiss) is a library for efficient similarity search and clustering of dense "
  },
  {
    "path": "docs/components/vectordbs/dbs/langchain.mdx",
    "chars": 4104,
    "preview": "---\ntitle: LangChain\n---\n\nMem0 supports LangChain as a provider for vector store integration. LangChain provides a unifi"
  },
  {
    "path": "docs/components/vectordbs/dbs/milvus.mdx",
    "chars": 1629,
    "preview": "[Milvus](https://milvus.io/) is an open-source vector database that suits AI applications of every size, from running a "
  },
  {
    "path": "docs/components/vectordbs/dbs/mongodb.mdx",
    "chars": 1677,
    "preview": "# MongoDB\n\n[MongoDB](https://www.mongodb.com/) is a versatile document database that supports vector search capabilities"
  },
  {
    "path": "docs/components/vectordbs/dbs/neptune_analytics.mdx",
    "chars": 1506,
    "preview": "# Neptune Analytics Vector Store\n\n[Neptune Analytics](https://docs.aws.amazon.com/neptune-analytics/latest/userguide/wha"
  },
  {
    "path": "docs/components/vectordbs/dbs/opensearch.mdx",
    "chars": 2770,
    "preview": "[OpenSearch](https://opensearch.org/) is an enterprise-grade search and observability suite that brings order to unstruc"
  },
  {
    "path": "docs/components/vectordbs/dbs/pgvector.mdx",
    "chars": 3452,
    "preview": "[pgvector](https://github.com/pgvector/pgvector) is an open-source vector similarity search extension for Postgres. Afte"
  },
  {
    "path": "docs/components/vectordbs/dbs/pinecone.mdx",
    "chars": 4088,
    "preview": "[Pinecone](https://www.pinecone.io/) is a fully managed vector database designed for machine learning applications, offe"
  },
  {
    "path": "docs/components/vectordbs/dbs/qdrant.mdx",
    "chars": 3200,
    "preview": "[Qdrant](https://qdrant.tech/) is an open-source vector search engine. It is designed to work with large-scale datasets "
  },
  {
    "path": "docs/components/vectordbs/dbs/redis.mdx",
    "chars": 2900,
    "preview": "[Redis](https://redis.io/) is a scalable, real-time database that can store, search, and analyze vector data.\n\n### Insta"
  },
  {
    "path": "docs/components/vectordbs/dbs/s3_vectors.mdx",
    "chars": 3267,
    "preview": "---\ntitle: Amazon S3 Vectors\n---\n\n[Amazon S3 Vectors](https://aws.amazon.com/s3/features/vectors/) is a purpose-built, c"
  },
  {
    "path": "docs/components/vectordbs/dbs/supabase.mdx",
    "chars": 5824,
    "preview": "[Supabase](https://supabase.com/) is an open-source Firebase alternative that provides a PostgreSQL database with pgvect"
  },
  {
    "path": "docs/components/vectordbs/dbs/upstash-vector.mdx",
    "chars": 2152,
    "preview": "[Upstash Vector](https://upstash.com/docs/vector) is a serverless vector database with built-in embedding models.\n\n### U"
  },
  {
    "path": "docs/components/vectordbs/dbs/valkey.mdx",
    "chars": 1816,
    "preview": "# Valkey Vector Store\n\n[Valkey](https://valkey.io/) is an open source (BSD) high-performance key/value datastore that su"
  },
  {
    "path": "docs/components/vectordbs/dbs/vectorize.mdx",
    "chars": 1476,
    "preview": "[Cloudflare Vectorize](https://developers.cloudflare.com/vectorize/) is a vector database offering from Cloudflare, allo"
  },
  {
    "path": "docs/components/vectordbs/dbs/vertex_ai.mdx",
    "chars": 1923,
    "preview": "---\ntitle: Vertex AI Vector Search\n---\n\n\n### Usage\n\nTo use Google Cloud Vertex AI Vector Search with `mem0`, you need to"
  },
  {
    "path": "docs/components/vectordbs/dbs/weaviate.mdx",
    "chars": 1566,
    "preview": "[Weaviate](https://weaviate.io/) is an open-source vector search engine. It allows efficient storage and retrieval of hi"
  },
  {
    "path": "docs/components/vectordbs/overview.mdx",
    "chars": 2672,
    "preview": "---\ntitle: Overview\n---\n\nMem0 includes built-in support for various popular databases. Memory can utilize the database p"
  },
  {
    "path": "docs/contributing/development.mdx",
    "chars": 2419,
    "preview": "---\ntitle: Development\nicon: \"code\"\n---\n\n# Development Contributions\n\nWe strive to make contributions **easy, collaborat"
  },
  {
    "path": "docs/contributing/documentation.mdx",
    "chars": 933,
    "preview": "---\ntitle: Documentation\nicon: \"book\"\n---\n\n# Documentation Contributions\n\n## Prerequisites\n\nBefore getting started, ensu"
  },
  {
    "path": "docs/cookbooks/companions/ai-tutor.mdx",
    "chars": 4190,
    "preview": "---\ntitle: Personalized AI Tutor\ndescription: \"Keep student progress and preferences persistent across tutoring sessions"
  },
  {
    "path": "docs/cookbooks/companions/local-companion-ollama.mdx",
    "chars": 2925,
    "preview": "---\ntitle: Self-Hosted AI Companion\ndescription: \"Run Mem0 end-to-end on your machine using Ollama-powered LLMs and embe"
  },
  {
    "path": "docs/cookbooks/companions/nodejs-companion.mdx",
    "chars": 4513,
    "preview": "---\ntitle: Build a Node.js Companion\ndescription: \"Build a JavaScript fitness coach that remembers user goals run after "
  },
  {
    "path": "docs/cookbooks/companions/quickstart-demo.mdx",
    "chars": 2825,
    "preview": "---\ntitle: Interactive Memory Demo\ndescription: \"Spin up the showcase companion app to see Mem0 memories in action.\"\n---"
  },
  {
    "path": "docs/cookbooks/companions/travel-assistant.mdx",
    "chars": 6809,
    "preview": "---\ntitle: Smart Travel Assistant\ndescription: \"Plan itineraries that remember traveler preferences across trips.\"\n---\n\n"
  },
  {
    "path": "docs/cookbooks/companions/voice-companion-openai.mdx",
    "chars": 17977,
    "preview": "---\ntitle: Voice-First AI Companion\ndescription: \"Pair the OpenAI Agents SDK with Mem0 to build a voice assistant that r"
  },
  {
    "path": "docs/cookbooks/companions/youtube-research.mdx",
    "chars": 3101,
    "preview": "---\ntitle: Research Assistant for YouTube\ndescription: \"Layer personalized context over any video using the Mem0 YouTube"
  },
  {
    "path": "docs/cookbooks/essentials/building-ai-companion.mdx",
    "chars": 14630,
    "preview": "---\ntitle: Build a Companion with Mem0\ndescription: \"Spin up a fitness coach that remembers goals, adapts tone, and keep"
  },
  {
    "path": "docs/cookbooks/essentials/choosing-memory-architecture-vector-vs-graph.mdx",
    "chars": 10386,
    "preview": "---\ntitle: Choose Vector vs Graph Memory\ndescription: \"Blend vector search with graph relationships to answer multi-hop "
  },
  {
    "path": "docs/cookbooks/essentials/controlling-memory-ingestion.mdx",
    "chars": 14535,
    "preview": "---\ntitle: Control Memory Ingestion\ndescription: \"Filter speculation, enforce formats, and gate low-confidence data befo"
  },
  {
    "path": "docs/cookbooks/essentials/entity-partitioning-playbook.mdx",
    "chars": 9490,
    "preview": "---\ntitle: Partition Memories by Entity\ndescription: Keep memories separate by tagging each write and query with user, a"
  },
  {
    "path": "docs/cookbooks/essentials/exporting-memories.mdx",
    "chars": 6895,
    "preview": "---\ntitle: Export Stored Memories\ndescription: \"Retrieve, review, and migrate user memories with structured exports.\"\n--"
  },
  {
    "path": "docs/cookbooks/essentials/memory-expiration-short-and-long-term.mdx",
    "chars": 7421,
    "preview": "---\ntitle: Set Memory Expiration\ndescription: \"Define short-term versus long-term retention so the store stays fresh.\"\n-"
  },
  {
    "path": "docs/cookbooks/essentials/tagging-and-organizing-memories.mdx",
    "chars": 7581,
    "preview": "---\ntitle: Tag and Organize Memories\ndescription: \"Let Mem0 auto-categorize support data so teams retrieve the right fac"
  },
  {
    "path": "docs/cookbooks/frameworks/chrome-extension.mdx",
    "chars": 3712,
    "preview": "---\ntitle: Browser Extension Memory\ndescription: \"Add Mem0's universal memory layer to Chrome chat surfaces.\"\n---\n\n\nEnha"
  },
  {
    "path": "docs/cookbooks/frameworks/eliza-os-character.mdx",
    "chars": 2393,
    "preview": "---\ntitle: Persistent Eliza Characters\ndescription: \"Bring persistent personality to Eliza OS agents using Mem0.\"\n---\n\n\n"
  },
  {
    "path": "docs/cookbooks/frameworks/gemini-3-with-mem0-mcp.mdx",
    "chars": 8402,
    "preview": "---\ntitle: \"Gemini 3 with Mem0 MCP\"\ndescription: \"Create snappy, smart, memory-aware agents by pairing Gemini 3 with Mem"
  },
  {
    "path": "docs/cookbooks/frameworks/llamaindex-multiagent.mdx",
    "chars": 13793,
    "preview": "---\ntitle: Multi-Agent Collaboration\ndescription: \"Share a persistent memory layer across collaborating LlamaIndex agent"
  },
  {
    "path": "docs/cookbooks/frameworks/llamaindex-react.mdx",
    "chars": 6088,
    "preview": "---\ntitle: ReAct Agents with Memory\ndescription: \"Teach a ReAct agent to store and recall context via Mem0.\"\n---\n\n\nCreat"
  },
  {
    "path": "docs/cookbooks/frameworks/mirofish-swarm-memory.mdx",
    "chars": 33184,
    "preview": "---\ntitle: MiroFish Swarm Memory\ndescription: \"Build a multi-agent swarm simulation with graph-powered memory using Mem0"
  },
  {
    "path": "docs/cookbooks/frameworks/multimodal-retrieval.mdx",
    "chars": 2341,
    "preview": "---\ntitle: Visual Memory Retrieval\ndescription: \"Store and recall visual context alongside text conversations.\"\n---\n\n\nEn"
  },
  {
    "path": "docs/cookbooks/integrations/agents-sdk-tool.mdx",
    "chars": 6486,
    "preview": "---\ntitle: Memory-Powered Agent SDK\ndescription: \"Expose Mem0 memories as callable tools inside OpenAI agent workflows.\""
  },
  {
    "path": "docs/cookbooks/integrations/aws-bedrock.mdx",
    "chars": 4366,
    "preview": "---\ntitle: Bedrock with Persistent Memory\ndescription: \"Pair Mem0 with AWS Bedrock, OpenSearch, and Neptune for a manage"
  },
  {
    "path": "docs/cookbooks/integrations/healthcare-google-adk.mdx",
    "chars": 9889,
    "preview": "---\ntitle: Healthcare Coach with ADK\ndescription: \"Guide patients with an assistant that remembers history across ADK se"
  },
  {
    "path": "docs/cookbooks/integrations/mastra-agent.mdx",
    "chars": 4220,
    "preview": "---\ntitle: Persistent Mastra Agents\ndescription: \"Extend Mastra agents with persistent memories powered by Mem0.\"\n---\n\n\n"
  },
  {
    "path": "docs/cookbooks/integrations/neptune-analytics.mdx",
    "chars": 3941,
    "preview": "---\ntitle: Graph Memory on Neptune\ndescription: \"Combine Mem0 graph memory with AWS Neptune Analytics and Bedrock.\"\n---\n"
  },
  {
    "path": "docs/cookbooks/integrations/openai-tool-calls.mdx",
    "chars": 10013,
    "preview": "---\ntitle: Memory as OpenAI Tool\ndescription: \"Wire Mem0 memories into OpenAI's inbuilt function-calling flow.\"\n---\n\n\nIn"
  },
  {
    "path": "docs/cookbooks/integrations/tavily-search.mdx",
    "chars": 7905,
    "preview": "---\ntitle: Search with Personal Context\ndescription: \"Blend Tavily's realtime results with personal context stored in Me"
  },
  {
    "path": "docs/cookbooks/operations/content-writing.mdx",
    "chars": 6464,
    "preview": "---\ntitle: Content Creation Workflow\ndescription: \"Store voice guidelines once and apply them across every draft.\"\n---\n\n"
  },
  {
    "path": "docs/cookbooks/operations/deep-research.mdx",
    "chars": 2787,
    "preview": "---\ntitle: Multi-Session Research Agent\ndescription: \"Run multi-session investigations that remember past findings and p"
  },
  {
    "path": "docs/cookbooks/operations/email-automation.mdx",
    "chars": 6841,
    "preview": "---\ntitle: Automated Email Intelligence\ndescription: \"Capture, categorize, and recall inbox threads using persistent mem"
  },
  {
    "path": "docs/cookbooks/operations/support-inbox.mdx",
    "chars": 4071,
    "preview": "---\ntitle: Memory-Powered Support Agent\ndescription: \"Build a support assistant that keeps past tickets and resolutions "
  },
  {
    "path": "docs/cookbooks/operations/team-task-agent.mdx",
    "chars": 4634,
    "preview": "---\ntitle: Collaborative Task Assistant\ndescription: \"Coordinate multi-user projects with shared memories and roles.\"\n--"
  },
  {
    "path": "docs/cookbooks/overview.mdx",
    "chars": 6268,
    "preview": "---\ntitle: Overview\ndescription: How to use mem0 in your existing applications?\n---\n\nWith Mem0, you can create stateful "
  },
  {
    "path": "docs/core-concepts/memory-operations/add.mdx",
    "chars": 7114,
    "preview": "---\ntitle: Add Memory\ndescription: Add memory into the Mem0 platform by storing user-assistant interactions and facts fo"
  },
  {
    "path": "docs/core-concepts/memory-operations/delete.mdx",
    "chars": 7197,
    "preview": "---\ntitle: Delete Memory\ndescription: Remove memories from Mem0 either individually, in bulk, or via filters.\nicon: \"tra"
  },
  {
    "path": "docs/core-concepts/memory-operations/search.mdx",
    "chars": 8099,
    "preview": "---\ntitle: Search Memory\ndescription: Retrieve relevant memories from Mem0 using powerful semantic and filtered search c"
  },
  {
    "path": "docs/core-concepts/memory-operations/update.mdx",
    "chars": 5479,
    "preview": "---\ntitle: Update Memory\ndescription: Modify an existing memory by updating its content or metadata.\nicon: \"pen-to-squar"
  },
  {
    "path": "docs/core-concepts/memory-types.mdx",
    "chars": 5106,
    "preview": "---\ntitle: Memory Types\ndescription: \"See how Mem0 layers conversation, session, and user memories to keep agents contex"
  },
  {
    "path": "docs/docs.json",
    "chars": 37940,
    "preview": "{\n  \"$schema\": \"https://mintlify.com/docs.json\",\n  \"name\": \"Mem0\",\n  \"description\": \"Mem0 is a self-improving memory lay"
  },
  {
    "path": "docs/integrations/agentops.mdx",
    "chars": 6240,
    "preview": "---\ntitle: AgentOps\n---\n\nIntegrate [**Mem0**](https://github.com/mem0ai/mem0) with [AgentOps](https://agentops.ai), a co"
  },
  {
    "path": "docs/integrations/agno.mdx",
    "chars": 6730,
    "preview": "---\ntitle: Agno\n---\n\nThis integration of [**Mem0**](https://github.com/mem0ai/mem0) with [Agno](https://github.com/agno-"
  },
  {
    "path": "docs/integrations/autogen.mdx",
    "chars": 4747,
    "preview": "---\ntitle: AutoGen\n---\n\nBuild conversational AI agents with memory capabilities. This integration combines AutoGen for c"
  },
  {
    "path": "docs/integrations/aws-bedrock.mdx",
    "chars": 3831,
    "preview": "---\ntitle: AWS Bedrock\n---\n\nThis integration demonstrates how to use **Mem0** with **AWS Bedrock** and **Amazon OpenSear"
  },
  {
    "path": "docs/integrations/camel-ai.mdx",
    "chars": 3811,
    "preview": "---\ntitle: Camel AI\ndescription: \"Plug Mem0 cloud memory into Camel's agents with the built‑in Mem0Storage.\"\npartnerBadg"
  },
  {
    "path": "docs/integrations/crewai.mdx",
    "chars": 5053,
    "preview": "---\ntitle: CrewAI\n---\n\nBuild an AI system that combines CrewAI's agent-based architecture with Mem0's memory capabilitie"
  },
  {
    "path": "docs/integrations/dify.mdx",
    "chars": 2007,
    "preview": "---\ntitle: Dify\n---\n\n# Integrating Mem0 with Dify AI\n\nMem0 brings a robust memory layer to Dify AI, empowering your AI a"
  },
  {
    "path": "docs/integrations/elevenlabs.mdx",
    "chars": 15135,
    "preview": "---\ntitle: ElevenLabs\n---\n\nCreate voice-based conversational AI agents with memory capabilities by integrating ElevenLab"
  },
  {
    "path": "docs/integrations/flowise.mdx",
    "chars": 4557,
    "preview": "---\ntitle: Flowise\n---\n\nThe [**Mem0 Memory**](https://github.com/mem0ai/mem0) integration with [Flowise](https://github."
  },
  {
    "path": "docs/integrations/google-ai-adk.mdx",
    "chars": 10198,
    "preview": "---\ntitle: Google ADK\n---\n\nIntegrate [**Mem0**](https://github.com/mem0ai/mem0) with [Google ADK (Agent Development Kit)"
  },
  {
    "path": "docs/integrations/keywords.mdx",
    "chars": 4364,
    "preview": "---\ntitle: Keywords AI\n---\n\nBuild AI applications with persistent memory and comprehensive LLM observability by integrat"
  },
  {
    "path": "docs/integrations/langchain-tools.mdx",
    "chars": 9078,
    "preview": "---\ntitle: Langchain Tools\ndescription: 'Integrate Mem0 with LangChain tools to enable AI agents to store, search, and m"
  },
  {
    "path": "docs/integrations/langchain.mdx",
    "chars": 5623,
    "preview": "---\ntitle: Langchain\n---\n\nBuild a personalized Travel Agent AI using LangChain for conversation flow and Mem0 for memory"
  },
  {
    "path": "docs/integrations/langgraph.mdx",
    "chars": 5522,
    "preview": "---\ntitle: LangGraph\n---\n\nBuild a personalized Customer Support AI Agent using LangGraph for conversation flow and Mem0 "
  },
  {
    "path": "docs/integrations/livekit.mdx",
    "chars": 8793,
    "preview": "---\ntitle: Livekit\n---\n\nThis guide demonstrates how to create a memory-enabled voice assistant using LiveKit, Deepgram, "
  },
  {
    "path": "docs/integrations/llama-index.mdx",
    "chars": 5765,
    "preview": "---\ntitle: LlamaIndex\n---\n\nLlamaIndex supports Mem0 as a [memory store](https://llamahub.ai/l/memory/llama-index-memory-"
  },
  {
    "path": "docs/integrations/mastra.mdx",
    "chars": 4610,
    "preview": "---\ntitle: Mastra\n---\n\nThe [**Mastra**](https://mastra.ai/) integration demonstrates how to use Mastra's agent system wi"
  },
  {
    "path": "docs/integrations/openai-agents-sdk.mdx",
    "chars": 7709,
    "preview": "---\ntitle: OpenAI Agents SDK\n---\n\nIntegrate [**Mem0**](https://github.com/mem0ai/mem0) with [OpenAI Agents SDK](https://"
  },
  {
    "path": "docs/integrations/openclaw.mdx",
    "chars": 7629,
    "preview": "---\ntitle: OpenClaw\n---\n\nAdd long-term memory to [OpenClaw](https://github.com/openclaw/openclaw) agents with the `@mem0"
  },
  {
    "path": "docs/integrations/pipecat.mdx",
    "chars": 7215,
    "preview": "---\ntitle: 'Pipecat'\ndescription: 'Integrate Mem0 with Pipecat for conversational memory in AI agents'\n---\n\n# Pipecat In"
  },
  {
    "path": "docs/integrations/raycast.mdx",
    "chars": 1869,
    "preview": "---\ntitle: \"Raycast Extension\"\ndescription: \"Mem0 Raycast extension for intelligent memory management\"\n---\n\nMem0 is a se"
  },
  {
    "path": "docs/integrations/vercel-ai-sdk.mdx",
    "chars": 10055,
    "preview": "---\ntitle: Vercel AI SDK\n---\n\nThe [**Mem0 AI SDK Provider**](https://www.npmjs.com/package/@mem0/vercel-ai-provider) is "
  },
  {
    "path": "docs/integrations.mdx",
    "chars": 31557,
    "preview": "---\ntitle: Overview\ndescription: How to integrate Mem0 into other frameworks\n---\n\nMem0 seamlessly integrates with popula"
  },
  {
    "path": "docs/introduction.mdx",
    "chars": 8225,
    "preview": "---\ntitle: \"Welcome to Mem0\"\ndescription: \"Memory layer for AI agents\"\nmode: \"custom\"\n---\n\n{/* debug: welcome-layout-v2 "
  },
  {
    "path": "docs/llms.txt",
    "chars": 20080,
    "preview": "# Mem0\n\n> Mem0 is a self-improving memory layer for LLM applications, enabling personalized AI experiences that retain c"
  },
  {
    "path": "docs/migration/api-changes.mdx",
    "chars": 12722,
    "preview": "---\ntitle: API Reference Changes\ndescription: 'Complete API changes between v0.x and v1.0.0 Beta'\nicon: \"code\"\niconType:"
  },
  {
    "path": "docs/migration/breaking-changes.mdx",
    "chars": 8457,
    "preview": "---\ntitle: Breaking Changes in v1.0.0 \ndescription: 'Complete list of breaking changes when upgrading from v0.x to v1.0."
  },
  {
    "path": "docs/migration/oss-to-platform.mdx",
    "chars": 12208,
    "preview": "---\ntitle: \"Migrate from Open Source to Platform\"\ndescription: \"Migrate your Mem0 Open Source implementation to Mem0 Pla"
  },
  {
    "path": "docs/migration/v0-to-v1.mdx",
    "chars": 10505,
    "preview": "---\ntitle: Migrating from v0.x to v1.0.0 \ndescription: 'Complete guide to upgrade your Mem0 implementation to version 1."
  },
  {
    "path": "docs/open-source/configuration.mdx",
    "chars": 4383,
    "preview": "---\ntitle: \"Configure the OSS Stack\"\ndescription: \"Wire up Mem0 OSS with your preferred LLM, vector store, embedder, and"
  }
]

// ... and 1380 more files (download for full content)
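Each entry in the index above is a JSON object with a `path`, a `chars` count (file size in characters), and a truncated `preview`. A minimal sketch of how such an index could be consumed, assuming the array has been saved locally (the records below are sample entries copied from the index, and the `summarize` helper is hypothetical, not part of GitExtract):

```python
# Sample records in the same shape as the file index above.
records = [
    {"path": "docs/integrations/langchain.mdx", "chars": 5623, "preview": "---\ntitle: Langchain"},
    {"path": "docs/integrations/langgraph.mdx", "chars": 5522, "preview": "---\ntitle: LangGraph"},
    {"path": "docs/introduction.mdx", "chars": 8225, "preview": "---\ntitle: Welcome to Mem0"},
]

def summarize(entries):
    """Group entries by their first two path segments and total the character counts."""
    totals = {}
    for entry in entries:
        # e.g. "docs/integrations/langchain.mdx" -> "docs/integrations"
        section = "/".join(entry["path"].split("/")[:2])
        totals[section] = totals.get(section, 0) + entry["chars"]
    return totals

print(summarize(records))
```

A summary like this can help decide which directories to include when the full 1.7M-token dump exceeds a model's context window.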

About this extraction

This page contains the full source code of the mem0ai/mem0 GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 1580 files (6.2 MB), approximately 1.7M tokens, and a symbol index with 4378 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
