Full Code of 666ghj/MiroFish for AI

Repository: 666ghj/MiroFish
Branch: main
Commit: 1536a7933450
Files: 71
Total size: 1.2 MB

Directory structure:
MiroFish/

├── .dockerignore
├── .github/
│   └── workflows/
│       └── docker-image.yml
├── .gitignore
├── Dockerfile
├── LICENSE
├── README-EN.md
├── README.md
├── backend/
│   ├── app/
│   │   ├── __init__.py
│   │   ├── api/
│   │   │   ├── __init__.py
│   │   │   ├── graph.py
│   │   │   ├── report.py
│   │   │   └── simulation.py
│   │   ├── config.py
│   │   ├── models/
│   │   │   ├── __init__.py
│   │   │   ├── project.py
│   │   │   └── task.py
│   │   ├── services/
│   │   │   ├── __init__.py
│   │   │   ├── graph_builder.py
│   │   │   ├── oasis_profile_generator.py
│   │   │   ├── ontology_generator.py
│   │   │   ├── report_agent.py
│   │   │   ├── simulation_config_generator.py
│   │   │   ├── simulation_ipc.py
│   │   │   ├── simulation_manager.py
│   │   │   ├── simulation_runner.py
│   │   │   ├── text_processor.py
│   │   │   ├── zep_entity_reader.py
│   │   │   ├── zep_graph_memory_updater.py
│   │   │   └── zep_tools.py
│   │   └── utils/
│   │       ├── __init__.py
│   │       ├── file_parser.py
│   │       ├── llm_client.py
│   │       ├── logger.py
│   │       ├── retry.py
│   │       └── zep_paging.py
│   ├── pyproject.toml
│   ├── requirements.txt
│   ├── run.py
│   └── scripts/
│       ├── action_logger.py
│       ├── run_parallel_simulation.py
│       ├── run_reddit_simulation.py
│       ├── run_twitter_simulation.py
│       └── test_profile_format.py
├── docker-compose.yml
├── frontend/
│   ├── .gitignore
│   ├── index.html
│   ├── package.json
│   ├── src/
│   │   ├── App.vue
│   │   ├── api/
│   │   │   ├── graph.js
│   │   │   ├── index.js
│   │   │   ├── report.js
│   │   │   └── simulation.js
│   │   ├── components/
│   │   │   ├── GraphPanel.vue
│   │   │   ├── HistoryDatabase.vue
│   │   │   ├── Step1GraphBuild.vue
│   │   │   ├── Step2EnvSetup.vue
│   │   │   ├── Step3Simulation.vue
│   │   │   ├── Step4Report.vue
│   │   │   └── Step5Interaction.vue
│   │   ├── main.js
│   │   ├── router/
│   │   │   └── index.js
│   │   ├── store/
│   │   │   └── pendingUpload.js
│   │   └── views/
│   │       ├── Home.vue
│   │       ├── InteractionView.vue
│   │       ├── MainView.vue
│   │       ├── Process.vue
│   │       ├── ReportView.vue
│   │       ├── SimulationRunView.vue
│   │       └── SimulationView.vue
│   └── vite.config.js
└── package.json

================================================
FILE CONTENTS
================================================

================================================
FILE: .dockerignore
================================================
.git
.github
.gitignore
.cursor
.DS_Store
.env

node_modules
frontend/node_modules
backend/.venv
.venv
.python-version

__pycache__
*.pyc
.pytest_cache
.mypy_cache
.ruff_cache

frontend/dist
frontend/.vite

backend/uploads


================================================
FILE: .github/workflows/docker-image.yml
================================================
name: Build and push Docker image

on:
  push:
    tags: ["*"]
  workflow_dispatch:

permissions:
  contents: read
  packages: write

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ghcr.io/${{ github.repository_owner }}/mirofish
          tags: |
            type=ref,event=tag
            type=sha
            type=raw,value=latest

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}

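The workflow above builds and pushes the image to GHCR whenever a tag is pushed (or on manual dispatch). Assuming the repository owner is `666ghj` (so the image resolves to `ghcr.io/666ghj/mirofish` per the `metadata-action` config), a release and subsequent pull might look like this sketch:

```shell
# Tag a release; pushing the tag triggers the workflow
git tag v1.0.0
git push origin v1.0.0

# Once the build finishes, pull the published image.
# The metadata-action emits three tags: the git tag, the short commit SHA, and `latest`.
docker pull ghcr.io/666ghj/mirofish:latest
```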

================================================
FILE: .gitignore
================================================
# OS
.DS_Store
Thumbs.db

# Environment variables (protect sensitive information)
.env
.env.local
.env.*.local
.env.development
.env.test
.env.production

# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
.venv/
venv/
ENV/
.eggs/
*.egg-info/
dist/
build/

# Node.js
node_modules/
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# IDE
.vscode/
.idea/
*.swp
*.swo

# Testing
.pytest_cache/
.coverage
htmlcov/

# Cursor
.cursor/
.claude/

# Docs and test programs
mydoc/
mytest/

# Log files
backend/logs/
*.log

# Uploaded files
backend/uploads/

# Docker data
data/

================================================
FILE: Dockerfile
================================================
FROM python:3.11

# Install Node.js (>=18) and required tools
RUN apt-get update \
  && apt-get install -y --no-install-recommends nodejs npm \
  && rm -rf /var/lib/apt/lists/*

# Copy uv from the official uv image
COPY --from=ghcr.io/astral-sh/uv:0.9.26 /uv /uvx /bin/

WORKDIR /app

# Copy dependency manifests first to leverage layer caching
COPY package.json package-lock.json ./
COPY frontend/package.json frontend/package-lock.json ./frontend/
COPY backend/pyproject.toml backend/uv.lock ./backend/

# Install dependencies (Node + Python)
RUN npm ci \
  && npm ci --prefix frontend \
  && cd backend && uv sync --frozen

# Copy project source
COPY . .

EXPOSE 3000 5001

# Start frontend and backend together (development mode)
CMD ["npm", "run", "dev"]
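The Dockerfile exposes ports 3000 (frontend) and 5001 (backend) and starts both via `npm run dev`. A local build-and-run sketch follows; the image tag and the `--env-file` flag are assumptions (the `.gitignore` suggests secrets live in a local `.env` file), not something the repository prescribes:

```shell
# Build the image from the repository root
docker build -t mirofish:dev .

# Run it, mapping the exposed frontend (3000) and backend (5001) ports.
# --env-file .env is an assumption based on the repo's .gitignore entries.
docker run --rm -p 3000:3000 -p 5001:5001 --env-file .env mirofish:dev
```

For multi-service setups the repository also ships a `docker-compose.yml`, which would normally replace the manual `docker run` invocation.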

================================================
FILE: LICENSE
================================================
                    GNU AFFERO GENERAL PUBLIC LICENSE
                       Version 3, 19 November 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.

  A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate.  Many developers of free software are heartened and
encouraged by the resulting cooperation.  However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.

  The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community.  It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server.  Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.

  An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals.  This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS

  0. Definitions.

  "This License" refers to version 3 of the GNU Affero General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.

  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.

  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.

  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.

  An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations.  If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

  You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License.  For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.

  11. Patents.

  A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based.  The
work thus licensed is called the contributor's "contributor version".

  A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version.  For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

  Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

  In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement).  To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

  If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients.  "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

  If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

  A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License.  You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

  Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.

  12. No Surrender of Others' Freedom.

  If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License.  If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all.  For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.

  13. Remote Network Interaction; Use with the GNU General Public License.

  Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software.  This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.

  Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work.  The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.

  14. Revised Versions of this License.

  The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time.  Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.

  Each version is given a distinguishing version number.  If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation.  If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.

  If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.

  Later license versions may give you additional or different
permissions.  However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.

  15. Disclaimer of Warranty.

  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.

  16. Limitation of Liability.

  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.

  17. Interpretation of Sections 15 and 16.

  If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.

                     END OF TERMS AND CONDITIONS

            How to Apply These Terms to Your New Programs

  If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.

  To do so, attach the following notices to the program.  It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.

    <one line to give the program's name and a brief idea of what it does.>
    Copyright (C) <year>  <name of author>

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU Affero General Public License as published
    by the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    GNU Affero General Public License for more details.

    You should have received a copy of the GNU Affero General Public License
    along with this program.  If not, see <https://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail.

  If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source.  For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code.  There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.

  You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.


================================================
FILE: README-EN.md
================================================
<div align="center">

<img src="./static/image/MiroFish_logo_compressed.jpeg" alt="MiroFish Logo" width="75%"/>

<a href="https://trendshift.io/repositories/16144" target="_blank"><img src="https://trendshift.io/api/badge/repositories/16144" alt="666ghj%2FMiroFish | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>

简洁通用的群体智能引擎,预测万物
<br/>
<em>A Simple and Universal Swarm Intelligence Engine, Predicting Anything</em>

<a href="https://www.shanda.com/" target="_blank"><img src="./static/image/shanda_logo.png" alt="666ghj%2FMiroFish | Shanda" height="40"/></a>

[![GitHub Stars](https://img.shields.io/github/stars/666ghj/MiroFish?style=flat-square&color=DAA520)](https://github.com/666ghj/MiroFish/stargazers)
[![GitHub Watchers](https://img.shields.io/github/watchers/666ghj/MiroFish?style=flat-square)](https://github.com/666ghj/MiroFish/watchers)
[![GitHub Forks](https://img.shields.io/github/forks/666ghj/MiroFish?style=flat-square)](https://github.com/666ghj/MiroFish/network)
[![Docker](https://img.shields.io/badge/Docker-Build-2496ED?style=flat-square&logo=docker&logoColor=white)](https://hub.docker.com/)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/666ghj/MiroFish)

[![Discord](https://img.shields.io/badge/Discord-Join-5865F2?style=flat-square&logo=discord&logoColor=white)](http://discord.gg/ePf5aPaHnA)
[![X](https://img.shields.io/badge/X-Follow-000000?style=flat-square&logo=x&logoColor=white)](https://x.com/mirofish_ai)
[![Instagram](https://img.shields.io/badge/Instagram-Follow-E4405F?style=flat-square&logo=instagram&logoColor=white)](https://www.instagram.com/mirofish_ai/)

[English](./README-EN.md) | [中文文档](./README.md)

</div>

## ⚡ Overview

**MiroFish** is a next-generation AI prediction engine powered by multi-agent technology. By extracting seed information from the real world (such as breaking news, policy drafts, or financial signals), it automatically constructs a high-fidelity parallel digital world. Within this space, thousands of intelligent agents with independent personalities, long-term memory, and behavioral logic freely interact and undergo social evolution. You can inject variables dynamically from a "God's-eye view" to precisely deduce future trajectories — **rehearse the future in a digital sandbox, and win decisions after countless simulations**.

> You only need to: upload seed materials (data analysis reports or interesting novel stories) and describe your prediction requirements in natural language.<br/>
> MiroFish will return: a detailed prediction report and a deeply interactive, high-fidelity digital world.

### Our Vision

MiroFish is dedicated to creating a swarm intelligence mirror that maps reality. By capturing the collective emergence triggered by individual interactions, we break through the limitations of traditional prediction:

- **At the Macro Level**: We are a rehearsal laboratory for decision-makers, allowing policies and public relations to be tested at zero risk
- **At the Micro Level**: We are a creative sandbox for individual users — whether deducing novel endings or exploring imaginative scenarios, everything can be fun, playful, and accessible

From serious predictions to playful simulations, we let every "what if" see its outcome, making it possible to predict anything.

## 🌐 Live Demo

You're welcome to visit our online demo environment and try the prediction simulation of a trending public-opinion event that we've prepared for you: [mirofish-live-demo](https://666ghj.github.io/mirofish-demo/)

## 📸 Screenshots

<div align="center">
<table>
<tr>
<td><img src="./static/image/Screenshot/运行截图1.png" alt="Screenshot 1" width="100%"/></td>
<td><img src="./static/image/Screenshot/运行截图2.png" alt="Screenshot 2" width="100%"/></td>
</tr>
<tr>
<td><img src="./static/image/Screenshot/运行截图3.png" alt="Screenshot 3" width="100%"/></td>
<td><img src="./static/image/Screenshot/运行截图4.png" alt="Screenshot 4" width="100%"/></td>
</tr>
<tr>
<td><img src="./static/image/Screenshot/运行截图5.png" alt="Screenshot 5" width="100%"/></td>
<td><img src="./static/image/Screenshot/运行截图6.png" alt="Screenshot 6" width="100%"/></td>
</tr>
</table>
</div>

## 🎬 Demo Videos

### 1. Wuhan University Public Opinion Simulation + MiroFish Project Introduction

<div align="center">
<a href="https://www.bilibili.com/video/BV1VYBsBHEMY/" target="_blank"><img src="./static/image/武大模拟演示封面.png" alt="MiroFish Demo Video" width="75%"/></a>

Click the image to watch the complete demo video of a prediction run driven by the BettaFish-generated "Wuhan University Public Opinion Report"
</div>

### 2. Dream of the Red Chamber Lost Ending Simulation

<div align="center">
<a href="https://www.bilibili.com/video/BV1cPk3BBExq" target="_blank"><img src="./static/image/红楼梦模拟推演封面.jpg" alt="MiroFish Demo Video" width="75%"/></a>

Click the image to watch MiroFish's deep prediction of the lost ending based on hundreds of thousands of words from the first 80 chapters of "Dream of the Red Chamber"
</div>

> **Financial Prediction**, **Political News Prediction** and more examples coming soon...

## 🔄 Workflow

1. **Graph Building**: Seed extraction & Individual/collective memory injection & GraphRAG construction
2. **Environment Setup**: Entity relationship extraction & Persona generation & Agent configuration injection
3. **Simulation**: Dual-platform parallel simulation & Auto-parse prediction requirements & Dynamic temporal memory updates
4. **Report Generation**: ReportAgent with rich toolset for deep interaction with post-simulation environment
5. **Deep Interaction**: Chat with any agent in the simulated world & Interact with ReportAgent

## 🚀 Quick Start

### Option 1: Source Code Deployment (Recommended)

#### Prerequisites

| Tool | Version | Description | Check Installation |
|------|---------|-------------|-------------------|
| **Node.js** | 18+ | Frontend runtime, includes npm | `node -v` |
| **Python** | ≥3.11, ≤3.12 | Backend runtime | `python --version` |
| **uv** | Latest | Python package manager | `uv --version` |

#### 1. Configure Environment Variables

```bash
# Copy the example configuration file
cp .env.example .env

# Edit the .env file and fill in the required API keys
```

**Required Environment Variables:**

```env
# LLM API Configuration (supports any LLM API with OpenAI SDK format)
# Recommended: Alibaba Qwen-plus model via Bailian Platform: https://bailian.console.aliyun.com/
# Token consumption is high; try simulations of fewer than 40 rounds first
LLM_API_KEY=your_api_key
LLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
LLM_MODEL_NAME=qwen-plus

# Zep Cloud Configuration
# Free monthly quota is sufficient for simple usage: https://app.getzep.com/
ZEP_API_KEY=your_zep_api_key
```
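As a quick sanity check, the snippet below sketches how these variables can be read from the process environment with the Python standard library once your `.env` has been loaded (e.g. by Docker Compose or a dotenv loader). This is an illustrative sketch, not code from the repository; only the key names match the `.env` entries above, and the placeholder values are hypothetical.

```python
import os

# Placeholder values standing in for a loaded .env file (illustrative only).
os.environ.setdefault("LLM_API_KEY", "your_api_key")
os.environ.setdefault("LLM_BASE_URL", "https://dashscope.aliyuncs.com/compatible-mode/v1")
os.environ.setdefault("LLM_MODEL_NAME", "qwen-plus")
os.environ.setdefault("ZEP_API_KEY", "your_zep_api_key")

# Read the required keys and fail fast if any are missing or empty.
required = ["LLM_API_KEY", "LLM_BASE_URL", "LLM_MODEL_NAME", "ZEP_API_KEY"]
config = {name: os.environ.get(name, "") for name in required}
missing = [name for name in required if not config[name]]
if missing:
    raise RuntimeError(f"Missing required environment variables: {missing}")

print(config["LLM_MODEL_NAME"])
```

Failing fast on a missing key at startup gives a clearer error than a later authentication failure deep inside an LLM call.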

#### 2. Install Dependencies

```bash
# One-click installation of all dependencies (root + frontend + backend)
npm run setup:all
```

Or install step by step:

```bash
# Install Node dependencies (root + frontend)
npm run setup

# Install Python dependencies (backend, auto-creates virtual environment)
npm run setup:backend
```

#### 3. Start Services

```bash
# Start both frontend and backend (run from project root)
npm run dev
```

**Service URLs:**
- Frontend: `http://localhost:3000`
- Backend API: `http://localhost:5001`

**Start Individually:**

```bash
npm run backend   # Start backend only
npm run frontend  # Start frontend only
```

### Option 2: Docker Deployment

```bash
# 1. Configure environment variables (same as source deployment)
cp .env.example .env

# 2. Pull image and start
docker compose up -d
```

By default, Compose reads the `.env` file in the project root and maps ports `3000` (frontend) and `5001` (backend)

> A mirror registry address for faster image pulls is provided as a comment in `docker-compose.yml`; swap it in if needed.

## 📬 Join the Conversation

<div align="center">
<img src="./static/image/QQ群.png" alt="QQ Group" width="60%"/>
</div>

&nbsp;

The MiroFish team is hiring for full-time and internship positions. If you're interested in multi-agent simulation and LLM applications, feel free to send your resume to: **mirofish@shanda.com**

## 📄 Acknowledgments

**MiroFish has received strategic support and incubation from Shanda Group!**

MiroFish's simulation engine is powered by **[OASIS (Open Agent Social Interaction Simulations)](https://github.com/camel-ai/oasis)**. We sincerely thank the CAMEL-AI team for their open-source contributions!

## 📈 Project Statistics

<a href="https://www.star-history.com/#666ghj/MiroFish&type=date&legend=top-left">
 <picture>
   <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=666ghj/MiroFish&type=date&theme=dark&legend=top-left" />
   <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=666ghj/MiroFish&type=date&legend=top-left" />
   <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=666ghj/MiroFish&type=date&legend=top-left" />
 </picture>
</a>

================================================
FILE: README.md
================================================
<div align="center">

<img src="./static/image/MiroFish_logo_compressed.jpeg" alt="MiroFish Logo" width="75%"/>

<a href="https://trendshift.io/repositories/16144" target="_blank"><img src="https://trendshift.io/api/badge/repositories/16144" alt="666ghj%2FMiroFish | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>

简洁通用的群体智能引擎,预测万物
<br/>
<em>A Simple and Universal Swarm Intelligence Engine, Predicting Anything</em>

<a href="https://www.shanda.com/" target="_blank"><img src="./static/image/shanda_logo.png" alt="666ghj%2FMiroFish | Shanda" height="40"/></a>

[![GitHub Stars](https://img.shields.io/github/stars/666ghj/MiroFish?style=flat-square&color=DAA520)](https://github.com/666ghj/MiroFish/stargazers)
[![GitHub Watchers](https://img.shields.io/github/watchers/666ghj/MiroFish?style=flat-square)](https://github.com/666ghj/MiroFish/watchers)
[![GitHub Forks](https://img.shields.io/github/forks/666ghj/MiroFish?style=flat-square)](https://github.com/666ghj/MiroFish/network)
[![Docker](https://img.shields.io/badge/Docker-Build-2496ED?style=flat-square&logo=docker&logoColor=white)](https://hub.docker.com/)
[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/666ghj/MiroFish)

[![Discord](https://img.shields.io/badge/Discord-Join-5865F2?style=flat-square&logo=discord&logoColor=white)](http://discord.gg/ePf5aPaHnA)
[![X](https://img.shields.io/badge/X-Follow-000000?style=flat-square&logo=x&logoColor=white)](https://x.com/mirofish_ai)
[![Instagram](https://img.shields.io/badge/Instagram-Follow-E4405F?style=flat-square&logo=instagram&logoColor=white)](https://www.instagram.com/mirofish_ai/)

[English](./README-EN.md) | [中文文档](./README.md)

</div>

## ⚡ Overview

**MiroFish** is a next-generation AI prediction engine powered by multi-agent technology. By extracting seed information from the real world (such as breaking news, policy drafts, or financial signals), it automatically constructs a high-fidelity parallel digital world. Within this space, thousands of intelligent agents with independent personalities, long-term memory, and behavioral logic freely interact and undergo social evolution. You can inject variables dynamically from a "God's-eye view" to precisely deduce future trajectories — **rehearse the future in a digital sandbox, and win decisions after countless simulations**.

> You only need to: upload seed materials (data analysis reports or interesting novel stories) and describe your prediction requirements in natural language.<br/>
> MiroFish will return: a detailed prediction report and a deeply interactive, high-fidelity digital world.

### Our Vision

MiroFish is dedicated to creating a swarm intelligence mirror that maps reality. By capturing the collective emergence triggered by individual interactions, we break through the limitations of traditional prediction:

- **At the macro level**: a rehearsal laboratory for decision-makers, allowing policies and public relations to be tested at zero risk
- **At the micro level**: a creative sandbox for individual users — whether deducing novel endings or exploring imaginative scenarios, everything can be fun, playful, and accessible

From serious predictions to playful simulations, we let every "what if" see its outcome, making it possible to predict anything.

## 🌐 Live Demo

You're welcome to visit our online demo environment and try the prediction simulation of a trending public-opinion event that we've prepared for you: [mirofish-live-demo](https://666ghj.github.io/mirofish-demo/)

## 📸 Screenshots

<div align="center">
<table>
<tr>
<td><img src="./static/image/Screenshot/运行截图1.png" alt="Screenshot 1" width="100%"/></td>
<td><img src="./static/image/Screenshot/运行截图2.png" alt="Screenshot 2" width="100%"/></td>
</tr>
<tr>
<td><img src="./static/image/Screenshot/运行截图3.png" alt="Screenshot 3" width="100%"/></td>
<td><img src="./static/image/Screenshot/运行截图4.png" alt="Screenshot 4" width="100%"/></td>
</tr>
<tr>
<td><img src="./static/image/Screenshot/运行截图5.png" alt="Screenshot 5" width="100%"/></td>
<td><img src="./static/image/Screenshot/运行截图6.png" alt="Screenshot 6" width="100%"/></td>
</tr>
</table>
</div>

## 🎬 Demo Videos

### 1. Wuhan University Public Opinion Simulation + MiroFish Project Introduction

<div align="center">
<a href="https://www.bilibili.com/video/BV1VYBsBHEMY/" target="_blank"><img src="./static/image/武大模拟演示封面.png" alt="MiroFish Demo Video" width="75%"/></a>

Click the image to watch the complete demo video of a prediction run driven by the BettaFish-generated "Wuhan University Public Opinion Report"
</div>

### 2. Dream of the Red Chamber Lost Ending Simulation

<div align="center">
<a href="https://www.bilibili.com/video/BV1cPk3BBExq" target="_blank"><img src="./static/image/红楼梦模拟推演封面.jpg" alt="MiroFish Demo Video" width="75%"/></a>

Click the image to watch MiroFish's deep prediction of the lost ending, based on the hundreds of thousands of words in the first 80 chapters of "Dream of the Red Chamber"
</div>

> **Financial Prediction**, **Political News Prediction**, and more examples coming soon...

## 🔄 Workflow

1. **Graph Building**: Seed extraction from reality & individual/collective memory injection & GraphRAG construction
2. **Environment Setup**: Entity relationship extraction & persona generation & a configuration agent injects simulation parameters
3. **Simulation**: Dual-platform parallel simulation & auto-parsing of prediction requirements & dynamic temporal memory updates
4. **Report Generation**: ReportAgent uses a rich toolset to interact deeply with the post-simulation environment
5. **Deep Interaction**: Chat with any agent in the simulated world & interact with ReportAgent

## 🚀 Quick Start

### Option 1: Source Code Deployment (Recommended)

#### Prerequisites

| Tool | Version | Description | Check Installation |
|------|---------|-------------|-------------------|
| **Node.js** | 18+ | Frontend runtime, includes npm | `node -v` |
| **Python** | ≥3.11, ≤3.12 | Backend runtime | `python --version` |
| **uv** | Latest | Python package manager | `uv --version` |

#### 1. Configure Environment Variables

```bash
# Copy the example configuration file
cp .env.example .env

# Edit the .env file and fill in the required API keys
```

**Required Environment Variables:**

```env
# LLM API configuration (supports any LLM API with the OpenAI SDK format)
# Recommended: the qwen-plus model on Alibaba's Bailian platform: https://bailian.console.aliyun.com/
# Token consumption is high; try simulations of fewer than 40 rounds first
LLM_API_KEY=your_api_key
LLM_BASE_URL=https://dashscope.aliyuncs.com/compatible-mode/v1
LLM_MODEL_NAME=qwen-plus

# Zep Cloud configuration
# The free monthly quota is sufficient for light usage: https://app.getzep.com/
ZEP_API_KEY=your_zep_api_key
```
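As a quick sanity check, the snippet below sketches how these variables can be read from the process environment with the Python standard library once your `.env` has been loaded (e.g. by Docker Compose or a dotenv loader). This is an illustrative sketch, not code from the repository; only the key names match the `.env` entries above, and the placeholder values are hypothetical.

```python
import os

# Placeholder values standing in for a loaded .env file (illustrative only).
os.environ.setdefault("LLM_API_KEY", "your_api_key")
os.environ.setdefault("LLM_BASE_URL", "https://dashscope.aliyuncs.com/compatible-mode/v1")
os.environ.setdefault("LLM_MODEL_NAME", "qwen-plus")
os.environ.setdefault("ZEP_API_KEY", "your_zep_api_key")

# Read the required keys and fail fast if any are missing or empty.
required = ["LLM_API_KEY", "LLM_BASE_URL", "LLM_MODEL_NAME", "ZEP_API_KEY"]
config = {name: os.environ.get(name, "") for name in required}
missing = [name for name in required if not config[name]]
if missing:
    raise RuntimeError(f"Missing required environment variables: {missing}")

print(config["LLM_MODEL_NAME"])
```

Failing fast on a missing key at startup gives a clearer error than a later authentication failure deep inside an LLM call.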

#### 2. Install Dependencies

```bash
# One-click installation of all dependencies (root + frontend + backend)
npm run setup:all
```

Or install step by step:

```bash
# Install Node dependencies (root + frontend)
npm run setup

# Install Python dependencies (backend; auto-creates a virtual environment)
npm run setup:backend
```

#### 3. Start Services

```bash
# Start both frontend and backend (run from the project root)
npm run dev
```

**Service URLs:**
- Frontend: `http://localhost:3000`
- Backend API: `http://localhost:5001`

**Start Individually:**

```bash
npm run backend   # Start backend only
npm run frontend  # Start frontend only
```

### Option 2: Docker Deployment

```bash
# 1. Configure environment variables (same as the source deployment)
cp .env.example .env

# 2. Pull the image and start
docker compose up -d
```

By default, Compose reads the `.env` file in the project root and maps ports `3000` (frontend) and `5001` (backend)

> A mirror registry address for faster image pulls is provided as a comment in `docker-compose.yml`; swap it in if needed.

## 📬 Join the Conversation

<div align="center">
<img src="./static/image/QQ群.png" alt="QQ Group" width="60%"/>
</div>

&nbsp;

The MiroFish team is hiring for full-time and internship positions. If you're interested in multi-agent applications, feel free to send your resume to: **mirofish@shanda.com**

## 📄 Acknowledgments

**MiroFish has received strategic support and incubation from Shanda Group!**

MiroFish's simulation engine is powered by **[OASIS](https://github.com/camel-ai/oasis)**. We sincerely thank the CAMEL-AI team for their open-source contributions!

## 📈 Project Statistics

<a href="https://www.star-history.com/#666ghj/MiroFish&type=date&legend=top-left">
 <picture>
   <source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=666ghj/MiroFish&type=date&theme=dark&legend=top-left" />
   <source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=666ghj/MiroFish&type=date&legend=top-left" />
   <img alt="Star History Chart" src="https://api.star-history.com/svg?repos=666ghj/MiroFish&type=date&legend=top-left" />
 </picture>
</a>


================================================
FILE: backend/app/__init__.py
================================================
"""
MiroFish Backend - Flask application factory
"""

import os
import warnings

# Suppress multiprocessing resource_tracker warnings (raised by third-party
# libraries such as transformers); must be set before any other imports
warnings.filterwarnings("ignore", message=".*resource_tracker.*")

from flask import Flask, request
from flask_cors import CORS

from .config import Config
from .utils.logger import setup_logger, get_logger


def create_app(config_class=Config):
    """Flask application factory."""
    app = Flask(__name__)
    app.config.from_object(config_class)
    
    # Configure JSON encoding: return non-ASCII text (e.g. Chinese) verbatim
    # instead of \uXXXX escapes.
    # Flask >= 2.3 uses app.json.ensure_ascii; older versions use the
    # JSON_AS_ASCII config key.
    if hasattr(app, 'json') and hasattr(app.json, 'ensure_ascii'):
        app.json.ensure_ascii = False
    
    # Set up logging
    logger = setup_logger('mirofish')
    
    # Only log startup info in the reloader child process
    # (avoids printing twice in debug mode)
    is_reloader_process = os.environ.get('WERKZEUG_RUN_MAIN') == 'true'
    debug_mode = app.config.get('DEBUG', False)
    should_log_startup = not debug_mode or is_reloader_process
    
    if should_log_startup:
        logger.info("=" * 50)
        logger.info("MiroFish Backend starting...")
        logger.info("=" * 50)
    
    # Enable CORS
    CORS(app, resources={r"/api/*": {"origins": "*"}})
    
    # Register the simulation-process cleanup hook (ensures all simulation
    # processes are terminated when the server shuts down)
    from .services.simulation_runner import SimulationRunner
    SimulationRunner.register_cleanup()
    if should_log_startup:
        logger.info("Registered simulation-process cleanup hook")
    
    # Request logging middleware
    @app.before_request
    def log_request():
        logger = get_logger('mirofish.request')
        logger.debug(f"Request: {request.method} {request.path}")
        if request.content_type and 'json' in request.content_type:
            logger.debug(f"Request body: {request.get_json(silent=True)}")
    
    @app.after_request
    def log_response(response):
        logger = get_logger('mirofish.request')
        logger.debug(f"Response: {response.status_code}")
        return response
        return response
    
    # Register blueprints
    from .api import graph_bp, simulation_bp, report_bp
    app.register_blueprint(graph_bp, url_prefix='/api/graph')
    app.register_blueprint(simulation_bp, url_prefix='/api/simulation')
    app.register_blueprint(report_bp, url_prefix='/api/report')
    
    # Health check
    @app.route('/health')
    def health():
        return {'status': 'ok', 'service': 'MiroFish Backend'}
    
    if should_log_startup:
        logger.info("MiroFish Backend startup complete")
    
    return app



================================================
FILE: backend/app/api/__init__.py
================================================
"""
API route modules
"""

from flask import Blueprint

graph_bp = Blueprint('graph', __name__)
simulation_bp = Blueprint('simulation', __name__)
report_bp = Blueprint('report', __name__)

from . import graph  # noqa: E402, F401
from . import simulation  # noqa: E402, F401
from . import report  # noqa: E402, F401



================================================
FILE: backend/app/api/graph.py
================================================
"""
Graph-related API routes.
Uses a project-context mechanism; state is persisted on the server side.
"""

import os
import traceback
import threading
from flask import request, jsonify

from . import graph_bp
from ..config import Config
from ..services.ontology_generator import OntologyGenerator
from ..services.graph_builder import GraphBuilderService
from ..services.text_processor import TextProcessor
from ..utils.file_parser import FileParser
from ..utils.logger import get_logger
from ..models.task import TaskManager, TaskStatus
from ..models.project import ProjectManager, ProjectStatus

# Get the logger
logger = get_logger('mirofish.api')


def allowed_file(filename: str) -> bool:
    """Check whether the file extension is allowed."""
    if not filename or '.' not in filename:
        return False
    ext = os.path.splitext(filename)[1].lower().lstrip('.')
    return ext in Config.ALLOWED_EXTENSIONS


# ============== Project management endpoints ==============

@graph_bp.route('/project/<project_id>', methods=['GET'])
def get_project(project_id: str):
    """
    Get project details
    """
    project = ProjectManager.get_project(project_id)
    
    if not project:
        return jsonify({
            "success": False,
            "error": f"Project not found: {project_id}"
        }), 404
    
    return jsonify({
        "success": True,
        "data": project.to_dict()
    })


@graph_bp.route('/project/list', methods=['GET'])
def list_projects():
    """
    List all projects
    """
    limit = request.args.get('limit', 50, type=int)
    projects = ProjectManager.list_projects(limit=limit)
    
    return jsonify({
        "success": True,
        "data": [p.to_dict() for p in projects],
        "count": len(projects)
    })


@graph_bp.route('/project/<project_id>', methods=['DELETE'])
def delete_project(project_id: str):
    """
    Delete a project
    """
    success = ProjectManager.delete_project(project_id)
    
    if not success:
        return jsonify({
            "success": False,
            "error": f"Project not found or deletion failed: {project_id}"
        }), 404
    
    return jsonify({
        "success": True,
        "message": f"Project deleted: {project_id}"
    })


@graph_bp.route('/project/<project_id>/reset', methods=['POST'])
def reset_project(project_id: str):
    """
    Reset the project status (used to rebuild the graph)
    """
    project = ProjectManager.get_project(project_id)
    
    if not project:
        return jsonify({
            "success": False,
            "error": f"Project not found: {project_id}"
        }), 404
    
    # Reset back to the ontology-generated state
    if project.ontology:
        project.status = ProjectStatus.ONTOLOGY_GENERATED
    else:
        project.status = ProjectStatus.CREATED
    
    project.graph_id = None
    project.graph_build_task_id = None
    project.error = None
    ProjectManager.save_project(project)
    
    return jsonify({
        "success": True,
        "message": f"Project reset: {project_id}",
        "data": project.to_dict()
    })


# ============== Endpoint 1: upload files and generate the ontology ==============

@graph_bp.route('/ontology/generate', methods=['POST'])
def generate_ontology():
    """
    Endpoint 1: upload files and analyze them to generate the ontology definition
    
    Request type: multipart/form-data
    
    Parameters:
        files: uploaded files (PDF/MD/TXT), multiple allowed
        simulation_requirement: simulation requirement description (required)
        project_name: project name (optional)
        additional_context: additional notes (optional)
        
    Returns:
        {
            "success": true,
            "data": {
                "project_id": "proj_xxxx",
                "ontology": {
                    "entity_types": [...],
                    "edge_types": [...],
                    "analysis_summary": "..."
                },
                "files": [...],
                "total_text_length": 12345
            }
        }
    """
    try:
        logger.info("=== Generating ontology definition ===")
        
        # Get parameters
        simulation_requirement = request.form.get('simulation_requirement', '')
        project_name = request.form.get('project_name', 'Unnamed Project')
        additional_context = request.form.get('additional_context', '')
        
        logger.debug(f"Project name: {project_name}")
        logger.debug(f"Simulation requirement: {simulation_requirement[:100]}...")
        
        if not simulation_requirement:
            return jsonify({
                "success": False,
                "error": "Please provide a simulation requirement description (simulation_requirement)"
            }), 400
        
        # Get the uploaded files
        uploaded_files = request.files.getlist('files')
        if not uploaded_files or all(not f.filename for f in uploaded_files):
            return jsonify({
                "success": False,
                "error": "Please upload at least one document file"
            }), 400
        
        # Create the project
        project = ProjectManager.create_project(name=project_name)
        project.simulation_requirement = simulation_requirement
        logger.info(f"Created project: {project.project_id}")
        
        # Save files and extract text
        document_texts = []
        all_text = ""
        
        for file in uploaded_files:
            if file and file.filename and allowed_file(file.filename):
                # Save the file into the project directory
                file_info = ProjectManager.save_file_to_project(
                    project.project_id, 
                    file, 
                    file.filename
                )
                project.files.append({
                    "filename": file_info["original_filename"],
                    "size": file_info["size"]
                })
                
                # Extract text
                text = FileParser.extract_text(file_info["path"])
                text = TextProcessor.preprocess_text(text)
                document_texts.append(text)
                all_text += f"\n\n=== {file_info['original_filename']} ===\n{text}"
        
        if not document_texts:
            ProjectManager.delete_project(project.project_id)
            return jsonify({
                "success": False,
                "error": "No document was processed successfully; please check the file formats"
            }), 400
        
        # Save the extracted text
        project.total_text_length = len(all_text)
        ProjectManager.save_extracted_text(project.project_id, all_text)
        logger.info(f"Text extraction complete, {len(all_text)} characters in total")
        
        # 生成本体
        logger.info("调用 LLM 生成本体定义...")
        generator = OntologyGenerator()
        ontology = generator.generate(
            document_texts=document_texts,
            simulation_requirement=simulation_requirement,
            additional_context=additional_context if additional_context else None
        )
        
        # 保存本体到项目
        entity_count = len(ontology.get("entity_types", []))
        edge_count = len(ontology.get("edge_types", []))
        logger.info(f"本体生成完成: {entity_count} 个实体类型, {edge_count} 个关系类型")
        
        project.ontology = {
            "entity_types": ontology.get("entity_types", []),
            "edge_types": ontology.get("edge_types", [])
        }
        project.analysis_summary = ontology.get("analysis_summary", "")
        project.status = ProjectStatus.ONTOLOGY_GENERATED
        ProjectManager.save_project(project)
        logger.info(f"=== 本体生成完成 === 项目ID: {project.project_id}")
        
        return jsonify({
            "success": True,
            "data": {
                "project_id": project.project_id,
                "project_name": project.name,
                "ontology": project.ontology,
                "analysis_summary": project.analysis_summary,
                "files": project.files,
                "total_text_length": project.total_text_length
            }
        })
        
    except Exception as e:
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
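The upload handler above stitches every extracted document into one string, delimiting each file with an `=== filename ===` header before the text is chunked. A minimal sketch of that concatenation, with `(filename, text)` pairs standing in for the parsed upload results:

```python
def combine_documents(docs):
    """docs: list of (filename, text) pairs -> one annotated string,
    each document preceded by an '=== filename ===' header."""
    all_text = ""
    for filename, text in docs:
        all_text += f"\n\n=== {filename} ===\n{text}"
    return all_text

combined = combine_documents([("a.txt", "hello"), ("b.txt", "world")])
# combined contains "=== a.txt ===" and "=== b.txt ===" section headers
```

The headers let downstream chunking and the LLM keep track of which source file a passage came from.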


# ============== API 2: Build graph ==============

@graph_bp.route('/build', methods=['POST'])
def build_graph():
    """
    接口2:根据project_id构建图谱
    
    请求(JSON):
        {
            "project_id": "proj_xxxx",  // 必填,来自接口1
            "graph_name": "图谱名称",    // 可选
            "chunk_size": 500,          // 可选,默认500
            "chunk_overlap": 50         // 可选,默认50
        }
        
    返回:
        {
            "success": true,
            "data": {
                "project_id": "proj_xxxx",
                "task_id": "task_xxxx",
                "message": "图谱构建任务已启动"
            }
        }
    """
    try:
        logger.info("=== 开始构建图谱 ===")
        
        # Validate configuration
        errors = []
        if not Config.ZEP_API_KEY:
            errors.append("ZEP_API_KEY is not configured")
        if errors:
            logger.error(f"Configuration error: {errors}")
            return jsonify({
                "success": False,
                "error": "Configuration error: " + "; ".join(errors)
            }), 500
        
        # Parse the request
        data = request.get_json() or {}
        project_id = data.get('project_id')
        logger.debug(f"Request parameters: project_id={project_id}")
        
        if not project_id:
            return jsonify({
                "success": False,
                "error": "Please provide project_id"
            }), 400
        
        # Load the project
        project = ProjectManager.get_project(project_id)
        if not project:
            return jsonify({
                "success": False,
                "error": f"Project not found: {project_id}"
            }), 404
        
        # Check the project status
        force = data.get('force', False)  # force a rebuild
        
        if project.status == ProjectStatus.CREATED:
            return jsonify({
                "success": False,
                "error": "The project has no ontology yet; please call /ontology/generate first"
            }), 400
        
        if project.status == ProjectStatus.GRAPH_BUILDING and not force:
            return jsonify({
                "success": False,
                "error": "A graph build is already in progress; do not resubmit. To force a rebuild, add force: true",
                "task_id": project.graph_build_task_id
            }), 400
        
        # On a forced rebuild, reset the state
        if force and project.status in [ProjectStatus.GRAPH_BUILDING, ProjectStatus.FAILED, ProjectStatus.GRAPH_COMPLETED]:
            project.status = ProjectStatus.ONTOLOGY_GENERATED
            project.graph_id = None
            project.graph_build_task_id = None
            project.error = None
        
        # Read configuration
        graph_name = data.get('graph_name', project.name or 'MiroFish Graph')
        chunk_size = data.get('chunk_size', project.chunk_size or Config.DEFAULT_CHUNK_SIZE)
        chunk_overlap = data.get('chunk_overlap', project.chunk_overlap or Config.DEFAULT_CHUNK_OVERLAP)
        
        # Update the project configuration
        project.chunk_size = chunk_size
        project.chunk_overlap = chunk_overlap
        
        # Load the extracted text
        text = ProjectManager.get_extracted_text(project_id)
        if not text:
            return jsonify({
                "success": False,
                "error": "No extracted text found"
            }), 400
        
        # Load the ontology
        ontology = project.ontology
        if not ontology:
            return jsonify({
                "success": False,
                "error": "No ontology definition found"
            }), 400
        
        # Create the async task
        task_manager = TaskManager()
        task_id = task_manager.create_task(f"Build graph: {graph_name}")
        logger.info(f"Created graph build task: task_id={task_id}, project_id={project_id}")
        
        # Update the project status
        project.status = ProjectStatus.GRAPH_BUILDING
        project.graph_build_task_id = task_id
        ProjectManager.save_project(project)
        
        # Launch the background task
        def build_task():
            build_logger = get_logger('mirofish.build')
            try:
                build_logger.info(f"[{task_id}] Starting graph build...")
                task_manager.update_task(
                    task_id, 
                    status=TaskStatus.PROCESSING,
                    message="Initializing graph build service..."
                )
                
                # Create the graph builder service
                builder = GraphBuilderService(api_key=Config.ZEP_API_KEY)
                
                # Split the text into chunks
                task_manager.update_task(
                    task_id,
                    message="Chunking text...",
                    progress=5
                )
                chunks = TextProcessor.split_text(
                    text, 
                    chunk_size=chunk_size, 
                    overlap=chunk_overlap
                )
                total_chunks = len(chunks)
                
                # Create the graph
                task_manager.update_task(
                    task_id,
                    message="Creating Zep graph...",
                    progress=10
                )
                graph_id = builder.create_graph(name=graph_name)
                
                # Update the project's graph_id
                project.graph_id = graph_id
                ProjectManager.save_project(project)
                
                # Apply the ontology
                task_manager.update_task(
                    task_id,
                    message="Applying ontology definition...",
                    progress=15
                )
                builder.set_ontology(graph_id, ontology)
                
                # Add text (the progress_callback signature is (msg, progress_ratio))
                def add_progress_callback(msg, progress_ratio):
                    progress = 15 + int(progress_ratio * 40)  # 15% - 55%
                    task_manager.update_task(
                        task_id,
                        message=msg,
                        progress=progress
                    )
                
                task_manager.update_task(
                    task_id,
                    message=f"Adding {total_chunks} text chunks...",
                    progress=15
                )
                
                episode_uuids = builder.add_text_batches(
                    graph_id, 
                    chunks,
                    batch_size=3,
                    progress_callback=add_progress_callback
                )
                
                # Wait for Zep to finish processing (poll each episode's processed flag)
                task_manager.update_task(
                    task_id,
                    message="Waiting for Zep to process the data...",
                    progress=55
                )
                
                def wait_progress_callback(msg, progress_ratio):
                    progress = 55 + int(progress_ratio * 35)  # 55% - 90%
                    task_manager.update_task(
                        task_id,
                        message=msg,
                        progress=progress
                    )
                
                builder._wait_for_episodes(episode_uuids, wait_progress_callback)
                
                # Fetch the graph data
                task_manager.update_task(
                    task_id,
                    message="Fetching graph data...",
                    progress=95
                )
                graph_data = builder.get_graph_data(graph_id)
                
                # Update the project status
                project.status = ProjectStatus.GRAPH_COMPLETED
                ProjectManager.save_project(project)
                
                node_count = graph_data.get("node_count", 0)
                edge_count = graph_data.get("edge_count", 0)
                build_logger.info(f"[{task_id}] Graph build complete: graph_id={graph_id}, nodes={node_count}, edges={edge_count}")
                
                # Done
                task_manager.update_task(
                    task_id,
                    status=TaskStatus.COMPLETED,
                    message="Graph build complete",
                    progress=100,
                    result={
                        "project_id": project_id,
                        "graph_id": graph_id,
                        "node_count": node_count,
                        "edge_count": edge_count,
                        "chunk_count": total_chunks
                    }
                )
                
            except Exception as e:
                # Mark the project as failed
                build_logger.error(f"[{task_id}] Graph build failed: {str(e)}")
                build_logger.debug(traceback.format_exc())
                
                project.status = ProjectStatus.FAILED
                project.error = str(e)
                ProjectManager.save_project(project)
                
                task_manager.update_task(
                    task_id,
                    status=TaskStatus.FAILED,
                    message=f"Build failed: {str(e)}",
                    error=traceback.format_exc()
                )
        
        # Start the background thread
        thread = threading.Thread(target=build_task, daemon=True)
        thread.start()
        
        return jsonify({
            "success": True,
            "data": {
                "project_id": project_id,
                "task_id": task_id,
                "message": "Graph build task started; poll /task/{task_id} for progress"
            }
        })
        
    except Exception as e:
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
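The build task above maps each phase's local `progress_ratio` (0.0-1.0) into a fixed slice of the global progress bar: text ingestion occupies 15-55% and Zep episode processing 55-90%. A minimal sketch of that banded mapping:

```python
def band_progress(start, width, ratio):
    """Map a local ratio in [0, 1] onto the global band [start, start + width]."""
    return start + int(ratio * width)

# Text ingestion band (15-55%), as in add_progress_callback
assert band_progress(15, 40, 0.0) == 15   # ingestion just started
assert band_progress(15, 40, 1.0) == 55   # ingestion done
# Episode-processing band (55-90%), as in wait_progress_callback
assert band_progress(55, 35, 0.5) == 72   # halfway through Zep processing
```

Keeping the bands disjoint guarantees the reported progress is monotonic across phases even though each callback only sees its own 0-1 ratio.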


# ============== Task query APIs ==============

@graph_bp.route('/task/<task_id>', methods=['GET'])
def get_task(task_id: str):
    """
    Query the status of a task
    """
    task = TaskManager().get_task(task_id)
    
    if not task:
        return jsonify({
            "success": False,
            "error": f"Task not found: {task_id}"
        }), 404
    
    return jsonify({
        "success": True,
        "data": task.to_dict()
    })


@graph_bp.route('/tasks', methods=['GET'])
def list_tasks():
    """
    List all tasks
    """
    tasks = TaskManager().list_tasks()
    
    return jsonify({
        "success": True,
        "data": [t.to_dict() for t in tasks],
        "count": len(tasks)
    })
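Because graph building runs on a background thread, clients are expected to poll `GET /task/<task_id>` until the task reaches a terminal state. A minimal polling sketch, with `fetch_task` standing in for the HTTP call (the real endpoint wraps the task dict in `{"success": ..., "data": ...}`):

```python
import time

def poll_until_done(fetch_task, interval=0.0, max_polls=100):
    """Call fetch_task repeatedly until status is terminal, then return it."""
    for _ in range(max_polls):
        task = fetch_task()
        if task["status"] in ("completed", "failed"):
            return task
        time.sleep(interval)
    raise TimeoutError("task did not finish in time")

# Simulated task that completes on the third poll
states = iter([
    {"status": "processing", "progress": 20},
    {"status": "processing", "progress": 60},
    {"status": "completed", "progress": 100},
])
result = poll_until_done(lambda: next(states))
```

A real client would use a non-zero `interval` (e.g. 1-2 seconds) to avoid hammering the server.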


# ============== Graph data APIs ==============

@graph_bp.route('/data/<graph_id>', methods=['GET'])
def get_graph_data(graph_id: str):
    """
    Fetch graph data (nodes and edges)
    """
    try:
        if not Config.ZEP_API_KEY:
            return jsonify({
                "success": False,
                "error": "ZEP_API_KEY is not configured"
            }), 500
        
        builder = GraphBuilderService(api_key=Config.ZEP_API_KEY)
        graph_data = builder.get_graph_data(graph_id)
        
        return jsonify({
            "success": True,
            "data": graph_data
        })
        
    except Exception as e:
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@graph_bp.route('/delete/<graph_id>', methods=['DELETE'])
def delete_graph(graph_id: str):
    """
    删除Zep图谱
    """
    try:
        if not Config.ZEP_API_KEY:
            return jsonify({
                "success": False,
                "error": "ZEP_API_KEY未配置"
            }), 500
        
        builder = GraphBuilderService(api_key=Config.ZEP_API_KEY)
        builder.delete_graph(graph_id)
        
        return jsonify({
            "success": True,
            "message": f"图谱已删除: {graph_id}"
        })
        
    except Exception as e:
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


================================================
FILE: backend/app/api/report.py
================================================
"""
Report API路由
提供模拟报告生成、获取、对话等接口
"""

import os
import traceback
import threading
from flask import request, jsonify, send_file

from . import report_bp
from ..config import Config
from ..services.report_agent import ReportAgent, ReportManager, ReportStatus
from ..services.simulation_manager import SimulationManager
from ..models.project import ProjectManager
from ..models.task import TaskManager, TaskStatus
from ..utils.logger import get_logger

logger = get_logger('mirofish.api.report')


# ============== Report generation APIs ==============

@report_bp.route('/generate', methods=['POST'])
def generate_report():
    """
    Generate a simulation analysis report (async task)
    
    This is a long-running operation; the endpoint returns a task_id immediately.
    Poll POST /api/report/generate/status for progress.
    
    Request (JSON):
        {
            "simulation_id": "sim_xxxx",    // required, simulation ID
            "force_regenerate": false        // optional, force regeneration
        }
    
    Returns:
        {
            "success": true,
            "data": {
                "simulation_id": "sim_xxxx",
                "task_id": "task_xxxx",
                "status": "generating",
                "message": "Report generation task started"
            }
        }
    """
    try:
        data = request.get_json() or {}
        
        simulation_id = data.get('simulation_id')
        if not simulation_id:
            return jsonify({
                "success": False,
                "error": "请提供 simulation_id"
            }), 400
        
        force_regenerate = data.get('force_regenerate', False)
        
        # 获取模拟信息
        manager = SimulationManager()
        state = manager.get_simulation(simulation_id)
        
        if not state:
            return jsonify({
                "success": False,
                "error": f"模拟不存在: {simulation_id}"
            }), 404
        
        # 检查是否已有报告
        if not force_regenerate:
            existing_report = ReportManager.get_report_by_simulation(simulation_id)
            if existing_report and existing_report.status == ReportStatus.COMPLETED:
                return jsonify({
                    "success": True,
                    "data": {
                        "simulation_id": simulation_id,
                        "report_id": existing_report.report_id,
                        "status": "completed",
                        "message": "报告已存在",
                        "already_generated": True
                    }
                })
        
        # 获取项目信息
        project = ProjectManager.get_project(state.project_id)
        if not project:
            return jsonify({
                "success": False,
                "error": f"项目不存在: {state.project_id}"
            }), 404
        
        graph_id = state.graph_id or project.graph_id
        if not graph_id:
            return jsonify({
                "success": False,
                "error": "缺少图谱ID,请确保已构建图谱"
            }), 400
        
        simulation_requirement = project.simulation_requirement
        if not simulation_requirement:
            return jsonify({
                "success": False,
                "error": "缺少模拟需求描述"
            }), 400
        
        # 提前生成 report_id,以便立即返回给前端
        import uuid
        report_id = f"report_{uuid.uuid4().hex[:12]}"
        
        # Create the async task
        task_manager = TaskManager()
        task_id = task_manager.create_task(
            task_type="report_generate",
            metadata={
                "simulation_id": simulation_id,
                "graph_id": graph_id,
                "report_id": report_id
            }
        )
        
        # Define the background task
        def run_generate():
            try:
                task_manager.update_task(
                    task_id,
                    status=TaskStatus.PROCESSING,
                    progress=0,
                    message="Initializing Report Agent..."
                )
                
                # Create the Report Agent
                agent = ReportAgent(
                    graph_id=graph_id,
                    simulation_id=simulation_id,
                    simulation_requirement=simulation_requirement
                )
                
                # Progress callback
                def progress_callback(stage, progress, message):
                    task_manager.update_task(
                        task_id,
                        progress=progress,
                        message=f"[{stage}] {message}"
                    )
                
                # Generate the report (passing the pre-generated report_id)
                report = agent.generate_report(
                    progress_callback=progress_callback,
                    report_id=report_id
                )
                
                # Persist the report
                ReportManager.save_report(report)
                
                if report.status == ReportStatus.COMPLETED:
                    task_manager.complete_task(
                        task_id,
                        result={
                            "report_id": report.report_id,
                            "simulation_id": simulation_id,
                            "status": "completed"
                        }
                    )
                else:
                    task_manager.fail_task(task_id, report.error or "Report generation failed")
                
            except Exception as e:
                logger.error(f"Report generation failed: {str(e)}")
                task_manager.fail_task(task_id, str(e))
        
        # Start the background thread
        thread = threading.Thread(target=run_generate, daemon=True)
        thread.start()
        
        return jsonify({
            "success": True,
            "data": {
                "simulation_id": simulation_id,
                "report_id": report_id,
                "task_id": task_id,
                "status": "generating",
                "message": "Report generation task started; poll /api/report/generate/status for progress",
                "already_generated": False
            }
        })
        
    except Exception as e:
        logger.error(f"Failed to start report generation task: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
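`generate_report` mints the `report_id` before the background thread starts, so the frontend can begin polling by ID without waiting for the agent to initialize. A sketch of the same ID scheme (`report_` plus the first 12 hex digits of a UUID4):

```python
import uuid

def new_report_id():
    """Pre-generate a report ID so it can be handed to the client immediately."""
    return f"report_{uuid.uuid4().hex[:12]}"

rid = new_report_id()
# e.g. "report_3fa85f64b2c1" (12 hex chars after the prefix)
```

Truncating to 12 hex digits keeps IDs short; 48 bits of randomness is ample for the report volumes a single deployment would see.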


@report_bp.route('/generate/status', methods=['POST'])
def get_generate_status():
    """
    查询报告生成任务进度
    
    请求(JSON):
        {
            "task_id": "task_xxxx",         // 可选,generate返回的task_id
            "simulation_id": "sim_xxxx"     // 可选,模拟ID
        }
    
    返回:
        {
            "success": true,
            "data": {
                "task_id": "task_xxxx",
                "status": "processing|completed|failed",
                "progress": 45,
                "message": "..."
            }
        }
    """
    try:
        data = request.get_json() or {}
        
        task_id = data.get('task_id')
        simulation_id = data.get('simulation_id')
        
        # If simulation_id is provided, first check whether a completed report already exists
        if simulation_id:
            existing_report = ReportManager.get_report_by_simulation(simulation_id)
            if existing_report and existing_report.status == ReportStatus.COMPLETED:
                return jsonify({
                    "success": True,
                    "data": {
                        "simulation_id": simulation_id,
                        "report_id": existing_report.report_id,
                        "status": "completed",
                        "progress": 100,
                        "message": "Report generated",
                        "already_completed": True
                    }
                })
        
        if not task_id:
            return jsonify({
                "success": False,
                "error": "Please provide task_id or simulation_id"
            }), 400
        
        task_manager = TaskManager()
        task = task_manager.get_task(task_id)
        
        if not task:
            return jsonify({
                "success": False,
                "error": f"任务不存在: {task_id}"
            }), 404
        
        return jsonify({
            "success": True,
            "data": task.to_dict()
        })
        
    except Exception as e:
        logger.error(f"查询任务状态失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e)
        }), 500


# ============== Report retrieval APIs ==============

@report_bp.route('/<report_id>', methods=['GET'])
def get_report(report_id: str):
    """
    Fetch report details
    
    Returns:
        {
            "success": true,
            "data": {
                "report_id": "report_xxxx",
                "simulation_id": "sim_xxxx",
                "status": "completed",
                "outline": {...},
                "markdown_content": "...",
                "created_at": "...",
                "completed_at": "..."
            }
        }
    """
    try:
        report = ReportManager.get_report(report_id)
        
        if not report:
            return jsonify({
                "success": False,
                "error": f"报告不存在: {report_id}"
            }), 404
        
        return jsonify({
            "success": True,
            "data": report.to_dict()
        })
        
    except Exception as e:
        logger.error(f"获取报告失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@report_bp.route('/by-simulation/<simulation_id>', methods=['GET'])
def get_report_by_simulation(simulation_id: str):
    """
    根据模拟ID获取报告
    
    返回:
        {
            "success": true,
            "data": {
                "report_id": "report_xxxx",
                ...
            }
        }
    """
    try:
        report = ReportManager.get_report_by_simulation(simulation_id)
        
        if not report:
            return jsonify({
                "success": False,
                "error": f"该模拟暂无报告: {simulation_id}",
                "has_report": False
            }), 404
        
        return jsonify({
            "success": True,
            "data": report.to_dict(),
            "has_report": True
        })
        
    except Exception as e:
        logger.error(f"获取报告失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@report_bp.route('/list', methods=['GET'])
def list_reports():
    """
    列出所有报告
    
    Query参数:
        simulation_id: 按模拟ID过滤(可选)
        limit: 返回数量限制(默认50)
    
    返回:
        {
            "success": true,
            "data": [...],
            "count": 10
        }
    """
    try:
        simulation_id = request.args.get('simulation_id')
        limit = request.args.get('limit', 50, type=int)
        
        reports = ReportManager.list_reports(
            simulation_id=simulation_id,
            limit=limit
        )
        
        return jsonify({
            "success": True,
            "data": [r.to_dict() for r in reports],
            "count": len(reports)
        })
        
    except Exception as e:
        logger.error(f"列出报告失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@report_bp.route('/<report_id>/download', methods=['GET'])
def download_report(report_id: str):
    """
    下载报告(Markdown格式)
    
    返回Markdown文件
    """
    try:
        report = ReportManager.get_report(report_id)
        
        if not report:
            return jsonify({
                "success": False,
                "error": f"报告不存在: {report_id}"
            }), 404
        
        md_path = ReportManager._get_report_markdown_path(report_id)
        
        if not os.path.exists(md_path):
            # 如果MD文件不存在,生成一个临时文件
            import tempfile
            with tempfile.NamedTemporaryFile(mode='w', suffix='.md', delete=False) as f:
                f.write(report.markdown_content)
                temp_path = f.name
            
            return send_file(
                temp_path,
                as_attachment=True,
                download_name=f"{report_id}.md"
            )
        
        return send_file(
            md_path,
            as_attachment=True,
            download_name=f"{report_id}.md"
        )
        
    except Exception as e:
        logger.error(f"下载报告失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@report_bp.route('/<report_id>', methods=['DELETE'])
def delete_report(report_id: str):
    """删除报告"""
    try:
        success = ReportManager.delete_report(report_id)
        
        if not success:
            return jsonify({
                "success": False,
                "error": f"报告不存在: {report_id}"
            }), 404
        
        return jsonify({
            "success": True,
            "message": f"报告已删除: {report_id}"
        })
        
    except Exception as e:
        logger.error(f"删除报告失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


# ============== Report Agent chat APIs ==============

@report_bp.route('/chat', methods=['POST'])
def chat_with_report_agent():
    """
    Chat with the Report Agent
    
    During the conversation the Report Agent can autonomously invoke retrieval tools to answer questions
    
    Request (JSON):
        {
            "simulation_id": "sim_xxxx",                    // required, simulation ID
            "message": "Please explain the opinion trend",  // required, user message
            "chat_history": [                               // optional, conversation history
                {"role": "user", "content": "..."},
                {"role": "assistant", "content": "..."}
            ]
        }
    
    Returns:
        {
            "success": true,
            "data": {
                "response": "Agent reply...",
                "tool_calls": [list of invoked tools],
                "sources": [information sources]
            }
        }
    """
    try:
        data = request.get_json() or {}
        
        simulation_id = data.get('simulation_id')
        message = data.get('message')
        chat_history = data.get('chat_history', [])
        
        if not simulation_id:
            return jsonify({
                "success": False,
                "error": "Please provide simulation_id"
            }), 400
        
        if not message:
            return jsonify({
                "success": False,
                "error": "Please provide message"
            }), 400
        
        # Load the simulation and project
        manager = SimulationManager()
        state = manager.get_simulation(simulation_id)
        
        if not state:
            return jsonify({
                "success": False,
                "error": f"Simulation not found: {simulation_id}"
            }), 404
        
        project = ProjectManager.get_project(state.project_id)
        if not project:
            return jsonify({
                "success": False,
                "error": f"Project not found: {state.project_id}"
            }), 404
        
        graph_id = state.graph_id or project.graph_id
        if not graph_id:
            return jsonify({
                "success": False,
                "error": "Missing graph ID"
            }), 400
        
        simulation_requirement = project.simulation_requirement or ""
        
        # Create the agent and run the chat turn
        agent = ReportAgent(
            graph_id=graph_id,
            simulation_id=simulation_id,
            simulation_requirement=simulation_requirement
        )
        
        result = agent.chat(message=message, chat_history=chat_history)
        
        return jsonify({
            "success": True,
            "data": result
        })
        
    except Exception as e:
        logger.error(f"对话失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


# ============== Report progress and per-section APIs ==============

@report_bp.route('/<report_id>/progress', methods=['GET'])
def get_report_progress(report_id: str):
    """
    Get report generation progress (real time)
    
    Returns:
        {
            "success": true,
            "data": {
                "status": "generating",
                "progress": 45,
                "message": "Generating section: Key Findings",
                "current_section": "Key Findings",
                "completed_sections": ["Executive Summary", "Simulation Background"],
                "updated_at": "2025-12-09T..."
            }
        }
    """
    try:
        progress = ReportManager.get_progress(report_id)
        
        if not progress:
            return jsonify({
                "success": False,
                "error": f"Report not found or progress unavailable: {report_id}"
            }), 404
        
        return jsonify({
            "success": True,
            "data": progress
        })
        
    except Exception as e:
        logger.error(f"Failed to get report progress: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@report_bp.route('/<report_id>/sections', methods=['GET'])
def get_report_sections(report_id: str):
    """
    获取已生成的章节列表(分章节输出)
    
    前端可以轮询此接口获取已生成的章节内容,无需等待整个报告完成
    
    返回:
        {
            "success": true,
            "data": {
                "report_id": "report_xxxx",
                "sections": [
                    {
                        "filename": "section_01.md",
                        "section_index": 1,
                        "content": "## 执行摘要\\n\\n..."
                    },
                    ...
                ],
                "total_sections": 3,
                "is_complete": false
            }
        }
    """
    try:
        sections = ReportManager.get_generated_sections(report_id)
        
        # 获取报告状态
        report = ReportManager.get_report(report_id)
        is_complete = report is not None and report.status == ReportStatus.COMPLETED
        
        return jsonify({
            "success": True,
            "data": {
                "report_id": report_id,
                "sections": sections,
                "total_sections": len(sections),
                "is_complete": is_complete
            }
        })
        
    except Exception as e:
        logger.error(f"获取章节列表失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
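The per-section endpoints above rely on a stable on-disk naming scheme: each section is written as a zero-padded two-digit file (`section_01.md`, `section_02.md`, ...), which is the same `section_{index:02d}.md` pattern the single-section route checks for. A minimal sketch of that naming:

```python
def section_filename(index):
    """Zero-padded section filename, matching the section_{index:02d}.md pattern."""
    return f"section_{index:02d}.md"

names = [section_filename(i) for i in (1, 2, 12)]
# ["section_01.md", "section_02.md", "section_12.md"]
```

Zero-padding keeps a plain lexicographic sort of the directory listing in section order, so the frontend can render sections as they appear on disk.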


@report_bp.route('/<report_id>/section/<int:section_index>', methods=['GET'])
def get_single_section(report_id: str, section_index: int):
    """
    Get the content of a single section.
    
    Returns:
        {
            "success": true,
            "data": {
                "filename": "section_01.md",
                "content": "## Executive Summary\\n\\n..."
            }
        }
    """
    try:
        section_path = ReportManager._get_section_path(report_id, section_index)
        
        if not os.path.exists(section_path):
            return jsonify({
                "success": False,
                "error": f"Section not found: section_{section_index:02d}.md"
            }), 404
        
        with open(section_path, 'r', encoding='utf-8') as f:
            content = f.read()
        
        return jsonify({
            "success": True,
            "data": {
                "filename": f"section_{section_index:02d}.md",
                "section_index": section_index,
                "content": content
            }
        })
        
    except Exception as e:
        logger.error(f"Failed to get section content: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


# ============== Report status check endpoints ==============

@report_bp.route('/check/<simulation_id>', methods=['GET'])
def check_report_status(simulation_id: str):
    """
    Check whether a simulation has a report, and the report's status.
    
    Used by the frontend to decide whether to unlock the Interview feature.
    
    Returns:
        {
            "success": true,
            "data": {
                "simulation_id": "sim_xxxx",
                "has_report": true,
                "report_status": "completed",
                "report_id": "report_xxxx",
                "interview_unlocked": true
            }
        }
    """
    try:
        report = ReportManager.get_report_by_simulation(simulation_id)
        
        has_report = report is not None
        report_status = report.status.value if report else None
        report_id = report.report_id if report else None
        
        # Interview is unlocked only after the report has completed
        interview_unlocked = has_report and report.status == ReportStatus.COMPLETED
        
        return jsonify({
            "success": True,
            "data": {
                "simulation_id": simulation_id,
                "has_report": has_report,
                "report_status": report_status,
                "report_id": report_id,
                "interview_unlocked": interview_unlocked
            }
        })
        
    except Exception as e:
        logger.error(f"Failed to check report status: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


# ============== Agent log endpoints ==============

@report_bp.route('/<report_id>/agent-log', methods=['GET'])
def get_agent_log(report_id: str):
    """
    Get the detailed execution log of the Report Agent.
    
    Provides real-time access to every step of report generation, including:
    - Report start, planning start/finish
    - For each section: start, tool calls, LLM responses, completion
    - Report completion or failure
    
    Query parameters:
        from_line: line number to start reading from (optional, default 0,
                   used for incremental fetching)
    
    Returns:
        {
            "success": true,
            "data": {
                "logs": [
                    {
                        "timestamp": "2025-12-13T...",
                        "elapsed_seconds": 12.5,
                        "report_id": "report_xxxx",
                        "action": "tool_call",
                        "stage": "generating",
                        "section_title": "Executive Summary",
                        "section_index": 1,
                        "details": {
                            "tool_name": "insight_forge",
                            "parameters": {...},
                            ...
                        }
                    },
                    ...
                ],
                "total_lines": 25,
                "from_line": 0,
                "has_more": false
            }
        }
    """
    try:
        from_line = request.args.get('from_line', 0, type=int)
        
        log_data = ReportManager.get_agent_log(report_id, from_line=from_line)
        
        return jsonify({
            "success": True,
            "data": log_data
        })
        
    except Exception as e:
        logger.error(f"Failed to get agent log: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
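The `from_line` contract shared by the incremental log endpoints can be isolated as a small pure helper. This is an illustrative restatement, not the backend's code: the real slicing lives in `ReportManager.get_agent_log` / `get_console_log`, and the `limit` parameter here is an assumption rather than a documented query parameter.

```python
def page_log_lines(lines, from_line=0, limit=100):
    """Return one page of log lines plus paging metadata.

    Mirrors the response shape of the /agent-log and /console-log
    endpoints: logs, total_lines, from_line, has_more.
    """
    total = len(lines)
    page = lines[from_line:from_line + limit]
    return {
        "logs": page,
        "total_lines": total,
        "from_line": from_line,
        # More lines remain if this page did not reach the end of the log.
        "has_more": from_line + len(page) < total,
    }

# A client keeps requesting from_line = previous from_line + len(logs)
# until has_more is false.
first = page_log_lines(list(range(250)), from_line=0, limit=100)
second = page_log_lines(list(range(250)), from_line=200, limit=100)
```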


@report_bp.route('/<report_id>/agent-log/stream', methods=['GET'])
def stream_agent_log(report_id: str):
    """
    Get the complete Agent log (everything in one request).
    
    Returns:
        {
            "success": true,
            "data": {
                "logs": [...],
                "count": 25
            }
        }
    """
    try:
        logs = ReportManager.get_agent_log_stream(report_id)
        
        return jsonify({
            "success": True,
            "data": {
                "logs": logs,
                "count": len(logs)
            }
        })
        
    except Exception as e:
        logger.error(f"Failed to get agent log: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


# ============== Console log endpoints ==============

@report_bp.route('/<report_id>/console-log', methods=['GET'])
def get_console_log(report_id: str):
    """
    Get the console output log of the Report Agent.
    
    Provides real-time access to console output (INFO, WARNING, etc.)
    produced during report generation. Unlike the structured JSON log
    returned by the agent-log endpoint, this is a plain-text,
    console-style log.
    
    Query parameters:
        from_line: line number to start reading from (optional, default 0,
                   used for incremental fetching)
    
    Returns:
        {
            "success": true,
            "data": {
                "logs": [
                    "[19:46:14] INFO: Search complete: found 15 relevant facts",
                    "[19:46:14] INFO: Graph search: graph_id=xxx, query=...",
                    ...
                ],
                "total_lines": 100,
                "from_line": 0,
                "has_more": false
            }
        }
    """
    try:
        from_line = request.args.get('from_line', 0, type=int)
        
        log_data = ReportManager.get_console_log(report_id, from_line=from_line)
        
        return jsonify({
            "success": True,
            "data": log_data
        })
        
    except Exception as e:
        logger.error(f"Failed to get console log: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@report_bp.route('/<report_id>/console-log/stream', methods=['GET'])
def stream_console_log(report_id: str):
    """
    Get the complete console log (everything in one request).
    
    Returns:
        {
            "success": true,
            "data": {
                "logs": [...],
                "count": 100
            }
        }
    """
    try:
        logs = ReportManager.get_console_log_stream(report_id)
        
        return jsonify({
            "success": True,
            "data": {
                "logs": logs,
                "count": len(logs)
            }
        })
        
    except Exception as e:
        logger.error(f"Failed to get console log: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


# ============== Tool endpoints (for debugging) ==============

@report_bp.route('/tools/search', methods=['POST'])
def search_graph_tool():
    """
    Graph search tool endpoint (for debugging).
    
    Request (JSON):
        {
            "graph_id": "mirofish_xxxx",
            "query": "search query",
            "limit": 10
        }
    """
    try:
        data = request.get_json() or {}
        
        graph_id = data.get('graph_id')
        query = data.get('query')
        limit = data.get('limit', 10)
        
        if not graph_id or not query:
            return jsonify({
                "success": False,
                "error": "Please provide graph_id and query"
            }), 400
        
        from ..services.zep_tools import ZepToolsService
        
        tools = ZepToolsService()
        result = tools.search_graph(
            graph_id=graph_id,
            query=query,
            limit=limit
        )
        
        return jsonify({
            "success": True,
            "data": result.to_dict()
        })
        
    except Exception as e:
        logger.error(f"Graph search failed: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@report_bp.route('/tools/statistics', methods=['POST'])
def get_graph_statistics_tool():
    """
    Graph statistics tool endpoint (for debugging).
    
    Request (JSON):
        {
            "graph_id": "mirofish_xxxx"
        }
    """
    try:
        data = request.get_json() or {}
        
        graph_id = data.get('graph_id')
        
        if not graph_id:
            return jsonify({
                "success": False,
                "error": "Please provide graph_id"
            }), 400
        
        from ..services.zep_tools import ZepToolsService
        
        tools = ZepToolsService()
        result = tools.get_graph_statistics(graph_id)
        
        return jsonify({
            "success": True,
            "data": result
        })
        
    except Exception as e:
        logger.error(f"Failed to get graph statistics: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


================================================
FILE: backend/app/api/simulation.py
================================================
"""
Simulation-related API routes.
Step 2: Zep entity reading and filtering, plus OASIS simulation preparation and execution (fully automated).
"""

import os
import traceback
from flask import request, jsonify, send_file

from . import simulation_bp
from ..config import Config
from ..services.zep_entity_reader import ZepEntityReader
from ..services.oasis_profile_generator import OasisProfileGenerator
from ..services.simulation_manager import SimulationManager, SimulationStatus
from ..services.simulation_runner import SimulationRunner, RunnerStatus
from ..utils.logger import get_logger
from ..models.project import ProjectManager

logger = get_logger('mirofish.api.simulation')


# Optimization prefix for interview prompts.
# Prepending it keeps the agent from calling tools, so it replies directly with text.
INTERVIEW_PROMPT_PREFIX = "Drawing on your persona and all of your past memories and actions, reply to me directly in text without calling any tools: "


def optimize_interview_prompt(prompt: str) -> str:
    """
    优化Interview提问,添加前缀避免Agent调用工具
    
    Args:
        prompt: 原始提问
        
    Returns:
        优化后的提问
    """
    if not prompt:
        return prompt
    # 避免重复添加前缀
    if prompt.startswith(INTERVIEW_PROMPT_PREFIX):
        return prompt
    return f"{INTERVIEW_PROMPT_PREFIX}{prompt}"
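A quick self-contained check of the helper's idempotency property (the prefix and function are restated here, with an illustrative English prefix, so the snippet runs on its own):

```python
PREFIX = "Reply in plain text without calling any tools: "

def optimize(prompt: str) -> str:
    # Same shape as optimize_interview_prompt above: empty prompts pass
    # through, and an already-prefixed prompt is returned unchanged.
    if not prompt:
        return prompt
    if prompt.startswith(PREFIX):
        return prompt
    return f"{PREFIX}{prompt}"

once = optimize("How did you feel about the announcement?")
twice = optimize(once)  # applying it again is a no-op
```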


# ============== Entity reading endpoints ==============

@simulation_bp.route('/entities/<graph_id>', methods=['GET'])
def get_graph_entities(graph_id: str):
    """
    Get all entities in the graph (filtered).
    
    Only returns nodes that match the predefined entity types
    (i.e. nodes whose labels are not just "Entity").
    
    Query parameters:
        entity_types: comma-separated list of entity types (optional, for further filtering)
        enrich: whether to fetch related edge information (default true)
    """
    try:
        if not Config.ZEP_API_KEY:
            return jsonify({
                "success": False,
                "error": "ZEP_API_KEY is not configured"
            }), 500
        
        entity_types_str = request.args.get('entity_types', '')
        entity_types = [t.strip() for t in entity_types_str.split(',') if t.strip()] if entity_types_str else None
        enrich = request.args.get('enrich', 'true').lower() == 'true'
        
        logger.info(f"Fetching graph entities: graph_id={graph_id}, entity_types={entity_types}, enrich={enrich}")
        
        reader = ZepEntityReader()
        result = reader.filter_defined_entities(
            graph_id=graph_id,
            defined_entity_types=entity_types,
            enrich_with_edges=enrich
        )
        
        return jsonify({
            "success": True,
            "data": result.to_dict()
        })
        
    except Exception as e:
        logger.error(f"Failed to get graph entities: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@simulation_bp.route('/entities/<graph_id>/<entity_uuid>', methods=['GET'])
def get_entity_detail(graph_id: str, entity_uuid: str):
    """Get detailed information for a single entity."""
    try:
        if not Config.ZEP_API_KEY:
            return jsonify({
                "success": False,
                "error": "ZEP_API_KEY is not configured"
            }), 500
        
        reader = ZepEntityReader()
        entity = reader.get_entity_with_context(graph_id, entity_uuid)
        
        if not entity:
            return jsonify({
                "success": False,
                "error": f"Entity not found: {entity_uuid}"
            }), 404
        
        return jsonify({
            "success": True,
            "data": entity.to_dict()
        })
        
    except Exception as e:
        logger.error(f"Failed to get entity detail: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@simulation_bp.route('/entities/<graph_id>/by-type/<entity_type>', methods=['GET'])
def get_entities_by_type(graph_id: str, entity_type: str):
    """Get all entities of a given type."""
    try:
        if not Config.ZEP_API_KEY:
            return jsonify({
                "success": False,
                "error": "ZEP_API_KEY is not configured"
            }), 500
        
        enrich = request.args.get('enrich', 'true').lower() == 'true'
        
        reader = ZepEntityReader()
        entities = reader.get_entities_by_type(
            graph_id=graph_id,
            entity_type=entity_type,
            enrich_with_edges=enrich
        )
        
        return jsonify({
            "success": True,
            "data": {
                "entity_type": entity_type,
                "count": len(entities),
                "entities": [e.to_dict() for e in entities]
            }
        })
        
    except Exception as e:
        logger.error(f"Failed to get entities: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


# ============== Simulation management endpoints ==============

@simulation_bp.route('/create', methods=['POST'])
def create_simulation():
    """
    Create a new simulation.
    
    Note: parameters such as max_rounds are generated by the LLM and do
    not need to be set manually.
    
    Request (JSON):
        {
            "project_id": "proj_xxxx",      // required
            "graph_id": "mirofish_xxxx",    // optional; taken from the project if omitted
            "enable_twitter": true,          // optional, default true
            "enable_reddit": true            // optional, default true
        }
    
    Returns:
        {
            "success": true,
            "data": {
                "simulation_id": "sim_xxxx",
                "project_id": "proj_xxxx",
                "graph_id": "mirofish_xxxx",
                "status": "created",
                "enable_twitter": true,
                "enable_reddit": true,
                "created_at": "2025-12-01T10:00:00"
            }
        }
    """
    try:
        data = request.get_json() or {}
        
        project_id = data.get('project_id')
        if not project_id:
            return jsonify({
                "success": False,
                "error": "Please provide project_id"
            }), 400
        
        project = ProjectManager.get_project(project_id)
        if not project:
            return jsonify({
                "success": False,
                "error": f"Project not found: {project_id}"
            }), 404
        
        graph_id = data.get('graph_id') or project.graph_id
        if not graph_id:
            return jsonify({
                "success": False,
                "error": "The project has no graph yet; call /api/graph/build first"
            }), 400
        
        manager = SimulationManager()
        state = manager.create_simulation(
            project_id=project_id,
            graph_id=graph_id,
            enable_twitter=data.get('enable_twitter', True),
            enable_reddit=data.get('enable_reddit', True),
        )
        
        return jsonify({
            "success": True,
            "data": state.to_dict()
        })
        
    except Exception as e:
        logger.error(f"Failed to create simulation: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


def _check_simulation_prepared(simulation_id: str) -> tuple:
    """
    Check whether a simulation has already been fully prepared.
    
    Conditions:
    1. state.json exists and its status is "ready"
    2. Required files exist: reddit_profiles.json, twitter_profiles.csv, simulation_config.json
    
    Note: the run scripts (run_*.py) stay in backend/scripts/ and are no
    longer copied into the simulation directory.
    
    Args:
        simulation_id: simulation ID
        
    Returns:
        (is_prepared: bool, info: dict)
    """
    import os
    from ..config import Config
    
    simulation_dir = os.path.join(Config.OASIS_SIMULATION_DATA_DIR, simulation_id)
    
    # Check that the simulation directory exists
    if not os.path.exists(simulation_dir):
        return False, {"reason": "Simulation directory does not exist"}
    
    # Required files (scripts excluded; they live in backend/scripts/)
    required_files = [
        "state.json",
        "simulation_config.json",
        "reddit_profiles.json",
        "twitter_profiles.csv"
    ]
    
    # Check which files exist
    existing_files = []
    missing_files = []
    for f in required_files:
        file_path = os.path.join(simulation_dir, f)
        if os.path.exists(file_path):
            existing_files.append(f)
        else:
            missing_files.append(f)
    
    if missing_files:
        return False, {
            "reason": "Missing required files",
            "missing_files": missing_files,
            "existing_files": existing_files
        }
    
    # Check the status recorded in state.json
    state_file = os.path.join(simulation_dir, "state.json")
    try:
        import json
        with open(state_file, 'r', encoding='utf-8') as f:
            state_data = json.load(f)
        
        status = state_data.get("status", "")
        config_generated = state_data.get("config_generated", False)
        
        # Verbose logging
        logger.debug(f"Checking simulation prepare state: {simulation_id}, status={status}, config_generated={config_generated}")
        
        # If config_generated is True and the files exist, preparation is done.
        # Any of the following statuses means the preparation work has finished:
        # - ready: prepared and runnable
        # - preparing: finished if config_generated is True
        # - running: currently running, so preparation finished long ago
        # - completed: finished running, so preparation finished long ago
        # - stopped: stopped, so preparation finished long ago
        # - failed: the run failed (but preparation had completed)
        prepared_statuses = ["ready", "preparing", "running", "completed", "stopped", "failed"]
        if status in prepared_statuses and config_generated:
            # Gather file statistics
            profiles_file = os.path.join(simulation_dir, "reddit_profiles.json")
            config_file = os.path.join(simulation_dir, "simulation_config.json")
            
            profiles_count = 0
            if os.path.exists(profiles_file):
                with open(profiles_file, 'r', encoding='utf-8') as f:
                    profiles_data = json.load(f)
                    profiles_count = len(profiles_data) if isinstance(profiles_data, list) else 0
            
            # If the status is "preparing" but the files are done, auto-promote to "ready"
            if status == "preparing":
                try:
                    state_data["status"] = "ready"
                    from datetime import datetime
                    state_data["updated_at"] = datetime.now().isoformat()
                    with open(state_file, 'w', encoding='utf-8') as f:
                        json.dump(state_data, f, ensure_ascii=False, indent=2)
                    logger.info(f"Auto-updating simulation status: {simulation_id} preparing -> ready")
                    status = "ready"
                except Exception as e:
                    logger.warning(f"Failed to auto-update status: {e}")
            
            logger.info(f"Simulation {simulation_id} check result: prepared (status={status}, config_generated={config_generated})")
            return True, {
                "status": status,
                "entities_count": state_data.get("entities_count", 0),
                "profiles_count": profiles_count,
                "entity_types": state_data.get("entity_types", []),
                "config_generated": config_generated,
                "created_at": state_data.get("created_at"),
                "updated_at": state_data.get("updated_at"),
                "existing_files": existing_files
            }
        else:
            logger.warning(f"Simulation {simulation_id} check result: not prepared (status={status}, config_generated={config_generated})")
            return False, {
                "reason": f"Status is not in the prepared list, or config_generated is false: status={status}, config_generated={config_generated}",
                "status": status,
                "config_generated": config_generated
            }
            
    except Exception as e:
        return False, {"reason": f"Failed to read the state file: {str(e)}"}
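The status decision above reduces to a small predicate, restated here for clarity (an illustration only; `_check_simulation_prepared` additionally verifies the required files and may auto-promote `preparing` to `ready`):

```python
PREPARED_STATUSES = {"ready", "preparing", "running", "completed", "stopped", "failed"}

def is_prepared(status: str, config_generated: bool) -> bool:
    # Preparation counts as done once the config has been generated and
    # the simulation has reached (or moved past) a prepared status.
    return status in PREPARED_STATUSES and config_generated
```

For example, `("preparing", True)` counts as prepared (the files are done, the status just was not promoted yet), while `("created", True)` does not, because the simulation never entered the prepare pipeline.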


@simulation_bp.route('/prepare', methods=['POST'])
def prepare_simulation():
    """
    Prepare the simulation environment (async task; the LLM generates all parameters).
    
    This is a long-running operation. The endpoint returns a task_id
    immediately; poll /api/simulation/prepare/status (POST) for progress.
    
    Features:
    - Detects already-completed preparation and avoids regenerating it
    - If preparation is already done, returns the existing result directly
    - Supports forced regeneration (force_regenerate=true)
    
    Steps:
    1. Check for already-completed preparation
    2. Read and filter entities from the Zep graph
    3. Generate an OASIS agent profile for each entity (with retries)
    4. Have the LLM generate the simulation config (with retries)
    5. Save the config files and preset scripts
    
    Request (JSON):
        {
            "simulation_id": "sim_xxxx",                   // required, simulation ID
            "entity_types": ["Student", "PublicFigure"],  // optional, restrict entity types
            "use_llm_for_profiles": true,                 // optional, use the LLM to generate personas
            "parallel_profile_count": 5,                  // optional, profiles generated in parallel, default 5
            "force_regenerate": false                     // optional, force regeneration, default false
        }
    
    Returns:
        {
            "success": true,
            "data": {
                "simulation_id": "sim_xxxx",
                "task_id": "task_xxxx",           // returned for a new task
                "status": "preparing|ready",
                "message": "Prepare task started | preparation already complete",
                "already_prepared": true|false    // whether preparation was already done
            }
        }
    """
    import threading
    import os
    from ..models.task import TaskManager, TaskStatus
    from ..config import Config
    
    try:
        data = request.get_json() or {}
        
        simulation_id = data.get('simulation_id')
        if not simulation_id:
            return jsonify({
                "success": False,
                "error": "Please provide simulation_id"
            }), 400
        
        manager = SimulationManager()
        state = manager.get_simulation(simulation_id)
        
        if not state:
            return jsonify({
                "success": False,
                "error": f"Simulation not found: {simulation_id}"
            }), 404
        
        # Check whether forced regeneration was requested
        force_regenerate = data.get('force_regenerate', False)
        logger.info(f"Handling /prepare request: simulation_id={simulation_id}, force_regenerate={force_regenerate}")
        
        # Skip regeneration if preparation is already complete
        if not force_regenerate:
            logger.debug(f"Checking whether simulation {simulation_id} is already prepared...")
            is_prepared, prepare_info = _check_simulation_prepared(simulation_id)
            logger.debug(f"Check result: is_prepared={is_prepared}, prepare_info={prepare_info}")
            if is_prepared:
                logger.info(f"Simulation {simulation_id} is already prepared; skipping regeneration")
                return jsonify({
                    "success": True,
                    "data": {
                        "simulation_id": simulation_id,
                        "status": "ready",
                        "message": "Preparation is already complete; nothing to regenerate",
                        "already_prepared": True,
                        "prepare_info": prepare_info
                    }
                })
            else:
                logger.info(f"Simulation {simulation_id} is not prepared yet; starting a prepare task")
        
        # Fetch the required info from the project
        project = ProjectManager.get_project(state.project_id)
        if not project:
            return jsonify({
                "success": False,
                "error": f"Project not found: {state.project_id}"
            }), 404
        
        # Get the simulation requirement
        simulation_requirement = project.simulation_requirement or ""
        if not simulation_requirement:
            return jsonify({
                "success": False,
                "error": "The project is missing a simulation requirement description (simulation_requirement)"
            }), 400
        
        # Get the document text
        document_text = ProjectManager.get_extracted_text(state.project_id) or ""
        
        entity_types_list = data.get('entity_types')
        use_llm_for_profiles = data.get('use_llm_for_profiles', True)
        parallel_profile_count = data.get('parallel_profile_count', 5)
        
        # ========== Fetch the entity count synchronously (before starting the background task) ==========
        # This lets the frontend see the expected total agent count right after calling prepare
        try:
            logger.info(f"Fetching entity count synchronously: graph_id={state.graph_id}")
            reader = ZepEntityReader()
            # Quick entity read (no edge info needed; we only count)
            filtered_preview = reader.filter_defined_entities(
                graph_id=state.graph_id,
                defined_entity_types=entity_types_list,
                enrich_with_edges=False  # skip edge info for speed
            )
            # Store the entity count in the state (so the frontend can read it immediately)
            state.entities_count = filtered_preview.filtered_count
            state.entity_types = list(filtered_preview.entity_types)
            logger.info(f"Expected entity count: {filtered_preview.filtered_count}, types: {filtered_preview.entity_types}")
        except Exception as e:
            logger.warning(f"Failed to fetch entity count synchronously (will retry in the background task): {e}")
            # Not fatal; the background task will fetch it again
        
        # Create the async task
        task_manager = TaskManager()
        task_id = task_manager.create_task(
            task_type="simulation_prepare",
            metadata={
                "simulation_id": simulation_id,
                "project_id": state.project_id
            }
        )
        
        # Update the simulation status (including the pre-fetched entity count)
        state.status = SimulationStatus.PREPARING
        manager._save_simulation_state(state)
        
        # Define the background task
        def run_prepare():
            try:
                task_manager.update_task(
                    task_id,
                    status=TaskStatus.PROCESSING,
                    progress=0,
                    message="Starting to prepare the simulation environment..."
                )
                
                # Prepare the simulation (with a progress callback)
                # Holds per-stage progress details
                stage_details = {}
                
                def progress_callback(stage, progress, message, **kwargs):
                    # Compute the overall progress
                    stage_weights = {
                        "reading": (0, 20),           # 0-20%
                        "generating_profiles": (20, 70),  # 20-70%
                        "generating_config": (70, 90),    # 70-90%
                        "copying_scripts": (90, 100)       # 90-100%
                    }
                    
                    start, end = stage_weights.get(stage, (0, 100))
                    current_progress = int(start + (end - start) * progress / 100)
                    
                    # Human-readable stage names
                    stage_names = {
                        "reading": "Reading graph entities",
                        "generating_profiles": "Generating agent profiles",
                        "generating_config": "Generating simulation config",
                        "copying_scripts": "Preparing simulation scripts"
                    }
                    
                    stage_index = list(stage_weights.keys()).index(stage) + 1 if stage in stage_weights else 1
                    total_stages = len(stage_weights)
                    
                    # Update the stage details
                    stage_details[stage] = {
                        "stage_name": stage_names.get(stage, stage),
                        "stage_progress": progress,
                        "current": kwargs.get("current", 0),
                        "total": kwargs.get("total", 0),
                        "item_name": kwargs.get("item_name", "")
                    }
                    
                    # Build the detailed progress payload
                    detail = stage_details[stage]
                    progress_detail_data = {
                        "current_stage": stage,
                        "current_stage_name": stage_names.get(stage, stage),
                        "stage_index": stage_index,
                        "total_stages": total_stages,
                        "stage_progress": progress,
                        "current_item": detail["current"],
                        "total_items": detail["total"],
                        "item_description": message
                    }
                    
                    # Build a concise message
                    if detail["total"] > 0:
                        detailed_message = (
                            f"[{stage_index}/{total_stages}] {stage_names.get(stage, stage)}: "
                            f"{detail['current']}/{detail['total']} - {message}"
                        )
                    else:
                        detailed_message = f"[{stage_index}/{total_stages}] {stage_names.get(stage, stage)}: {message}"
                    
                    task_manager.update_task(
                        task_id,
                        progress=current_progress,
                        message=detailed_message,
                        progress_detail=progress_detail_data
                    )
                
                result_state = manager.prepare_simulation(
                    simulation_id=simulation_id,
                    simulation_requirement=simulation_requirement,
                    document_text=document_text,
                    defined_entity_types=entity_types_list,
                    use_llm_for_profiles=use_llm_for_profiles,
                    progress_callback=progress_callback,
                    parallel_profile_count=parallel_profile_count
                )
                
                # Task completed
                task_manager.complete_task(
                    task_id,
                    result=result_state.to_simple_dict()
                )
                
            except Exception as e:
                logger.error(f"Failed to prepare simulation: {str(e)}")
                task_manager.fail_task(task_id, str(e))
                
                # Mark the simulation as failed
                state = manager.get_simulation(simulation_id)
                if state:
                    state.status = SimulationStatus.FAILED
                    state.error = str(e)
                    manager._save_simulation_state(state)
        
        # Start the background thread
        thread = threading.Thread(target=run_prepare, daemon=True)
        thread.start()
        
        return jsonify({
            "success": True,
            "data": {
                "simulation_id": simulation_id,
                "task_id": task_id,
                "status": "preparing",
                "message": "Prepare task started; poll /api/simulation/prepare/status for progress",
                "already_prepared": False,
                "expected_entities_count": state.entities_count,  # expected total agent count
                "entity_types": state.entity_types  # list of entity types
            }
        })
        
    except ValueError as e:
        return jsonify({
            "success": False,
            "error": str(e)
        }), 404
        
    except Exception as e:
        logger.error(f"Failed to start prepare task: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
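The overall progress computed inside `progress_callback` is a linear interpolation within fixed per-stage bands; the mapping can be isolated as follows (a restatement for illustration, using the same weights as the handler above):

```python
# Each stage owns a band of the overall 0-100% progress range.
STAGE_WEIGHTS = {
    "reading": (0, 20),
    "generating_profiles": (20, 70),
    "generating_config": (70, 90),
    "copying_scripts": (90, 100),
}

def overall_progress(stage: str, stage_progress: float) -> int:
    """Map a stage-local percentage (0-100) into the overall 0-100 scale."""
    start, end = STAGE_WEIGHTS.get(stage, (0, 100))
    return int(start + (end - start) * stage_progress / 100)

# Halfway through profile generation lands at 45% overall:
# 20 + (70 - 20) * 50 / 100 = 45
```

Unknown stages fall back to the full 0-100 band, matching the handler's `stage_weights.get(stage, (0, 100))` default.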


@simulation_bp.route('/prepare/status', methods=['POST'])
def get_prepare_status():
    """
    Query the progress of a prepare task.
    
    Supports two query modes:
    1. Query the progress of a running task by task_id
    2. Check whether preparation is already complete by simulation_id
    
    Request (JSON):
        {
            "task_id": "task_xxxx",          // optional, the task_id returned by prepare
            "simulation_id": "sim_xxxx"      // optional, simulation ID (to check for completed preparation)
        }
    
    Returns:
        {
            "success": true,
            "data": {
                "task_id": "task_xxxx",
                "status": "processing|completed|ready",
                "progress": 45,
                "message": "...",
                "already_prepared": true|false,  // whether preparation is already complete
                "prepare_info": {...}            // details when preparation is complete
            }
        }
    """
    from ..models.task import TaskManager
    
    try:
        data = request.get_json() or {}
        
        task_id = data.get('task_id')
        simulation_id = data.get('simulation_id')
        
        # If a simulation_id was provided, first check whether preparation is complete
        if simulation_id:
            is_prepared, prepare_info = _check_simulation_prepared(simulation_id)
            if is_prepared:
                return jsonify({
                    "success": True,
                    "data": {
                        "simulation_id": simulation_id,
                        "status": "ready",
                        "progress": 100,
                        "message": "已有完成的准备工作",
                        "already_prepared": True,
                        "prepare_info": prepare_info
                    }
                })
        
        # Without a task_id, fall back to simulation_id handling or report an error
        if not task_id:
            if simulation_id:
                # simulation_id provided but preparation not yet complete
                return jsonify({
                    "success": True,
                    "data": {
                        "simulation_id": simulation_id,
                        "status": "not_started",
                        "progress": 0,
                        "message": "Preparation has not started; call /api/simulation/prepare to begin",
                        "already_prepared": False
                    }
                })
            return jsonify({
                "success": False,
                "error": "Please provide task_id or simulation_id"
            }), 400
        
        task_manager = TaskManager()
        task = task_manager.get_task(task_id)
        
        if not task:
            # Task not found; if a simulation_id is available, check whether preparation already completed
            if simulation_id:
                is_prepared, prepare_info = _check_simulation_prepared(simulation_id)
                if is_prepared:
                    return jsonify({
                        "success": True,
                        "data": {
                            "simulation_id": simulation_id,
                            "task_id": task_id,
                            "status": "ready",
                            "progress": 100,
                            "message": "任务已完成(准备工作已存在)",
                            "already_prepared": True,
                            "prepare_info": prepare_info
                        }
                    })
            
            return jsonify({
                "success": False,
                "error": f"任务不存在: {task_id}"
            }), 404
        
        task_dict = task.to_dict()
        task_dict["already_prepared"] = False
        
        return jsonify({
            "success": True,
            "data": task_dict
        })
        
    except Exception as e:
        logger.error(f"查询任务状态失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e)
        }), 500


@simulation_bp.route('/<simulation_id>', methods=['GET'])
def get_simulation(simulation_id: str):
    """获取模拟状态"""
    try:
        manager = SimulationManager()
        state = manager.get_simulation(simulation_id)
        
        if not state:
            return jsonify({
                "success": False,
                "error": f"模拟不存在: {simulation_id}"
            }), 404
        
        result = state.to_dict()
        
        # If the simulation is ready, attach run instructions
        if state.status == SimulationStatus.READY:
            result["run_instructions"] = manager.get_run_instructions(simulation_id)
        
        return jsonify({
            "success": True,
            "data": result
        })
        
    except Exception as e:
        logger.error(f"获取模拟状态失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@simulation_bp.route('/list', methods=['GET'])
def list_simulations():
    """
    列出所有模拟
    
    Query参数:
        project_id: 按项目ID过滤(可选)
    """
    try:
        project_id = request.args.get('project_id')
        
        manager = SimulationManager()
        simulations = manager.list_simulations(project_id=project_id)
        
        return jsonify({
            "success": True,
            "data": [s.to_dict() for s in simulations],
            "count": len(simulations)
        })
        
    except Exception as e:
        logger.error(f"列出模拟失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


def _get_report_id_for_simulation(simulation_id: str) -> 'str | None':
    """
    Get the latest report_id for a simulation.

    Walks the reports directory for reports whose simulation_id matches;
    if several match, returns the newest one (sorted by created_at).

    Args:
        simulation_id: simulation ID

    Returns:
        report_id, or None if no matching report exists
    """
    import json

    # Reports directory path: backend/uploads/reports.
    # __file__ is app/api/simulation.py, so go up two levels to backend/.
    reports_dir = os.path.join(os.path.dirname(__file__), '../../uploads/reports')
    if not os.path.exists(reports_dir):
        return None
    
    matching_reports = []
    
    try:
        for report_folder in os.listdir(reports_dir):
            report_path = os.path.join(reports_dir, report_folder)
            if not os.path.isdir(report_path):
                continue
            
            meta_file = os.path.join(report_path, "meta.json")
            if not os.path.exists(meta_file):
                continue
            
            try:
                with open(meta_file, 'r', encoding='utf-8') as f:
                    meta = json.load(f)
                
                if meta.get("simulation_id") == simulation_id:
                    matching_reports.append({
                        "report_id": meta.get("report_id"),
                        "created_at": meta.get("created_at", ""),
                        "status": meta.get("status", "")
                    })
            except Exception:
                continue
        
        if not matching_reports:
            return None
        
        # Sort by created_at descending and return the newest
        matching_reports.sort(key=lambda x: x.get("created_at", ""), reverse=True)
        return matching_reports[0].get("report_id")
        
    except Exception as e:
        logger.warning(f"查找 simulation {simulation_id} 的 report 失败: {e}")
        return None
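# The "newest matching report" selection above reduces to a small pure
# function. A minimal sketch (the helper name pick_latest_report is
# hypothetical, not part of this module), relying on the fact that
# ISO-8601 created_at strings sort lexicographically in chronological order:

```python
def pick_latest_report(metas, simulation_id):
    """Return the report_id of the newest report for simulation_id, or None.

    ISO-8601 timestamps sort lexicographically, so a plain string sort on
    created_at is enough to find the most recent report.
    """
    matching = [m for m in metas if m.get("simulation_id") == simulation_id]
    if not matching:
        return None
    matching.sort(key=lambda m: m.get("created_at", ""), reverse=True)
    return matching[0].get("report_id")


metas = [
    {"simulation_id": "sim_a", "report_id": "r1", "created_at": "2025-12-01T10:00:00"},
    {"simulation_id": "sim_a", "report_id": "r2", "created_at": "2025-12-03T09:00:00"},
    {"simulation_id": "sim_b", "report_id": "r3", "created_at": "2025-12-02T08:00:00"},
]
print(pick_latest_report(metas, "sim_a"))  # r2
print(pick_latest_report(metas, "sim_c"))  # None
```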


@simulation_bp.route('/history', methods=['GET'])
def get_simulation_history():
    """
    获取历史模拟列表(带项目详情)
    
    用于首页历史项目展示,返回包含项目名称、描述等丰富信息的模拟列表
    
    Query参数:
        limit: 返回数量限制(默认20)
    
    返回:
        {
            "success": true,
            "data": [
                {
                    "simulation_id": "sim_xxxx",
                    "project_id": "proj_xxxx",
                    "project_name": "武大舆情分析",
                    "simulation_requirement": "如果武汉大学发布...",
                    "status": "completed",
                    "entities_count": 68,
                    "profiles_count": 68,
                    "entity_types": ["Student", "Professor", ...],
                    "created_at": "2024-12-10",
                    "updated_at": "2024-12-10",
                    "total_rounds": 120,
                    "current_round": 120,
                    "report_id": "report_xxxx",
                    "version": "v1.0.2"
                },
                ...
            ],
            "count": 7
        }
    """
    try:
        limit = request.args.get('limit', 20, type=int)
        
        manager = SimulationManager()
        simulations = manager.list_simulations()[:limit]
        
        # Enrich the simulation data, reading only from simulation files
        enriched_simulations = []
        for sim in simulations:
            sim_dict = sim.to_dict()
            
            # Read the simulation config (simulation_requirement comes from simulation_config.json)
            config = manager.get_simulation_config(sim.simulation_id)
            if config:
                sim_dict["simulation_requirement"] = config.get("simulation_requirement", "")
                time_config = config.get("time_config", {})
                sim_dict["total_simulation_hours"] = time_config.get("total_simulation_hours", 0)
                # Recommended number of rounds (fallback value)
                recommended_rounds = int(
                    time_config.get("total_simulation_hours", 0) * 60 / 
                    max(time_config.get("minutes_per_round", 60), 1)
                )
            else:
                sim_dict["simulation_requirement"] = ""
                sim_dict["total_simulation_hours"] = 0
                recommended_rounds = 0
            
            # Read the run state (run_state.json holds the user-configured round count)
            run_state = SimulationRunner.get_run_state(sim.simulation_id)
            if run_state:
                sim_dict["current_round"] = run_state.current_round
                sim_dict["runner_status"] = run_state.runner_status.value
                # Use the user-configured total_rounds, falling back to the recommended value
                sim_dict["total_rounds"] = run_state.total_rounds if run_state.total_rounds > 0 else recommended_rounds
            else:
                sim_dict["current_round"] = 0
                sim_dict["runner_status"] = "idle"
                sim_dict["total_rounds"] = recommended_rounds
            
            # Fetch the associated project's file list (up to 3 files)
            project = ProjectManager.get_project(sim.project_id)
            if project and hasattr(project, 'files') and project.files:
                sim_dict["files"] = [
                    {"filename": f.get("filename", "未知文件")} 
                    for f in project.files[:3]
                ]
            else:
                sim_dict["files"] = []
            
            # Find the associated report_id (the newest report for this simulation)
            sim_dict["report_id"] = _get_report_id_for_simulation(sim.simulation_id)
            
            # Attach the version number
            sim_dict["version"] = "v1.0.2"
            
            # Format the creation date
            try:
                created_date = sim_dict.get("created_at", "")[:10]
                sim_dict["created_date"] = created_date
            except Exception:
                sim_dict["created_date"] = ""
            
            enriched_simulations.append(sim_dict)
        
        return jsonify({
            "success": True,
            "data": enriched_simulations,
            "count": len(enriched_simulations)
        })
        
    except Exception as e:
        logger.error(f"获取历史模拟失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
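# The fallback round count used in /history is derived from the time config:
# total_simulation_hours * 60 / minutes_per_round, floored, with max(..., 1)
# guarding against a zero or missing divisor. A minimal sketch (the helper
# name recommended_rounds is hypothetical, mirroring the inline expression):

```python
def recommended_rounds(time_config):
    """Derive a round count from simulated hours and minutes per round."""
    hours = time_config.get("total_simulation_hours", 0)
    # max(..., 1) guards against a zero or missing minutes_per_round
    minutes_per_round = max(time_config.get("minutes_per_round", 60), 1)
    return int(hours * 60 / minutes_per_round)


print(recommended_rounds({"total_simulation_hours": 72, "minutes_per_round": 30}))  # 144
print(recommended_rounds({}))  # 0
print(recommended_rounds({"total_simulation_hours": 2, "minutes_per_round": 0}))  # 120
```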


@simulation_bp.route('/<simulation_id>/profiles', methods=['GET'])
def get_simulation_profiles(simulation_id: str):
    """
    获取模拟的Agent Profile
    
    Query参数:
        platform: 平台类型(reddit/twitter,默认reddit)
    """
    try:
        platform = request.args.get('platform', 'reddit')
        
        manager = SimulationManager()
        profiles = manager.get_profiles(simulation_id, platform=platform)
        
        return jsonify({
            "success": True,
            "data": {
                "platform": platform,
                "count": len(profiles),
                "profiles": profiles
            }
        })
        
    except ValueError as e:
        return jsonify({
            "success": False,
            "error": str(e)
        }), 404
        
    except Exception as e:
        logger.error(f"获取Profile失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@simulation_bp.route('/<simulation_id>/profiles/realtime', methods=['GET'])
def get_simulation_profiles_realtime(simulation_id: str):
    """
    实时获取模拟的Agent Profile(用于在生成过程中实时查看进度)
    
    与 /profiles 接口的区别:
    - 直接读取文件,不经过 SimulationManager
    - 适用于生成过程中的实时查看
    - 返回额外的元数据(如文件修改时间、是否正在生成等)
    
    Query参数:
        platform: 平台类型(reddit/twitter,默认reddit)
    
    返回:
        {
            "success": true,
            "data": {
                "simulation_id": "sim_xxxx",
                "platform": "reddit",
                "count": 15,
                "total_expected": 93,  // 预期总数(如果有)
                "is_generating": true,  // 是否正在生成
                "file_exists": true,
                "file_modified_at": "2025-12-04T18:20:00",
                "profiles": [...]
            }
        }
    """
    import json
    import csv
    from datetime import datetime
    
    try:
        platform = request.args.get('platform', 'reddit')
        
        # Resolve the simulation directory
        sim_dir = os.path.join(Config.OASIS_SIMULATION_DATA_DIR, simulation_id)
        
        if not os.path.exists(sim_dir):
            return jsonify({
                "success": False,
                "error": f"模拟不存在: {simulation_id}"
            }), 404
        
        # Determine the profiles file path
        if platform == "reddit":
            profiles_file = os.path.join(sim_dir, "reddit_profiles.json")
        else:
            profiles_file = os.path.join(sim_dir, "twitter_profiles.csv")
        
        # Check whether the file exists
        file_exists = os.path.exists(profiles_file)
        profiles = []
        file_modified_at = None
        
        if file_exists:
            # Get the file modification time
            file_stat = os.stat(profiles_file)
            file_modified_at = datetime.fromtimestamp(file_stat.st_mtime).isoformat()
            
            try:
                if platform == "reddit":
                    with open(profiles_file, 'r', encoding='utf-8') as f:
                        profiles = json.load(f)
                else:
                    with open(profiles_file, 'r', encoding='utf-8') as f:
                        reader = csv.DictReader(f)
                        profiles = list(reader)
            except Exception as e:
                logger.warning(f"Failed to read profiles file (it may still be being written): {e}")
                profiles = []
        
        # Check whether generation is in progress (via state.json)
        is_generating = False
        total_expected = None
        
        state_file = os.path.join(sim_dir, "state.json")
        if os.path.exists(state_file):
            try:
                with open(state_file, 'r', encoding='utf-8') as f:
                    state_data = json.load(f)
                    status = state_data.get("status", "")
                    is_generating = status == "preparing"
                    total_expected = state_data.get("entities_count")
            except Exception:
                pass
        
        return jsonify({
            "success": True,
            "data": {
                "simulation_id": simulation_id,
                "platform": platform,
                "count": len(profiles),
                "total_expected": total_expected,
                "is_generating": is_generating,
                "file_exists": file_exists,
                "file_modified_at": file_modified_at,
                "profiles": profiles
            }
        })
        
    except Exception as e:
        logger.error(f"实时获取Profile失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
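# The realtime endpoints above tolerate half-written files: the generator may
# still be writing profiles while the frontend polls, so a parse failure
# degrades to an empty list rather than an error. A minimal sketch of that
# pattern (the helper name read_profiles_tolerant is hypothetical):

```python
import json
import os
import tempfile


def read_profiles_tolerant(path):
    """Return parsed profiles, or [] if the file is absent or half-written."""
    if not os.path.exists(path):
        return []
    try:
        with open(path, 'r', encoding='utf-8') as f:
            return json.load(f)
    except Exception:
        # Likely a partial write; the caller will poll again shortly.
        return []


with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "reddit_profiles.json")
    print(read_profiles_tolerant(path))    # [] (file missing)
    with open(path, 'w', encoding='utf-8') as f:
        f.write('[{"username": "agent_1"')  # truncated, as if mid-write
    print(read_profiles_tolerant(path))    # [] (parse failure)
    with open(path, 'w', encoding='utf-8') as f:
        json.dump([{"username": "agent_1"}], f)
    print(read_profiles_tolerant(path))    # [{'username': 'agent_1'}]
```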


@simulation_bp.route('/<simulation_id>/config/realtime', methods=['GET'])
def get_simulation_config_realtime(simulation_id: str):
    """
    实时获取模拟配置(用于在生成过程中实时查看进度)
    
    与 /config 接口的区别:
    - 直接读取文件,不经过 SimulationManager
    - 适用于生成过程中的实时查看
    - 返回额外的元数据(如文件修改时间、是否正在生成等)
    - 即使配置还没生成完也能返回部分信息
    
    返回:
        {
            "success": true,
            "data": {
                "simulation_id": "sim_xxxx",
                "file_exists": true,
                "file_modified_at": "2025-12-04T18:20:00",
                "is_generating": true,  // 是否正在生成
                "generation_stage": "generating_config",  // 当前生成阶段
                "config": {...}  // 配置内容(如果存在)
            }
        }
    """
    import json
    from datetime import datetime
    
    try:
        # Resolve the simulation directory
        sim_dir = os.path.join(Config.OASIS_SIMULATION_DATA_DIR, simulation_id)
        
        if not os.path.exists(sim_dir):
            return jsonify({
                "success": False,
                "error": f"模拟不存在: {simulation_id}"
            }), 404
        
        # Config file path
        config_file = os.path.join(sim_dir, "simulation_config.json")
        
        # Check whether the file exists
        file_exists = os.path.exists(config_file)
        config = None
        file_modified_at = None
        
        if file_exists:
            # Get the file modification time
            file_stat = os.stat(config_file)
            file_modified_at = datetime.fromtimestamp(file_stat.st_mtime).isoformat()
            
            try:
                with open(config_file, 'r', encoding='utf-8') as f:
                    config = json.load(f)
            except Exception as e:
                logger.warning(f"Failed to read config file (it may still be being written): {e}")
                config = None
        
        # Check whether generation is in progress (via state.json)
        is_generating = False
        generation_stage = None
        config_generated = False
        
        state_file = os.path.join(sim_dir, "state.json")
        if os.path.exists(state_file):
            try:
                with open(state_file, 'r', encoding='utf-8') as f:
                    state_data = json.load(f)
                    status = state_data.get("status", "")
                    is_generating = status == "preparing"
                    config_generated = state_data.get("config_generated", False)
                    
                    # Determine the current stage
                    if is_generating:
                        if state_data.get("profiles_generated", False):
                            generation_stage = "generating_config"
                        else:
                            generation_stage = "generating_profiles"
                    elif status == "ready":
                        generation_stage = "completed"
            except Exception:
                pass
        
        # Build the response payload
        response_data = {
            "simulation_id": simulation_id,
            "file_exists": file_exists,
            "file_modified_at": file_modified_at,
            "is_generating": is_generating,
            "generation_stage": generation_stage,
            "config_generated": config_generated,
            "config": config
        }
        
        # If a config exists, extract some key summary statistics
        if config:
            response_data["summary"] = {
                "total_agents": len(config.get("agent_configs", [])),
                "simulation_hours": config.get("time_config", {}).get("total_simulation_hours"),
                "initial_posts_count": len(config.get("event_config", {}).get("initial_posts", [])),
                "hot_topics_count": len(config.get("event_config", {}).get("hot_topics", [])),
                "has_twitter_config": "twitter_config" in config,
                "has_reddit_config": "reddit_config" in config,
                "generated_at": config.get("generated_at"),
                "llm_model": config.get("llm_model")
            }
        
        return jsonify({
            "success": True,
            "data": response_data
        })
        
    except Exception as e:
        logger.error(f"实时获取Config失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
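# The generation_stage derivation above infers a coarse stage from the status
# plus which artifacts state.json reports as generated. A minimal sketch of
# that mapping (the helper name derive_generation_stage is hypothetical):

```python
def derive_generation_stage(state_data):
    """Map a state.json snapshot to a coarse generation stage (or None)."""
    status = state_data.get("status", "")
    if status == "preparing":
        if state_data.get("profiles_generated", False):
            return "generating_config"  # profiles done, config still in progress
        return "generating_profiles"
    if status == "ready":
        return "completed"
    return None


print(derive_generation_stage({"status": "preparing"}))                              # generating_profiles
print(derive_generation_stage({"status": "preparing", "profiles_generated": True}))  # generating_config
print(derive_generation_stage({"status": "ready"}))                                  # completed
print(derive_generation_stage({"status": "running"}))                                # None
```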


@simulation_bp.route('/<simulation_id>/config', methods=['GET'])
def get_simulation_config(simulation_id: str):
    """
    获取模拟配置(LLM智能生成的完整配置)
    
    返回包含:
        - time_config: 时间配置(模拟时长、轮次、高峰/低谷时段)
        - agent_configs: 每个Agent的活动配置(活跃度、发言频率、立场等)
        - event_config: 事件配置(初始帖子、热点话题)
        - platform_configs: 平台配置
        - generation_reasoning: LLM的配置推理说明
    """
    try:
        manager = SimulationManager()
        config = manager.get_simulation_config(simulation_id)
        
        if not config:
            return jsonify({
                "success": False,
                "error": "Simulation config not found; call the /prepare endpoint first"
            }), 404
        
        return jsonify({
            "success": True,
            "data": config
        })
        
    except Exception as e:
        logger.error(f"获取配置失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@simulation_bp.route('/<simulation_id>/config/download', methods=['GET'])
def download_simulation_config(simulation_id: str):
    """下载模拟配置文件"""
    try:
        manager = SimulationManager()
        sim_dir = manager._get_simulation_dir(simulation_id)
        config_path = os.path.join(sim_dir, "simulation_config.json")
        
        if not os.path.exists(config_path):
            return jsonify({
                "success": False,
                "error": "配置文件不存在,请先调用 /prepare 接口"
            }), 404
        
        return send_file(
            config_path,
            as_attachment=True,
            download_name="simulation_config.json"
        )
        
    except Exception as e:
        logger.error(f"下载配置失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@simulation_bp.route('/script/<script_name>/download', methods=['GET'])
def download_simulation_script(script_name: str):
    """
    下载模拟运行脚本文件(通用脚本,位于 backend/scripts/)
    
    script_name可选值:
        - run_twitter_simulation.py
        - run_reddit_simulation.py
        - run_parallel_simulation.py
        - action_logger.py
    """
    try:
        # Scripts live in the backend/scripts/ directory
        scripts_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), '../../scripts'))
        
        # Validate the script name against the whitelist
        allowed_scripts = [
            "run_twitter_simulation.py",
            "run_reddit_simulation.py", 
            "run_parallel_simulation.py",
            "action_logger.py"
        ]
        
        if script_name not in allowed_scripts:
            return jsonify({
                "success": False,
                "error": f"未知脚本: {script_name},可选: {allowed_scripts}"
            }), 400
        
        script_path = os.path.join(scripts_dir, script_name)
        
        if not os.path.exists(script_path):
            return jsonify({
                "success": False,
                "error": f"脚本文件不存在: {script_name}"
            }), 404
        
        return send_file(
            script_path,
            as_attachment=True,
            download_name=script_name
        )
        
    except Exception as e:
        logger.error(f"下载脚本失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
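# The endpoint above only serves filenames that exactly match a fixed
# whitelist, which also rules out path-traversal inputs such as
# "../app/config.py". A minimal sketch of that check (the helper name
# is_allowed_script is hypothetical):

```python
ALLOWED_SCRIPTS = {
    "run_twitter_simulation.py",
    "run_reddit_simulation.py",
    "run_parallel_simulation.py",
    "action_logger.py",
}


def is_allowed_script(script_name):
    """Accept only exact matches against the whitelist."""
    return script_name in ALLOWED_SCRIPTS


print(is_allowed_script("action_logger.py"))        # True
print(is_allowed_script("../app/config.py"))        # False
print(is_allowed_script("run_twitter_simulation"))  # False
```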


# ============== Profile generation endpoints (standalone use) ==============

@simulation_bp.route('/generate-profiles', methods=['POST'])
def generate_profiles():
    """
    直接从图谱生成OASIS Agent Profile(不创建模拟)
    
    请求(JSON):
        {
            "graph_id": "mirofish_xxxx",     // 必填
            "entity_types": ["Student"],      // 可选
            "use_llm": true,                  // 可选
            "platform": "reddit"              // 可选
        }
    """
    try:
        data = request.get_json() or {}
        
        graph_id = data.get('graph_id')
        if not graph_id:
            return jsonify({
                "success": False,
                "error": "请提供 graph_id"
            }), 400
        
        entity_types = data.get('entity_types')
        use_llm = data.get('use_llm', True)
        platform = data.get('platform', 'reddit')
        
        reader = ZepEntityReader()
        filtered = reader.filter_defined_entities(
            graph_id=graph_id,
            defined_entity_types=entity_types,
            enrich_with_edges=True
        )
        
        if filtered.filtered_count == 0:
            return jsonify({
                "success": False,
                "error": "没有找到符合条件的实体"
            }), 400
        
        generator = OasisProfileGenerator()
        profiles = generator.generate_profiles_from_entities(
            entities=filtered.entities,
            use_llm=use_llm
        )
        
        if platform == "reddit":
            profiles_data = [p.to_reddit_format() for p in profiles]
        elif platform == "twitter":
            profiles_data = [p.to_twitter_format() for p in profiles]
        else:
            profiles_data = [p.to_dict() for p in profiles]
        
        return jsonify({
            "success": True,
            "data": {
                "platform": platform,
                "entity_types": list(filtered.entity_types),
                "count": len(profiles_data),
                "profiles": profiles_data
            }
        })
        
    except Exception as e:
        logger.error(f"生成Profile失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


# ============== Simulation run control endpoints ==============

@simulation_bp.route('/start', methods=['POST'])
def start_simulation():
    """
    开始运行模拟

    请求(JSON):
        {
            "simulation_id": "sim_xxxx",          // 必填,模拟ID
            "platform": "parallel",                // 可选: twitter / reddit / parallel (默认)
            "max_rounds": 100,                     // 可选: 最大模拟轮数,用于截断过长的模拟
            "enable_graph_memory_update": false,   // 可选: 是否将Agent活动动态更新到Zep图谱记忆
            "force": false                         // 可选: 强制重新开始(会停止运行中的模拟并清理日志)
        }

    关于 force 参数:
        - 启用后,如果模拟正在运行或已完成,会先停止并清理运行日志
        - 清理的内容包括:run_state.json, actions.jsonl, simulation.log 等
        - 不会清理配置文件(simulation_config.json)和 profile 文件
        - 适用于需要重新运行模拟的场景

    关于 enable_graph_memory_update:
        - 启用后,模拟中所有Agent的活动(发帖、评论、点赞等)都会实时更新到Zep图谱
        - 这可以让图谱"记住"模拟过程,用于后续分析或AI对话
        - 需要模拟关联的项目有有效的 graph_id
        - 采用批量更新机制,减少API调用次数

    返回:
        {
            "success": true,
            "data": {
                "simulation_id": "sim_xxxx",
                "runner_status": "running",
                "process_pid": 12345,
                "twitter_running": true,
                "reddit_running": true,
                "started_at": "2025-12-01T10:00:00",
                "graph_memory_update_enabled": true,  // 是否启用了图谱记忆更新
                "force_restarted": true               // 是否是强制重新开始
            }
        }
    """
    try:
        data = request.get_json() or {}

        simulation_id = data.get('simulation_id')
        if not simulation_id:
            return jsonify({
                "success": False,
                "error": "请提供 simulation_id"
            }), 400

        platform = data.get('platform', 'parallel')
        max_rounds = data.get('max_rounds')  # optional: maximum number of simulation rounds
        enable_graph_memory_update = data.get('enable_graph_memory_update', False)  # optional: enable graph memory updates
        force = data.get('force', False)  # optional: force a restart

        # Validate the max_rounds parameter
        if max_rounds is not None:
            try:
                max_rounds = int(max_rounds)
                if max_rounds <= 0:
                    return jsonify({
                        "success": False,
                        "error": "max_rounds 必须是正整数"
                    }), 400
            except (ValueError, TypeError):
                return jsonify({
                    "success": False,
                    "error": "max_rounds 必须是有效的整数"
                }), 400

        if platform not in ['twitter', 'reddit', 'parallel']:
            return jsonify({
                "success": False,
                "error": f"无效的平台类型: {platform},可选: twitter/reddit/parallel"
            }), 400

        # Check whether the simulation has been prepared
        manager = SimulationManager()
        state = manager.get_simulation(simulation_id)

        if not state:
            return jsonify({
                "success": False,
                "error": f"模拟不存在: {simulation_id}"
            }), 404

        force_restarted = False
        
        # Smart status handling: if preparation has completed, allow a restart
        if state.status != SimulationStatus.READY:
            # Check whether preparation work has completed
            is_prepared, prepare_info = _check_simulation_prepared(simulation_id)

            if is_prepared:
                # Preparation is complete; check whether a process is already running
                if state.status == SimulationStatus.RUNNING:
                    # Check whether the simulation process is actually running
                    run_state = SimulationRunner.get_run_state(simulation_id)
                    if run_state and run_state.runner_status.value == "running":
                        # The process really is running
                        if force:
                            # Force mode: stop the running simulation
                            logger.info(f"Force mode: stopping running simulation {simulation_id}")
                            try:
                                SimulationRunner.stop_simulation(simulation_id)
                            except Exception as e:
                                logger.warning(f"停止模拟时出现警告: {str(e)}")
                        else:
                            return jsonify({
                                "success": False,
                                "error": f"模拟正在运行中,请先调用 /stop 接口停止,或使用 force=true 强制重新开始"
                            }), 400

                # In force mode, clean up the run logs
                if force:
                    logger.info(f"Force mode: cleaning up logs for simulation {simulation_id}")
                    cleanup_result = SimulationRunner.cleanup_simulation_logs(simulation_id)
                    if not cleanup_result.get("success"):
                        logger.warning(f"清理日志时出现警告: {cleanup_result.get('errors')}")
                    force_restarted = True

                # The process does not exist or has finished; reset status to ready
                logger.info(f"Simulation {simulation_id} preparation is complete; resetting status to ready (previous status: {state.status.value})")
                state.status = SimulationStatus.READY
                manager._save_simulation_state(state)
            else:
                # Preparation work has not completed
                return jsonify({
                    "success": False,
                    "error": f"模拟未准备好,当前状态: {state.status.value},请先调用 /prepare 接口"
                }), 400
        
        # Resolve the graph ID (used for graph memory updates)
        graph_id = None
        if enable_graph_memory_update:
            # Get graph_id from the simulation state or its project
            graph_id = state.graph_id
            if not graph_id:
                # Fall back to the project
                project = ProjectManager.get_project(state.project_id)
                if project:
                    graph_id = project.graph_id
            
            if not graph_id:
                return jsonify({
                    "success": False,
                    "error": "启用图谱记忆更新需要有效的 graph_id,请确保项目已构建图谱"
                }), 400
            
            logger.info(f"启用图谱记忆更新: simulation_id={simulation_id}, graph_id={graph_id}")
        
        # Start the simulation
        run_state = SimulationRunner.start_simulation(
            simulation_id=simulation_id,
            platform=platform,
            max_rounds=max_rounds,
            enable_graph_memory_update=enable_graph_memory_update,
            graph_id=graph_id
        )
        
        # Update the simulation status
        state.status = SimulationStatus.RUNNING
        manager._save_simulation_state(state)
        
        response_data = run_state.to_dict()
        if max_rounds:
            response_data['max_rounds_applied'] = max_rounds
        response_data['graph_memory_update_enabled'] = enable_graph_memory_update
        response_data['force_restarted'] = force_restarted
        if enable_graph_memory_update:
            response_data['graph_id'] = graph_id
        
        return jsonify({
            "success": True,
            "data": response_data
        })
        
    except ValueError as e:
        return jsonify({
            "success": False,
            "error": str(e)
        }), 400
        
    except Exception as e:
        logger.error(f"启动模拟失败: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
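# The max_rounds validation in /start accepts None (no limit), coerces
# numeric input with int(), and rejects non-positive or non-numeric values.
# A minimal sketch of that rule as a pure function (the helper name
# validate_max_rounds is hypothetical):

```python
def validate_max_rounds(value):
    """Return (max_rounds, error); exactly one side is meaningful."""
    if value is None:
        return None, None  # no limit requested
    try:
        rounds = int(value)
    except (ValueError, TypeError):
        return None, "max_rounds must be a valid integer"
    if rounds <= 0:
        return None, "max_rounds must be a positive integer"
    return rounds, None


print(validate_max_rounds(None))   # (None, None)
print(validate_max_rounds("100"))  # (100, None)
print(validate_max_rounds(0))      # (None, 'max_rounds must be a positive integer')
print(validate_max_rounds("abc"))  # (None, 'max_rounds must be a valid integer')
```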


@simulation_bp.route('/stop', methods=['POST'])
def stop_simulation():
    """
    Stop a simulation
    
    Request (JSON):
        {
            "simulation_id": "sim_xxxx"  // required, simulation ID
        }
    
    Returns:
        {
            "success": true,
            "data": {
                "simulation_id": "sim_xxxx",
                "runner_status": "stopped",
                "completed_at": "2025-12-01T12:00:00"
            }
        }
    """
    try:
        data = request.get_json() or {}
        
        simulation_id = data.get('simulation_id')
        if not simulation_id:
            return jsonify({
                "success": False,
                "error": "simulation_id is required"
            }), 400
        
        run_state = SimulationRunner.stop_simulation(simulation_id)
        
        # Update simulation state
        manager = SimulationManager()
        state = manager.get_simulation(simulation_id)
        if state:
            state.status = SimulationStatus.PAUSED
            manager._save_simulation_state(state)
        
        return jsonify({
            "success": True,
            "data": run_state.to_dict()
        })
        
    except ValueError as e:
        return jsonify({
            "success": False,
            "error": str(e)
        }), 400
        
    except Exception as e:
        logger.error(f"Failed to stop simulation: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


# ============== Real-time status monitoring endpoints ==============

@simulation_bp.route('/<simulation_id>/run-status', methods=['GET'])
def get_run_status(simulation_id: str):
    """
    Get the real-time run status of a simulation (for frontend polling)
    
    Returns:
        {
            "success": true,
            "data": {
                "simulation_id": "sim_xxxx",
                "runner_status": "running",
                "current_round": 5,
                "total_rounds": 144,
                "progress_percent": 3.5,
                "simulated_hours": 2,
                "total_simulation_hours": 72,
                "twitter_running": true,
                "reddit_running": true,
                "twitter_actions_count": 150,
                "reddit_actions_count": 200,
                "total_actions_count": 350,
                "started_at": "2025-12-01T10:00:00",
                "updated_at": "2025-12-01T10:30:00"
            }
        }
    """
    try:
        run_state = SimulationRunner.get_run_state(simulation_id)
        
        if not run_state:
            return jsonify({
                "success": True,
                "data": {
                    "simulation_id": simulation_id,
                    "runner_status": "idle",
                    "current_round": 0,
                    "total_rounds": 0,
                    "progress_percent": 0,
                    "twitter_actions_count": 0,
                    "reddit_actions_count": 0,
                    "total_actions_count": 0,
                }
            })
        
        return jsonify({
            "success": True,
            "data": run_state.to_dict()
        })
        
    except Exception as e:
        logger.error(f"Failed to get run status: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
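The run-status docstring above reports `progress_percent: 3.5` at round 5 of 144. The field is actually computed inside `SimulationRunner`, so the derivation below is only a sketch under that assumption, not the repo's code:

```python
def progress_percent(current_round: int, total_rounds: int) -> float:
    """Percentage of completed rounds, rounded to one decimal place."""
    if total_rounds <= 0:  # the idle state reports 0 total rounds
        return 0.0
    return round(100.0 * current_round / total_rounds, 1)

print(progress_percent(5, 144))  # matches the docstring example: 3.5
```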


@simulation_bp.route('/<simulation_id>/run-status/detail', methods=['GET'])
def get_run_status_detail(simulation_id: str):
    """
    Get the detailed run status of a simulation (including all actions)
    
    Used by the frontend to display a live activity feed
    
    Query params:
        platform: filter by platform (twitter/reddit, optional)
    
    Returns:
        {
            "success": true,
            "data": {
                "simulation_id": "sim_xxxx",
                "runner_status": "running",
                "current_round": 5,
                ...
                "all_actions": [
                    {
                        "round_num": 5,
                        "timestamp": "2025-12-01T10:30:00",
                        "platform": "twitter",
                        "agent_id": 3,
                        "agent_name": "Agent Name",
                        "action_type": "CREATE_POST",
                        "action_args": {"content": "..."},
                        "result": null,
                        "success": true
                    },
                    ...
                ],
                "twitter_actions": [...],  # all actions on the Twitter platform
                "reddit_actions": [...]    # all actions on the Reddit platform
            }
        }
    """
    try:
        run_state = SimulationRunner.get_run_state(simulation_id)
        platform_filter = request.args.get('platform')
        
        if not run_state:
            return jsonify({
                "success": True,
                "data": {
                    "simulation_id": simulation_id,
                    "runner_status": "idle",
                    "all_actions": [],
                    "twitter_actions": [],
                    "reddit_actions": []
                }
            })
        
        # Fetch the full action list
        all_actions = SimulationRunner.get_all_actions(
            simulation_id=simulation_id,
            platform=platform_filter
        )
        
        # Fetch actions per platform
        twitter_actions = SimulationRunner.get_all_actions(
            simulation_id=simulation_id,
            platform="twitter"
        ) if not platform_filter or platform_filter == "twitter" else []
        
        reddit_actions = SimulationRunner.get_all_actions(
            simulation_id=simulation_id,
            platform="reddit"
        ) if not platform_filter or platform_filter == "reddit" else []
        
        # Fetch the current round's actions (recent_actions shows only the latest round)
        current_round = run_state.current_round
        recent_actions = SimulationRunner.get_all_actions(
            simulation_id=simulation_id,
            platform=platform_filter,
            round_num=current_round
        ) if current_round > 0 else []
        
        # Base status info
        result = run_state.to_dict()
        result["all_actions"] = [a.to_dict() for a in all_actions]
        result["twitter_actions"] = [a.to_dict() for a in twitter_actions]
        result["reddit_actions"] = [a.to_dict() for a in reddit_actions]
        result["rounds_count"] = len(run_state.rounds)
        # recent_actions holds only the latest round's content from both platforms
        result["recent_actions"] = [a.to_dict() for a in recent_actions]
        
        return jsonify({
            "success": True,
            "data": result
        })
        
    except Exception as e:
        logger.error(f"Failed to get detailed status: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@simulation_bp.route('/<simulation_id>/actions', methods=['GET'])
def get_simulation_actions(simulation_id: str):
    """
    Get the agent action history of a simulation
    
    Query params:
        limit: number of items to return (default 100)
        offset: offset (default 0)
        platform: filter by platform (twitter/reddit)
        agent_id: filter by agent ID
        round_num: filter by round number
    
    Returns:
        {
            "success": true,
            "data": {
                "count": 100,
                "actions": [...]
            }
        }
    """
    try:
        limit = request.args.get('limit', 100, type=int)
        offset = request.args.get('offset', 0, type=int)
        platform = request.args.get('platform')
        agent_id = request.args.get('agent_id', type=int)
        round_num = request.args.get('round_num', type=int)
        
        actions = SimulationRunner.get_actions(
            simulation_id=simulation_id,
            limit=limit,
            offset=offset,
            platform=platform,
            agent_id=agent_id,
            round_num=round_num
        )
        
        return jsonify({
            "success": True,
            "data": {
                "count": len(actions),
                "actions": [a.to_dict() for a in actions]
            }
        })
        
    except Exception as e:
        logger.error(f"Failed to get action history: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@simulation_bp.route('/<simulation_id>/timeline', methods=['GET'])
def get_simulation_timeline(simulation_id: str):
    """
    Get the simulation timeline (aggregated by round)
    
    Used by the frontend for progress bar and timeline views
    
    Query params:
        start_round: starting round (default 0)
        end_round: ending round (default: all)
    
    Returns summary information for each round
    """
    try:
        start_round = request.args.get('start_round', 0, type=int)
        end_round = request.args.get('end_round', type=int)
        
        timeline = SimulationRunner.get_timeline(
            simulation_id=simulation_id,
            start_round=start_round,
            end_round=end_round
        )
        
        return jsonify({
            "success": True,
            "data": {
                "rounds_count": len(timeline),
                "timeline": timeline
            }
        })
        
    except Exception as e:
        logger.error(f"Failed to get timeline: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@simulation_bp.route('/<simulation_id>/agent-stats', methods=['GET'])
def get_agent_stats(simulation_id: str):
    """
    Get statistics for each agent
    
    Used by the frontend for agent activity rankings, action distributions, etc.
    """
    try:
        stats = SimulationRunner.get_agent_stats(simulation_id)
        
        return jsonify({
            "success": True,
            "data": {
                "agents_count": len(stats),
                "stats": stats
            }
        })
        
    except Exception as e:
        logger.error(f"Failed to get agent stats: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


# ============== Database query endpoints ==============

@simulation_bp.route('/<simulation_id>/posts', methods=['GET'])
def get_simulation_posts(simulation_id: str):
    """
    Get posts from a simulation
    
    Query params:
        platform: platform type (twitter/reddit)
        limit: number of items to return (default 50)
        offset: offset
    
    Returns a list of posts (read from the SQLite database)
    """
    try:
        platform = request.args.get('platform', 'reddit')
        limit = request.args.get('limit', 50, type=int)
        offset = request.args.get('offset', 0, type=int)
        
        sim_dir = os.path.join(
            os.path.dirname(__file__),
            f'../../uploads/simulations/{simulation_id}'
        )
        
        db_file = f"{platform}_simulation.db"
        db_path = os.path.join(sim_dir, db_file)
        
        if not os.path.exists(db_path):
            return jsonify({
                "success": True,
                "data": {
                    "platform": platform,
                    "count": 0,
                    "posts": [],
                    "message": "Database does not exist; the simulation may not have run yet"
                }
            })
        
        import sqlite3
        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row
        cursor = conn.cursor()
        
        try:
            cursor.execute("""
                SELECT * FROM post 
                ORDER BY created_at DESC 
                LIMIT ? OFFSET ?
            """, (limit, offset))
            
            posts = [dict(row) for row in cursor.fetchall()]
            
            cursor.execute("SELECT COUNT(*) FROM post")
            total = cursor.fetchone()[0]
            
        except sqlite3.OperationalError:
            posts = []
            total = 0
        
        conn.close()
        
        return jsonify({
            "success": True,
            "data": {
                "platform": platform,
                "total": total,
                "count": len(posts),
                "posts": posts
            }
        })
        
    except Exception as e:
        logger.error(f"Failed to get posts: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
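The handler above reads each simulation's own SQLite file and pages with LIMIT/OFFSET. A standalone sketch of the same pattern, exercised against a throwaway database (the `fetch_posts` helper and the demo schema are ours; only the table name `post` and the query shape come from the handler):

```python
import os
import sqlite3
import tempfile

def fetch_posts(db_path: str, limit: int = 50, offset: int = 0):
    """Return (total, posts) from a simulation database, newest first."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row  # rows become dict-like
    try:
        cursor = conn.cursor()
        cursor.execute(
            "SELECT * FROM post ORDER BY created_at DESC LIMIT ? OFFSET ?",
            (limit, offset),
        )
        posts = [dict(row) for row in cursor.fetchall()]
        cursor.execute("SELECT COUNT(*) FROM post")
        total = cursor.fetchone()[0]
        return total, posts
    except sqlite3.OperationalError:
        # Table missing: the simulation has not written any posts yet
        return 0, []
    finally:
        conn.close()

# Build a throwaway database with the same table name
path = os.path.join(tempfile.mkdtemp(), "reddit_simulation.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE post (post_id INTEGER, content TEXT, created_at TEXT)")
conn.executemany(
    "INSERT INTO post VALUES (?, ?, ?)",
    [(1, "first", "2025-12-01T10:00:00"), (2, "second", "2025-12-01T11:00:00")],
)
conn.commit()
conn.close()

total, posts = fetch_posts(path, limit=1)  # total=2, newest post first
```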


@simulation_bp.route('/<simulation_id>/comments', methods=['GET'])
def get_simulation_comments(simulation_id: str):
    """
    Get comments from a simulation (Reddit only)
    
    Query params:
        post_id: filter by post ID (optional)
        limit: number of items to return
        offset: offset
    """
    try:
        post_id = request.args.get('post_id')
        limit = request.args.get('limit', 50, type=int)
        offset = request.args.get('offset', 0, type=int)
        
        sim_dir = os.path.join(
            os.path.dirname(__file__),
            f'../../uploads/simulations/{simulation_id}'
        )
        
        db_path = os.path.join(sim_dir, "reddit_simulation.db")
        
        if not os.path.exists(db_path):
            return jsonify({
                "success": True,
                "data": {
                    "count": 0,
                    "comments": []
                }
            })
        
        import sqlite3
        conn = sqlite3.connect(db_path)
        conn.row_factory = sqlite3.Row
        cursor = conn.cursor()
        
        try:
            if post_id:
                cursor.execute("""
                    SELECT * FROM comment 
                    WHERE post_id = ?
                    ORDER BY created_at DESC 
                    LIMIT ? OFFSET ?
                """, (post_id, limit, offset))
            else:
                cursor.execute("""
                    SELECT * FROM comment 
                    ORDER BY created_at DESC 
                    LIMIT ? OFFSET ?
                """, (limit, offset))
            
            comments = [dict(row) for row in cursor.fetchall()]
            
        except sqlite3.OperationalError:
            comments = []
        
        conn.close()
        
        return jsonify({
            "success": True,
            "data": {
                "count": len(comments),
                "comments": comments
            }
        })
        
    except Exception as e:
        logger.error(f"Failed to get comments: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


# ============== Interview endpoints ==============

@simulation_bp.route('/interview', methods=['POST'])
def interview_agent():
    """
    Interview a single agent

    Note: this requires the simulation environment to be running (after the
    simulation loop completes, it enters wait-for-commands mode)

    Request (JSON):
        {
            "simulation_id": "sim_xxxx",       // required, simulation ID
            "agent_id": 0,                     // required, agent ID
            "prompt": "What do you think about this?",  // required, interview question
            "platform": "twitter",             // optional, target platform (twitter/reddit)
                                               // if omitted: a dual-platform simulation interviews both platforms at once
            "timeout": 60                      // optional, timeout in seconds, default 60
        }

    Returns (no platform specified, dual-platform mode):
        {
            "success": true,
            "data": {
                "agent_id": 0,
                "prompt": "What do you think about this?",
                "result": {
                    "agent_id": 0,
                    "prompt": "...",
                    "platforms": {
                        "twitter": {"agent_id": 0, "response": "...", "platform": "twitter"},
                        "reddit": {"agent_id": 0, "response": "...", "platform": "reddit"}
                    }
                },
                "timestamp": "2025-12-08T10:00:01"
            }
        }

    Returns (platform specified):
        {
            "success": true,
            "data": {
                "agent_id": 0,
                "prompt": "What do you think about this?",
                "result": {
                    "agent_id": 0,
                    "response": "I think...",
                    "platform": "twitter",
                    "timestamp": "2025-12-08T10:00:00"
                },
                "timestamp": "2025-12-08T10:00:01"
            }
        }
    """
    try:
        data = request.get_json() or {}
        
        simulation_id = data.get('simulation_id')
        agent_id = data.get('agent_id')
        prompt = data.get('prompt')
        platform = data.get('platform')  # optional: twitter/reddit/None
        timeout = data.get('timeout', 60)
        
        if not simulation_id:
            return jsonify({
                "success": False,
                "error": "simulation_id is required"
            }), 400
        
        if agent_id is None:
            return jsonify({
                "success": False,
                "error": "agent_id is required"
            }), 400
        
        if not prompt:
            return jsonify({
                "success": False,
                "error": "prompt (the interview question) is required"
            }), 400
        
        # Validate the platform parameter
        if platform and platform not in ("twitter", "reddit"):
            return jsonify({
                "success": False,
                "error": "platform must be 'twitter' or 'reddit'"
            }), 400
        
        # Check environment status
        if not SimulationRunner.check_env_alive(simulation_id):
            return jsonify({
                "success": False,
                "error": "The simulation environment is not running or has been shut down. Make sure the simulation has finished and entered wait-for-commands mode."
            }), 400
        
        # Optimize the prompt, adding a prefix that discourages the agent from calling tools
        optimized_prompt = optimize_interview_prompt(prompt)
        
        result = SimulationRunner.interview_agent(
            simulation_id=simulation_id,
            agent_id=agent_id,
            prompt=optimized_prompt,
            platform=platform,
            timeout=timeout
        )

        return jsonify({
            "success": result.get("success", False),
            "data": result
        })
        
    except ValueError as e:
        return jsonify({
            "success": False,
            "error": str(e)
        }), 400
        
    except TimeoutError as e:
        return jsonify({
            "success": False,
            "error": f"Timed out waiting for the interview response: {str(e)}"
        }), 504
        
    except Exception as e:
        logger.error(f"Interview failed: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500
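The endpoint above rejects missing fields and unknown platforms before dispatching to `SimulationRunner`. A client-side sketch that mirrors those checks when building the request body; `build_interview_payload` is our own helper name, and the URL in the final comment is an assumption since the route prefix depends on how `simulation_bp` is registered:

```python
def build_interview_payload(simulation_id, agent_id, prompt,
                            platform=None, timeout=60):
    """Build a request body for POST /interview, mirroring the endpoint's checks."""
    if not simulation_id:
        raise ValueError("simulation_id is required")
    if agent_id is None:  # agent_id 0 is valid, so compare against None
        raise ValueError("agent_id is required")
    if not prompt:
        raise ValueError("prompt is required")
    if platform is not None and platform not in ("twitter", "reddit"):
        raise ValueError("platform must be 'twitter' or 'reddit'")
    payload = {"simulation_id": simulation_id, "agent_id": agent_id,
               "prompt": prompt, "timeout": timeout}
    if platform:
        payload["platform"] = platform  # omit to interview on both platforms
    return payload

payload = build_interview_payload("sim_1234", 0, "What do you think about this?",
                                  platform="twitter")
# e.g. requests.post(f"{base_url}/interview", json=payload)  # base_url is deployment-specific
```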


@simulation_bp.route('/interview/batch', methods=['POST'])
def interview_agents_batch():
    """
    Interview multiple agents in batch

    Note: this requires the simulation environment to be running

    Request (JSON):
        {
            "simulation_id": "sim_xxxx",       // required, simulation ID
            "interviews": [                    // required, list of interviews
                {
                    "agent_id": 0,
                    "prompt": "What do you think about A?",
                    "platform": "twitter"      // optional, platform for this agent's interview
                },
                {
                    "agent_id": 1,
                    "prompt": "What do you think about B?"  // falls back to the default platform
                }
            ],
            "platform": "reddit",              // optional, default platform (overridden by each item's platform)
                                               // if omitted: a dual-platform simulation interviews each agent on both platforms
            "timeout": 120                     // optional, timeout in seconds, default 120
        }

    Returns:
        {
            "success": true,
            "data": {
                "interviews_count": 2,
                "result": {
                    "interviews_count": 4,
                    "results": {
                        "twitter_0": {"agent_id": 0, "response": "...", "platform": "twitter"},
                        "reddit_0": {"agent_id": 0, "response": "...", "platform": "reddit"},
                        "twitter_1": {"agent_id": 1, "response": "...", "platform": "twitter"},
                        "reddit_1": {"agent_id": 1, "response": "...", "platform": "reddit"}
                    }
                },
                "timestamp": "2025-12-08T10:00:01"
            }
        }
    """
    try:
        data = request.get_json() or {}

        simulation_id = data.get('simulation_id')
        interviews = data.get('interviews')
        platform = data.get('platform')  # optional: twitter/reddit/None
        timeout = data.get('timeout', 120)

        if not simulation_id:
            return jsonify({
                "success": False,
                "error": "simulation_id is required"
            }), 400

        if not interviews or not isinstance(interviews, list):
            return jsonify({
                "success": False,
                "error": "interviews (the list of interviews) is required"
            }), 400

        # Validate the platform parameter
        if platform and platform not in ("twitter", "reddit"):
            return jsonify({
                "success": False,
                "error": "platform must be 'twitter' or 'reddit'"
            }), 400

        # Validate each interview item
        for i, interview in enumerate(interviews):
            if 'agent_id' not in interview:
                return jsonify({
                    "success": False,
                    "error": f"Interview item {i+1} is missing agent_id"
                }), 400
            if 'prompt' not in interview:
                return jsonify({
                    "success": False,
                    "error": f"Interview item {i+1} is missing prompt"
                }), 400
            # Validate the item's platform, if present
            item_platform = interview.get('platform')
            if item_platform and item_platform not in ("twitter", "reddit"):
                return jsonify({
                    "success": False,
                    "error": f"Interview item {i+1}: platform must be 'twitter' or 'reddit'"
                }), 400

        # Check environment status
        if not SimulationRunner.check_env_alive(simulation_id):
            return jsonify({
                "success": False,
                "error": "The simulation environment is not running or has been shut down. Make sure the simulation has finished and entered wait-for-commands mode."
            }), 400

        # Optimize each interview item's prompt, adding a prefix that discourages tool calls
        optimized_interviews = []
        for interview in interviews:
            optimized_interview = interview.copy()
            optimized_interview['prompt'] = optimize_interview_prompt(interview.get('prompt', ''))
            optimized_interviews.append(optimized_interview)

        result = SimulationRunner.interview_agents_batch(
            simulation_id=simulation_id,
            interviews=optimized_interviews,
            platform=platform,
            timeout=timeout
        )

        return jsonify({
            "success": result.get("success", False),
            "data": result
        })

    except ValueError as e:
        return jsonify({
            "success": False,
            "error": str(e)
        }), 400

    except TimeoutError as e:
        return jsonify({
            "success": False,
            "error": f"Timed out waiting for the batch interview response: {str(e)}"
        }), 504

    except Exception as e:
        logger.error(f"Batch interview failed: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@simulation_bp.route('/interview/all', methods=['POST'])
def interview_all_agents():
    """
    Global interview - ask every agent the same question

    Note: this requires the simulation environment to be running

    Request (JSON):
        {
            "simulation_id": "sim_xxxx",            // required, simulation ID
            "prompt": "What is your overall view on this?",  // required, interview question (same for all agents)
            "platform": "reddit",                   // optional, target platform (twitter/reddit)
                                                    // if omitted: a dual-platform simulation interviews each agent on both platforms
            "timeout": 180                          // optional, timeout in seconds, default 180
        }

    Returns:
        {
            "success": true,
            "data": {
                "interviews_count": 50,
                "result": {
                    "interviews_count": 100,
                    "results": {
                        "twitter_0": {"agent_id": 0, "response": "...", "platform": "twitter"},
                        "reddit_0": {"agent_id": 0, "response": "...", "platform": "reddit"},
                        ...
                    }
                },
                "timestamp": "2025-12-08T10:00:01"
            }
        }
    """
    try:
        data = request.get_json() or {}

        simulation_id = data.get('simulation_id')
        prompt = data.get('prompt')
        platform = data.get('platform')  # optional: twitter/reddit/None
        timeout = data.get('timeout', 180)

        if not simulation_id:
            return jsonify({
                "success": False,
                "error": "simulation_id is required"
            }), 400

        if not prompt:
            return jsonify({
                "success": False,
                "error": "prompt (the interview question) is required"
            }), 400

        # Validate the platform parameter
        if platform and platform not in ("twitter", "reddit"):
            return jsonify({
                "success": False,
                "error": "platform must be 'twitter' or 'reddit'"
            }), 400

        # Check environment status
        if not SimulationRunner.check_env_alive(simulation_id):
            return jsonify({
                "success": False,
                "error": "The simulation environment is not running or has been shut down. Make sure the simulation has finished and entered wait-for-commands mode."
            }), 400

        # Optimize the prompt, adding a prefix that discourages the agent from calling tools
        optimized_prompt = optimize_interview_prompt(prompt)

        result = SimulationRunner.interview_all_agents(
            simulation_id=simulation_id,
            prompt=optimized_prompt,
            platform=platform,
            timeout=timeout
        )

        return jsonify({
            "success": result.get("success", False),
            "data": result
        })

    except ValueError as e:
        return jsonify({
            "success": False,
            "error": str(e)
        }), 400

    except TimeoutError as e:
        return jsonify({
            "success": False,
            "error": f"Timed out waiting for the global interview response: {str(e)}"
        }), 504

    except Exception as e:
        logger.error(f"Global interview failed: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@simulation_bp.route('/interview/history', methods=['POST'])
def get_interview_history():
    """
    Get interview history

    Reads all interview records from the simulation database

    Request (JSON):
        {
            "simulation_id": "sim_xxxx",  // required, simulation ID
            "platform": "reddit",          // optional, platform type (reddit/twitter)
                                           // if omitted, returns history for both platforms
            "agent_id": 0,                 // optional, only this agent's interview history
            "limit": 100                   // optional, number of items to return, default 100
        }

    Returns:
        {
            "success": true,
            "data": {
                "count": 10,
                "history": [
                    {
                        "agent_id": 0,
                        "response": "I think...",
                        "prompt": "What do you think about this?",
                        "timestamp": "2025-12-08T10:00:00",
                        "platform": "reddit"
                    },
                    ...
                ]
            }
        }
    """
    try:
        data = request.get_json() or {}
        
        simulation_id = data.get('simulation_id')
        platform = data.get('platform')  # if omitted, returns history for both platforms
        agent_id = data.get('agent_id')
        limit = data.get('limit', 100)
        
        if not simulation_id:
            return jsonify({
                "success": False,
                "error": "simulation_id is required"
            }), 400

        history = SimulationRunner.get_interview_history(
            simulation_id=simulation_id,
            platform=platform,
            agent_id=agent_id,
            limit=limit
        )

        return jsonify({
            "success": True,
            "data": {
                "count": len(history),
                "history": history
            }
        })

    except Exception as e:
        logger.error(f"Failed to get interview history: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@simulation_bp.route('/env-status', methods=['POST'])
def get_env_status():
    """
    Get the simulation environment status

    Checks whether the simulation environment is alive (able to receive interview commands)

    Request (JSON):
        {
            "simulation_id": "sim_xxxx"  // required, simulation ID
        }

    Returns:
        {
            "success": true,
            "data": {
                "simulation_id": "sim_xxxx",
                "env_alive": true,
                "twitter_available": true,
                "reddit_available": true,
                "message": "Environment is running and can receive interview commands"
            }
        }
    """
    try:
        data = request.get_json() or {}
        
        simulation_id = data.get('simulation_id')
        
        if not simulation_id:
            return jsonify({
                "success": False,
                "error": "simulation_id is required"
            }), 400

        env_alive = SimulationRunner.check_env_alive(simulation_id)
        
        # Get more detailed status information
        env_status = SimulationRunner.get_env_status_detail(simulation_id)

        if env_alive:
            message = "Environment is running and can receive interview commands"
        else:
            message = "Environment is not running or has been shut down"

        return jsonify({
            "success": True,
            "data": {
                "simulation_id": simulation_id,
                "env_alive": env_alive,
                "twitter_available": env_status.get("twitter_available", False),
                "reddit_available": env_status.get("reddit_available", False),
                "message": message
            }
        })

    except Exception as e:
        logger.error(f"Failed to get environment status: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


@simulation_bp.route('/close-env', methods=['POST'])
def close_simulation_env():
    """
    Close the simulation environment
    
    Sends a close-environment command to the simulation so that it leaves
    wait-for-commands mode and exits gracefully.
    
    Note: this differs from the /stop endpoint, which force-kills the process;
    this endpoint asks the simulation to close its environment and exit cleanly.
    
    Request (JSON):
        {
            "simulation_id": "sim_xxxx",  // required, simulation ID
            "timeout": 30                  // optional, timeout in seconds, default 30
        }
    
    Returns:
        {
            "success": true,
            "data": {
                "message": "Environment close command sent",
                "result": {...},
                "timestamp": "2025-12-08T10:00:01"
            }
        }
    """
    try:
        data = request.get_json() or {}
        
        simulation_id = data.get('simulation_id')
        timeout = data.get('timeout', 30)
        
        if not simulation_id:
            return jsonify({
                "success": False,
                "error": "simulation_id is required"
            }), 400
        
        result = SimulationRunner.close_simulation_env(
            simulation_id=simulation_id,
            timeout=timeout
        )
        
        # Update simulation state
        manager = SimulationManager()
        state = manager.get_simulation(simulation_id)
        if state:
            state.status = SimulationStatus.COMPLETED
            manager._save_simulation_state(state)
        
        return jsonify({
            "success": result.get("success", False),
            "data": result
        })
        
    except ValueError as e:
        return jsonify({
            "success": False,
            "error": str(e)
        }), 400
        
    except Exception as e:
        logger.error(f"Failed to close environment: {str(e)}")
        return jsonify({
            "success": False,
            "error": str(e),
            "traceback": traceback.format_exc()
        }), 500


================================================
FILE: backend/app/config.py
================================================
"""
Configuration management
Loads settings from the .env file in the project root
"""

import os
from dotenv import load_dotenv

# Load the .env file from the project root
# Path: MiroFish/.env (relative to backend/app/config.py)
project_root_env = os.path.join(os.path.dirname(__file__), '../../.env')

if os.path.exists(project_root_env):
    load_dotenv(project_root_env, override=True)
else:
    # If the root has no .env, fall back to environment variables (for production)
    load_dotenv(override=True)


class Config:
    """Flask configuration class"""
    
    # Flask settings
    SECRET_KEY = os.environ.get('SECRET_KEY', 'mirofish-secret-key')
    DEBUG = os.environ.get('FLASK_DEBUG', 'True').lower() == 'true'
    
    # JSON settings - disable ASCII escaping so Chinese renders directly (instead of \uXXXX)
    JSON_AS_ASCII = False
    
    # LLM settings (unified OpenAI-compatible format)
    LLM_API_KEY = os.environ.get('LLM_API_KEY')
    LLM_BASE_URL = os.environ.get('LLM_BASE_URL', 'https://api.openai.com/v1')
    LLM_MODEL_NAME = os.environ.get('LLM_MODEL_NAME', 'gpt-4o-mini')
    
    # Zep settings
    ZEP_API_KEY = os.environ.get('ZEP_API_KEY')
    
    # File upload settings
    MAX_CONTENT_LENGTH = 50 * 1024 * 1024  # 50MB
    UPLOAD_FOLDER = os.path.join(os.path.dirname(__file__), '../uploads')
    ALLOWED_EXTENSIONS = {'pdf', 'md', 'txt', 'markdown'}
    
    # Text processing settings
    DEFAULT_CHUNK_SIZE = 500  # Default chunk size
    DEFAULT_CHUNK_OVERLAP = 50  # Default chunk overlap
    
    # OASIS simulation settings
    OASIS_DEFAULT_MAX_ROUNDS = int(os.environ.get('OASIS_DEFAULT_MAX_ROUNDS', '10'))
    OASIS_SIMULATION_DATA_DIR = os.path.join(os.path.dirname(__file__), '../uploads/simulations')
    
    # Available actions per OASIS platform
    OASIS_TWITTER_ACTIONS = [
        'CREATE_POST', 'LIKE_POST', 'REPOST', 'FOLLOW', 'DO_NOTHING', 'QUOTE_POST'
    ]
    OASIS_REDDIT_ACTIONS = [
        'LIKE_POST', 'DISLIKE_POST', 'CREATE_POST', 'CREATE_COMMENT',
        'LIKE_COMMENT', 'DISLIKE_COMMENT', 'SEARCH_POSTS', 'SEARCH_USER',
        'TREND', 'REFRESH', 'DO_NOTHING', 'FOLLOW', 'MUTE'
    ]
    
    # Report Agent settings
    REPORT_AGENT_MAX_TOOL_CALLS = int(os.environ.get('REPORT_AGENT_MAX_TOOL_CALLS', '5'))
    REPORT_AGENT_MAX_REFLECTION_ROUNDS = int(os.environ.get('REPORT_AGENT_MAX_REFLECTION_ROUNDS', '2'))
    REPORT_AGENT_TEMPERATURE = float(os.environ.get('REPORT_AGENT_TEMPERATURE', '0.5'))
    
    @classmethod
    def validate(cls):
        """Validate required settings."""
        errors = []
        if not cls.LLM_API_KEY:
            errors.append("LLM_API_KEY is not configured")
        if not cls.ZEP_API_KEY:
            errors.append("ZEP_API_KEY is not configured")
        return errors
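
`Config.validate()` returns a list of human-readable errors rather than raising, so the caller decides how to fail. A trimmed, self-contained sketch of the intended fail-fast wiring at startup (the `DemoConfig` stand-in and the wiring itself are illustrative; the repo's `run.py` may differ):

```python
import os

# Illustration only: clear the keys so validate() reports them as missing.
os.environ.pop('LLM_API_KEY', None)
os.environ.pop('ZEP_API_KEY', None)

class DemoConfig:  # trimmed stand-in for the Config class above
    LLM_API_KEY = os.environ.get('LLM_API_KEY')
    ZEP_API_KEY = os.environ.get('ZEP_API_KEY')

    @classmethod
    def validate(cls):
        errors = []
        if not cls.LLM_API_KEY:
            errors.append("LLM_API_KEY is not configured")
        if not cls.ZEP_API_KEY:
            errors.append("ZEP_API_KEY is not configured")
        return errors

# Collect all missing settings at once instead of failing on the first.
startup_errors = DemoConfig.validate()
```

Returning a list means every missing key can be reported in a single startup message.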



================================================
FILE: backend/app/models/__init__.py
================================================
"""
Data models module.
"""

from .task import TaskManager, TaskStatus
from .project import Project, ProjectStatus, ProjectManager

__all__ = ['TaskManager', 'TaskStatus', 'Project', 'ProjectStatus', 'ProjectManager']



================================================
FILE: backend/app/models/project.py
================================================
"""
Project context management.
Persists project state on the server so the frontend does not have to
shuttle large payloads between API calls.
"""

import os
import json
import uuid
import shutil
from datetime import datetime
from typing import Dict, Any, List, Optional
from enum import Enum
from dataclasses import dataclass, field, asdict
from ..config import Config


class ProjectStatus(str, Enum):
    """Project status"""
    CREATED = "created"              # Just created; files uploaded
    ONTOLOGY_GENERATED = "ontology_generated"  # Ontology generated
    GRAPH_BUILDING = "graph_building"    # Graph build in progress
    GRAPH_COMPLETED = "graph_completed"  # Graph build completed
    FAILED = "failed"                # Failed


@dataclass
class Project:
    """Project data model"""
    project_id: str
    name: str
    status: ProjectStatus
    created_at: str
    updated_at: str
    
    # File info
    files: List[Dict[str, str]] = field(default_factory=list)  # [{filename, path, size}]
    total_text_length: int = 0
    
    # Ontology info (filled after API 1 generates it)
    ontology: Optional[Dict[str, Any]] = None
    analysis_summary: Optional[str] = None
    
    # Graph info (filled after API 2 completes)
    graph_id: Optional[str] = None
    graph_build_task_id: Optional[str] = None
    
    # Settings
    simulation_requirement: Optional[str] = None
    chunk_size: int = 500
    chunk_overlap: int = 50
    
    # Error info
    error: Optional[str] = None
    
    def to_dict(self) -> Dict[str, Any]:
        """Convert to a dict."""
        return {
            "project_id": self.project_id,
            "name": self.name,
            "status": self.status.value if isinstance(self.status, ProjectStatus) else self.status,
            "created_at": self.created_at,
            "updated_at": self.updated_at,
            "files": self.files,
            "total_text_length": self.total_text_length,
            "ontology": self.ontology,
            "analysis_summary": self.analysis_summary,
            "graph_id": self.graph_id,
            "graph_build_task_id": self.graph_build_task_id,
            "simulation_requirement": self.simulation_requirement,
            "chunk_size": self.chunk_size,
            "chunk_overlap": self.chunk_overlap,
            "error": self.error
        }
    
    @classmethod
    def from_dict(cls, data: Dict[str, Any]) -> 'Project':
        """Create from a dict."""
        status = data.get('status', 'created')
        if isinstance(status, str):
            status = ProjectStatus(status)
        
        return cls(
            project_id=data['project_id'],
            name=data.get('name', 'Unnamed Project'),
            status=status,
            created_at=data.get('created_at', ''),
            updated_at=data.get('updated_at', ''),
            files=data.get('files', []),
            total_text_length=data.get('total_text_length', 0),
            ontology=data.get('ontology'),
            analysis_summary=data.get('analysis_summary'),
            graph_id=data.get('graph_id'),
            graph_build_task_id=data.get('graph_build_task_id'),
            simulation_requirement=data.get('simulation_requirement'),
            chunk_size=data.get('chunk_size', 500),
            chunk_overlap=data.get('chunk_overlap', 50),
            error=data.get('error')
        )


class ProjectManager:
    """Project manager: persists and retrieves projects."""
    
    # Root directory for project storage
    PROJECTS_DIR = os.path.join(Config.UPLOAD_FOLDER, 'projects')
    
    @classmethod
    def _ensure_projects_dir(cls):
        """Ensure the projects directory exists."""
        os.makedirs(cls.PROJECTS_DIR, exist_ok=True)
    
    @classmethod
    def _get_project_dir(cls, project_id: str) -> str:
        """Return the project directory path."""
        return os.path.join(cls.PROJECTS_DIR, project_id)
    
    @classmethod
    def _get_project_meta_path(cls, project_id: str) -> str:
        """Return the project metadata file path."""
        return os.path.join(cls._get_project_dir(project_id), 'project.json')
    
    @classmethod
    def _get_project_files_dir(cls, project_id: str) -> str:
        """Return the project's file storage directory."""
        return os.path.join(cls._get_project_dir(project_id), 'files')
    
    @classmethod
    def _get_project_text_path(cls, project_id: str) -> str:
        """Return the path where the project's extracted text is stored."""
        return os.path.join(cls._get_project_dir(project_id), 'extracted_text.txt')
    
    @classmethod
    def create_project(cls, name: str = "Unnamed Project") -> Project:
        """
        Create a new project.
        
        Args:
            name: Project name
            
        Returns:
            The newly created Project object
        """
        cls._ensure_projects_dir()
        
        project_id = f"proj_{uuid.uuid4().hex[:12]}"
        now = datetime.now().isoformat()
        
        project = Project(
            project_id=project_id,
            name=name,
            status=ProjectStatus.CREATED,
            created_at=now,
            updated_at=now
        )
        
        # Create the project directory structure
        project_dir = cls._get_project_dir(project_id)
        files_dir = cls._get_project_files_dir(project_id)
        os.makedirs(project_dir, exist_ok=True)
        os.makedirs(files_dir, exist_ok=True)
        
        # Save project metadata
        cls.save_project(project)
        
        return project
    
    @classmethod
    def save_project(cls, project: Project) -> None:
        """Save project metadata."""
        project.updated_at = datetime.now().isoformat()
        meta_path = cls._get_project_meta_path(project.project_id)
        
        with open(meta_path, 'w', encoding='utf-8') as f:
            json.dump(project.to_dict(), f, ensure_ascii=False, indent=2)
    
    @classmethod
    def get_project(cls, project_id: str) -> Optional[Project]:
        """
        Get a project.
        
        Args:
            project_id: Project ID
            
        Returns:
            The Project object, or None if it does not exist
        """
        meta_path = cls._get_project_meta_path(project_id)
        
        if not os.path.exists(meta_path):
            return None
        
        with open(meta_path, 'r', encoding='utf-8') as f:
            data = json.load(f)
        
        return Project.from_dict(data)
    
    @classmethod
    def list_projects(cls, limit: int = 50) -> List[Project]:
        """
        List all projects.
        
        Args:
            limit: Maximum number of projects to return
            
        Returns:
            Projects sorted by creation time, newest first
        """
        cls._ensure_projects_dir()
        
        projects = []
        for project_id in os.listdir(cls.PROJECTS_DIR):
            project = cls.get_project(project_id)
            if project:
                projects.append(project)
        
        # Sort by creation time, newest first
        projects.sort(key=lambda p: p.created_at, reverse=True)
        
        return projects[:limit]
    
    @classmethod
    def delete_project(cls, project_id: str) -> bool:
        """
        Delete a project and all of its files.
        
        Args:
            project_id: Project ID
            
        Returns:
            Whether the deletion succeeded
        """
        project_dir = cls._get_project_dir(project_id)
        
        if not os.path.exists(project_dir):
            return False
        
        shutil.rmtree(project_dir)
        return True
    
    @classmethod
    def save_file_to_project(cls, project_id: str, file_storage, original_filename: str) -> Dict[str, str]:
        """
        Save an uploaded file into the project directory.
        
        Args:
            project_id: Project ID
            file_storage: Flask FileStorage object
            original_filename: Original filename
            
        Returns:
            File info dict {original_filename, saved_filename, path, size}
        """
        files_dir = cls._get_project_files_dir(project_id)
        os.makedirs(files_dir, exist_ok=True)
        
        # Generate a safe filename
        ext = os.path.splitext(original_filename)[1].lower()
        safe_filename = f"{uuid.uuid4().hex[:8]}{ext}"
        file_path = os.path.join(files_dir, safe_filename)
        
        # Save the file
        file_storage.save(file_path)
        
        # Get the file size
        file_size = os.path.getsize(file_path)
        
        return {
            "original_filename": original_filename,
            "saved_filename": safe_filename,
            "path": file_path,
            "size": file_size
        }
    
    @classmethod
    def save_extracted_text(cls, project_id: str, text: str) -> None:
        """Save the extracted text."""
        text_path = cls._get_project_text_path(project_id)
        with open(text_path, 'w', encoding='utf-8') as f:
            f.write(text)
    
    @classmethod
    def get_extracted_text(cls, project_id: str) -> Optional[str]:
        """Return the extracted text."""
        text_path = cls._get_project_text_path(project_id)
        
        if not os.path.exists(text_path):
            return None
        
        with open(text_path, 'r', encoding='utf-8') as f:
            return f.read()
    
    @classmethod
    def get_project_files(cls, project_id: str) -> List[str]:
        """Return all file paths for the project."""
        files_dir = cls._get_project_files_dir(project_id)
        
        if not os.path.exists(files_dir):
            return []
        
        return [
            os.path.join(files_dir, f) 
            for f in os.listdir(files_dir) 
            if os.path.isfile(os.path.join(files_dir, f))
        ]
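
`Project.to_dict()` serializes the status enum down to its plain string so `project.json` stays valid JSON, and `Project.from_dict()` re-hydrates it into a `ProjectStatus`. A trimmed round-trip sketch of that pattern (`MiniProject` is a reduced stand-in, not the full dataclass above):

```python
import json
from enum import Enum
from dataclasses import dataclass

class ProjectStatus(str, Enum):
    CREATED = "created"
    GRAPH_COMPLETED = "graph_completed"

@dataclass
class MiniProject:  # reduced stand-in for the Project dataclass above
    project_id: str
    status: ProjectStatus

    def to_dict(self):
        # Store the enum as its plain string value
        return {"project_id": self.project_id, "status": self.status.value}

    @classmethod
    def from_dict(cls, data):
        # Re-hydrate the string back into the enum
        return cls(project_id=data["project_id"],
                   status=ProjectStatus(data["status"]))

p = MiniProject("proj_abc123", ProjectStatus.GRAPH_COMPLETED)
restored = MiniProject.from_dict(json.loads(json.dumps(p.to_dict())))
```

Because `ProjectStatus` subclasses `str`, comparisons against raw strings from older JSON files also keep working.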



================================================
SYMBOL INDEX (562 symbols across 32 files)

FILE: backend/app/__init__.py
  function create_app (line 19) | def create_app(config_class=Config):

FILE: backend/app/api/graph.py
  function allowed_file (line 25) | def allowed_file(filename: str) -> bool:
  function get_project (line 36) | def get_project(project_id: str):
  function list_projects (line 55) | def list_projects():
  function delete_project (line 70) | def delete_project(project_id: str):
  function reset_project (line 89) | def reset_project(project_id: str):
  function generate_ontology (line 122) | def generate_ontology():
  function build_graph (line 260) | def build_graph():
  function get_task (line 530) | def get_task(task_id: str):
  function list_tasks (line 549) | def list_tasks():
  function get_graph_data (line 565) | def get_graph_data(graph_id: str):
  function delete_graph (line 593) | def delete_graph(graph_id: str):

FILE: backend/app/api/report.py
  function generate_report (line 25) | def generate_report():
  function get_generate_status (line 199) | def get_generate_status():
  function get_report (line 273) | def get_report(report_id: str):
  function get_report_by_simulation (line 315) | def get_report_by_simulation(simulation_id: str):
  function list_reports (line 354) | def list_reports():
  function download_report (line 394) | def download_report(report_id: str):
  function delete_report (line 440) | def delete_report(report_id: str):
  function chat_with_report_agent (line 468) | def chat_with_report_agent():
  function get_report_progress (line 565) | def get_report_progress(report_id: str):
  function get_report_sections (line 606) | def get_report_sections(report_id: str):
  function get_single_section (line 657) | def get_single_section(report_id: str, section_index: int):
  function check_report_status (line 703) | def check_report_status(simulation_id: str):
  function get_agent_log (line 754) | def get_agent_log(report_id: str):
  function stream_agent_log (line 813) | def stream_agent_log(report_id: str):
  function get_console_log (line 849) | def get_console_log(report_id: str):
  function stream_console_log (line 895) | def stream_console_log(report_id: str):
  function search_graph_tool (line 931) | def search_graph_tool():
  function get_graph_statistics_tool (line 979) | def get_graph_statistics_tool():

FILE: backend/app/api/simulation.py
  function optimize_interview_prompt (line 27) | def optimize_interview_prompt(prompt: str) -> str:
  function get_graph_entities (line 48) | def get_graph_entities(graph_id: str):
  function get_entity_detail (line 93) | def get_entity_detail(graph_id: str, entity_uuid: str):
  function get_entities_by_type (line 126) | def get_entities_by_type(graph_id: str, entity_type: str):
  function create_simulation (line 165) | def create_simulation():
  function _check_simulation_prepared (line 239) | def _check_simulation_prepared(simulation_id: str) -> tuple:
  function prepare_simulation (line 359) | def prepare_simulation():
  function get_prepare_status (line 638) | def get_prepare_status():
  function get_simulation (line 751) | def get_simulation(simulation_id: str):
  function list_simulations (line 784) | def list_simulations():
  function _get_report_id_for_simulation (line 812) | def _get_report_id_for_simulation(simulation_id: str) -> str:
  function get_simulation_history (line 872) | def get_simulation_history():
  function get_simulation_profiles (line 986) | def get_simulation_profiles(simulation_id: str):
  function get_simulation_profiles_realtime (line 1024) | def get_simulation_profiles_realtime(simulation_id: str):
  function get_simulation_config_realtime (line 1134) | def get_simulation_config_realtime(simulation_id: str):
  function get_simulation_config (line 1254) | def get_simulation_config(simulation_id: str):
  function download_simulation_config (line 1290) | def download_simulation_config(simulation_id: str):
  function download_simulation_script (line 1319) | def download_simulation_script(script_name: str):
  function generate_profiles (line 1373) | def generate_profiles():
  function start_simulation (line 1447) | def start_simulation():
  function stop_simulation (line 1640) | def stop_simulation():
  function get_run_status (line 1701) | def get_run_status(simulation_id: str):
  function get_run_status_detail (line 1759) | def get_run_status_detail(simulation_id: str):
  function get_simulation_actions (line 1860) | def get_simulation_actions(simulation_id: str):
  function get_simulation_timeline (line 1914) | def get_simulation_timeline(simulation_id: str):
  function get_agent_stats (line 1954) | def get_agent_stats(simulation_id: str):
  function get_simulation_posts (line 1983) | def get_simulation_posts(simulation_id: str):
  function get_simulation_comments (line 2061) | def get_simulation_comments(simulation_id: str):
  function interview_agent (line 2138) | def interview_agent():
  function interview_agents_batch (line 2267) | def interview_agents_batch():
  function interview_all_agents (line 2405) | def interview_all_agents():
  function get_interview_history (line 2508) | def get_interview_history():
  function get_env_status (line 2580) | def get_env_status():
  function close_simulation_env (line 2645) | def close_simulation_env():

FILE: backend/app/config.py
  class Config (line 20) | class Config:
    method validate (line 67) | def validate(cls):

FILE: backend/app/models/project.py
  class ProjectStatus (line 17) | class ProjectStatus(str, Enum):
  class Project (line 27) | class Project:
    method to_dict (line 55) | def to_dict(self) -> Dict[str, Any]:
    method from_dict (line 76) | def from_dict(cls, data: Dict[str, Any]) -> 'Project':
  class ProjectManager (line 101) | class ProjectManager:
    method _ensure_projects_dir (line 108) | def _ensure_projects_dir(cls):
    method _get_project_dir (line 113) | def _get_project_dir(cls, project_id: str) -> str:
    method _get_project_meta_path (line 118) | def _get_project_meta_path(cls, project_id: str) -> str:
    method _get_project_files_dir (line 123) | def _get_project_files_dir(cls, project_id: str) -> str:
    method _get_project_text_path (line 128) | def _get_project_text_path(cls, project_id: str) -> str:
    method create_project (line 133) | def create_project(cls, name: str = "Unnamed Project") -> Project:
    method save_project (line 168) | def save_project(cls, project: Project) -> None:
    method get_project (line 177) | def get_project(cls, project_id: str) -> Optional[Project]:
    method list_projects (line 198) | def list_projects(cls, limit: int = 50) -> List[Project]:
    method delete_project (line 222) | def delete_project(cls, project_id: str) -> bool:
    method save_file_to_project (line 241) | def save_file_to_project(cls, project_id: str, file_storage, original_...
    method save_extracted_text (line 275) | def save_extracted_text(cls, project_id: str, text: str) -> None:
    method get_extracted_text (line 282) | def get_extracted_text(cls, project_id: str) -> Optional[str]:
    method get_project_files (line 293) | def get_project_files(cls, project_id: str) -> List[str]:

FILE: backend/app/models/task.py
  class TaskStatus (line 14) | class TaskStatus(str, Enum):
  class Task (line 23) | class Task:
    method to_dict (line 37) | def to_dict(self) -> Dict[str, Any]:
  class TaskManager (line 54) | class TaskManager:
    method __new__ (line 63) | def __new__(cls):
    method create_task (line 73) | def create_task(self, task_type: str, metadata: Optional[Dict] = None)...
    method get_task (line 101) | def get_task(self, task_id: str) -> Optional[Task]:
    method update_task (line 106) | def update_task(
    method complete_task (line 145) | def complete_task(self, task_id: str, result: Dict):
    method fail_task (line 155) | def fail_task(self, task_id: str, error: str):
    method list_tasks (line 164) | def list_tasks(self, task_type: Optional[str] = None) -> list:
    method cleanup_old_tasks (line 172) | def cleanup_old_tasks(self, max_age_hours: int = 24):

FILE: backend/app/services/graph_builder.py
  class GraphInfo (line 23) | class GraphInfo:
    method to_dict (line 30) | def to_dict(self) -> Dict[str, Any]:
  class GraphBuilderService (line 39) | class GraphBuilderService:
    method __init__ (line 45) | def __init__(self, api_key: Optional[str] = None):
    method build_graph_async (line 53) | def build_graph_async(
    method _build_graph_worker (line 96) | def _build_graph_worker(
    method create_graph (line 187) | def create_graph(self, name: str) -> str:
    method set_ontology (line 199) | def set_ontology(self, graph_id: str, ontology: Dict[str, Any]):
    method add_text_batches (line 288) | def add_text_batches(
    method _wait_for_episodes (line 341) | def _wait_for_episodes(
    method _get_graph_info (line 397) | def _get_graph_info(self, graph_id: str) -> GraphInfo:
    method get_graph_data (line 420) | def get_graph_data(self, graph_id: str) -> Dict[str, Any]:
    method delete_graph (line 497) | def delete_graph(self, graph_id: str):

FILE: backend/app/services/oasis_profile_generator.py
  class OasisAgentProfile (line 29) | class OasisAgentProfile:
    method to_reddit_format (line 60) | def to_reddit_format(self) -> Dict[str, Any]:
    method to_twitter_format (line 88) | def to_twitter_format(self) -> Dict[str, Any]:
    method to_dict (line 118) | def to_dict(self) -> Dict[str, Any]:
  class OasisProfileGenerator (line 142) | class OasisProfileGenerator:
    method __init__ (line 180) | def __init__(
    method generate_profile_from_entity (line 211) | def generate_profile_from_entity(
    method _generate_username (line 275) | def _generate_username(self, name: str) -> str:
    method _search_zep_for_entity (line 285) | def _search_zep_for_entity(self, entity: EntityNode) -> Dict[str, Any]:
    method _build_entity_context (line 413) | def _build_entity_context(self, entity: EntityNode) -> str:
    method _is_individual_entity (line 488) | def _is_individual_entity(self, entity_type: str) -> bool:
    method _is_group_entity (line 492) | def _is_group_entity(self, entity_type: str) -> bool:
    method _generate_profile_with_llm (line 496) | def _generate_profile_with_llm(
    method _fix_truncated_json (line 582) | def _fix_truncated_json(self, content: str) -> str:
    method _try_fix_json (line 605) | def _try_fix_json(self, content: str, entity_name: str, entity_type: s...
    method _get_system_prompt (line 671) | def _get_system_prompt(self, is_individual: bool) -> str:
    method _build_individual_persona_prompt (line 676) | def _build_individual_persona_prompt(
    method _build_group_persona_prompt (line 725) | def _build_group_persona_prompt(
    method _generate_profile_rule_based (line 773) | def _generate_profile_rule_based(
    method set_graph_id (line 846) | def set_graph_id(self, graph_id: str):
    method generate_profiles_from_entities (line 850) | def generate_profiles_from_entities(
    method _print_generated_profile (line 1011) | def _print_generated_profile(self, entity_name: str, entity_type: str,...
    method save_profiles (line 1042) | def save_profiles(
    method _save_twitter_csv (line 1065) | def _save_twitter_csv(self, profiles: List[OasisAgentProfile], file_pa...
    method _normalize_gender (line 1116) | def _normalize_gender(self, gender: Optional[str]) -> str:
    method _save_reddit_json (line 1141) | def _save_reddit_json(self, profiles: List[OasisAgentProfile], file_pa...
    method save_profiles_to_json (line 1191) | def save_profiles_to_json(

FILE: backend/app/services/ontology_generator.py
  class OntologyGenerator (line 158) | class OntologyGenerator:
    method __init__ (line 164) | def __init__(self, llm_client: Optional[LLMClient] = None):
    method generate (line 167) | def generate(
    method _build_user_message (line 211) | def _build_user_message(
    method _validate_and_process (line 257) | def _validate_and_process(self, result: Dict[str, Any]) -> Dict[str, A...
    method generate_python_code (line 347) | def generate_python_code(self, ontology: Dict[str, Any]) -> str:

FILE: backend/app/services/report_agent.py
  class ReportLogger (line 35) | class ReportLogger:
    method __init__ (line 43) | def __init__(self, report_id: str):
    method _ensure_log_file (line 57) | def _ensure_log_file(self):
    method _get_elapsed_time (line 62) | def _get_elapsed_time(self) -> float:
    method log (line 66) | def log(
    method log_start (line 99) | def log_start(self, simulation_id: str, graph_id: str, simulation_requ...
    method log_planning_start (line 112) | def log_planning_start(self):
    method log_planning_context (line 120) | def log_planning_context(self, context: Dict[str, Any]):
    method log_planning_complete (line 131) | def log_planning_complete(self, outline_dict: Dict[str, Any]):
    method log_section_start (line 142) | def log_section_start(self, section_title: str, section_index: int):
    method log_react_thought (line 152) | def log_react_thought(self, section_title: str, section_index: int, it...
    method log_tool_call (line 166) | def log_tool_call(
    method log_tool_result (line 188) | def log_tool_result(
    method log_llm_response (line 211) | def log_llm_response(
    method log_section_content (line 236) | def log_section_content(
    method log_section_full_complete (line 257) | def log_section_full_complete(
    method log_report_complete (line 280) | def log_report_complete(self, total_sections: int, total_time_seconds:...
    method log_error (line 292) | def log_error(self, error_message: str, stage: str, section_title: str...
  class ReportConsoleLogger (line 306) | class ReportConsoleLogger:
    method __init__ (line 314) | def __init__(self, report_id: str):
    method _ensure_log_file (line 329) | def _ensure_log_file(self):
    method _setup_file_handler (line 334) | def _setup_file_handler(self):
    method close (line 365) | def close(self):
    method __del__ (line 383) | def __del__(self):
  class ReportStatus (line 388) | class ReportStatus(str, Enum):
  class ReportSection (line 398) | class ReportSection:
    method to_dict (line 403) | def to_dict(self) -> Dict[str, Any]:
    method to_markdown (line 409) | def to_markdown(self, level: int = 2) -> str:
  class ReportOutline (line 418) | class ReportOutline:
    method to_dict (line 424) | def to_dict(self) -> Dict[str, Any]:
    method to_markdown (line 431) | def to_markdown(self) -> str:
  class Report (line 441) | class Report:
    method to_dict (line 454) | def to_dict(self) -> Dict[str, Any]:
  class ReportAgent (line 864) | class ReportAgent:
    method __init__ (line 883) | def __init__(
    method _define_tools (line 918) | def _define_tools(self) -> Dict[str, Dict[str, Any]]:
    method _execute_tool (line 955) | def _execute_tool(self, tool_name: str, parameters: Dict[str, Any], re...
    method _parse_tool_calls (line 1066) | def _parse_tool_calls(self, response: str) -> List[Dict[str, Any]]:
    method _is_valid_tool_call (line 1113) | def _is_valid_tool_call(self, data: dict) -> bool:
    method _get_tools_description (line 1126) | def _get_tools_description(self) -> str:
    method plan_outline (line 1136) | def plan_outline(
    method _generate_section_react (line 1220) | def _generate_section_react(
    method generate_report (line 1532) | def generate_report(
    method chat (line 1766) | def chat(
  class ReportManager (line 1883) | class ReportManager:
    method _ensure_reports_dir (line 1905) | def _ensure_reports_dir(cls):
    method _get_report_folder (line 1910) | def _get_report_folder(cls, report_id: str) -> str:
    method _ensure_report_folder (line 1915) | def _ensure_report_folder(cls, report_id: str) -> str:
    method _get_report_path (line 1922) | def _get_report_path(cls, report_id: str) -> str:
    method _get_report_markdown_path (line 1927) | def _get_report_markdown_path(cls, report_id: str) -> str:
    method _get_outline_path (line 1932) | def _get_outline_path(cls, report_id: str) -> str:
    method _get_progress_path (line 1937) | def _get_progress_path(cls, report_id: str) -> str:
    method _get_section_path (line 1942) | def _get_section_path(cls, report_id: str, section_index: int) -> str:
    method _get_agent_log_path (line 1947) | def _get_agent_log_path(cls, report_id: str) -> str:
    method _get_console_log_path (line 1952) | def _get_console_log_path(cls, report_id: str) -> str:
    method get_console_log (line 1957) | def get_console_log(cls, report_id: str, from_line: int = 0) -> Dict[s...
    method get_console_log_stream (line 2004) | def get_console_log_stream(cls, report_id: str) -> List[str]:
    method get_agent_log (line 2018) | def get_agent_log(cls, report_id: str, from_line: int = 0) -> Dict[str...
    method get_agent_log_stream (line 2066) | def get_agent_log_stream(cls, report_id: str) -> List[Dict[str, Any]]:
    method save_outline (line 2080) | def save_outline(cls, report_id: str, outline: ReportOutline) -> None:
    method save_section (line 2094) | def save_section(
    method _clean_section_content (line 2131) | def _clean_section_content(cls, content: str, section_title: str) -> str:
    method update_progress (line 2199) | def update_progress(
    method get_progress (line 2228) | def get_progress(cls, report_id: str) -> Optional[Dict[str, Any]]:
    method get_generated_sections (line 2239) | def get_generated_sections(cls, report_id: str) -> List[Dict[str, Any]]:
    method assemble_full_report (line 2270) | def assemble_full_report(cls, report_id: str, outline: ReportOutline) ...
    method _post_process_report (line 2300) | def _post_process_report(cls, content: str, outline: ReportOutline) ->...
    method save_report (line 2426) | def save_report(cls, report: Report) -> None:
    method get_report (line 2446) | def get_report(cls, report_id: str) -> Optional[Report]:
    method get_report_by_simulation (line 2499) | def get_report_by_simulation(cls, simulation_id: str) -> Optional[Repo...
    method list_reports (line 2520) | def list_reports(cls, simulation_id: Optional[str] = None, limit: int ...
    method delete_report (line 2547) | def delete_report(cls, report_id: str) -> bool:

FILE: backend/app/services/simulation_config_generator.py
  class AgentActivityConfig (line 51) | class AgentActivityConfig:
  class TimeSimulationConfig (line 83) | class TimeSimulationConfig:
  class EventConfig (line 113) | class EventConfig:
  class PlatformConfig (line 129) | class PlatformConfig:
  class SimulationParameters (line 146) | class SimulationParameters:
    method to_dict (line 175) | def to_dict(self) -> Dict[str, Any]:
    method to_json (line 194) | def to_json(self, indent: int = 2) -> str:
  class SimulationConfigGenerator (line 199) | class SimulationConfigGenerator:
    method __init__ (line 224) | def __init__(
    method generate_config (line 242) | def generate_config(
    method _build_context (line 380) | def _build_context(
    method _summarize_entities (line 408) | def _summarize_entities(self, entities: List[EntityNode]) -> str:
    method _call_llm_with_retry (line 433) | def _call_llm_with_retry(self, prompt: str, system_prompt: str) -> Dic...
    method _fix_truncated_json (line 482) | def _fix_truncated_json(self, content: str) -> str:
    method _try_fix_config_json (line 500) | def _try_fix_config_json(self, content: str) -> Optional[Dict[str, Any]]:
    method _generate_time_config (line 534) | def _generate_time_config(self, context: str, num_entities: int) -> Di...
    method _get_default_time_config (line 595) | def _get_default_time_config(self, num_entities: int) -> Dict[str, Any]:
    method _parse_time_config (line 609) | def _parse_time_config(self, result: Dict[str, Any], num_entities: int...
    method _generate_event_config (line 644) | def _generate_event_config(
    method _parse_event_config (line 716) | def _parse_event_config(self, result: Dict[str, Any]) -> EventConfig:
    method _assign_initial_post_agents (line 725) | def _assign_initial_post_agents(
    method _generate_agent_configs_batch (line 810) | def _generate_agent_configs_batch(
    method _generate_agent_config_by_rule (line 904) | def _generate_agent_config_by_rule(self, entity: EntityNode) -> Dict[s...
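
The `_fix_truncated_json` / `_try_fix_config_json` methods above point at a recurring chore when an LLM emits structured config: a reply that hits the token limit arrives cut off mid-JSON. A best-effort repair can close an unterminated string, drop a dangling comma, and append the missing brackets. The sketch below is a hypothetical reconstruction of that idea, not the project's actual implementation:

```python
import json

def fix_truncated_json(content: str) -> str:
    """Best-effort repair of JSON that was cut off mid-generation.

    Walks the text tracking brace/bracket nesting outside string
    literals, closes an unterminated string, strips a dangling comma,
    then appends the missing closers in reverse order.
    """
    stripped = content.rstrip()
    stack = []          # closers we still owe, innermost last
    in_string = False
    escaped = False
    for ch in stripped:
        if in_string:
            if escaped:
                escaped = False
            elif ch == '\\':
                escaped = True
            elif ch == '"':
                in_string = False
        elif ch == '"':
            in_string = True
        elif ch in '{[':
            stack.append('}' if ch == '{' else ']')
        elif ch in '}]' and stack:
            stack.pop()
    if in_string:
        stripped += '"'            # close the cut-off string literal
    if stripped.endswith(','):
        stripped = stripped[:-1]   # a trailing comma would break the closers
    return stripped + ''.join(reversed(stack))
```

The repaired text still has to survive `json.loads`; a real pipeline would fall back to regeneration when it does not.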

FILE: backend/app/services/simulation_ipc.py
  class CommandType (line 25) | class CommandType(str, Enum):
  class CommandStatus (line 32) | class CommandStatus(str, Enum):
  class IPCCommand (line 41) | class IPCCommand:
    method to_dict (line 48) | def to_dict(self) -> Dict[str, Any]:
    method from_dict (line 57) | def from_dict(cls, data: Dict[str, Any]) -> 'IPCCommand':
  class IPCResponse (line 67) | class IPCResponse:
    method to_dict (line 75) | def to_dict(self) -> Dict[str, Any]:
    method from_dict (line 85) | def from_dict(cls, data: Dict[str, Any]) -> 'IPCResponse':
  class SimulationIPCClient (line 95) | class SimulationIPCClient:
    method __init__ (line 102) | def __init__(self, simulation_dir: str):
    method send_command (line 117) | def send_command(
    method send_interview (line 189) | def send_interview(
    method send_batch_interview (line 224) | def send_batch_interview(
    method send_close_env (line 254) | def send_close_env(self, timeout: float = 30.0) -> IPCResponse:
    method check_env_alive (line 270) | def check_env_alive(self) -> bool:
  class SimulationIPCServer (line 288) | class SimulationIPCServer:
    method __init__ (line 295) | def __init__(self, simulation_dir: str):
    method start (line 313) | def start(self):
    method stop (line 318) | def stop(self):
    method _update_env_status (line 323) | def _update_env_status(self, status: str):
    method poll_commands (line 332) | def poll_commands(self) -> Optional[IPCCommand]:
    method send_response (line 362) | def send_response(self, response: IPCResponse):
    method send_success (line 380) | def send_success(self, command_id: str, result: Dict[str, Any]):
    method send_error (line 388) | def send_error(self, command_id: str, error: str):

FILE: backend/app/services/simulation_manager.py
  class SimulationStatus (line 24) | class SimulationStatus(str, Enum):
  class PlatformType (line 36) | class PlatformType(str, Enum):
  class SimulationState (line 43) | class SimulationState:
    method to_dict (line 77) | def to_dict(self) -> Dict[str, Any]:
    method to_simple_dict (line 99) | def to_simple_dict(self) -> Dict[str, Any]:
  class SimulationManager (line 114) | class SimulationManager:
    method __init__ (line 131) | def __init__(self):
    method _get_simulation_dir (line 138) | def _get_simulation_dir(self, simulation_id: str) -> str:
    method _save_simulation_state (line 144) | def _save_simulation_state(self, state: SimulationState):
    method _load_simulation_state (line 156) | def _load_simulation_state(self, simulation_id: str) -> Optional[Simul...
    method create_simulation (line 193) | def create_simulation(
    method prepare_simulation (line 229) | def prepare_simulation(
    method get_simulation (line 458) | def get_simulation(self, simulation_id: str) -> Optional[SimulationSta...
    method list_simulations (line 462) | def list_simulations(self, project_id: Optional[str] = None) -> List[S...
    method get_profiles (line 480) | def get_profiles(self, simulation_id: str, platform: str = "reddit") -...
    method get_simulation_config (line 495) | def get_simulation_config(self, simulation_id: str) -> Optional[Dict[s...
    method get_run_instructions (line 506) | def get_run_instructions(self, simulation_id: str) -> Dict[str, str]:

FILE: backend/app/services/simulation_runner.py
  class RunnerStatus (line 35) | class RunnerStatus(str, Enum):
  class AgentAction (line 48) | class AgentAction:
    method to_dict (line 60) | def to_dict(self) -> Dict[str, Any]:
  class RoundSummary (line 75) | class RoundSummary:
    method to_dict (line 86) | def to_dict(self) -> Dict[str, Any]:
  class SimulationRunState (line 101) | class SimulationRunState:
    method add_action (line 146) | def add_action(self, action: AgentAction):
    method to_dict (line 159) | def to_dict(self) -> Dict[str, Any]:
    method to_detail_dict (line 187) | def to_detail_dict(self) -> Dict[str, Any]:
  class SimulationRunner (line 195) | class SimulationRunner:
    method get_run_state (line 230) | def get_run_state(cls, simulation_id: str) -> Optional[SimulationRunSt...
    method _load_run_state (line 242) | def _load_run_state(cls, simulation_id: str) -> Optional[SimulationRun...
    method _save_run_state (line 298) | def _save_run_state(cls, state: SimulationRunState):
    method start_simulation (line 312) | def start_simulation(
    method _monitor_simulation (line 478) | def _monitor_simulation(cls, simulation_id: str):
    method _read_action_log (line 579) | def _read_action_log(
    method _check_all_platforms_completed (line 689) | def _check_all_platforms_completed(cls, state: SimulationRunState) -> ...
    method _terminate_process (line 716) | def _terminate_process(cls, process: subprocess.Popen, simulation_id: ...
    method stop_simulation (line 772) | def stop_simulation(cls, simulation_id: str) -> SimulationRunState:
    method _read_actions_from_file (line 820) | def _read_actions_from_file(
    method get_all_actions (line 889) | def get_all_actions(
    method get_actions (line 950) | def get_actions(
    method get_timeline (line 984) | def get_timeline(
    method get_agent_stats (line 1055) | def get_agent_stats(cls, simulation_id: str) -> List[Dict[str, Any]]:
    method cleanup_simulation_logs (line 1098) | def cleanup_simulation_logs(cls, simulation_id: str) -> Dict[str, Any]:
    method cleanup_all_simulations (line 1182) | def cleanup_all_simulations(cls):
    method register_cleanup (line 1283) | def register_cleanup(cls):
    method get_running_simulations (line 1356) | def get_running_simulations(cls) -> List[str]:
    method check_env_alive (line 1369) | def check_env_alive(cls, simulation_id: str) -> bool:
    method get_env_status_detail (line 1387) | def get_env_status_detail(cls, simulation_id: str) -> Dict[str, Any]:
    method interview_agent (line 1423) | def interview_agent(
    method interview_agents_batch (line 1487) | def interview_agents_batch(
    method interview_all_agents (line 1546) | def interview_all_agents(
    method close_simulation_env (line 1606) | def close_simulation_env(
    method _get_interview_history_from_db (line 1654) | def _get_interview_history_from_db(
    method get_interview_history (line 1712) | def get_interview_history(

FILE: backend/app/services/text_processor.py
  class TextProcessor (line 9) | class TextProcessor:
    method extract_from_files (line 13) | def extract_from_files(file_paths: List[str]) -> str:
    method split_text (line 18) | def split_text(
    method preprocess_text (line 37) | def preprocess_text(text: str) -> str:
    method get_text_stats (line 64) | def get_text_stats(text: str) -> dict:
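
`split_text` here (and `split_text_into_chunks` in `file_parser.py`) is the standard sliding-window chunker: fixed-size chunks with an overlap so content cut at a boundary reappears at the start of the next chunk. A minimal illustrative version; the defaults are guesses, not the project's:

```python
def split_text(text: str, chunk_size: int = 1000, overlap: int = 100):
    """Split text into fixed-size chunks whose tails overlap their successors."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    step = chunk_size - overlap  # how far the window advances each iteration
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # this chunk already reached the end of the text
    return chunks
```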

FILE: backend/app/services/zep_entity_reader.py
  class EntityNode (line 23) | class EntityNode:
    method to_dict (line 35) | def to_dict(self) -> Dict[str, Any]:
    method get_entity_type (line 46) | def get_entity_type(self) -> Optional[str]:
  class FilteredEntities (line 55) | class FilteredEntities:
    method to_dict (line 62) | def to_dict(self) -> Dict[str, Any]:
  class ZepEntityReader (line 71) | class ZepEntityReader:
    method __init__ (line 81) | def __init__(self, api_key: Optional[str] = None):
    method _call_with_retry (line 88) | def _call_with_retry(
    method get_all_nodes (line 127) | def get_all_nodes(self, graph_id: str) -> List[Dict[str, Any]]:
    method get_all_edges (line 154) | def get_all_edges(self, graph_id: str) -> List[Dict[str, Any]]:
    method get_node_edges (line 182) | def get_node_edges(self, node_uuid: str) -> List[Dict[str, Any]]:
    method filter_defined_entities (line 215) | def filter_defined_entities(
    method get_entity_with_context (line 333) | def get_entity_with_context(
    method get_entities_by_type (line 413) | def get_entities_by_type(

FILE: backend/app/services/zep_graph_memory_updater.py
  class AgentActivity (line 24) | class AgentActivity:
    method to_episode_text (line 34) | def to_episode_text(self) -> str:
    method _describe_create_post (line 63) | def _describe_create_post(self) -> str:
    method _describe_like_post (line 69) | def _describe_like_post(self) -> str:
    method _describe_dislike_post (line 82) | def _describe_dislike_post(self) -> str:
    method _describe_repost (line 95) | def _describe_repost(self) -> str:
    method _describe_quote_post (line 108) | def _describe_quote_post(self) -> str:
    method _describe_follow (line 128) | def _describe_follow(self) -> str:
    method _describe_create_comment (line 136) | def _describe_create_comment(self) -> str:
    method _describe_like_comment (line 152) | def _describe_like_comment(self) -> str:
    method _describe_dislike_comment (line 165) | def _describe_dislike_comment(self) -> str:
    method _describe_search (line 178) | def _describe_search(self) -> str:
    method _describe_search_user (line 183) | def _describe_search_user(self) -> str:
    method _describe_mute (line 188) | def _describe_mute(self) -> str:
    method _describe_generic (line 196) | def _describe_generic(self) -> str:
  class ZepGraphMemoryUpdater (line 201) | class ZepGraphMemoryUpdater:
    method __init__ (line 231) | def __init__(self, graph_id: str, api_key: Optional[str] = None):
    method _get_platform_display_name (line 270) | def _get_platform_display_name(self, platform: str) -> str:
    method start (line 274) | def start(self):
    method stop (line 288) | def stop(self):
    method add_activity (line 305) | def add_activity(self, activity: AgentActivity):
    method add_activity_from_dict (line 335) | def add_activity_from_dict(self, data: Dict[str, Any], platform: str):
    method _worker_loop (line 359) | def _worker_loop(self):
    method _send_batch_activities (line 390) | def _send_batch_activities(self, activities: List[AgentActivity], plat...
    method _flush_remaining (line 429) | def _flush_remaining(self):
    method get_stats (line 454) | def get_stats(self) -> Dict[str, Any]:
  class ZepGraphMemoryManager (line 473) | class ZepGraphMemoryManager:
    method create_updater (line 484) | def create_updater(cls, simulation_id: str, graph_id: str) -> ZepGraph...
    method get_updater (line 508) | def get_updater(cls, simulation_id: str) -> Optional[ZepGraphMemoryUpd...
    method stop_updater (line 513) | def stop_updater(cls, simulation_id: str):
    method stop_all (line 525) | def stop_all(cls):
    method get_all_stats (line 543) | def get_all_stats(cls) -> Dict[str, Dict[str, Any]]:
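
The `start`/`stop`/`add_activity`/`_worker_loop`/`_flush_remaining` surface of `ZepGraphMemoryUpdater` is the classic background-batcher shape: producers enqueue activities, a daemon thread drains them into batches for the Zep API, and `stop()` flushes whatever remains. A generic sketch of that lifecycle with the sender reduced to a callback (batch size and timing are stand-ins):

```python
import queue
import threading

class BatchingWorker:
    """Enqueue items from any thread; a daemon thread ships them in batches."""

    def __init__(self, send_batch, batch_size: int = 5):
        self._queue = queue.Queue()
        self._send_batch = send_batch      # callback receiving a list of items
        self._batch_size = batch_size
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._worker_loop, daemon=True)

    def start(self):
        self._thread.start()

    def add_activity(self, activity):
        self._queue.put(activity)

    def stop(self):
        """Signal the worker, wait for it, then flush anything left over."""
        self._stop.set()
        self._thread.join()
        self._flush_remaining()

    def _worker_loop(self):
        batch = []
        while not self._stop.is_set():
            try:
                batch.append(self._queue.get(timeout=0.05))
            except queue.Empty:
                continue
            if len(batch) >= self._batch_size:
                self._send_batch(batch)
                batch = []
        for item in batch:          # hand a partial batch back for the flush
            self._queue.put(item)

    def _flush_remaining(self):
        rest = []
        while not self._queue.empty():
            rest.append(self._queue.get())
        if rest:
            self._send_batch(rest)
```

Decoupling producers from the (slow, rate-limited) API call is the point: simulation rounds never block on network writes to the graph.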

FILE: backend/app/services/zep_tools.py
  class SearchResult (line 27) | class SearchResult:
    method to_dict (line 35) | def to_dict(self) -> Dict[str, Any]:
    method to_text (line 44) | def to_text(self) -> str:
  class NodeInfo (line 57) | class NodeInfo:
    method to_dict (line 65) | def to_dict(self) -> Dict[str, Any]:
    method to_text (line 74) | def to_text(self) -> str:
  class EdgeInfo (line 81) | class EdgeInfo:
    method to_dict (line 96) | def to_dict(self) -> Dict[str, Any]:
    method to_text (line 111) | def to_text(self, include_temporal: bool = False) -> str:
    method is_expired (line 127) | def is_expired(self) -> bool:
    method is_invalid (line 132) | def is_invalid(self) -> bool:
  class InsightForgeResult (line 138) | class InsightForgeResult:
    method to_dict (line 157) | def to_dict(self) -> Dict[str, Any]:
    method to_text (line 170) | def to_text(self) -> str:
  class PanoramaResult (line 214) | class PanoramaResult:
    method to_dict (line 236) | def to_dict(self) -> Dict[str, Any]:
    method to_text (line 249) | def to_text(self) -> str:
  class AgentInterview (line 284) | class AgentInterview:
    method to_dict (line 293) | def to_dict(self) -> Dict[str, Any]:
    method to_text (line 303) | def to_text(self) -> str:
  class InterviewResult (line 340) | class InterviewResult:
    method to_dict (line 362) | def to_dict(self) -> Dict[str, Any]:
    method to_text (line 374) | def to_text(self) -> str:
  class ZepToolsService (line 400) | class ZepToolsService:
    method __init__ (line 424) | def __init__(self, api_key: Optional[str] = None, llm_client: Optional...
    method llm (line 435) | def llm(self) -> LLMClient:
    method _call_with_retry (line 441) | def _call_with_retry(self, func, operation_name: str, max_retries: int...
    method search_graph (line 464) | def search_graph(
    method _local_search (line 546) | def _local_search(
    method get_all_nodes (line 650) | def get_all_nodes(self, graph_id: str) -> List[NodeInfo]:
    method get_all_edges (line 678) | def get_all_edges(self, graph_id: str, include_temporal: bool = True) ...
    method get_node_detail (line 716) | def get_node_detail(self, node_uuid: str) -> Optional[NodeInfo]:
    method get_node_edges (line 748) | def get_node_edges(self, graph_id: str, node_uuid: str) -> List[EdgeIn...
    method get_entities_by_type (line 780) | def get_entities_by_type(
    method get_entity_summary (line 808) | def get_entity_summary(
    method get_graph_statistics (line 855) | def get_graph_statistics(self, graph_id: str) -> Dict[str, Any]:
    method get_simulation_context (line 890) | def get_simulation_context(
    method insight_forge (line 945) | def insight_forge(
    method _generate_sub_queries (line 1092) | def _generate_sub_queries(
    method panorama_search (line 1145) | def panorama_search(
    method quick_search (line 1237) | def quick_search(
    method interview_agents (line 1272) | def interview_agents(
    method _clean_tool_call_response (line 1485) | def _clean_tool_call_response(response: str) -> str:
    method _load_agent_profiles (line 1505) | def _load_agent_profiles(self, simulation_id: str) -> List[Dict[str, A...
    method _select_agents_for_interview (line 1551) | def _select_agents_for_interview(
    method _generate_interview_questions (line 1634) | def _generate_interview_questions(
    method _generate_interview_summary (line 1683) | def _generate_interview_summary(

FILE: backend/app/utils/file_parser.py
  function _read_text_with_fallback (line 11) | def _read_text_with_fallback(file_path: str) -> str:
  class FileParser (line 61) | class FileParser:
    method extract_text (line 67) | def extract_text(cls, file_path: str) -> str:
    method _extract_from_pdf (line 97) | def _extract_from_pdf(file_path: str) -> str:
    method _extract_from_md (line 114) | def _extract_from_md(file_path: str) -> str:
    method _extract_from_txt (line 119) | def _extract_from_txt(file_path: str) -> str:
    method extract_from_multiple (line 124) | def extract_from_multiple(cls, file_paths: List[str]) -> str:
  function split_text_into_chunks (line 147) | def split_text_into_chunks(
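
`_read_text_with_fallback` implies decoding uploads by trying several encodings in order, which matters for a codebase handling Chinese-language documents. A plausible sketch; the actual encoding list and error handling are assumptions:

```python
def decode_with_fallback(data: bytes,
                         encodings=("utf-8", "gbk", "latin-1")) -> str:
    """Return the first successful decode, trying encodings in order."""
    last_error = None
    for encoding in encodings:
        try:
            return data.decode(encoding)
        except UnicodeDecodeError as exc:
            last_error = exc
    raise last_error  # nothing worked; surface the final decode error

def read_text_with_fallback(file_path: str) -> str:
    """Read a file as bytes, then decode with the fallback chain."""
    with open(file_path, "rb") as f:
        return decode_with_fallback(f.read())
```

Ordering is what makes this safe: UTF-8 rejects most non-UTF-8 byte sequences, so trying it first rarely mis-decodes, while `latin-1` at the end guarantees some decoding always succeeds.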

FILE: backend/app/utils/llm_client.py
  class LLMClient (line 14) | class LLMClient:
    method __init__ (line 17) | def __init__(
    method chat (line 35) | def chat(
    method chat_json (line 70) | def chat_json(
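
`chat_json` implies the client requests JSON from the model and then parses the reply, which in practice means tolerating markdown code fences and surrounding prose. A hedged sketch of such a parser; the real method's rules may differ:

```python
import json
import re

def parse_json_reply(reply: str):
    """Extract a JSON value from an LLM chat reply.

    Strips a surrounding ```json fence if present, then falls back to
    the outermost brace-delimited span when the whole text is not
    valid JSON on its own.
    """
    text = reply.strip()
    fence = re.match(r"^```(?:json)?\s*(.*?)\s*```$", text, re.DOTALL)
    if fence:
        text = fence.group(1)
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        start, end = text.find("{"), text.rfind("}")
        if start != -1 and end > start:
            return json.loads(text[start:end + 1])
        raise
```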

FILE: backend/app/utils/logger.py
  function _ensure_utf8_stdout (line 13) | def _ensure_utf8_stdout():
  function setup_logger (line 30) | def setup_logger(name: str = 'mirofish', level: int = logging.DEBUG) -> ...
  function get_logger (line 91) | def get_logger(name: str = 'mirofish') -> logging.Logger:
  function debug (line 112) | def debug(msg, *args, **kwargs):
  function info (line 115) | def info(msg, *args, **kwargs):
  function warning (line 118) | def warning(msg, *args, **kwargs):
  function error (line 121) | def error(msg, *args, **kwargs):
  function critical (line 124) | def critical(msg, *args, **kwargs):

FILE: backend/app/utils/retry.py
  function retry_with_backoff (line 15) | def retry_with_backoff(
  function retry_with_backoff_async (line 80) | def retry_with_backoff_async(
  class RetryableAPIClient (line 132) | class RetryableAPIClient:
    method __init__ (line 137) | def __init__(
    method call_with_retry (line 149) | def call_with_retry(
    method call_batch_with_retry (line 195) | def call_batch_with_retry(
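
`retry_with_backoff` reads as a decorator that retries transient failures with exponentially growing delays. A minimal version consistent with the name; the project's variant likely adds jitter, logging, and (per `retry_with_backoff_async`) an async twin:

```python
import functools
import time

def retry_with_backoff(max_retries: int = 3, base_delay: float = 0.01,
                       factor: float = 2.0, exceptions=(Exception,)):
    """Retry the wrapped callable, doubling the sleep after each failure."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            delay = base_delay
            for attempt in range(max_retries + 1):
                try:
                    return func(*args, **kwargs)
                except exceptions:
                    if attempt == max_retries:
                        raise  # retries exhausted: propagate the last error
                    time.sleep(delay)
                    delay *= factor
        return wrapper
    return decorator
```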

FILE: backend/app/utils/zep_paging.py
  function _fetch_page_with_retry (line 26) | def _fetch_page_with_retry(
  function fetch_all_nodes (line 59) | def fetch_all_nodes(
  function fetch_all_edges (line 105) | def fetch_all_edges(
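
`fetch_all_nodes`/`fetch_all_edges` built on `_fetch_page_with_retry` point to cursor-style pagination: request a page, follow the cursor until the endpoint reports no more. The driver loop is generic; here it is sketched against a pluggable `fetch_page` callable so the Zep SDK specifics stay abstracted:

```python
def fetch_all_pages(fetch_page):
    """Drain a cursor-paginated source.

    fetch_page(cursor) must return (items, next_cursor); a next_cursor
    of None signals the final page.
    """
    items = []
    cursor = None
    while True:
        page, cursor = fetch_page(cursor)
        items.extend(page)
        if cursor is None:
            break
    return items
```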

FILE: backend/run.py
  function main (line 25) | def main():

FILE: backend/scripts/action_logger.py
  class PlatformActionLogger (line 22) | class PlatformActionLogger:
    method __init__ (line 25) | def __init__(self, platform: str, base_dir: str):
    method _ensure_dir (line 39) | def _ensure_dir(self):
    method log_action (line 43) | def log_action(
    method log_round_start (line 68) | def log_round_start(self, round_num: int, simulated_hour: int):
    method log_round_end (line 80) | def log_round_end(self, round_num: int, actions_count: int):
    method log_simulation_start (line 92) | def log_simulation_start(self, config: Dict[str, Any]):
    method log_simulation_end (line 105) | def log_simulation_end(self, total_rounds: int, total_actions: int):
  class SimulationLogManager (line 119) | class SimulationLogManager:
    method __init__ (line 125) | def __init__(self, simulation_dir: str):
    method _setup_main_logger (line 140) | def _setup_main_logger(self):
    method get_twitter_logger (line 169) | def get_twitter_logger(self) -> PlatformActionLogger:
    method get_reddit_logger (line 175) | def get_reddit_logger(self) -> PlatformActionLogger:
    method log (line 181) | def log(self, message: str, level: str = "info"):
    method info (line 186) | def info(self, message: str):
    method warning (line 189) | def warning(self, message: str):
    method error (line 192) | def error(self, message: str):
    method debug (line 195) | def debug(self, message: str):
  class ActionLogger (line 201) | class ActionLogger:
    method __init__ (line 207) | def __init__(self, log_path: str):
    method _ensure_dir (line 211) | def _ensure_dir(self):
    method log_action (line 216) | def log_action(
    method log_round_start (line 242) | def log_round_start(self, round_num: int, simulated_hour: int, platfor...
    method log_round_end (line 254) | def log_round_end(self, round_num: int, actions_count: int, platform: ...
    method log_simulation_start (line 266) | def log_simulation_start(self, platform: str, config: Dict[str, Any]):
    method log_simulation_end (line 278) | def log_simulation_end(self, platform: str, total_rounds: int, total_a...
  function get_logger (line 295) | def get_logger(log_path: Optional[str] = None) -> ActionLogger:

FILE: backend/scripts/run_parallel_simulation.py
  function _utf8_open (line 53) | def _utf8_open(file, mode='r', buffering=-1, encoding=None, errors=None,
  class MaxTokensWarningFilter (line 106) | class MaxTokensWarningFilter(logging.Filter):
    method filter (line 109) | def filter(self, record):
  function disable_oasis_logging (line 120) | def disable_oasis_logging():
  function init_logging_for_simulation (line 141) | def init_logging_for_simulation(simulation_dir: str):
  class CommandType (line 210) | class CommandType:
  class ParallelIPCHandler (line 217) | class ParallelIPCHandler:
    method __init__ (line 224) | def __init__(
    method update_status (line 246) | def update_status(self, status: str):
    method poll_command (line 256) | def poll_command(self) -> Optional[Dict[str, Any]]:
    method send_response (line 279) | def send_response(self, command_id: str, status: str, result: Dict = N...
    method _get_env_and_graph (line 300) | def _get_env_and_graph(self, platform: str):
    method _interview_single_platform (line 317) | async def _interview_single_platform(self, agent_id: int, prompt: str,...
    method handle_interview (line 345) | async def handle_interview(self, command_id: str, agent_id: int, promp...
    method handle_batch_interview (line 416) | async def handle_batch_interview(self, command_id: str, interviews: Li...
    method _get_interview_result (line 517) | def _get_interview_result(self, agent_id: int, platform: str) -> Dict[...
    method process_commands (line 560) | async def process_commands(self) -> bool:
  function load_config (line 604) | def load_config(config_path: str) -> Dict[str, Any]:
  function get_agent_names_from_config (line 633) | def get_agent_names_from_config(config: Dict[str, Any]) -> Dict[int, str]:
  function fetch_new_actions_from_db (line 657) | def fetch_new_actions_from_db(
  function _enrich_action_context (line 749) | def _enrich_action_context(
  function _get_post_info (line 857) | def _get_post_info(
  function _get_user_name (line 903) | def _get_user_name(
  function _get_comment_info (line 938) | def _get_comment_info(
  function create_model (line 984) | def create_model(config: Dict[str, Any], use_boost: bool = False):
  function get_active_agents_for_round (line 1040) | def get_active_agents_for_round(
  class PlatformSimulation (line 1093) | class PlatformSimulation:
    method __init__ (line 1095) | def __init__(self):
  function run_twitter_simulation (line 1101) | async def run_twitter_simulation(
  function run_reddit_simulation (line 1293) | async def run_reddit_simulation(
  function main (line 1492) | async def main():
  function setup_signal_handlers (line 1653) | def setup_signal_handlers(loop=None):

FILE: backend/scripts/run_reddit_simulation.py
  class UnicodeFormatter (line 53) | class UnicodeFormatter(logging.Formatter):
    method format (line 58) | def format(self, record):
  class MaxTokensWarningFilter (line 70) | class MaxTokensWarningFilter(logging.Filter):
    method filter (line 73) | def filter(self, record):
  function setup_oasis_logging (line 84) | def setup_oasis_logging(log_dir: str):
  class CommandType (line 139) | class CommandType:
  class IPCHandler (line 146) | class IPCHandler:
    method __init__ (line 149) | def __init__(self, simulation_dir: str, env, agent_graph):
    method update_status (line 162) | def update_status(self, status: str):
    method poll_command (line 170) | def poll_command(self) -> Optional[Dict[str, Any]]:
    method send_response (line 193) | def send_response(self, command_id: str, status: str, result: Dict = N...
    method handle_interview (line 214) | async def handle_interview(self, command_id: str, agent_id: int, promp...
    method handle_batch_interview (line 248) | async def handle_batch_interview(self, command_id: str, interviews: Li...
    method _get_interview_result (line 300) | def _get_interview_result(self, agent_id: int) -> Dict[str, Any]:
    method process_commands (line 343) | async def process_commands(self) -> bool:
  class RedditSimulationRunner (line 385) | class RedditSimulationRunner:
    method __init__ (line 405) | def __init__(self, config_path: str, wait_for_commands: bool = True):
    method _load_config (line 421) | def _load_config(self) -> Dict[str, Any]:
    method _get_profile_path (line 426) | def _get_profile_path(self) -> str:
    method _get_db_path (line 430) | def _get_db_path(self) -> str:
    method _create_model (line 434) | def _create_model(self):
    method _get_active_agents_for_round (line 469) | def _get_active_agents_for_round(
    method run (line 523) | async def run(self, max_rounds: int = None):
  function main (line 695) | async def main():
  function setup_signal_handlers (line 737) | def setup_signal_handlers():

FILE: backend/scripts/run_twitter_simulation.py
  class UnicodeFormatter (line 53) | class UnicodeFormatter(logging.Formatter):
    method format (line 58) | def format(self, record):
  class MaxTokensWarningFilter (line 70) | class MaxTokensWarningFilter(logging.Filter):
    method filter (line 73) | def filter(self, record):
  function setup_oasis_logging (line 84) | def setup_oasis_logging(log_dir: str):
  class CommandType (line 139) | class CommandType:
  class IPCHandler (line 146) | class IPCHandler:
    method __init__ (line 149) | def __init__(self, simulation_dir: str, env, agent_graph):
    method update_status (line 162) | def update_status(self, status: str):
    method poll_command (line 170) | def poll_command(self) -> Optional[Dict[str, Any]]:
    method send_response (line 193) | def send_response(self, command_id: str, status: str, result: Dict = N...
    method handle_interview (line 214) | async def handle_interview(self, command_id: str, agent_id: int, promp...
    method handle_batch_interview (line 248) | async def handle_batch_interview(self, command_id: str, interviews: Li...
    method _get_interview_result (line 300) | def _get_interview_result(self, agent_id: int) -> Dict[str, Any]:
    method process_commands (line 343) | async def process_commands(self) -> bool:
  class TwitterSimulationRunner (line 385) | class TwitterSimulationRunner:
    method __init__ (line 398) | def __init__(self, config_path: str, wait_for_commands: bool = True):
    method _load_config (line 414) | def _load_config(self) -> Dict[str, Any]:
    method _get_profile_path (line 419) | def _get_profile_path(self) -> str:
    method _get_db_path (line 423) | def _get_db_path(self) -> str:
    method _create_model (line 427) | def _create_model(self):
    method _get_active_agents_for_round (line 462) | def _get_active_agents_for_round(
    method run (line 531) | async def run(self, max_rounds: int = None):
  function main (line 707) | async def main():
  function setup_signal_handlers (line 749) | def setup_signal_handlers():

FILE: backend/scripts/test_profile_format.py
  function test_profile_formats (line 20) | def test_profile_formats():
  function show_expected_formats (line 130) | def show_expected_formats():

FILE: frontend/src/api/graph.js
  function generateOntology (line 8) | function generateOntology(formData) {
  function buildGraph (line 26) | function buildGraph(data) {
  function getTaskStatus (line 41) | function getTaskStatus(taskId) {
  function getGraphData (line 53) | function getGraphData(graphId) {
  function getProject (line 65) | function getProject(projectId) {

FILE: frontend/src/store/pendingUpload.js
  function setPendingUpload (line 13) | function setPendingUpload(files, requirement) {
  function getPendingUpload (line 19) | function getPendingUpload() {
  function clearPendingUpload (line 27) | function clearPendingUpload() {

Condensed preview: 71 files, each entry showing path, character count, and a content snippet (1,318K chars of structured content in total).
[
  {
    "path": ".dockerignore",
    "chars": 223,
    "preview": ".git\n.github\n.gitignore\n.cursor\n.DS_Store\n.env\n\nnode_modules\nfrontend/node_modules\nbackend/.venv\n.venv\n.python-version\n\n"
  },
  {
    "path": ".github/workflows/docker-image.yml",
    "chars": 1138,
    "preview": "name: Build and push Docker image\n\non:\n  push:\n    tags: [\"*\"]\n  workflow_dispatch:\n\npermissions:\n  contents: read\n  pac"
  },
  {
    "path": ".gitignore",
    "chars": 492,
    "preview": "# OS\n.DS_Store\nThumbs.db\n\n# 环境变量(保护敏感信息)\n.env\n.env.local\n.env.*.local\n.env.development\n.env.test\n.env.production\n\n# Pyth"
  },
  {
    "path": "Dockerfile",
    "chars": 609,
    "preview": "FROM python:3.11\n\n# 安装 Node.js (满足 >=18)及必要工具\nRUN apt-get update \\\n  && apt-get install -y --no-install-recommends nodej"
  },
  {
    "path": "LICENSE",
    "chars": 34523,
    "preview": "                    GNU AFFERO GENERAL PUBLIC LICENSE\n                       Version 3, 19 November 2007\n\n Copyright (C)"
  },
  {
    "path": "README-EN.md",
    "chars": 8966,
    "preview": "<div align=\"center\">\n\n<img src=\"./static/image/MiroFish_logo_compressed.jpeg\" alt=\"MiroFish Logo\" width=\"75%\"/>\n\n<a href"
  },
  {
    "path": "README.md",
    "chars": 5805,
    "preview": "<div align=\"center\">\n\n<img src=\"./static/image/MiroFish_logo_compressed.jpeg\" alt=\"MiroFish Logo\" width=\"75%\"/>\n\n<a href"
  },
  {
    "path": "backend/app/__init__.py",
    "chars": 2384,
    "preview": "\"\"\"\nMiroFish Backend - Flask应用工厂\n\"\"\"\n\nimport os\nimport warnings\n\n# 抑制 multiprocessing resource_tracker 的警告(来自第三方库如 trans"
  },
  {
    "path": "backend/app/api/__init__.py",
    "chars": 306,
    "preview": "\"\"\"\nAPI路由模块\n\"\"\"\n\nfrom flask import Blueprint\n\ngraph_bp = Blueprint('graph', __name__)\nsimulation_bp = Blueprint('simulat"
  },
  {
    "path": "backend/app/api/graph.py",
    "chars": 18806,
    "preview": "\"\"\"\n图谱相关API路由\n采用项目上下文机制,服务端持久化状态\n\"\"\"\n\nimport os\nimport traceback\nimport threading\nfrom flask import request, jsonify\n\nfr"
  },
  {
    "path": "backend/app/api/report.py",
    "chars": 27690,
    "preview": "\"\"\"\nReport API路由\n提供模拟报告生成、获取、对话等接口\n\"\"\"\n\nimport os\nimport traceback\nimport threading\nfrom flask import request, jsonify, "
  },
  {
    "path": "backend/app/api/simulation.py",
    "chars": 85633,
    "preview": "\"\"\"\n模拟相关API路由\nStep2: Zep实体读取与过滤、OASIS模拟准备与运行(全程自动化)\n\"\"\"\n\nimport os\nimport traceback\nfrom flask import request, jsonify, "
  },
  {
    "path": "backend/app/config.py",
    "chars": 2395,
    "preview": "\"\"\"\n配置管理\n统一从项目根目录的 .env 文件加载配置\n\"\"\"\n\nimport os\nfrom dotenv import load_dotenv\n\n# 加载项目根目录的 .env 文件\n# 路径: MiroFish/.env (相对"
  },
  {
    "path": "backend/app/models/__init__.py",
    "chars": 206,
    "preview": "\"\"\"\n数据模型模块\n\"\"\"\n\nfrom .task import TaskManager, TaskStatus\nfrom .project import Project, ProjectStatus, ProjectManager\n\n_"
  },
  {
    "path": "backend/app/models/project.py",
    "chars": 8907,
    "preview": "\"\"\"\n项目上下文管理\n用于在服务端持久化项目状态,避免前端在接口间传递大量数据\n\"\"\"\n\nimport os\nimport json\nimport uuid\nimport shutil\nfrom datetime import datet"
  },
  {
    "path": "backend/app/models/task.py",
    "chars": 5265,
    "preview": "\"\"\"\n任务状态管理\n用于跟踪长时间运行的任务(如图谱构建)\n\"\"\"\n\nimport uuid\nimport threading\nfrom datetime import datetime\nfrom enum import Enum\nfro"
  },
  {
    "path": "backend/app/services/__init__.py",
    "chars": 1740,
    "preview": "\"\"\"\n业务服务模块\n\"\"\"\n\nfrom .ontology_generator import OntologyGenerator\nfrom .graph_builder import GraphBuilderService\nfrom .t"
  },
  {
    "path": "backend/app/services/graph_builder.py",
    "chars": 16679,
    "preview": "\"\"\"\n图谱构建服务\n接口2:使用Zep API构建Standalone Graph\n\"\"\"\n\nimport os\nimport uuid\nimport time\nimport threading\nfrom typing import Di"
  },
  {
    "path": "backend/app/services/oasis_profile_generator.py",
    "chars": 43094,
    "preview": "\"\"\"\nOASIS Agent Profile生成器\n将Zep图谱中的实体转换为OASIS模拟平台所需的Agent Profile格式\n\n优化改进:\n1. 调用Zep检索功能二次丰富节点信息\n2. 优化提示词生成非常详细的人设\n3. 区分个"
  },
  {
    "path": "backend/app/services/ontology_generator.py",
    "chars": 13020,
    "preview": "\"\"\"\n本体生成服务\n接口1:分析文本内容,生成适合社会模拟的实体和关系类型定义\n\"\"\"\n\nimport json\nfrom typing import Dict, Any, List, Optional\nfrom ..utils.llm_"
  },
  {
    "path": "backend/app/services/report_agent.py",
    "chars": 80754,
    "preview": "\"\"\"\nReport Agent service\nUses LangChain + Zep to generate simulation reports in the ReACT pattern\n\nFeatures:\n1. Generates reports from the simulation requirements and Zep graph information\n2. Plans the outline first, then generates section by section\n3. Each section uses multi-round ReACT reasoning and reflection\n"
  },
  {
    "path": "backend/app/services/simulation_config_generator.py",
    "chars": 33705,
    "preview": "\"\"\"\nIntelligent simulation config generator\nUses an LLM to automatically generate fine-grained simulation parameters from the requirements, document content, and graph information\nFully automated, with no manual parameter setup\n\nUses step-by-step generation to avoid failures from overly long one-shot outputs:\n1. Generate the time configuration\n2. Generate the event configuration\n3. Batch-generate Ag"
  },
  {
    "path": "backend/app/services/simulation_ipc.py",
    "chars": 11159,
    "preview": "\"\"\"\nSimulation IPC module\nInter-process communication between the Flask backend and the simulation scripts\n\nImplements a simple command/response pattern via the filesystem:\n1. Flask writes commands into the commands/ directory\n2. The simulation script polls the command directory, executes each command, and writes its response into responses/"
  },
  {
    "path": "backend/app/services/simulation_manager.py",
    "chars": 18698,
    "preview": "\"\"\"\nOASIS simulation manager\nManages parallel Twitter and Reddit dual-platform simulations\nUses preset scripts + LLM-generated configuration parameters\n\"\"\"\n\nimport os\nimport json\nimport shutil\nfrom typing import "
  },
  {
    "path": "backend/app/services/simulation_runner.py",
    "chars": 62314,
    "preview": "\"\"\"\nOASIS simulation runner\nRuns simulations in the background, records every Agent's actions, and supports real-time status monitoring\n\"\"\"\n\nimport os\nimport sys\nimport json\nimport time\nimport asyncio\nimport thr"
  },
  {
    "path": "backend/app/services/text_processor.py",
    "chars": 1537,
    "preview": "\"\"\"\nText processing service\n\"\"\"\n\nfrom typing import List, Optional\nfrom ..utils.file_parser import FileParser, split_text_into_chunks\n\n\nc"
  },
  {
    "path": "backend/app/services/zep_entity_reader.py",
    "chars": 13801,
    "preview": "\"\"\"\nZep entity reading and filtering service\nReads nodes from the Zep graph and keeps those matching the predefined entity types\n\"\"\"\n\nimport time\nfrom typing import Dict, Any, List, Optional, Set, Callabl"
  },
  {
    "path": "backend/app/services/zep_graph_memory_updater.py",
    "chars": 18762,
    "preview": "\"\"\"\nZep graph memory update service\nDynamically pushes Agent activity from the simulation into the Zep graph\n\"\"\"\n\nimport os\nimport time\nimport threading\nimport json\nfrom typing import Dict,"
  },
  {
    "path": "backend/app/services/zep_tools.py",
    "chars": 56952,
    "preview": "\"\"\"\nZep retrieval tool service\nWraps graph search, node reading, edge querying, and other tools for the Report Agent\n\nCore retrieval tools (optimized):\n1. InsightForge (deep insight retrieval) - the most powerful hybrid retrieval; auto-generates sub-questions and searches across multiple dimensions\n2. Panora"
  },
  {
    "path": "backend/app/utils/__init__.py",
    "chars": 124,
    "preview": "\"\"\"\nUtility module\n\"\"\"\n\nfrom .file_parser import FileParser\nfrom .llm_client import LLMClient\n\n__all__ = ['FileParser', 'LLMClient"
  },
  {
    "path": "backend/app/utils/file_parser.py",
    "chars": 4675,
    "preview": "\"\"\"\nFile parsing utilities\nText extraction from PDF, Markdown, and TXT files\n\"\"\"\n\nimport os\nfrom pathlib import Path\nfrom typing import List, Optional\n\n\ndef _re"
  },
  {
    "path": "backend/app/utils/llm_client.py",
    "chars": 2746,
    "preview": "\"\"\"\nLLM client wrapper\nAll calls use the OpenAI API format\n\"\"\"\n\nimport json\nimport re\nfrom typing import Optional, Dict, Any, List\nfrom openai import O"
  },
  {
    "path": "backend/app/utils/logger.py",
    "chars": 2904,
    "preview": "\"\"\"\nLogging configuration module\nProvides unified log management, writing to both the console and a file\n\"\"\"\n\nimport os\nimport sys\nimport logging\nfrom datetime import datetime\nfrom logging.han"
  },
  {
    "path": "backend/app/utils/retry.py",
    "chars": 6927,
    "preview": "\"\"\"\nAPI call retry mechanism\nRetry logic for external API calls such as LLMs\n\"\"\"\n\nimport time\nimport random\nimport functools\nfrom typing import Callable, Any, Opt"
  },
  {
    "path": "backend/app/utils/zep_paging.py",
    "chars": 4194,
    "preview": "\"\"\"Zep Graph paginated reading utilities.\n\nZep's node/edge list endpoints paginate with a UUID cursor;\nthis module wraps the automatic paging logic (with per-page retry) and transparently returns the complete list to the caller.\n\"\"\"\n\nfrom __future__ impo"
  },
  {
    "path": "backend/pyproject.toml",
    "chars": 951,
    "preview": "[project]\nname = \"mirofish-backend\"\nversion = \"0.1.0\"\ndescription = \"MiroFish - a simple, general-purpose collective intelligence engine for predicting anything\"\nrequires-python = \">=3"
  },
  {
    "path": "backend/requirements.txt",
    "chars": 751,
    "preview": "# ===========================================\n# MiroFish Backend Dependencies\n# ========================================"
  },
  {
    "path": "backend/run.py",
    "chars": 1112,
    "preview": "\"\"\"\nMiroFish Backend entry point\n\"\"\"\n\nimport os\nimport sys\n\n# Fix garbled Chinese output in the Windows console: set UTF-8 encoding before any imports\nif sys.platform == 'win32"
  },
  {
    "path": "backend/scripts/action_logger.py",
    "chars": 9688,
    "preview": "\"\"\"\nAction logger\nRecords every Agent's actions in the OASIS simulation for backend monitoring\n\nLog layout:\n    sim_xxx/\n    ├── twitter/\n    │   └── actions.jsonl    # Twitter pla"
  },
  {
    "path": "backend/scripts/run_parallel_simulation.py",
    "chars": 56890,
    "preview": "\"\"\"\nOASIS dual-platform parallel simulation preset script\nRuns Twitter and Reddit simulations at the same time, reading the same config file\n\nFeatures:\n- Dual-platform (Twitter + Reddit) parallel simulation\n- Keeps the environment alive after the simulation completes, entering a wait-for-command mode\n- Supports IP"
  },
  {
    "path": "backend/scripts/run_reddit_simulation.py",
    "chars": 25068,
    "preview": "\"\"\"\nOASIS Reddit simulation preset script\nReads parameters from the config file to run the simulation, fully automated\n\nFeatures:\n- Keeps the environment alive after the simulation completes, entering a wait-for-command mode\n- Supports receiving Interview commands via IPC\n- Supports single-Agent interviews and batch"
  },
  {
    "path": "backend/scripts/run_twitter_simulation.py",
    "chars": 24948,
    "preview": "\"\"\"\nOASIS Twitter simulation preset script\nReads parameters from the config file to run the simulation, fully automated\n\nFeatures:\n- Keeps the environment alive after the simulation completes, entering a wait-for-command mode\n- Supports receiving Interview commands via IPC\n- Supports single-Agent interviews and bat"
  },
  {
    "path": "backend/scripts/test_profile_format.py",
    "chars": 5488,
    "preview": "\"\"\"\nTests whether Profile generation meets OASIS requirements\nVerifies:\n1. Twitter Profiles are generated in CSV format\n2. Reddit Profiles are generated in detailed JSON format\n\"\"\"\n\nimport os\nimport sys\nimport "
  },
  {
    "path": "docker-compose.yml",
    "chars": 335,
    "preview": "services:\n  mirofish:\n    image: ghcr.io/666ghj/mirofish:latest\n    # Mirror image (replace the address above if pulls are slow)\n    # image: ghcr.nju.edu.cn/66"
  },
  {
    "path": "frontend/.gitignore",
    "chars": 253,
    "preview": "# Logs\nlogs\n*.log\nnpm-debug.log*\nyarn-debug.log*\nyarn-error.log*\npnpm-debug.log*\nlerna-debug.log*\n\nnode_modules\ndist\ndis"
  },
  {
    "path": "frontend/index.html",
    "chars": 802,
    "preview": "<!doctype html>\n<html lang=\"zh-CN\">\n  <head>\n    <link rel=\"preconnect\" href=\"https://fonts.googleapis.com\">\n    <link r"
  },
  {
    "path": "frontend/package.json",
    "chars": 392,
    "preview": "{\n  \"name\": \"frontend\",\n  \"private\": true,\n  \"version\": \"0.1.0\",\n  \"type\": \"module\",\n  \"scripts\": {\n    \"dev\": \"vite --h"
  },
  {
    "path": "frontend/src/App.vue",
    "chars": 674,
    "preview": "<template>\n  <router-view />\n</template>\n\n<script setup>\n// Pages are managed with Vue Router\n</script>\n\n<style>\n/* Global style reset */\n* {\n  ma"
  },
  {
    "path": "frontend/src/api/graph.js",
    "chars": 1286,
    "preview": "import service, { requestWithRetry } from './index'\n\n/**\n * Generate the ontology (upload documents and simulation requirements)\n * @param {Object} data - contains files, simulatio"
  },
  {
    "path": "frontend/src/api/index.js",
    "chars": 1618,
    "preview": "import axios from 'axios'\n\n// Create the axios instance\nconst service = axios.create({\n  baseURL: import.meta.env.VITE_API_BASE_URL || '"
  },
  {
    "path": "frontend/src/api/report.js",
    "chars": 1309,
    "preview": "import service, { requestWithRetry } from './index'\n\n/**\n * Start report generation\n * @param {Object} data - { simulation_id, force_rege"
  },
  {
    "path": "frontend/src/api/simulation.js",
    "chars": 4944,
    "preview": "import service, { requestWithRetry } from './index'\n\n/**\n * Create a simulation\n * @param {Object} data - { project_id, graph_id?, enab"
  },
  {
    "path": "frontend/src/components/GraphPanel.vue",
    "chars": 37817,
    "preview": "<template>\n  <div class=\"graph-panel\">\n    <div class=\"panel-header\">\n      <span class=\"panel-title\">Graph Relationship"
  },
  {
    "path": "frontend/src/components/HistoryDatabase.vue",
    "chars": 31723,
    "preview": "<template>\n  <div \n    class=\"history-database\"\n    :class=\"{ 'no-projects': projects.length === 0 && !loading }\"\n    re"
  },
  {
    "path": "frontend/src/components/Step1GraphBuild.vue",
    "chars": 17117,
    "preview": "<template>\n  <div class=\"workbench-panel\">\n    <div class=\"scroll-container\">\n      <!-- Step 01: Ontology -->\n      <di"
  },
  {
    "path": "frontend/src/components/Step2EnvSetup.vue",
    "chars": 65450,
    "preview": "<template>\n  <div class=\"env-setup-panel\">\n    <div class=\"scroll-container\">\n      <!-- Step 01: Simulation instance -->\n      <div cl"
  },
  {
    "path": "frontend/src/components/Step3Simulation.vue",
    "chars": 37412,
    "preview": "<template>\n  <div class=\"simulation-panel\">\n    <!-- Top Control Bar -->\n    <div class=\"control-bar\">\n      <div class="
  },
  {
    "path": "frontend/src/components/Step4Report.vue",
    "chars": 140880,
    "preview": "<template>\n  <div class=\"report-panel\">\n    <!-- Main Split Layout -->\n    <div class=\"main-split-layout\">\n      <!-- LE"
  },
  {
    "path": "frontend/src/components/Step5Interaction.vue",
    "chars": 62214,
    "preview": "<template>\n  <div class=\"interaction-panel\">\n    <!-- Main Split Layout -->\n    <div class=\"main-split-layout\">\n      <!"
  },
  {
    "path": "frontend/src/main.js",
    "chars": 154,
    "preview": "import { createApp } from 'vue'\nimport App from './App.vue'\nimport router from './router'\n\nconst app = createApp(App)\n\na"
  },
  {
    "path": "frontend/src/router/index.js",
    "chars": 1121,
    "preview": "import { createRouter, createWebHistory } from 'vue-router'\nimport Home from '../views/Home.vue'\nimport Process from '.."
  },
  {
    "path": "frontend/src/store/pendingUpload.js",
    "chars": 643,
    "preview": "/**\n * Temporary storage for files and requirements awaiting upload\n * Lets the home page navigate immediately after the engine is started; the API calls happen on the Process page\n */\nimport { reactive } from 'vue'\n\nconst state = reactive({\n"
  },
  {
    "path": "frontend/src/views/Home.vue",
    "chars": 18712,
    "preview": "<template>\n  <div class=\"home-container\">\n    <!-- Top navigation bar -->\n    <nav class=\"navbar\">\n      <div class=\"nav-brand\">MIROF"
  },
  {
    "path": "frontend/src/views/InteractionView.vue",
    "chars": 8506,
    "preview": "<template>\n  <div class=\"main-view\">\n    <!-- Header -->\n    <header class=\"app-header\">\n      <div class=\"header-left\">"
  },
  {
    "path": "frontend/src/views/MainView.vue",
    "chars": 14569,
    "preview": "<template>\n  <div class=\"main-view\">\n    <!-- Header -->\n    <header class=\"app-header\">\n      <div class=\"header-left\">"
  },
  {
    "path": "frontend/src/views/Process.vue",
    "chars": 49907,
    "preview": "<template>\n  <div class=\"process-page\">\n    <!-- Top navigation bar -->\n    <nav class=\"navbar\">\n      <div class=\"nav-brand\" @click="
  },
  {
    "path": "frontend/src/views/ReportView.vue",
    "chars": 8370,
    "preview": "<template>\n  <div class=\"main-view\">\n    <!-- Header -->\n    <header class=\"app-header\">\n      <div class=\"header-left\">"
  },
  {
    "path": "frontend/src/views/SimulationRunView.vue",
    "chars": 10992,
    "preview": "<template>\n  <div class=\"main-view\">\n    <!-- Header -->\n    <header class=\"app-header\">\n      <div class=\"header-left\">"
  },
  {
    "path": "frontend/src/views/SimulationView.vue",
    "chars": 10313,
    "preview": "<template>\n  <div class=\"main-view\">\n    <!-- Header -->\n    <header class=\"app-header\">\n      <div class=\"header-left\">"
  },
  {
    "path": "frontend/vite.config.js",
    "chars": 337,
    "preview": "import { defineConfig } from 'vite'\nimport vue from '@vitejs/plugin-vue'\n\n// https://vite.dev/config/\nexport default def"
  },
  {
    "path": "package.json",
    "chars": 670,
    "preview": "{\n  \"name\": \"mirofish\",\n  \"version\": \"0.1.0\",\n  \"description\": \"MiroFish - a simple, general-purpose collective intelligence engine for predicting anything\",\n  \"scripts\": {\n    \"setup\""
  }
]

About this extraction

This page contains the full source code of the 666ghj/MiroFish GitHub repository, extracted and formatted as plain text for AI agents and large language models (LLMs). The extraction includes 71 files (1.2 MB), approximately 330.4k tokens, and a symbol index with 562 extracted functions, classes, methods, constants, and types. Use this with OpenClaw, Claude, ChatGPT, Cursor, Windsurf, or any other AI tool that accepts text input. You can copy the full output to your clipboard or download it as a .txt file.

Extracted by GitExtract — free GitHub repo to text converter for AI. Built by Nikandr Surkov.
