[
  {
    "path": "README-ptbr.md",
    "content": "\n# Kemono and Coomer Downloader\n\n[![Views](https://hits.sh/github.com/e43bkmncoompt/hits.svg)](https://github.com/e43b/Kemono-and-Coomer-Downloader/)\n\n[![](img/en-flag.svg) English](README.md) | [![](img/br.png) Português](README-ptbr.md)\n\nO **Kemono and Coomer Downloader** é uma ferramenta que permite baixar posts dos sites [Kemono](https://kemono.su/) e [Coomer](https://coomer.su/).\n\nCom essa ferramenta, é possível baixar posts únicos, múltiplos posts sequencialmente, baixar todos os posts de um perfil do Kemono ou Coomer.\n\n## Apoie o Desenvolvimento da Ferramenta 💖\n\nEsta ferramenta foi criada com dedicação para facilitar sua vida e é mantida de forma independente. Se você acha que ela foi útil e gostaria de contribuir para sua melhoria contínua, considere fazer uma doação.\n\nToda ajuda é bem-vinda e será usada para cobrir custos de manutenção, melhorias e adição de novos recursos. Seu apoio faz toda a diferença!\n\n[![ko-fi](https://www.ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/e43bs)\n\n### Por que doar?\n- **Manutenção contínua**: Ajude a manter a ferramenta sempre atualizada e funcionando.\n- **Novos recursos**: Contribua para a implementação de novas funcionalidades solicitadas pela comunidade.\n- **Agradecimento**: Mostre seu apoio ao projeto e incentive o desenvolvimento de mais ferramentas como esta.\n\n🎉 Obrigado por considerar apoiar este projeto!\n\n## Star History\n\n[![Star History Chart](https://api.star-history.com/svg?repos=e43b/Kemono-and-Coomer-Downloader&type=Date)](https://star-history.com/#e43b/Kemono-and-Coomer-Downloader&Date)\n\n## Como Usar\n\n1. **Certifique-se de ter o Python instalado em seu sistema.**\n2. **Clone este repositório:**\n```sh\ngit clone https://github.com/e43b/Kemono-and-Coomer-Downloader/\n```\n\n3. **Navegue até o diretório do projeto:**\n```sh\ncd Kemono-and-Coomer-Downloader\n```\n\n4. 
**Selecione o idioma desejado:**\n   - A pasta codeen contém a versão em inglês.\n   - A pasta codept contém a versão em português.\n\n5. **Execute o script principal:**\n```sh\npython main.py\n```\n\n6. **Siga as instruções no menu para escolher o que deseja baixar ou personalizar o programa.**\n\n## Bibliotecas\n\nA biblioteca necessária é: requests. Ao iniciar o script pela primeira vez, se a biblioteca não estiver instalada, será instalada automaticamente.\n\n## Funcionalidades\n\n### Página Inicial\n\nA página inicial do projeto apresenta as principais opções disponíveis para facilitar a utilização da ferramenta.\n\n![Página Inicial](img/home.png)\n\n### Baixar Post\n\n#### Opção 1: Download de 1 Post ou Alguns Posts Separados\n\n##### 1.1 Inserir os links diretamente\n\nPara baixar posts específicos, insira os links dos posts separados por vírgula. Esta opção é ideal para baixar poucos posts. Exemplo:\n\n```sh\nhttps://coomer.su/onlyfans/user/rosiee616/post/1005002977, https://kemono.su/patreon/user/9919437/post/103396563\n```\n\n![Posts](img/posts.png)\n\n##### 1.2 Carregar links de um arquivo TXT\n\nSe você possui vários links de posts para baixar, facilite o processo utilizando um arquivo `.txt`. \n\n###### Passo 1: Criando o Arquivo TXT\n\n1. Abra um editor de texto de sua preferência (como Notepad, VS Code, ou outro).\n2. Liste os links dos posts no seguinte formato:\n   - Separe os links por **vírgulas**.\n   - Exemplo de conteúdo do arquivo:\n```sh\nhttps://coomer.su/onlyfans/user/rosiee616/post/1005002977, https://kemono.su/patreon/user/9919437/post/103396563\n```\n3. Salve o arquivo com a extensão `.txt`. Por exemplo: `posts.txt`.\n\n###### Passo 2: Localizando o Caminho do Arquivo\n\nVocê pode especificar o caminho do arquivo ao script de duas maneiras:\n\n1. **Caminho Absoluto**: Localize o arquivo no seu sistema e copie o caminho completo.\n```sh\nC:\\Users\\SeuUsuario\\Documentos\\posts.txt\n```\n\n2. 
**Caminho Relativo**: Se o arquivo estiver na mesma pasta que o script `main.py`, basta informar o nome do arquivo.\n```sh\nposts.txt\n```\n\n###### Passo 3: Executando o Script\n\n1. Cole o caminho do arquivo TXT no console.\n2. O script iniciará o download automaticamente e processará todos os links listados no arquivo.\n\n###### Conteúdo do Arquivo TXT\n\n![Conteúdo do arquivo TXT](img/txtcontent.png)\n\n###### Script em Execução\n\n![Execução do Script](img/1_2.png)\n\n##### 1.3 Voltar ao menu principal\n\nSelecione esta opção para retornar ao menu inicial.\n\n#### Opção 2: Download de Todos os Posts de um Perfil\n\n⚠️ **Atenção Geral**:\nNeste modo de download, **não será criado o arquivo `files.md`** com informações como título, descrição, embeds, etc.\nSe você precisa dessas informações, utilize a **Opção 1**.\n\n##### 2.1: Download de Todos os Posts de um Perfil\n\n1. Insira o link de um perfil do Coomer ou Kemono.\n2. Pressione **Enter**.\n\n**Observações**:\n- Este modo permite baixar todos os posts do perfil inserido.\n- **Limitação**: Não é possível baixar mais de um perfil por vez.\n\nO sistema irá processar o link, extrair todos os posts e realizar o download.\n\n![Execução do Script](img/2_1.png)\n\n##### 2.2: Download de Posts de uma Página Específica\n\n1. Insira o link de um perfil do Coomer ou Kemono.\n2. Pressione **Enter**.\n3. Informe o **offset** da página desejada.\n\n**Como calcular o offset**:\n- Tanto no Kemono quanto no Coomer, os offsets aumentam de 50 em 50:\n  - Página 1: offset = 0\n  - Página 2: offset = 50\n  - Página 3: offset = 100\n  - ...\n- Para encontrar o offset da página desejada:\n  1. Acesse a página do perfil.\n  2. 
Clique na página desejada e observe o número no final do link.\n     Exemplo:\n```\nhttps://kemono.su/patreon/user/9919437?o=750\n```\nNesse caso, o offset é **750**.\n\nO sistema irá processar a página especificada, extrair os posts e realizar o download.\n\n![Execução do Script](img/2_2.png)\n\n##### 2.3: Download de Posts em um Intervalo de Páginas\n\n1. Insira o link de um perfil do Coomer ou Kemono.\n2. Pressione **Enter**.\n3. Informe o **offset** da página inicial.\n4. Informe o **offset** da página final.\n\n**Como calcular os offsets**:\n- O cálculo do offset segue a mesma lógica da **Opção 2.2**.\n  - Exemplo:\n    - Página 1: offset = 0\n    - Página 16: offset = 750\n\nTodos os posts entre os offsets especificados serão extraídos e baixados.\n\n![Execução do Script](img/2_3.png)\n\n##### 2.4: Download de Posts entre Dois Posts Específicos\n\n1. Insira o link de um perfil do Coomer ou Kemono.\n2. Pressione **Enter**.\n3. Insira o link ou o ID do **post inicial**.\n   - Exemplo de link:\n```\nhttps://kemono.su/patreon/user/9919437/post/54725686\n```\n   - Apenas o ID: `54725686`.\n4. Insira o link ou o ID do **post final**.\n\n**O que acontece**:\nO sistema fará o download de todos os posts entre os dois IDs especificados.\n\n![Execução do Script](img/2_4.png)\n\n##### 2.5: Voltar ao Menu Principal\n\nSelecione esta opção para retornar à página inicial.\n\n#### Opção 3: Personalizar as Configurações do Programa\n\nEssa opção permite configurar algumas preferências no programa. As opções disponíveis são as seguintes:\n\n1. **Take empty posts**: `False`\n2. **Download older posts first**: `False`\n3. **For individual posts, create a file with information (title, description, etc.)**: `True`\n4. **Choose the type of file to save the information (Markdown or TXT)**: `md`\n5. 
**Back to the main menu**\n\n##### Descrição das Opções\n\n###### Take Empty Posts\n- Define se posts vazios (sem arquivos anexos) devem ser incluídos nos downloads massivos de perfis.\n  - **False (Recomendado)**: Posts vazios serão ignorados.\n  - **True**: Será criada uma pasta para os posts vazios. Use essa opção apenas em casos específicos.\n\n###### Download Older Posts First\n- Controla a ordem de download dos posts em perfis:\n  - **False**: Baixa os posts mais recentes primeiro.\n  - **True**: Baixa os posts mais antigos primeiro.\n\n###### Criar Arquivo com Informações (Posts Individuais)\n- Define se será criado um arquivo contendo informações como título, descrição e embeds ao baixar posts individualmente:\n  - **True**: Cria o arquivo informativo.\n  - **False**: Não cria o arquivo.\n\n###### Tipo de Arquivo para Salvar Informações\n- Escolha o formato do arquivo criado nas **Opções Individuais**:\n  - **Markdown (`md`)**: Arquivo no formato Markdown.\n  - **TXT (`txt`)**: Arquivo no formato texto simples.\n  - **Nota**: Ambos os formatos utilizam estrutura Markdown.\n\n###### Como Alterar as Configurações\nPara modificar qualquer uma das opções, basta digitar o número correspondente. O programa alternará automaticamente o valor entre as opções disponíveis (por exemplo, de `True` para `False`).\n\n![Configurações do Programa](img/3.png)\n\n#### Opção 4: Sair do Programa\n\nEssa opção encerra o programa.\n\n## Organização dos Arquivos\n\nOs posts são salvos em pastas para facilitar a organização. A estrutura de pastas segue o padrão abaixo:\n\n### Estrutura das Pastas\n\n1. **Plataforma**: Uma pasta principal é criada para cada plataforma (Kemono ou Coomer).\n2. **Autor**: Dentro da pasta da plataforma, é criada uma pasta para cada autor no formato **Nome-Serviço-Id**.\n3. 
**Posts**: Dentro da pasta do autor, há uma subpasta chamada `posts` onde os conteúdos são organizados.\n   Cada post é salvo em uma subpasta identificada pelo **ID do post**.\n\n### Exemplo da Estrutura de Pastas\n\n```\nKemono-and-Coomer-Downloader/\n│\n├── kemono/                                 # Pasta da plataforma Kemono\n│   ├── Nome-Serviço-Id/                    # Pasta do autor no formato Nome-Serviço-Id\n│   │   ├── posts/                          # Pasta de posts do autor\n│   │   │   ├── postID1/                    # Pasta do post com ID 1\n│   │   │   │   ├── conteudo_do_post        # Conteúdo do post\n│   │   │   │   ├── files.md                # (Opcional) Arquivo com informações dos arquivos\n│   │   │   │   └── ...                     # Outros arquivos do post\n│   │   │   ├── postID2/                    # Pasta do post com ID 2\n│   │   │   │   ├── conteudo_do_post        # Conteúdo do post\n│   │   │   │   └── files.txt               # (Opcional) Arquivo com informações dos arquivos\n│   │   │   └── ...                         # Outros posts\n│   │   └── ...                             # Outros conteúdos do autor\n│   └── Nome-Serviço-Id/                    # Pasta de outro autor no formato Nome-Serviço-Id\n│       ├── posts/                          # Pasta de posts do autor\n│       └── ...                             # Outros conteúdos\n│\n└── coomer/                                 # Pasta da plataforma Coomer\n    ├── Nome-Serviço-Id/                    # Pasta do autor no formato Nome-Serviço-Id\n    │   ├── posts/                          # Pasta de posts do autor\n    │   │   ├── postID1/                    # Pasta do post com ID 1\n    │   │   │   ├── conteudo_do_post        # Conteúdo do post\n    │   │   │   ├── files.txt               # (Opcional) Arquivo com informações dos arquivos\n    │   │   │   └── ...                     
# Outros arquivos do post\n    │   │   └── postID2/                    # Pasta do post com ID 2\n    │   │       ├── conteudo_do_post        # Conteúdo do post\n    │   │       └── ...                     # Outros arquivos do post\n    │   └── ...                             # Outros conteúdos do autor\n    └── Nome-Serviço-Id/                    # Pasta de outro autor no formato Nome-Serviço-Id\n        ├── posts/                          # Pasta de posts do autor\n        └── ...                             # Outros conteúdos\n```\n\n![Organização das Pastas](img/pastas.png)\n\n### Sobre o Arquivo `files.md` ou `files.txt`\n\nO arquivo `files.md` (ou `files.txt`, dependendo da configuração escolhida) contém as seguintes informações sobre cada post:\n- **Título**: O título do post.\n- **Descrição/Conteúdo**: O conteúdo ou descrição do post.\n- **Embeds**: Informações sobre elementos incorporados (se houver).\n- **Links de Arquivos**: URLs de arquivos presentes nas seções de **Attachments**, **Videos**, e **Images**.\n\n![Exemplo de files.md](img/files.png)\n\n## Contribuições\n\nEste projeto é **open-source**, e sua participação é muito bem-vinda! Se você deseja ajudar no aprimoramento da ferramenta, sinta-se à vontade para:\n\n- **Enviar sugestões** para novos recursos ou melhorias.\n- **Relatar problemas** ou bugs encontrados.\n- **Submeter pull requests** com suas próprias contribuições.\n\nVocê pode contribuir de diversas maneiras através do nosso [repositório no GitHub](https://github.com/e43b/Kemono-and-Coomer-Downloader/) ou interagir com a comunidade no nosso [Discord](https://discord.gg/GNJbxzD8bK).\n\n## Autor\n\nO **Kemono and Coomer Downloader** foi desenvolvido e é mantido por [E43b](https://github.com/e43b). Nosso objetivo é tornar o processo de download de posts nos sites **Kemono** e **Coomer** mais simples, rápido e organizado, proporcionando uma experiência fluida e acessível para os usuários.\n\n## Suporte\n\nSe você encontrar problemas, bugs ou tiver dúvidas, nossa comunidade está pronta para ajudar! Entre em contato pelo nosso [Discord](https://discord.gg/GNJbxzD8bK) para obter suporte ou tirar suas dúvidas.\n"
  },
  {
    "path": "README.md",
    "content": "# Kemono and Coomer Downloader\n\n[![Views](https://hits.sh/github.com/e43bkmncoomen/hits.svg)](https://github.com/e43b/Kemono-and-Coomer-Downloader/)\n\n[![](img/en-flag.svg) English](README.md) | [![](img/br.png) Português](README-ptbr.md)\n\nThe **Kemono and Coomer Downloader** is a tool that allows you to download posts from [Kemono](https://kemono.su/) and [Coomer](https://coomer.su/) websites.\n\nWith this tool, you can download single posts, multiple posts sequentially, or download all posts from a Kemono or Coomer profile.\n\n## Support Tool Development 💖\n\nThis tool was created with dedication to make your life easier and is maintained independently. If you find it useful and would like to contribute to its continuous improvement, consider making a donation.\n\nAny help is welcome and will be used to cover maintenance costs, improvements, and the addition of new features. Your support makes all the difference!\n\n[![ko-fi](https://www.ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/e43bs)\n\n### Why donate?\n- **Continuous maintenance**: Help keep the tool always updated and working.\n- **New features**: Contribute to implementing new functionalities requested by the community.\n- **Show appreciation**: Show your support for the project and encourage the development of more tools like this.\n\n🎉 Thank you for considering supporting this project!\n\n## Star History\n\n[![Star History Chart](https://api.star-history.com/svg?repos=e43b/Kemono-and-Coomer-Downloader&type=Date)](https://star-history.com/#e43b/Kemono-and-Coomer-Downloader&Date)\n\n## How to Use\n\n1. **Make sure you have Python installed on your system.**\n2. **Clone this repository:**\n```sh\ngit clone https://github.com/e43b/Kemono-and-Coomer-Downloader/\n```\n\n3. **Navigate to the project directory:**\n```sh\ncd Kemono-and-Coomer-Downloader\n```\n\n4. 
**Select your preferred language:**\n   - The codeen folder contains the English version.\n   - The codept folder contains the Portuguese version.\n\n5. **Run the main script:**\n```sh\npython main.py\n```\n\n6. **Follow the menu instructions to choose what you want to download or customize the program.**\n\n## Libraries\n\nThe required library is: requests. When starting the script for the first time, if the library is not installed, it will be installed automatically.\n\n## Features\n\n### Home Page\n\nThe project's home page presents the main options available to facilitate tool usage.\n\n![Home Page](img/home.png)\n\n### Download Post\n\n#### Option 1: Download 1 Post or Several Separate Posts\n\n##### 1.1 Insert links directly\n\nTo download specific posts, enter the post links separated by commas. This option is ideal for downloading a few posts. Example:\n\n```sh\nhttps://coomer.su/onlyfans/user/rosiee616/post/1005002977, https://kemono.su/patreon/user/9919437/post/103396563\n```\n\n![Posts](img/posts.png)\n\n##### 1.2 Load links from a TXT file\n\nIf you have multiple post links to download, simplify the process using a `.txt` file.\n\n###### Step 1: Creating the TXT File\n\n1. Open a text editor of your choice (like Notepad, VS Code, or other).\n2. List the post links in the following format:\n   - Separate links with **commas**.\n   - Example file content:\n```sh\nhttps://coomer.su/onlyfans/user/rosiee616/post/1005002977, https://kemono.su/patreon/user/9919437/post/103396563\n```\n3. Save the file with the `.txt` extension. For example: `posts.txt`.\n\n###### Step 2: Locating the File Path\n\nYou can specify the file path to the script in two ways:\n\n1. **Absolute Path**: Locate the file on your system and copy the complete path.\n```sh\nC:\\Users\\YourUser\\Documents\\posts.txt\n```\n\n2. 
**Relative Path**: If the file is in the same folder as the `main.py` script, just enter the file name.\n```sh\nposts.txt\n```\n\n###### Step 3: Running the Script\n\n1. Paste the TXT file path in the console.\n2. The script will automatically start downloading and process all links listed in the file.\n\n###### TXT File Content\n\n![TXT file content](img/txtcontent.png)\n\n###### Script Running\n\n![Script Execution](img/1_2.png)\n\n##### 1.3 Return to main menu\n\nSelect this option to return to the home menu.\n\n#### Option 2: Download All Posts from a Profile\n\n⚠️ **General Attention**:\nIn this download mode, the `files.md` file with information such as title, description, embeds, etc., **will not be created**.\nIf you need this information, use **Option 1**.\n\n##### 2.1: Download All Posts from a Profile\n\n1. Enter a Coomer or Kemono profile link.\n2. Press **Enter**.\n\n**Notes**:\n- This mode allows downloading all posts from the entered profile.\n- **Limitation**: You cannot download more than one profile at a time.\n\nThe system will process the link, extract all posts, and perform the download.\n\n![Script Execution](img/2_1.png)\n\n##### 2.2: Download Posts from a Specific Page\n\n1. Enter a Coomer or Kemono profile link.\n2. Press **Enter**.\n3. Enter the **offset** of the desired page.\n\n**How to calculate the offset**:\n- Both on Kemono and Coomer, offsets increase by 50:\n  - Page 1: offset = 0\n  - Page 2: offset = 50\n  - Page 3: offset = 100\n  - ...\n- To find the offset of the desired page:\n  1. Access the profile page.\n  2. Click on the desired page and observe the number at the end of the link.\n     Example:\n```\nhttps://kemono.su/patreon/user/9919437?o=750\n```\nIn this case, the offset is **750**.\n\nThe system will process the specified page, extract the posts, and perform the download.\n\n![Script Execution](img/2_2.png)\n\n##### 2.3: Download Posts in a Page Range\n\n1. Enter a Coomer or Kemono profile link.\n2. 
Press **Enter**.\n3. Enter the starting page **offset**.\n4. Enter the ending page **offset**.\n\n**How to calculate offsets**:\n- The offset calculation follows the same logic as **Option 2.2**.\n  - Example:\n    - Page 1: offset = 0\n    - Page 16: offset = 750\n\nAll posts between the specified offsets will be extracted and downloaded.\n\n![Script Execution](img/2_3.png)\n\n##### 2.4: Download Posts between Two Specific Posts\n\n1. Enter a Coomer or Kemono profile link.\n2. Press **Enter**.\n3. Enter the link or ID of the **initial post**.\n   - Example link:\n```\nhttps://kemono.su/patreon/user/9919437/post/54725686\n```\n   - Just the ID: `54725686`.\n4. Enter the link or ID of the **final post**.\n\n**What happens**:\nThe system will download all posts between the two specified IDs.\n\n![Script Execution](img/2_4.png)\n\n##### 2.5: Return to Main Menu\n\nSelect this option to return to the home page.\n\n#### Option 3: Customize Program Settings\n\nThis option allows you to configure some program preferences. The available options are:\n\n1. **Take empty posts**: `False`\n2. **Download older posts first**: `False`\n3. **For individual posts, create a file with information (title, description, etc.)**: `True`\n4. **Choose the type of file to save the information (Markdown or TXT)**: `md`\n5. **Back to the main menu**\n\n##### Option Descriptions\n\n###### Take Empty Posts\n- Defines whether empty posts (without attached files) should be included in massive profile downloads.\n  - **False (Recommended)**: Empty posts will be ignored.\n  - **True**: A folder will be created for empty posts. 
Use this option only in specific cases.\n\n###### Download Older Posts First\n- Controls the order of post downloads in profiles:\n  - **False**: Downloads the most recent posts first.\n  - **True**: Downloads the oldest posts first.\n\n###### Create Information File (Individual Posts)\n- Defines whether a file containing information such as title, description, and embeds will be created when downloading individual posts:\n  - **True**: Creates the information file.\n  - **False**: Does not create the file.\n\n###### File Type to Save Information\n- Choose the format of the file created in **Individual Options**:\n  - **Markdown (`md`)**: File in Markdown format.\n  - **TXT (`txt`)**: File in simple text format.\n  - **Note**: Both formats use Markdown structure.\n\n###### How to Change Settings\nTo modify any of the options, simply type the corresponding number. The program will automatically toggle the value between available options (for example, from `True` to `False`).\n\n![Program Settings](img/3.png)\n\n#### Option 4: Exit Program\n\nThis option closes the program.\n\n## File Organization\n\nPosts are saved in folders to facilitate organization. The folder structure follows the pattern below:\n\n### Folder Structure\n\n1. **Platform**: A main folder is created for each platform (Kemono or Coomer).\n2. **Author**: Within the platform folder, a folder is created for each author in the format **Name-Service-Id**.\n3. 
**Posts**: Within the author's folder, there is a subfolder called `posts` where contents are organized.\n   Each post is saved in a subfolder identified by the **post ID**.\n\n### Example Folder Structure\n\n```\nKemono-and-Coomer-Downloader/\n│\n├── kemono/                                 # Kemono platform folder\n│   ├── Name-Service-Id/                    # Author folder in Name-Service-Id format\n│   │   ├── posts/                          # Author's posts folder\n│   │   │   ├── postID1/                    # Post folder with ID 1\n│   │   │   │   ├── post_content            # Post content\n│   │   │   │   ├── files.md                # (Optional) File with file information\n│   │   │   │   └── ...                     # Other post files\n│   │   │   ├── postID2/                    # Post folder with ID 2\n│   │   │   │   ├── post_content            # Post content\n│   │   │   │   └── files.txt               # (Optional) File with file information\n│   │   │   └── ...                         # Other posts\n│   │   └── ...                             # Other author content\n│   └── Name-Service-Id/                    # Another author folder in Name-Service-Id format\n│       ├── posts/                          # Author's posts folder\n│       └── ...                             # Other content\n│\n└── coomer/                                 # Coomer platform folder\n    ├── Name-Service-Id/                    # Author folder in Name-Service-Id format\n    │   ├── posts/                          # Author's posts folder\n    │   │   ├── postID1/                    # Post folder with ID 1\n    │   │   │   ├── post_content            # Post content\n    │   │   │   ├── files.txt               # (Optional) File with file information\n    │   │   │   └── ...                     # Other post files\n    │   │   └── postID2/                    # Post folder with ID 2\n    │   │       ├── post_content            # Post content\n    │   │       └── ...                     
# Other post files\n    │   └── ...                             # Other author content\n    └── Name-Service-Id/                    # Another author folder in Name-Service-Id format\n        ├── posts/                          # Author's posts folder\n        └── ...                             # Other content\n```\n\n![Folder Organization](img/pastas.png)\n\n### About the `files.md` or `files.txt` File\n\nThe `files.md` file (or `files.txt`, depending on the chosen configuration) contains the following information about each post:\n- **Title**: The post title.\n- **Description/Content**: The post content or description.\n- **Embeds**: Information about embedded elements (if any).\n- **File Links**: URLs of files present in the **Attachments**, **Videos**, and **Images** sections.\n\n![Example of files.md](img/files.png)\n\n## Contributions\n\nThis project is **open-source**, and your participation is very welcome! If you want to help improve the tool, feel free to:\n\n- **Send suggestions** for new features or improvements.\n- **Report issues** or bugs found.\n- **Submit pull requests** with your own contributions.\n\nYou can contribute in various ways through our [GitHub repository](https://github.com/e43b/Kemono-and-Coomer-Downloader/) or interact with the community on our [Discord](https://discord.gg/GNJbxzD8bK).\n\n## Author\n\nThe **Kemono and Coomer Downloader** was developed and is maintained by [E43b](https://github.com/e43b). Our goal is to make the process of downloading posts from **Kemono** and **Coomer** sites simpler, faster, and more organized, providing a smooth and accessible experience for users.\n\n## Support\n\nIf you encounter problems, bugs, or have questions, our community is ready to help! Contact us through our [Discord](https://discord.gg/GNJbxzD8bK) for support or to ask questions.\n"
  },
  {
    "path": "codeen/codes/down.py",
"content": "import os\r\nimport json\r\nimport re\r\nimport time\r\nimport requests\r\nfrom concurrent.futures import ThreadPoolExecutor\r\nimport sys\r\n\r\ndef load_config(file_path):\r\n    \"\"\"Load configuration from a JSON file.\"\"\"\r\n    if os.path.exists(file_path):\r\n        with open(file_path, \"r\", encoding=\"utf-8\") as f:\r\n            return json.load(f)\r\n    return {}  # Return an empty dictionary if the file does not exist\r\n\r\ndef sanitize_filename(filename):\r\n    \"\"\"Sanitize filename by removing invalid characters and replacing spaces with underscores.\"\"\"\r\n    filename = re.sub(r'[\\\\/*?\\\"<>|]', '', filename)\r\n    return filename.replace(' ', '_')\r\n\r\ndef download_file(file_url, save_path):\r\n    \"\"\"Download a file from a URL and save it to the specified path.\"\"\"\r\n    try:\r\n        response = requests.get(file_url, stream=True)\r\n        response.raise_for_status()\r\n        with open(save_path, 'wb') as f:\r\n            for chunk in response.iter_content(chunk_size=8192):\r\n                if chunk:\r\n                    f.write(chunk)\r\n    except Exception as e:\r\n        print(f\"Download failed for {file_url}: {e}\")\r\n\r\ndef process_post(post, base_folder):\r\n    \"\"\"Process a single post, downloading its files.\"\"\"\r\n    post_id = post.get(\"id\")\r\n    post_folder = os.path.join(base_folder, post_id)\r\n    os.makedirs(post_folder, exist_ok=True)\r\n\r\n    print(f\"Processing post ID {post_id}\")\r\n\r\n    # Prepare downloads for this post\r\n    downloads = []\r\n    for file_index, file in enumerate(post.get(\"files\", []), start=1):\r\n        original_name = file.get(\"name\")\r\n        file_url = file.get(\"url\")\r\n        sanitized_name = sanitize_filename(original_name)\r\n        new_filename = f\"{file_index}-{sanitized_name}\"\r\n        file_save_path = os.path.join(post_folder, new_filename)\r\n        downloads.append((file_url, file_save_path))\r\n\r\n    # 
Download files using ThreadPoolExecutor\r\n    with ThreadPoolExecutor(max_workers=3) as executor:\r\n        for file_url, file_save_path in downloads:\r\n            executor.submit(download_file, file_url, file_save_path)\r\n\r\n    print(f\"Post {post_id} downloaded\")\r\n\r\ndef main():\r\n    if len(sys.argv) < 2:\r\n        print(\"Usage: python down.py <json_path>\")\r\n        sys.exit(1)\r\n\r\n    # Get the JSON file path from the command-line argument\r\n    json_file_path = sys.argv[1]\r\n\r\n    # Check that the file exists\r\n    if not os.path.exists(json_file_path):\r\n        print(f\"Error: The file '{json_file_path}' was not found.\")\r\n        sys.exit(1)\r\n\r\n    # Load the JSON file\r\n    with open(json_file_path, 'r', encoding='utf-8') as f:\r\n        data = json.load(f)\r\n\r\n    # Base folder for posts\r\n    base_folder = os.path.join(os.path.dirname(json_file_path), \"posts\")\r\n    os.makedirs(base_folder, exist_ok=True)\r\n\r\n    # Path to the configuration file\r\n    config_file_path = os.path.join(\"config\", \"conf.json\")\r\n\r\n    # Load the configuration from the JSON file\r\n    config = load_config(config_file_path)\r\n\r\n    # Get the 'process_from_oldest' value from the configuration\r\n    process_from_oldest = config.get(\"process_from_oldest\", True)  # Defaults to True\r\n\r\n    posts = data.get(\"posts\", [])\r\n    if process_from_oldest:\r\n        posts = reversed(posts)\r\n\r\n    # Process each post sequentially\r\n    for post_index, post in enumerate(posts, start=1):\r\n        process_post(post, base_folder)\r\n        time.sleep(2)  # Wait 2 seconds between posts\r\n\r\nif __name__ == \"__main__\":\r\n    main()\r\n"
  },
  {
    "path": "codeen/codes/kcposts.py",
"content": "import os\r\nimport sys\r\nimport json\r\nimport requests\r\nimport re\r\nfrom html.parser import HTMLParser\r\nfrom urllib.parse import quote, urlparse, unquote\r\n\r\ndef load_config(config_path='config/conf.json'):\r\n    \"\"\"\r\n    Load settings from the conf.json file.\r\n    If the file does not exist, return default settings.\r\n    \"\"\"\r\n    try:\r\n        with open(config_path, 'r') as file:\r\n            config = json.load(file)\r\n        return {\r\n            'post_info': config.get('post_info', 'md'),  # Defaults to 'md' if not specified\r\n            'save_info': config.get('save_info', True)   # Defaults to True if not specified\r\n        }\r\n    except FileNotFoundError:\r\n        # Default settings if the file does not exist\r\n        return {\r\n            'post_info': 'md',\r\n            'save_info': True\r\n        }\r\n    except json.JSONDecodeError:\r\n        print(f\"Error decoding {config_path}. Using default settings.\")\r\n        return {\r\n            'post_info': 'md',\r\n            'save_info': True\r\n        }\r\n\r\ndef ensure_directory(path):\r\n    if not os.path.exists(path):\r\n        os.makedirs(path)\r\n\r\ndef load_profiles(path):\r\n    if os.path.exists(path):\r\n        with open(path, 'r', encoding='utf-8') as file:\r\n            return json.load(file)\r\n    return {}\r\n\r\ndef save_profiles(path, profiles):\r\n    with open(path, 'w', encoding='utf-8') as file:\r\n        json.dump(profiles, file, indent=4)\r\n\r\ndef extract_data_from_link(link):\r\n    \"\"\"\r\n    Extract service, user_id, and post_id from both kemono.su and coomer.su links\r\n    \"\"\"\r\n    # Pattern for both kemono.su and coomer.su\r\n    match = re.match(r\"https://(kemono|coomer)\\.su/([^/]+)/user/([^/]+)/post/([^/]+)\", link)\r\n    if not match:\r\n        raise ValueError(\"Invalid link format\")\r\n    \r\n    # Unpack the match groups\r\n    domain, service, user_id, post_id = 
match.groups()\r\n    \r\n    return domain, service, user_id, post_id\r\n\r\ndef get_api_base_url(domain):\r\n    \"\"\"\r\n    Dynamically generate API base URL based on the domain\r\n    \"\"\"\r\n    return f\"https://{domain}.su/api/v1/\"\r\n\r\ndef fetch_profile(domain, service, user_id):\r\n    \"\"\"\r\n    Fetch user profile with dynamic domain support\r\n    \"\"\"\r\n    api_base_url = get_api_base_url(domain)\r\n    url = f\"{api_base_url}{service}/user/{user_id}/profile\"\r\n    response = requests.get(url)\r\n    response.raise_for_status()\r\n    return response.json()\r\n\r\ndef fetch_post(domain, service, user_id, post_id):\r\n    \"\"\"\r\n    Fetch post data with dynamic domain support\r\n    \"\"\"\r\n    api_base_url = get_api_base_url(domain)\r\n    url = f\"{api_base_url}{service}/user/{user_id}/post/{post_id}\"\r\n    response = requests.get(url)\r\n    response.raise_for_status()\r\n    return response.json()\r\n\r\nclass HTMLToMarkdown(HTMLParser):\r\n    \"\"\"Parser to convert HTML content to Markdown and plain text.\"\"\"\r\n    def __init__(self):\r\n        super().__init__()\r\n        self.result = []\r\n        self.raw_content = []\r\n        self.current_link = None\r\n\r\n    def handle_starttag(self, tag, attrs):\r\n        if tag == \"a\":\r\n            href = dict(attrs).get(\"href\", \"\")\r\n            self.current_link = href\r\n            self.result.append(\"[\")  # Markdown link opening\r\n        elif tag in (\"p\", \"br\"):\r\n            self.result.append(\"\\n\")  # New line for Markdown\r\n        self.raw_content.append(self.get_starttag_text())\r\n\r\n    def handle_endtag(self, tag):\r\n        if tag == \"a\" and self.current_link:\r\n            self.result.append(f\"]({self.current_link})\")\r\n            self.current_link = None\r\n        self.raw_content.append(f\"</{tag}>\")\r\n\r\n    def handle_data(self, data):\r\n        # Append visible text to the Markdown result\r\n        if 
data.strip():\r\n            self.result.append(data.strip())\r\n        # Append all raw content for reference\r\n        self.raw_content.append(data)\r\n\r\n    def get_markdown(self):\r\n        \"\"\"Return the cleaned Markdown content.\"\"\"\r\n        return \"\".join(self.result).strip()\r\n\r\n    def get_raw_content(self):\r\n        \"\"\"Return the raw HTML content.\"\"\"\r\n        return \"\".join(self.raw_content).strip()\r\n\r\ndef clean_html_to_text(html):\r\n    \"\"\"Convert HTML to Markdown and extract the raw HTML.\"\"\"\r\n    parser = HTMLToMarkdown()\r\n    parser.feed(html)\r\n    return parser.get_markdown(), parser.get_raw_content()\r\n\r\ndef adapt_file_name(name):\r\n    \"\"\"\r\n    Sanitize a file name by removing special characters and reducing its size.\r\n    \"\"\"\r\n    sanitized = re.sub(r'[^a-zA-Z0-9]', '_', unquote(name).split('.')[0])\r\n    return sanitized[:50]  # Limit length to 50 characters\r\n\r\n\r\ndef download_files(file_list, folder_path):\r\n    \"\"\"\r\n    Download files from a list of URLs and save them with unique names in folder_path.\r\n\r\n    :param file_list: List of tuples with original name and URL [(name, url), ...]\r\n    :param folder_path: Directory to save downloaded files\r\n    \"\"\"\r\n    seen_files = set()\r\n\r\n    for idx, (original_name, url) in enumerate(file_list, start=1):\r\n        # Check if the URL is from an allowed domain\r\n        parsed_url = urlparse(url)\r\n        domain = parsed_url.netloc.split('.')[-2] + '.' 
+ parsed_url.netloc.split('.')[-1]  # Get main domain\r\n        if domain not in ['kemono.su', 'coomer.su']:\r\n            print(f\"⚠️ Ignoring URL from disallowed domain: {url}\")\r\n            continue\r\n\r\n        # Derive file extension\r\n        extension = os.path.splitext(parsed_url.path)[1] or '.bin'\r\n\r\n        # Handle case where no original name is provided\r\n        if not original_name or not original_name.strip():\r\n            sanitized_name = str(idx)\r\n        else:\r\n            sanitized_name = adapt_file_name(original_name)\r\n\r\n        # Generate unique file name\r\n        file_name = f\"{idx}-{sanitized_name}{extension}\"\r\n        if file_name in seen_files:\r\n            continue  # Skip duplicates\r\n\r\n        seen_files.add(file_name)\r\n        file_path = os.path.join(folder_path, file_name)\r\n\r\n        # Download the file\r\n        try:\r\n            response = requests.get(url, stream=True)\r\n            response.raise_for_status()\r\n            with open(file_path, 'wb') as file:\r\n                for chunk in response.iter_content(chunk_size=8192):\r\n                    file.write(chunk)\r\n            print(f\"Downloaded: {file_name}\")\r\n        except Exception as e:\r\n            print(f\"Download failed for {url}: {e}\")\r\n\r\n\r\ndef save_post_content(post_data, folder_path, config):\r\n    \"\"\"\r\n    Save post content and download files based on configuration settings.\r\n    Includes support for poll data if present.\r\n\r\n    :param post_data: Dictionary containing post information\r\n    :param folder_path: Path to save the post files\r\n    :param config: Configuration dictionary with 'post_info' and 'save_info' keys\r\n    \"\"\"\r\n    ensure_directory(folder_path)\r\n\r\n    # Verify if content should be saved based on save_info\r\n    if not config['save_info']:\r\n        return  # Do not save anything if save_info is False\r\n\r\n    # Use post_info configuration to define 
format\r\n    file_format = config['post_info'].lower()\r\n    file_extension = \".md\" if file_format == \"md\" else \".txt\"\r\n    file_name = f\"files{file_extension}\"\r\n\r\n    # Process title and content\r\n    title, raw_title = clean_html_to_text(post_data['post']['title'])\r\n    content, raw_content = clean_html_to_text(post_data['post']['content'])\r\n\r\n    # Path to save the main file\r\n    file_path = os.path.join(folder_path, file_name)\r\n    with open(file_path, 'w', encoding='utf-8') as file:\r\n        # Formatted title\r\n        if file_format == \"md\":\r\n            file.write(f\"# {title}\\n\\n\")\r\n        else:\r\n            file.write(f\"Title: {title}\\n\\n\")\r\n        \r\n        # Formatted content\r\n        file.write(f\"{content}\\n\\n\")\r\n\r\n        # Process poll if it exists\r\n        poll = post_data['post'].get('poll')\r\n        if poll:\r\n            if file_format == \"md\":\r\n                file.write(\"## Poll Information\\n\\n\")\r\n                file.write(f\"**Poll Title:** {poll.get('title', 'No Title')}\\n\")\r\n                if poll.get('description'):\r\n                    file.write(f\"\\n**Description:** {poll['description']}\\n\")\r\n                file.write(f\"\\n**Multiple Choices Allowed:** {'Yes' if poll.get('allows_multiple') else 'No'}\\n\")\r\n                file.write(f\"**Started:** {poll.get('created_at', 'N/A')}\\n\")\r\n                file.write(f\"**Closes:** {poll.get('closes_at', 'N/A')}\\n\")\r\n                file.write(f\"**Total Votes:** {poll.get('total_votes', 0)}\\n\\n\")\r\n                \r\n                # Poll choices\r\n                file.write(\"### Choices and Votes\\n\\n\")\r\n                for choice in poll.get('choices', []):\r\n                    file.write(f\"- **{choice['text']}:** {choice.get('votes', 0)} votes\\n\")\r\n            else:\r\n                file.write(\"Poll Information:\\n\\n\")\r\n                file.write(f\"Poll Title: 
{poll.get('title', 'No Title')}\\n\")\r\n                if poll.get('description'):\r\n                    file.write(f\"Description: {poll['description']}\\n\")\r\n                file.write(f\"Multiple Choices Allowed: {'Yes' if poll.get('allows_multiple') else 'No'}\\n\")\r\n                file.write(f\"Started: {poll.get('created_at', 'N/A')}\\n\")\r\n                file.write(f\"Closes: {poll.get('closes_at', 'N/A')}\\n\")\r\n                file.write(f\"Total Votes: {poll.get('total_votes', 0)}\\n\\n\")\r\n                \r\n                file.write(\"Choices and Votes:\\n\")\r\n                for choice in poll.get('choices', []):\r\n                    file.write(f\"- {choice['text']}: {choice.get('votes', 0)} votes\\n\")\r\n            \r\n            file.write(\"\\n\")\r\n\r\n        # Process embed\r\n        embed = post_data['post'].get('embed')\r\n        if embed:\r\n            if file_format == \"md\":\r\n                file.write(\"## Embedded Content\\n\")\r\n            else:\r\n                file.write(\"Embedded Content:\\n\")\r\n            file.write(f\"- URL: {embed.get('url', 'N/A')}\\n\")\r\n            file.write(f\"- Subject: {embed.get('subject', 'N/A')}\\n\")\r\n            file.write(f\"- Description: {embed.get('description', 'N/A')}\\n\")\r\n\r\n        # Separator\r\n        file.write(\"\\n---\\n\\n\")\r\n\r\n        # Raw Title and Content\r\n        if file_format == \"md\":\r\n            file.write(\"## Raw Title and Content\\n\\n\")\r\n        else:\r\n            file.write(\"Raw Title and Content:\\n\\n\")\r\n        file.write(f\"Raw Title: {raw_title}\\n\\n\")\r\n        file.write(f\"Raw Content:\\n{raw_content}\\n\\n\")\r\n\r\n        # Process attachments\r\n        attachments = post_data.get('attachments', [])\r\n        if attachments:\r\n            if file_format == \"md\":\r\n                file.write(\"## Attachments\\n\\n\")\r\n            else:\r\n                
file.write(\"Attachments:\\n\\n\")\r\n            for attach in attachments:\r\n                server_url = f\"{attach['server']}/data{attach['path']}?f={adapt_file_name(attach['name'])}\"\r\n                file.write(f\"- {attach['name']}: {server_url}\\n\")\r\n\r\n        # Process videos\r\n        videos = post_data.get('videos', [])\r\n        if videos:\r\n            if file_format == \"md\":\r\n                file.write(\"## Videos\\n\\n\")\r\n            else:\r\n                file.write(\"Videos:\\n\\n\")\r\n            for video in videos:\r\n                server_url = f\"{video['server']}/data{video['path']}?f={adapt_file_name(video['name'])}\"\r\n                file.write(f\"- {video['name']}: {server_url}\\n\")\r\n\r\n        # Process images\r\n        seen_paths = set()\r\n        images = []\r\n        for preview in post_data.get(\"previews\", []):\r\n            if 'name' in preview and 'server' in preview and 'path' in preview:\r\n                server_url = f\"{preview['server']}/data{preview['path']}\"\r\n                images.append((preview.get('name', ''), server_url))\r\n\r\n        if images:\r\n            if file_format == \"md\":\r\n                file.write(\"## Images\\n\\n\")\r\n            else:\r\n                file.write(\"Images:\\n\\n\")\r\n            for idx, (name, image_url) in enumerate(images, 1):\r\n                if file_format == \"md\":\r\n                    file.write(f\"![Image {idx}]({image_url}) - {name}\\n\")\r\n                else:\r\n                    file.write(f\"Image {idx}: {image_url} (Name: {name})\\n\")\r\n\r\n    # Consolidate all files for download\r\n    all_files_to_download = []\r\n\r\n    for attach in post_data.get('attachments', []):\r\n        if 'name' in attach and 'server' in attach and 'path' in attach:\r\n            url = f\"{attach['server']}/data{attach['path']}?f={adapt_file_name(attach['name'])}\"\r\n            all_files_to_download.append((attach['name'], 
url))\r\n\r\n    for video in post_data.get('videos', []):\r\n        if 'name' in video and 'server' in video and 'path' in video:\r\n            url = f\"{video['server']}/data{video['path']}?f={adapt_file_name(video['name'])}\"\r\n            all_files_to_download.append((video['name'], url))\r\n\r\n    for image in post_data.get('previews', []):\r\n        if 'name' in image and 'server' in image and 'path' in image:\r\n            url = f\"{image['server']}/data{image['path']}\"\r\n            all_files_to_download.append((image.get('name', ''), url))\r\n\r\n    # Remove duplicates based on URL\r\n    unique_files_to_download = list({url: (name, url) for name, url in all_files_to_download}.values())\r\n\r\n    # Download files to the specified folder\r\n    download_files(unique_files_to_download, folder_path)\r\n\r\ndef sanitize_filename(value):\r\n    \"\"\"Remove characters that can break folder creation.\"\"\"\r\n    return value.replace(\"/\", \"_\").replace(\"\\\\\", \"_\")\r\n\r\ndef main():\r\n    # Load the configuration\r\n    config = load_config()\r\n\r\n    # Check that at least one link was passed on the command line\r\n    if len(sys.argv) < 2:\r\n        print(\"Please provide at least one link as an argument.\")\r\n        print(\"Example: python kcposts.py https://kemono.su/link1 https://coomer.su/link2\")\r\n        sys.exit(1)\r\n\r\n    # Process each link passed; strip trailing commas left over from comma-separated input\r\n    links = [arg.rstrip(',') for arg in sys.argv[1:] if arg.rstrip(',')]\r\n\r\n    for user_link in links:\r\n        try:\r\n            print(f\"\\n--- Processing link: {user_link} ---\")\r\n\r\n            # Extract data from the link\r\n            domain, service, user_id, post_id = extract_data_from_link(user_link)\r\n\r\n            # Setup paths\r\n            base_path = domain  # Use domain as base path (kemono or coomer)\r\n            profiles_path = os.path.join(base_path, \"profiles.json\")\r\n\r\n            ensure_directory(base_path)\r\n\r\n            # Load existing profiles\r\n  
          profiles = load_profiles(profiles_path)\r\n\r\n            # Fetch and save profile if not already in profiles.json\r\n            if user_id not in profiles:\r\n                profile_data = fetch_profile(domain, service, user_id)\r\n                profiles[user_id] = profile_data\r\n                save_profiles(profiles_path, profiles)\r\n            else:\r\n                profile_data = profiles[user_id]\r\n\r\n            # Create a user-specific folder\r\n            user_name = sanitize_filename(profile_data.get(\"name\", \"unknown_user\"))\r\n            safe_service = sanitize_filename(service)\r\n            safe_user_id = sanitize_filename(user_id)\r\n\r\n            user_folder = os.path.join(base_path, f\"{user_name}-{safe_service}-{safe_user_id}\")\r\n            ensure_directory(user_folder)\r\n\r\n            # Create posts folder and post-specific folder\r\n            posts_folder = os.path.join(user_folder, \"posts\")\r\n            ensure_directory(posts_folder)\r\n\r\n            post_folder = os.path.join(posts_folder, post_id)\r\n            ensure_directory(post_folder)\r\n\r\n            # Fetch post data\r\n            post_data = fetch_post(domain, service, user_id, post_id)\r\n\r\n            # Save the post content using the configured settings\r\n            save_post_content(post_data, post_folder, config)\r\n\r\n            print(f\"\\n✅ Link processed successfully: {user_link}\")\r\n\r\n        except Exception as e:\r\n            print(f\"❌ Error processing link {user_link}: {e}\")\r\n            import traceback\r\n            traceback.print_exc()\r\n            continue  # Keep processing the remaining links even if one fails\r\n\r\nif __name__ == \"__main__\":\r\n    main()\r\n"
  },
  {
    "path": "codeen/codes/posts.py",
    "content": "import os\r\nimport sys\r\nimport json\r\nimport requests\r\nfrom datetime import datetime\r\n\r\ndef save_json(file_path, data):\r\n    \"\"\"Helper function to save JSON files with UTF-8 encoding and pretty formatting\"\"\"\r\n    with open(file_path, \"w\", encoding=\"utf-8\") as f:\r\n        json.dump(data, f, indent=4, ensure_ascii=False)\r\n\r\ndef load_config(file_path):\r\n    \"\"\"Carregar a configuração de um arquivo JSON.\"\"\"\r\n    if os.path.exists(file_path):\r\n        with open(file_path, \"r\", encoding=\"utf-8\") as f:\r\n            return json.load(f)\r\n    return {}  # Retorna um dicionário vazio se o arquivo não existir\r\n\r\ndef get_base_config(profile_url):\r\n    \"\"\"\r\n    Dynamically configure base URLs and directories based on the profile URL domain\r\n    \"\"\"\r\n    # Extract domain from the profile URL\r\n    domain = profile_url.split('/')[2]\r\n    \r\n    if domain not in ['kemono.su', 'coomer.su']:\r\n        raise ValueError(f\"Unsupported domain: {domain}\")\r\n    \r\n    BASE_API_URL = f\"https://{domain}/api/v1\"\r\n    BASE_SERVER = f\"https://{domain}\"\r\n    BASE_DIR = domain.split('.')[0]  # 'kemono' or 'coomer'\r\n    \r\n    return BASE_API_URL, BASE_SERVER, BASE_DIR\r\n\r\ndef is_offset(value):\r\n    \"\"\"Determina se o valor é um offset (até 5 dígitos) ou um ID.\"\"\"\r\n    try:\r\n        # Tenta converter para inteiro e verifica o comprimento\r\n        return isinstance(int(value), int) and len(value) <= 5\r\n    except ValueError:\r\n        # Se não for um número, não é offset\r\n        return False\r\n\r\ndef parse_fetch_mode(fetch_mode, total_count):\r\n    \"\"\"\r\n    Analisa o modo de busca e retorna os offsets correspondentes\r\n    \"\"\"\r\n    # Caso especial: buscar todos os posts\r\n    if fetch_mode == \"all\":\r\n        return list(range(0, total_count, 50))\r\n    \r\n    # Se for um número único (página específica)\r\n    if fetch_mode.isdigit():\r\n        if 
is_offset(fetch_mode):\r\n            return [int(fetch_mode)]\r\n        else:\r\n            # A specific ID is returned as such\r\n            return [\"id:\" + fetch_mode]\r\n\r\n    # Range case\r\n    if \"-\" in fetch_mode:\r\n        start, end = fetch_mode.split(\"-\")\r\n\r\n        # Handle the special \"start\" and \"end\" keywords\r\n        if start == \"start\":\r\n            start = 0\r\n        else:\r\n            start = int(start)\r\n\r\n        if end == \"end\":\r\n            end = total_count\r\n        else:\r\n            end = int(end)\r\n\r\n        # If the values are offsets\r\n        if start <= total_count and end <= total_count:\r\n            # Compute the number of pages needed to cover the range;\r\n            # ceil guarantees the final page is included\r\n            import math\r\n            num_pages = math.ceil((end - start) / 50)\r\n\r\n            # Generate the list of offsets\r\n            return [start + i * 50 for i in range(num_pages)]\r\n\r\n        # If they look like IDs, return the ID range\r\n        return [\"id:\" + str(start) + \"-\" + str(end)]\r\n\r\n    raise ValueError(f\"Invalid fetch mode: {fetch_mode}\")\r\n\r\ndef get_artist_info(profile_url):\r\n    # Extract service and user_id from the URL\r\n    parts = profile_url.split(\"/\")\r\n    service = parts[-3]\r\n    user_id = parts[-1]\r\n    return service, user_id\r\n\r\ndef fetch_posts(base_api_url, service, user_id, offset=0):\r\n    # Fetch posts from the API\r\n    url = f\"{base_api_url}/{service}/user/{user_id}/posts-legacy?o={offset}\"\r\n    response = requests.get(url)\r\n    response.raise_for_status()\r\n    return response.json()\r\n\r\ndef save_json_incrementally(file_path, new_posts, start_offset, end_offset):\r\n    # Build a new dictionary with the current posts\r\n    data = {\r\n        \"total_posts\": len(new_posts),\r\n        \"posts\": new_posts\r\n    
}\r\n\r\n    # Save the new file, replacing the existing one\r\n    with open(file_path, \"w\", encoding=\"utf-8\") as f:\r\n        json.dump(data, f, indent=4, ensure_ascii=False)\r\n\r\ndef process_posts(posts, previews, attachments_data, page_number, offset, base_server, save_empty_files=True, id_filter=None):\r\n    # Process posts and organize the file links\r\n    processed = []\r\n    for post in posts:\r\n        # Apply the ID filter if one was specified\r\n        if id_filter and not id_filter(post['id']):\r\n            continue\r\n\r\n        result = {\r\n            \"id\": post[\"id\"],\r\n            \"user\": post[\"user\"],\r\n            \"service\": post[\"service\"],\r\n            \"title\": post[\"title\"],\r\n            \"link\": f\"{base_server}/{post['service']}/user/{post['user']}/post/{post['id']}\",\r\n            \"page\": page_number,\r\n            \"offset\": offset,\r\n            \"files\": []\r\n        }\r\n\r\n        # Combine previews and attachments_data into a single lookup list\r\n        all_data = previews + attachments_data\r\n\r\n        # Process files in the file field\r\n        if \"file\" in post and post[\"file\"]:\r\n            matching_data = next(\r\n                (item for item in all_data if item[\"path\"] == post[\"file\"][\"path\"]),\r\n                None\r\n            )\r\n            if matching_data:\r\n                file_url = f\"{matching_data['server']}/data{post['file']['path']}\"\r\n                if file_url not in [f[\"url\"] for f in result[\"files\"]]:\r\n                    result[\"files\"].append({\"name\": post[\"file\"][\"name\"], \"url\": file_url})\r\n\r\n        # Process files in the attachments field\r\n        for attachment in post.get(\"attachments\", []):\r\n            matching_data = next(\r\n                (item for item in all_data if item[\"path\"] == attachment[\"path\"]),\r\n                None\r\n            )\r\n            if matching_data:\r\n        
        file_url = f\"{matching_data['server']}/data{attachment['path']}\"\r\n                if file_url not in [f[\"url\"] for f in result[\"files\"]]:\r\n                    result[\"files\"].append({\"name\": attachment[\"name\"], \"url\": file_url})\r\n\r\n        # Skip posts without files when save_empty_files is False\r\n        if not save_empty_files and not result[\"files\"]:\r\n            continue\r\n\r\n        processed.append(result)\r\n\r\n    return processed\r\n\r\ndef sanitize_filename(value):\r\n    \"\"\"Remove characters that can break folder creation.\"\"\"\r\n    return value.replace(\"/\", \"_\").replace(\"\\\\\", \"_\")\r\n\r\ndef main():\r\n    # Check the command-line arguments\r\n    if len(sys.argv) < 2 or len(sys.argv) > 3:\r\n        print(\"Usage: python posts.py <profile_url> [fetch_mode]\")\r\n        print(\"Possible fetch modes:\")\r\n        print(\"- all\")\r\n        print(\"- <offset>\")\r\n        print(\"- start-end\")\r\n        print(\"- <start_id>-<end_id>\")\r\n        sys.exit(1)\r\n\r\n    # Take profile_url from the first argument\r\n    profile_url = sys.argv[1]\r\n\r\n    # Set FETCH_MODE (defaults to \"all\" if not specified)\r\n    FETCH_MODE = sys.argv[2] if len(sys.argv) == 3 else \"all\"\r\n\r\n    config_file_path = os.path.join(\"config\", \"conf.json\")\r\n\r\n    # Load the configuration from the JSON file\r\n    config = load_config(config_file_path)\r\n\r\n    # Read 'get_empty_posts' from the configuration\r\n    SAVE_EMPTY_FILES = config.get(\"get_empty_posts\", False)  # Set to True to also save posts without files\r\n\r\n    # Configure base URLs dynamically\r\n    BASE_API_URL, BASE_SERVER, BASE_DIR = get_base_config(profile_url)\r\n\r\n    # Base folder\r\n    base_dir = BASE_DIR\r\n    os.makedirs(base_dir, exist_ok=True)\r\n\r\n    # Update the profiles.json file\r\n    profiles_file = os.path.join(base_dir, \"profiles.json\")\r\n    if 
os.path.exists(profiles_file):\r\n        with open(profiles_file, \"r\", encoding=\"utf-8\") as f:\r\n            profiles = json.load(f)\r\n    else:\r\n        profiles = {}\r\n\r\n    # Fetch the first batch of posts for general information\r\n    service, user_id = get_artist_info(profile_url)\r\n    initial_data = fetch_posts(BASE_API_URL, service, user_id, offset=0)\r\n    name = initial_data[\"props\"][\"name\"]\r\n    count = initial_data[\"props\"][\"count\"]\r\n\r\n    # Save the artist information\r\n    artist_info = {\r\n        \"id\": user_id,\r\n        \"name\": name,\r\n        \"service\": service,\r\n        \"indexed\": initial_data[\"props\"][\"artist\"][\"indexed\"],\r\n        \"updated\": initial_data[\"props\"][\"artist\"][\"updated\"],\r\n        \"public_id\": initial_data[\"props\"][\"artist\"][\"public_id\"],\r\n        \"relation_id\": initial_data[\"props\"][\"artist\"][\"relation_id\"],\r\n    }\r\n    profiles[user_id] = artist_info\r\n    save_json(profiles_file, profiles)\r\n\r\n    # Sanitize the values\r\n    safe_name = sanitize_filename(name)\r\n    safe_service = sanitize_filename(service)\r\n    safe_user_id = sanitize_filename(user_id)\r\n\r\n    # Artist folder\r\n    artist_dir = os.path.join(base_dir, f\"{safe_name}-{safe_service}-{safe_user_id}\")\r\n    os.makedirs(artist_dir, exist_ok=True)\r\n\r\n    # Process the fetch mode\r\n    today = datetime.now().strftime(\"%Y-%m-%d\")\r\n\r\n    try:\r\n        offsets = parse_fetch_mode(FETCH_MODE, count)\r\n    except ValueError as e:\r\n        print(e)\r\n        return\r\n\r\n    # Check whether this is a search for specific IDs\r\n    id_filter = None\r\n    found_ids = set()\r\n    if isinstance(offsets[0], str) and offsets[0].startswith(\"id:\"):\r\n        # Extract the IDs for the filter\r\n        id_range = offsets[0].split(\":\")[1]\r\n\r\n        if \"-\" in id_range:\r\n            id1, id2 = map(str, sorted(map(int, id_range.split(\"-\"))))\r\n        
    id_filter = lambda x: int(id1) <= int(x) <= int(id2)\r\n        else:\r\n            id1 = id2 = id_range\r\n            id_filter = lambda x: str(x) == id_range\r\n\r\n        # Reset the offsets to sweep every page\r\n        offsets = list(range(0, count, 50))\r\n\r\n    # JSON file name with the offset range\r\n    if len(offsets) > 1:\r\n        file_path = os.path.join(artist_dir, f\"posts-{offsets[0]}-{offsets[-1]}-{today}.json\")\r\n    else:\r\n        file_path = os.path.join(artist_dir, f\"posts-{offsets[0]}-{today}.json\")\r\n\r\n    new_posts = []\r\n    # Main processing loop\r\n    for offset in offsets:\r\n        page_number = (offset // 50) + 1\r\n        post_data = fetch_posts(BASE_API_URL, service, user_id, offset=offset)\r\n        posts = post_data[\"results\"]\r\n        previews = [item for sublist in post_data.get(\"result_previews\", []) for item in sublist]\r\n        attachments = [item for sublist in post_data.get(\"result_attachments\", []) for item in sublist]\r\n\r\n        processed_posts = process_posts(\r\n            posts,\r\n            previews,\r\n            attachments,\r\n            page_number,\r\n            offset,\r\n            BASE_SERVER,\r\n            save_empty_files=SAVE_EMPTY_FILES,\r\n            id_filter=id_filter\r\n        )\r\n        new_posts.extend(processed_posts)\r\n        # Save the accumulated posts to the JSON file incrementally\r\n        if processed_posts:\r\n            save_json_incrementally(file_path, new_posts, offset, offset + 50)\r\n\r\n            # Check whether the requested IDs have been found\r\n            if id_filter:\r\n                found_ids.update(str(post['id']) for post in processed_posts)\r\n\r\n                # Stop once both boundary IDs have been found\r\n                if (id1 in found_ids) and (id2 in found_ids):\r\n                    print(f\"Found both IDs: {id1} and {id2}\")\r\n                    break\r\n\r\n    # Print the full path of the generated JSON file\r\n    
print(os.path.abspath(file_path))\r\n\r\nif __name__ == \"__main__\":\r\n    main()\r\n"
  },
  {
    "path": "codeen/config/conf.json",
    "content": "{\r\n    \"get_empty_posts\": false,\r\n    \"process_from_oldest\": false,\r\n    \"post_info\": \"md\",\r\n    \"save_info\": true\r\n}"
  },
  {
    "path": "codeen/main.py",
    "content": "import os\nimport sys\nimport subprocess\nimport re\nimport json\nimport time\nimport importlib\n\ndef install_requirements():\n    \"\"\"Verifica e instala as dependências do requirements.txt.\"\"\"\n    requirements_file = \"requirements.txt\"\n\n    if not os.path.exists(requirements_file):\n        print(f\"Error: File {requirements_file} not found.\")\n        return\n\n    with open(requirements_file, 'r', encoding='utf-8') as req_file:\n        for line in req_file:\n            # Lê cada linha, ignora vazias ou comentários\n            package = line.strip()\n            if package and not package.startswith(\"#\"):\n                try:\n                    # Tenta importar o pacote para verificar se já está instalado\n                    package_name = package.split(\"==\")[0]  # Ignora versão específica na importação\n                    importlib.import_module(package_name)\n                except ImportError:\n                    # Se falhar, instala o pacote usando pip\n                    print(f\"Installing the package: {package}\")\n                    subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", package])\n\ndef clear_screen():\n    \"\"\"Limpa a tela do console de forma compatível com diferentes sistemas operacionais\"\"\"\n    os.system('cls' if os.name == 'nt' else 'clear')\n\ndef display_logo():\n    \"\"\"Exibe o logo do projeto\"\"\"\n    logo = \"\"\"\n _  __                                                   \n| |/ /___ _ __ ___   ___  _ __   ___                     \n| ' // _ \\ '_ ` _ \\ / _ \\| '_ \\ / _ \\                    \n| . 
\\  __/ | | | | | (_) | | | | (_) |                   \n|_|\\_\\___|_| |_| |_|\\___/|_| |_|\\___/                    \n / ___|___   ___  _ __ ___   ___ _ __                    \n| |   / _ \\ / _ \\| '_ ` _ \\ / _ \\ '__|                   \n| |__| (_) | (_) | | | | | |  __/ |                      \n \\____\\___/ \\___/|_| |_| |_|\\___|_|          _           \n|  _ \\  _____      ___ __ | | ___   __ _  __| | ___ _ __ \n| | | |/ _ \\ \\ /\\ / / '_ \\| |/ _ \\ / _` |/ _` |/ _ \\ '__|\n| |_| | (_) \\ V  V /| | | | | (_) | (_| | (_| |  __/ |   \n|____/ \\___/ \\_/\\_/ |_| |_|_|\\___/ \\__,_|\\__,_|\\___|_|   \n\nCreated by E43b\nGitHub: https://github.com/e43b\nDiscord: https://discord.gg/GNJbxzD8bK\nProject Repository: https://github.com/e43b/Kemono-and-Coomer-Downloader\nDonate: https://ko-fi.com/e43bs\n\"\"\"\n    print(logo)\n\ndef normalize_path(path):\n    \"\"\"\n    Normaliza o caminho do arquivo para lidar com caracteres não-ASCII\n    \"\"\"\n    try:\n        # Se o caminho original existir, retorna ele\n        if os.path.exists(path):\n            return path\n            \n        # Extrai o nome do arquivo e os componentes do caminho\n        filename = os.path.basename(path)\n        path_parts = path.split(os.sep)\n        \n        # Identifica se está procurando em kemono ou coomer\n        base_dir = None\n        if 'kemono' in path_parts:\n            base_dir = 'kemono'\n        elif 'coomer' in path_parts:\n            base_dir = 'coomer'\n            \n        if base_dir:\n            # Procura em todos os subdiretórios do diretório base\n            for root, dirs, files in os.walk(base_dir):\n                if filename in files:\n                    return os.path.join(root, filename)\n        \n        # Se ainda não encontrou, tenta o caminho normalizado\n        return os.path.abspath(os.path.normpath(path))\n\n    except Exception as e:\n        print(f\"Error when normalizing path: {e}\")\n        return path\n\ndef 
run_download_script(json_path):\n    \"\"\"Run the download script with the generated JSON and track progress in real time\"\"\"\n    try:\n        # Normalize the JSON path\n        json_path = normalize_path(json_path)\n\n        # Check that the JSON file exists\n        if not os.path.exists(json_path):\n            print(f\"Error: JSON file not found: {json_path}\")\n            return\n\n        # Read the settings\n        config_path = normalize_path(os.path.join('config', 'conf.json'))\n        with open(config_path, 'r', encoding='utf-8') as config_file:\n            config = json.load(config_file)\n\n        # Read the posts JSON\n        with open(json_path, 'r', encoding='utf-8') as posts_file:\n            posts_data = json.load(posts_file)\n\n        # Initial analysis\n        total_posts = posts_data['total_posts']\n        post_ids = [post['id'] for post in posts_data['posts']]\n\n        # File count\n        total_files = sum(len(post['files']) for post in posts_data['posts'])\n\n        # Print initial information\n        print(f\"Post extraction completed: {total_posts} posts found\")\n        print(f\"Total number of files to download: {total_files}\")\n        print(\"Starting post downloads\")\n\n        # Determine the processing order\n        if config.get('process_from_oldest', False):\n            post_ids = sorted(post_ids)  # Oldest to newest\n        else:\n            post_ids = sorted(post_ids, reverse=True)  # Newest to oldest\n\n        # Base folder for posts, using path normalization\n        posts_folder = normalize_path(os.path.join(os.path.dirname(json_path), 'posts'))\n        os.makedirs(posts_folder, exist_ok=True)\n\n        # Process each post\n        for idx, post_id in enumerate(post_ids, 1):\n            # Find the data for this specific post\n            post_data = next((p for p in posts_data['posts'] if p['id'] == post_id), None)\n\n            if 
post_data:\n                # Post-specific folder, normalized\n                post_folder = normalize_path(os.path.join(posts_folder, post_id))\n                os.makedirs(post_folder, exist_ok=True)\n\n                # Number of files listed in the JSON for this post\n                expected_files_count = len(post_data['files'])\n\n                # Count the files already present in the folder\n                existing_files = [f for f in os.listdir(post_folder) if os.path.isfile(os.path.join(post_folder, f))]\n                existing_files_count = len(existing_files)\n\n                # If every file is already there, skip the download\n                if existing_files_count == expected_files_count:\n                    continue\n\n                try:\n                    # Normalize the download script path\n                    download_script = normalize_path(os.path.join('codes', 'down.py'))\n\n                    # Use subprocess.Popen with a normalized path and Unicode support\n                    download_process = subprocess.Popen(\n                        [sys.executable, download_script, json_path, post_id],\n                        stdout=subprocess.PIPE,\n                        stderr=subprocess.STDOUT,\n                        universal_newlines=True,\n                        encoding='utf-8'\n                    )\n\n                    # Capture and print output in real time\n                    while True:\n                        output = download_process.stdout.readline()\n                        if output == '' and download_process.poll() is not None:\n                            break\n                        if output:\n                            print(output.strip())\n\n                    # Wait for the process to finish\n                    download_process.wait()\n\n                    # After the download, re-check the files\n                    current_files = [f for f in 
os.listdir(post_folder) if os.path.isfile(os.path.join(post_folder, f))]\n                    current_files_count = len(current_files)\n\n                    # Verificar o resultado do download\n                    if current_files_count == expected_files_count:\n                        print(f\"Post {post_id} downloaded completely ({current_files_count}/{expected_files_count} files)\")\n                    else:\n                        print(f\"Post {post_id} partially downloaded: {current_files_count}/{expected_files_count} files\")\n\n                except Exception as e:\n                    print(f\"Error while downloading post {post_id}: {e}\")\n\n                # Pequeno delay para evitar sobrecarga\n                time.sleep(0.5)\n\n        print(\"\\nAll posts have been processed!\")\n\n    except Exception as e:\n        print(f\"Unexpected error: {e}\")\n        # Adicionar mais detalhes para diagnóstico\n        import traceback\n        traceback.print_exc()\n\ndef download_specific_posts():\n    \"\"\"Opção para baixar posts específicos\"\"\"\n    clear_screen()\n    display_logo()\n    print(\"Download 1 post or a few separate posts\")\n    print(\"------------------------------------\")\n    print(\"Choose the input method:\")\n    print(\"1 - Enter the links directly\")\n    print(\"2 - Loading links from a TXT file\")\n    print(\"3 - Back to the main menu\")\n    choice = input(\"\\nEnter your choice (1/2/3): \")\n\n    links = []\n\n    if choice == '3':\n        return\n    \n    elif choice == '1':\n        print(\"Paste the links to the posts (separated by commas):\")\n        links = input(\"Links: \").split(',')\n    elif choice == '2':\n        file_path = input(\"Enter the path to the TXT file: \").strip()\n        if os.path.exists(file_path):\n            with open(file_path, 'r', encoding='utf-8') as file:\n                content = file.read()\n                links = content.split(',')\n        else:\n            print(f\"Error: 
The file '{file_path}' was not found.\")\n            input(\"\\nPress Enter to continue...\")\n            return\n    else:\n        print(\"Invalid option. Returning to the previous menu.\")\n        input(\"\\nPress Enter to continue...\")\n        return\n\n    links = [link.strip() for link in links if link.strip()]\n\n    for link in links:\n        try:\n            domain = link.split('/')[2]\n            if domain in ('kemono.su', 'coomer.su'):\n                script_path = os.path.join('codes', 'kcposts.py')\n            else:\n                print(f\"Domain not supported: {domain}\")\n                continue\n\n            # Executa o script específico para o domínio\n            subprocess.run([sys.executable, script_path, link], check=True)\n        except IndexError:\n            print(f\"Link format error: {link}\")\n        except subprocess.CalledProcessError:\n            print(f\"Error downloading the post: {link}\")\n\n    input(\"\\nPress Enter to continue...\")\n\ndef download_profile_posts():\n    \"\"\"Opção para baixar posts de um perfil\"\"\"\n    clear_screen()\n    display_logo()\n    print(\"Download Profile Posts\")\n    print(\"-----------------------\")\n    print(\"1 - Download all posts from a profile\")\n    print(\"2 - Download posts from a specific page\")\n    print(\"3 - Download posts from a range of pages\")\n    print(\"4 - Download posts between two specific posts\")\n    print(\"5 - Back to the main menu\")\n    \n    choice = input(\"\\nEnter your choice (1/2/3/4/5): \")\n    \n    if choice == '5':\n        return\n    \n    profile_link = input(\"Paste the profile link: \")\n    \n    try:\n        json_path = None\n\n        if choice == '1':\n            posts_process = subprocess.run(\n                [sys.executable, os.path.join('codes', 'posts.py'), profile_link, 'all'],\n                capture_output=True,\n                
text=True,\n                encoding='utf-8',  # Certifique-se de que a saída é decodificada corretamente\n                check=True\n            )\n\n            # Verificar se stdout contém dados\n            if posts_process.stdout:\n                for line in posts_process.stdout.split('\\n'):\n                    if line.endswith('.json'):\n                        json_path = line.strip()\n                        break\n            else:\n                print(\"No output from the sub-process.\")\n        \n        elif choice == '2':\n            page = input(\"Enter the page number (0 = first page, 50 = second, etc.): \")\n            posts_process = subprocess.run([sys.executable, os.path.join('codes', 'posts.py'), profile_link, page], \n                                           capture_output=True, text=True, encoding='utf-8', check=True)\n            for line in posts_process.stdout.split('\\n'):\n                if line.endswith('.json'):\n                    json_path = line.strip()\n                    break\n        \n        elif choice == '3':\n            start_page = input(\"Enter the start page (start, 0, 50, 100, etc.): \")\n            end_page = input(\"Enter the final page (or use end, 300, 350, 400): \")\n            posts_process = subprocess.run([sys.executable, os.path.join('codes', 'posts.py'), profile_link, f\"{start_page}-{end_page}\"], \n                                           capture_output=True, text=True, encoding='utf-8', check=True)\n            for line in posts_process.stdout.split('\\n'):\n                if line.endswith('.json'):\n                    json_path = line.strip()\n                    break\n        \n        elif choice == '4':\n            first_post = input(\"Paste the link or ID of the first post: \")\n            second_post = input(\"Paste the link or ID of the second post: \")\n            \n            first_id = first_post.split('/')[-1] if '/' in first_post else first_post\n            second_id = second_post.split('/')[-1] if '/' in 
second_post else second_post\n            \n            posts_process = subprocess.run([sys.executable, os.path.join('codes', 'posts.py'), profile_link, f\"{first_id}-{second_id}\"], \n                                           capture_output=True, text=True, encoding='utf-8', check=True)\n            for line in posts_process.stdout.split('\\n'):\n                if line.endswith('.json'):\n                    json_path = line.strip()\n                    break\n        \n        # Se um JSON foi gerado, roda o script de download\n        if json_path:\n            run_download_script(json_path)\n        else:\n            print(\"The JSON path could not be found.\")\n    \n    except subprocess.CalledProcessError as e:\n        print(f\"Error generating JSON: {e}\")\n        print(e.stderr)\n    \n    input(\"\\nPress Enter to continue...\")\n\ndef customize_settings():\n    \"\"\"Opção para personalizar configurações\"\"\"\n    config_path = os.path.join('config', 'conf.json')\n\n    # Carregar o arquivo de configuração\n    with open(config_path, 'r', encoding='utf-8') as f:\n        config = json.load(f)\n\n    while True:\n        clear_screen()\n        display_logo()\n        print(\"Customize Settings\")\n        print(\"------------------------\")\n        print(f\"1 - Fetch empty posts: {config['get_empty_posts']}\")\n        print(f\"2 - Download older posts first: {config['process_from_oldest']}\")\n        print(f\"3 - For individual posts, create a file with information (title, description, etc.): {config['save_info']}\")\n        print(f\"4 - Choose the type of file to save the information (Markdown or TXT): {config['post_info']}\")\n        print(\"5 - Back to the main menu\")\n\n        choice = input(\"\\nChoose an option (1/2/3/4/5): \")\n\n        if choice == '1':\n            config['get_empty_posts'] = not config['get_empty_posts']\n        elif choice == '2':\n            config['process_from_oldest'] = not config['process_from_oldest']\n        elif choice == 
'3':\n            config['save_info'] = not config['save_info']\n        elif choice == '4':\n            # Alternar entre \"md\" e \"txt\"\n            config['post_info'] = 'txt' if config['post_info'] == 'md' else 'md'\n        elif choice == '5':\n            # Sair do menu de configurações\n            break\n        else:\n            print(\"Invalid option. Please try again.\")\n\n        # Salvar as configurações no arquivo\n        with open(config_path, 'w', encoding='utf-8') as f:\n            json.dump(config, f, indent=4)\n\n        print(\"\\nSettings updated.\")\n        time.sleep(1)\n\ndef main_menu():\n    \"\"\"Menu principal do aplicativo\"\"\"\n    while True:\n        clear_screen()\n        display_logo()\n        print(\"Choose an option:\")\n        print(\"1 - Download 1 post or a few separate posts\")\n        print(\"2 - Download all posts from a profile\")\n        print(\"3 - Customize the program settings\")\n        print(\"4 - Exit the program\")\n        \n        choice = input(\"\\nEnter your choice (1/2/3/4): \")\n        \n        if choice == '1':\n            download_specific_posts()\n        elif choice == '2':\n            download_profile_posts()\n        elif choice == '3':\n            customize_settings()\n        elif choice == '4':\n            print(\"Leaving the program. See you later!\")\n            break\n        else:\n            input(\"Invalid option. Press Enter to continue...\")\n\nif __name__ == \"__main__\":\n    print(\"Checking dependencies...\")\n    install_requirements()\n    print(\"Dependencies verified.\\n\")\n    main_menu()\n
  },
  {
    "path": "codeen/requirements.txt",
    "content": "requests\n"
  },
  {
    "path": "codept/codes/down.py",
"content": "import os\r\nimport json\r\nimport re\r\nimport time\r\nimport requests\r\nfrom concurrent.futures import ThreadPoolExecutor\r\nimport sys\r\n\r\ndef load_config(file_path):\r\n    \"\"\"Carregar a configuração de um arquivo JSON.\"\"\"\r\n    if os.path.exists(file_path):\r\n        with open(file_path, \"r\", encoding=\"utf-8\") as f:\r\n            return json.load(f)\r\n    return {}  # Retorna um dicionário vazio se o arquivo não existir\r\n\r\ndef sanitize_filename(filename):\r\n    \"\"\"Sanitize filename by removing invalid characters and replacing spaces with underscores.\"\"\"\r\n    # Inclui ':' na lista de caracteres inválidos (necessário no Windows)\r\n    filename = re.sub(r'[\\\\/*?:\\\"<>|]', '', filename)\r\n    return filename.replace(' ', '_')\r\n\r\ndef download_file(file_url, save_path):\r\n    \"\"\"Download a file from a URL and save it to the specified path.\"\"\"\r\n    try:\r\n        response = requests.get(file_url, stream=True, timeout=30)\r\n        response.raise_for_status()\r\n        with open(save_path, 'wb') as f:\r\n            for chunk in response.iter_content(chunk_size=8192):\r\n                if chunk:\r\n                    f.write(chunk)\r\n    except Exception as e:\r\n        print(f\"Falha no download {file_url}: {e}\")\r\n\r\ndef process_post(post, base_folder):\r\n    \"\"\"Process a single post, downloading its files.\"\"\"\r\n    post_id = post.get(\"id\")\r\n    post_folder = os.path.join(base_folder, post_id)\r\n    os.makedirs(post_folder, exist_ok=True)\r\n\r\n    print(f\"Processando post ID {post_id}\")\r\n\r\n    # Prepare downloads for this post\r\n    downloads = []\r\n    for file_index, file in enumerate(post.get(\"files\", []), start=1):\r\n        original_name = file.get(\"name\")\r\n        file_url = file.get(\"url\")\r\n        sanitized_name = sanitize_filename(original_name)\r\n        new_filename = f\"{file_index}-{sanitized_name}\"\r\n        file_save_path = os.path.join(post_folder, new_filename)\r\n        downloads.append((file_url, file_save_path))\r\n\r\n    
# Download files using ThreadPoolExecutor\r\n    with ThreadPoolExecutor(max_workers=3) as executor:\r\n        for file_url, file_save_path in downloads:\r\n            executor.submit(download_file, file_url, file_save_path)\r\n\r\n    print(f\"Post {post_id} baixado\")\r\n\r\ndef main():\r\n    if len(sys.argv) < 2:\r\n        print(\"Uso: python down.py {caminho_do_json} [id_do_post]\")\r\n        sys.exit(1)\r\n\r\n    # Pega o caminho do arquivo JSON a partir do argumento da linha de comando\r\n    json_file_path = sys.argv[1]\r\n\r\n    # ID de post opcional: quando fornecido (como faz o main.py), baixa apenas esse post\r\n    post_id_filter = sys.argv[2] if len(sys.argv) > 2 else None\r\n\r\n    # Verifica se o arquivo existe\r\n    if not os.path.exists(json_file_path):\r\n        print(f\"Erro: O arquivo '{json_file_path}' não foi encontrado.\")\r\n        sys.exit(1)\r\n\r\n    # Load the JSON file\r\n    with open(json_file_path, 'r', encoding='utf-8') as f:\r\n        data = json.load(f)\r\n\r\n    # Base folder for posts\r\n    base_folder = os.path.join(os.path.dirname(json_file_path), \"posts\")\r\n    os.makedirs(base_folder, exist_ok=True)\r\n\r\n    # Caminho para o arquivo de configuração\r\n    config_file_path = os.path.join(\"config\", \"conf.json\")\r\n\r\n    # Carregar a configuração do arquivo JSON\r\n    config = load_config(config_file_path)\r\n\r\n    # Pegar o valor de 'process_from_oldest' da configuração\r\n    process_from_oldest = config.get(\"process_from_oldest\", True)  # Valor padrão é True\r\n\r\n    posts = data.get(\"posts\", [])\r\n\r\n    # Se um ID de post foi passado, filtra apenas o post correspondente\r\n    if post_id_filter:\r\n        posts = [p for p in posts if str(p.get(\"id\")) == str(post_id_filter)]\r\n\r\n    if process_from_oldest:\r\n        posts = list(reversed(posts))\r\n\r\n    # Process each post sequentially\r\n    for post_index, post in enumerate(posts, start=1):\r\n        process_post(post, base_folder)\r\n        time.sleep(2)  # Wait 2 seconds between posts\r\n\r\nif __name__ == \"__main__\":\r\n    main()\r\n"
  },
  {
    "path": "codept/codes/kcposts.py",
    "content": "import os\r\nimport sys\r\nimport json\r\nimport requests\r\nimport re\r\nfrom html.parser import HTMLParser\r\nfrom urllib.parse import quote, urlparse, unquote\r\n\r\ndef load_config(config_path='config/conf.json'):\r\n    \"\"\"\r\n    Carrega as configurações do arquivo conf.json\r\n    Se o arquivo não existir, retorna configurações padrão\r\n    \"\"\"\r\n    try:\r\n        with open(config_path, 'r') as file:\r\n            config = json.load(file)\r\n        return {\r\n            'post_info': config.get('post_info', 'md'),  # Padrão para md se não especificado\r\n            'save_info': config.get('save_info', True)   # Padrão para True se não especificado\r\n        }\r\n    except FileNotFoundError:\r\n        # Configurações padrão se o arquivo não existir\r\n        return {\r\n            'post_info': 'md',\r\n            'save_info': True\r\n        }\r\n    except json.JSONDecodeError:\r\n        print(f\"Erro ao decodificar {config_path}. Usando configurações padrão.\")\r\n        return {\r\n            'post_info': 'md',\r\n            'save_info': True\r\n        }\r\n\r\ndef ensure_directory(path):\r\n    if not os.path.exists(path):\r\n        os.makedirs(path)\r\n\r\ndef load_profiles(path):\r\n    if os.path.exists(path):\r\n        with open(path, 'r', encoding='utf-8') as file:\r\n            return json.load(file)\r\n    return {}\r\n\r\ndef save_profiles(path, profiles):\r\n    with open(path, 'w', encoding='utf-8') as file:\r\n        json.dump(profiles, file, indent=4)\r\n\r\ndef extract_data_from_link(link):\r\n    \"\"\"\r\n    Extract service, user_id, and post_id from both kemono.su and coomer.su links\r\n    \"\"\"\r\n    # Pattern for both kemono.su and coomer.su\r\n    match = re.match(r\"https://(kemono|coomer)\\.su/([^/]+)/user/([^/]+)/post/([^/]+)\", link)\r\n    if not match:\r\n        raise ValueError(\"Invalid link format\")\r\n    \r\n    # Unpack the match groups\r\n    domain, service, user_id, 
post_id = match.groups()\r\n    \r\n    return domain, service, user_id, post_id\r\n\r\ndef get_api_base_url(domain):\r\n    \"\"\"\r\n    Dynamically generate API base URL based on the domain\r\n    \"\"\"\r\n    return f\"https://{domain}.su/api/v1/\"\r\n\r\ndef fetch_profile(domain, service, user_id):\r\n    \"\"\"\r\n    Fetch user profile with dynamic domain support\r\n    \"\"\"\r\n    api_base_url = get_api_base_url(domain)\r\n    url = f\"{api_base_url}{service}/user/{user_id}/profile\"\r\n    response = requests.get(url, timeout=30)\r\n    response.raise_for_status()\r\n    return response.json()\r\n\r\ndef fetch_post(domain, service, user_id, post_id):\r\n    \"\"\"\r\n    Fetch post data with dynamic domain support\r\n    \"\"\"\r\n    api_base_url = get_api_base_url(domain)\r\n    url = f\"{api_base_url}{service}/user/{user_id}/post/{post_id}\"\r\n    response = requests.get(url, timeout=30)\r\n    response.raise_for_status()\r\n    return response.json()\r\n\r\nclass HTMLToMarkdown(HTMLParser):\r\n    \"\"\"Parser to convert HTML content to Markdown and plain text.\"\"\"\r\n    def __init__(self):\r\n        super().__init__()\r\n        self.result = []\r\n        self.raw_content = []\r\n        self.current_link = None\r\n\r\n    def handle_starttag(self, tag, attrs):\r\n        if tag == \"a\":\r\n            href = dict(attrs).get(\"href\", \"\")\r\n            self.current_link = href\r\n            self.result.append(\"[\")  # Markdown link opening\r\n        elif tag in (\"p\", \"br\"):\r\n            self.result.append(\"\\n\")  # New line for Markdown\r\n        self.raw_content.append(self.get_starttag_text())\r\n\r\n    def handle_endtag(self, tag):\r\n        if tag == \"a\" and self.current_link:\r\n            self.result.append(f\"]({self.current_link})\")\r\n            self.current_link = None\r\n        self.raw_content.append(f\"</{tag}>\")\r\n\r\n    def handle_data(self, data):\r\n        # Append visible text to the Markdown result (link text and plain text alike)\r\n        self.result.append(data.strip())\r\n        # Append all raw content for reference\r\n        self.raw_content.append(data)\r\n\r\n    def get_markdown(self):\r\n        \"\"\"Return the cleaned Markdown content.\"\"\"\r\n        return \"\".join(self.result).strip()\r\n\r\n    def get_raw_content(self):\r\n        \"\"\"Return the raw HTML content.\"\"\"\r\n        return \"\".join(self.raw_content).strip()\r\n\r\ndef clean_html_to_text(html):\r\n    \"\"\"Converts HTML to Markdown and extracts raw HTML.\"\"\"\r\n    parser = HTMLToMarkdown()\r\n    parser.feed(html)\r\n    return parser.get_markdown(), parser.get_raw_content()\r\n\r\ndef adapt_file_name(name):\r\n    \"\"\"\r\n    Sanitize file name by removing special characters and reducing its size.\r\n    \"\"\"\r\n    sanitized = re.sub(r'[^a-zA-Z0-9]', '_', unquote(name).split('.')[0])\r\n    return sanitized[:50]  # Limit length to 50 characters\r\n\r\n\r\ndef download_files(file_list, folder_path):\r\n    \"\"\"\r\n    Download files from a list of URLs and save them with unique names in the folder_path.\r\n\r\n    :param file_list: List of tuples with original name and URL [(name, url), ...]\r\n    :param folder_path: Directory to save downloaded files\r\n    \"\"\"\r\n    seen_files = set()\r\n\r\n    for idx, (original_name, url) in enumerate(file_list, start=1):\r\n        # Check if URL is from allowed domains\r\n        parsed_url = urlparse(url)\r\n        domain = parsed_url.netloc.split('.')[-2] + '.' 
+ parsed_url.netloc.split('.')[-1]  # Get main domain\r\n        if domain not in ['kemono.su', 'coomer.su']:\r\n            print(f\"⚠️ Ignorando URL de domínio não permitido: {url}\")\r\n            continue\r\n\r\n        # Derive file extension\r\n        extension = os.path.splitext(parsed_url.path)[1] or '.bin'\r\n\r\n        # Handle case where no original name is provided\r\n        if not original_name or original_name.strip() == \"\":\r\n            sanitized_name = str(idx)\r\n        else:\r\n            sanitized_name = adapt_file_name(original_name)\r\n\r\n        # Generate unique file name\r\n        file_name = f\"{idx}-{sanitized_name}{extension}\"\r\n        if file_name in seen_files:\r\n            continue  # Skip duplicates\r\n\r\n        seen_files.add(file_name)\r\n        file_path = os.path.join(folder_path, file_name)\r\n\r\n        # Download the file\r\n        try:\r\n            response = requests.get(url, stream=True)\r\n            response.raise_for_status()\r\n            with open(file_path, 'wb') as file:\r\n                for chunk in response.iter_content(chunk_size=8192):\r\n                    file.write(chunk)\r\n            print(f\"Baixado: {file_name}\")\r\n        except Exception as e:\r\n            print(f\"Falha no download {url}: {e}\")\r\n\r\n\r\ndef save_post_content(post_data, folder_path, config):\r\n    \"\"\"\r\n    Save post content and download files based on configuration settings.\r\n    Now includes support for poll data if present.\r\n    \r\n    :param post_data: Dictionary containing post information\r\n    :param folder_path: Path to save the post files\r\n    :param config: Configuration dictionary with 'post_info' and 'save_info' keys\r\n    \"\"\"\r\n    ensure_directory(folder_path)\r\n\r\n    # Verify if content should be saved based on save_info\r\n    if not config['save_info']:\r\n        return  # Do not save anything if save_info is False\r\n\r\n    # Use post_info configuration to 
define format\r\n    file_format = config['post_info'].lower()\r\n    file_extension = \".md\" if file_format == \"md\" else \".txt\"\r\n    file_name = f\"files{file_extension}\"\r\n\r\n    # Process title and content\r\n    title, raw_title = clean_html_to_text(post_data['post']['title'])\r\n    content, raw_content = clean_html_to_text(post_data['post']['content'])\r\n\r\n    # Path to save the main file\r\n    file_path = os.path.join(folder_path, file_name)\r\n    with open(file_path, 'w', encoding='utf-8') as file:\r\n        # Formatted title\r\n        if file_format == \"md\":\r\n            file.write(f\"# {title}\\n\\n\")\r\n        else:\r\n            file.write(f\"Title: {title}\\n\\n\")\r\n        \r\n        # Formatted content\r\n        file.write(f\"{content}\\n\\n\")\r\n\r\n        # Process poll if it exists\r\n        poll = post_data['post'].get('poll')\r\n        if poll:\r\n            if file_format == \"md\":\r\n                file.write(\"## Poll Information\\n\\n\")\r\n                file.write(f\"**Poll Title:** {poll.get('title', 'No Title')}\\n\")\r\n                if poll.get('description'):\r\n                    file.write(f\"\\n**Description:** {poll['description']}\\n\")\r\n                file.write(f\"\\n**Multiple Choices Allowed:** {'Yes' if poll.get('allows_multiple') else 'No'}\\n\")\r\n                file.write(f\"**Started:** {poll.get('created_at', 'N/A')}\\n\")\r\n                file.write(f\"**Closes:** {poll.get('closes_at', 'N/A')}\\n\")\r\n                file.write(f\"**Total Votes:** {poll.get('total_votes', 0)}\\n\\n\")\r\n                \r\n                # Poll choices\r\n                file.write(\"### Choices and Votes\\n\\n\")\r\n                for choice in poll.get('choices', []):\r\n                    file.write(f\"- **{choice['text']}:** {choice.get('votes', 0)} votes\\n\")\r\n            else:\r\n                file.write(\"Poll Information:\\n\\n\")\r\n                file.write(f\"Poll 
Title: {poll.get('title', 'No Title')}\\n\")\r\n                if poll.get('description'):\r\n                    file.write(f\"Description: {poll['description']}\\n\")\r\n                file.write(f\"Multiple Choices Allowed: {'Yes' if poll.get('allows_multiple') else 'No'}\\n\")\r\n                file.write(f\"Started: {poll.get('created_at', 'N/A')}\\n\")\r\n                file.write(f\"Closes: {poll.get('closes_at', 'N/A')}\\n\")\r\n                file.write(f\"Total Votes: {poll.get('total_votes', 0)}\\n\\n\")\r\n                \r\n                file.write(\"Choices and Votes:\\n\")\r\n                for choice in poll.get('choices', []):\r\n                    file.write(f\"- {choice['text']}: {choice.get('votes', 0)} votes\\n\")\r\n            \r\n            file.write(\"\\n\")\r\n\r\n        # Process embed\r\n        embed = post_data['post'].get('embed')\r\n        if embed:\r\n            if file_format == \"md\":\r\n                file.write(\"## Embedded Content\\n\")\r\n            else:\r\n                file.write(\"Embedded Content:\\n\")\r\n            file.write(f\"- URL: {embed.get('url', 'N/A')}\\n\")\r\n            file.write(f\"- Subject: {embed.get('subject', 'N/A')}\\n\")\r\n            file.write(f\"- Description: {embed.get('description', 'N/A')}\\n\")\r\n\r\n        # Separator\r\n        file.write(\"\\n---\\n\\n\")\r\n\r\n        # Raw Title and Content\r\n        if file_format == \"md\":\r\n            file.write(\"## Raw Title and Content\\n\\n\")\r\n        else:\r\n            file.write(\"Raw Title and Content:\\n\\n\")\r\n        file.write(f\"Raw Title: {raw_title}\\n\\n\")\r\n        file.write(f\"Raw Content:\\n{raw_content}\\n\\n\")\r\n\r\n        # Process attachments\r\n        attachments = post_data.get('attachments', [])\r\n        if attachments:\r\n            if file_format == \"md\":\r\n                file.write(\"## Attachments\\n\\n\")\r\n            else:\r\n                
file.write(\"Attachments:\\n\\n\")\r\n            for attach in attachments:\r\n                server_url = f\"{attach['server']}/data{attach['path']}?f={adapt_file_name(attach['name'])}\"\r\n                file.write(f\"- {attach['name']}: {server_url}\\n\")\r\n\r\n        # Process videos\r\n        videos = post_data.get('videos', [])\r\n        if videos:\r\n            if file_format == \"md\":\r\n                file.write(\"## Videos\\n\\n\")\r\n            else:\r\n                file.write(\"Videos:\\n\\n\")\r\n            for video in videos:\r\n                server_url = f\"{video['server']}/data{video['path']}?f={adapt_file_name(video['name'])}\"\r\n                file.write(f\"- {video['name']}: {server_url}\\n\")\r\n\r\n        # Process images\r\n        seen_paths = set()\r\n        images = []\r\n        for preview in post_data.get(\"previews\", []):\r\n            if 'name' in preview and 'server' in preview and 'path' in preview:\r\n                server_url = f\"{preview['server']}/data{preview['path']}\"\r\n                images.append((preview.get('name', ''), server_url))\r\n\r\n        if images:\r\n            if file_format == \"md\":\r\n                file.write(\"## Images\\n\\n\")\r\n            else:\r\n                file.write(\"Images:\\n\\n\")\r\n            for idx, (name, image_url) in enumerate(images, 1):\r\n                if file_format == \"md\":\r\n                    file.write(f\"![Image {idx}]({image_url}) - {name}\\n\")\r\n                else:\r\n                    file.write(f\"Image {idx}: {image_url} (Name: {name})\\n\")\r\n\r\n    # Consolidate all files for download\r\n    all_files_to_download = []\r\n\r\n    for attach in post_data.get('attachments', []):\r\n        if 'name' in attach and 'server' in attach and 'path' in attach:\r\n            url = f\"{attach['server']}/data{attach['path']}?f={adapt_file_name(attach['name'])}\"\r\n            all_files_to_download.append((attach['name'], 
url))\r\n\r\n    for video in post_data.get('videos', []):\r\n        if 'name' in video and 'server' in video and 'path' in video:\r\n            url = f\"{video['server']}/data{video['path']}?f={adapt_file_name(video['name'])}\"\r\n            all_files_to_download.append((video['name'], url))\r\n\r\n    for image in post_data.get('previews', []):\r\n        if 'name' in image and 'server' in image and 'path' in image:\r\n            url = f\"{image['server']}/data{image['path']}\"\r\n            all_files_to_download.append((image.get('name', ''), url))\r\n\r\n    # Remove duplicates based on URL\r\n    unique_files_to_download = list({url: (name, url) for name, url in all_files_to_download}.values())\r\n\r\n    # Download files to the specified folder\r\n    download_files(unique_files_to_download, folder_path)\r\n\r\ndef sanitize_filename(value):\r\n    \"\"\"Remove caracteres que podem quebrar a criação de pastas.\"\"\"\r\n    # Substitui todos os caracteres inválidos em nomes de pasta (Windows e Unix)\r\n    return re.sub(r'[\\\\/*?:\\\"<>|]', '_', value)\r\n\r\ndef main():\r\n    # Carregar configurações\r\n    config = load_config()\r\n\r\n    # Verificar se links foram passados por linha de comando\r\n    if len(sys.argv) < 2:\r\n        print(\"Por favor, forneça pelo menos um link como argumento.\")\r\n        print(\"Exemplo: python kcposts.py https://kemono.su/link1 https://coomer.su/link2\")\r\n        sys.exit(1)\r\n\r\n    # Processar cada link passado\r\n    links = sys.argv[1:]\r\n    \r\n    for user_link in links:\r\n        try:\r\n            print(f\"\\n--- Processando link: {user_link} ---\")\r\n            \r\n            # Extract data from the link\r\n            domain, service, user_id, post_id = extract_data_from_link(user_link)\r\n\r\n            # Setup paths\r\n            base_path = domain  # Use domain as base path (kemono or coomer)\r\n            profiles_path = os.path.join(base_path, \"profiles.json\")\r\n\r\n            ensure_directory(base_path)\r\n\r\n            # Load existing 
profiles\r\n            profiles = load_profiles(profiles_path)\r\n\r\n            # Fetch and save profile if not already in profiles.json\r\n            if user_id not in profiles:\r\n                profile_data = fetch_profile(domain, service, user_id)\r\n                profiles[user_id] = profile_data\r\n                save_profiles(profiles_path, profiles)\r\n            else:\r\n                profile_data = profiles[user_id]\r\n\r\n            # Criar pasta específica para o usuário\r\n            user_name = sanitize_filename(profile_data.get(\"name\", \"unknown_user\"))\r\n            safe_service = sanitize_filename(service)\r\n            safe_user_id = sanitize_filename(user_id)\r\n\r\n            user_folder = os.path.join(base_path, f\"{user_name}-{safe_service}-{safe_user_id}\")\r\n            ensure_directory(user_folder)\r\n\r\n            # Create posts folder and post-specific folder\r\n            posts_folder = os.path.join(user_folder, \"posts\")\r\n            ensure_directory(posts_folder)\r\n\r\n            post_folder = os.path.join(posts_folder, post_id)\r\n            ensure_directory(post_folder)\r\n\r\n            # Fetch post data\r\n            post_data = fetch_post(domain, service, user_id, post_id)\r\n            \r\n            # Salvar conteúdo do post usando as configurações\r\n            save_post_content(post_data, post_folder, config)\r\n\r\n            print(f\"\\n✅ Link processado com sucesso: {user_link}\")\r\n\r\n        except Exception as e:\r\n            print(f\"❌ Erro ao processar link {user_link}: {e}\")\r\n            import traceback\r\n            traceback.print_exc()\r\n            continue  # Continua processando próximos links mesmo se um falhar\r\n\r\nif __name__ == \"__main__\":\r\n    main()\r\n"
  },
  {
    "path": "codept/codes/posts.py",
"content": "import os\r\nimport sys\r\nimport json\r\nimport requests\r\nfrom datetime import datetime\r\n\r\ndef save_json(file_path, data):\r\n    \"\"\"Helper function to save JSON files with UTF-8 encoding and pretty formatting\"\"\"\r\n    with open(file_path, \"w\", encoding=\"utf-8\") as f:\r\n        json.dump(data, f, indent=4, ensure_ascii=False)\r\n\r\ndef load_config(file_path):\r\n    \"\"\"Carregar a configuração de um arquivo JSON.\"\"\"\r\n    if os.path.exists(file_path):\r\n        with open(file_path, \"r\", encoding=\"utf-8\") as f:\r\n            return json.load(f)\r\n    return {}  # Retorna um dicionário vazio se o arquivo não existir\r\n\r\ndef get_base_config(profile_url):\r\n    \"\"\"\r\n    Dynamically configure base URLs and directories based on the profile URL domain\r\n    \"\"\"\r\n    # Extract domain from the profile URL\r\n    domain = profile_url.split('/')[2]\r\n    \r\n    if domain not in ['kemono.su', 'coomer.su']:\r\n        raise ValueError(f\"Unsupported domain: {domain}\")\r\n    \r\n    BASE_API_URL = f\"https://{domain}/api/v1\"\r\n    BASE_SERVER = f\"https://{domain}\"\r\n    BASE_DIR = domain.split('.')[0]  # 'kemono' or 'coomer'\r\n    \r\n    return BASE_API_URL, BASE_SERVER, BASE_DIR\r\n\r\ndef is_offset(value):\r\n    \"\"\"Determina se o valor é um offset (até 5 dígitos) ou um ID.\"\"\"\r\n    try:\r\n        # Valida que o valor é numérico e verifica o comprimento\r\n        int(value)\r\n        return len(value) <= 5\r\n    except ValueError:\r\n        # Se não for um número, não é offset\r\n        return False\r\n\r\ndef parse_fetch_mode(fetch_mode, total_count):\r\n    \"\"\"\r\n    Analisa o modo de busca e retorna os offsets correspondentes\r\n    \"\"\"\r\n    # Caso especial: buscar todos os posts\r\n    if fetch_mode == \"all\":\r\n        return list(range(0, total_count, 50))\r\n    \r\n    # Se for um número único (página específica)\r\n    if fetch_mode.isdigit():\r\n        if 
is_offset(fetch_mode):\r\n            return [int(fetch_mode)]\r\n        else:\r\n            # Se for um ID específico, retorna como tal\r\n            return [\"id:\" + fetch_mode]\r\n    \r\n    # Caso seja um intervalo\r\n    if \"-\" in fetch_mode:\r\n        start, end = fetch_mode.split(\"-\")\r\n        \r\n        # Tratar \"start\" e \"end\" especificamente\r\n        if start == \"start\":\r\n            start = 0\r\n        else:\r\n            start = int(start)\r\n        \r\n        if end == \"end\":\r\n            end = total_count\r\n        else:\r\n            end = int(end)\r\n        \r\n        # Se os valores são offsets\r\n        if start <= total_count and end <= total_count:\r\n            # Calcular o número de páginas necessárias para cobrir o intervalo\r\n            # Usa ceil para garantir que inclua a página final\r\n            import math\r\n            num_pages = math.ceil((end - start) / 50)\r\n            \r\n            # Gerar lista de offsets\r\n            return [start + i * 50 for i in range(num_pages)]\r\n        \r\n        # Se parecem ser IDs, retorna o intervalo de IDs\r\n        return [\"id:\" + str(start) + \"-\" + str(end)]\r\n    \r\n    raise ValueError(f\"Modo de busca inválido: {fetch_mode}\")\r\n\r\ndef get_artist_info(profile_url):\r\n    # Extrair serviço e user_id do URL\r\n    parts = profile_url.split(\"/\")\r\n    service = parts[-3]\r\n    user_id = parts[-1]\r\n    return service, user_id\r\n\r\ndef fetch_posts(base_api_url, service, user_id, offset=0):\r\n    # Buscar posts da API\r\n    url = f\"{base_api_url}/{service}/user/{user_id}/posts-legacy?o={offset}\"\r\n    response = requests.get(url)\r\n    response.raise_for_status()\r\n    return response.json()\r\n\r\ndef save_json_incrementally(file_path, new_posts, start_offset, end_offset):\r\n    # Criar um novo dicionário com os posts atuais\r\n    data = {\r\n        \"total_posts\": len(new_posts),\r\n        \"posts\": new_posts\r\n    
}\r\n    \r\n    # Salvar o novo arquivo, substituindo o existente\r\n    with open(file_path, \"w\", encoding=\"utf-8\") as f:\r\n        json.dump(data, f, indent=4, ensure_ascii=False)\r\n\r\ndef process_posts(posts, previews, attachments_data, page_number, offset, base_server, save_empty_files=True, id_filter=None):\r\n    # Processar posts e organizar os links dos arquivos\r\n    processed = []\r\n    for post in posts:\r\n        # Filtro de ID se especificado\r\n        if id_filter and not id_filter(post['id']):\r\n            continue\r\n\r\n        result = {\r\n            \"id\": post[\"id\"],\r\n            \"user\": post[\"user\"],\r\n            \"service\": post[\"service\"],\r\n            \"title\": post[\"title\"],\r\n            \"link\": f\"{base_server}/{post['service']}/user/{post['user']}/post/{post['id']}\",\r\n            \"page\": page_number,\r\n            \"offset\": offset,\r\n            \"files\": []\r\n        }\r\n\r\n        # Combina previews e attachments_data em uma única lista para busca\r\n        all_data = previews + attachments_data\r\n\r\n        # Processar arquivos no campo file\r\n        if \"file\" in post and post[\"file\"]:\r\n            matching_data = next(\r\n                (item for item in all_data if item[\"path\"] == post[\"file\"][\"path\"]),\r\n                None\r\n            )\r\n            if matching_data:\r\n                file_url = f\"{matching_data['server']}/data{post['file']['path']}\"\r\n                if file_url not in [f[\"url\"] for f in result[\"files\"]]:\r\n                    result[\"files\"].append({\"name\": post[\"file\"][\"name\"], \"url\": file_url})\r\n\r\n        # Processar arquivos no campo attachments\r\n        for attachment in post.get(\"attachments\", []):\r\n            matching_data = next(\r\n                (item for item in all_data if item[\"path\"] == attachment[\"path\"]),\r\n                None\r\n            )\r\n            if matching_data:\r\n        
        file_url = f\"{matching_data['server']}/data{attachment['path']}\"\r\n                if file_url not in [f[\"url\"] for f in result[\"files\"]]:\r\n                    result[\"files\"].append({\"name\": attachment[\"name\"], \"url\": file_url})\r\n\r\n        # Ignorar posts sem arquivos se save_empty_files for False\r\n        if not save_empty_files and not result[\"files\"]:\r\n            continue\r\n\r\n        processed.append(result)\r\n\r\n    return processed\r\n\r\ndef sanitize_filename(value):\r\n    \"\"\"Remove caracteres que podem quebrar a criação de pastas.\"\"\"\r\n    return value.replace(\"/\", \"_\").replace(\"\\\\\", \"_\")\r\n\r\ndef main():\r\n    # Verificar argumentos de linha de comando\r\n    if len(sys.argv) < 2 or len(sys.argv) > 3:\r\n        print(\"Uso: python posts.py <profile_url> [fetch_mode]\")\r\n        print(\"Modos de busca possíveis:\")\r\n        print(\"- all\")\r\n        print(\"- <número de página>\")\r\n        print(\"- start-end\")\r\n        print(\"- <id_inicial>-<id_final>\")\r\n        sys.exit(1)\r\n\r\n    # Definir profile_url do argumento\r\n    profile_url = sys.argv[1]\r\n    \r\n    # Definir FETCH_MODE (padrão para \"all\" se não especificado)\r\n    FETCH_MODE = sys.argv[2] if len(sys.argv) == 3 else \"all\"\r\n    \r\n    config_file_path = os.path.join(\"config\", \"conf.json\")\r\n\r\n    # Carregar a configuração do arquivo JSON\r\n    config = load_config(config_file_path)\r\n\r\n    # Pegar o valor de 'get_empty_posts' da configuração\r\n    SAVE_EMPTY_FILES = config.get(\"get_empty_posts\", False)  # Defina \"get_empty_posts\" como true no conf.json para salvar posts sem arquivos\r\n\r\n    # Configurar base URLs dinamicamente\r\n    BASE_API_URL, BASE_SERVER, BASE_DIR = get_base_config(profile_url)\r\n    \r\n    # Pasta base\r\n    base_dir = BASE_DIR\r\n    os.makedirs(base_dir, exist_ok=True)\r\n\r\n    # Atualizar o arquivo profiles.json\r\n    profiles_file = os.path.join(base_dir, \"profiles.json\")\r\n
    if os.path.exists(profiles_file):\r\n        with open(profiles_file, \"r\", encoding=\"utf-8\") as f:\r\n            profiles = json.load(f)\r\n    else:\r\n        profiles = {}\r\n\r\n    # Buscar primeiro conjunto de posts para informações gerais\r\n    service, user_id = get_artist_info(profile_url)\r\n    initial_data = fetch_posts(BASE_API_URL, service, user_id, offset=0)\r\n    name = initial_data[\"props\"][\"name\"]\r\n    count = initial_data[\"props\"][\"count\"]\r\n\r\n    # Salvar informações do artista\r\n    artist_info = {\r\n        \"id\": user_id,\r\n        \"name\": name,\r\n        \"service\": service,\r\n        \"indexed\": initial_data[\"props\"][\"artist\"][\"indexed\"],\r\n        \"updated\": initial_data[\"props\"][\"artist\"][\"updated\"],\r\n        \"public_id\": initial_data[\"props\"][\"artist\"][\"public_id\"],\r\n        \"relation_id\": initial_data[\"props\"][\"artist\"][\"relation_id\"],\r\n    }\r\n    profiles[user_id] = artist_info\r\n    save_json(profiles_file, profiles)\r\n\r\n    # Sanitizar os valores\r\n    safe_name = sanitize_filename(name)\r\n    safe_service = sanitize_filename(service)\r\n    safe_user_id = sanitize_filename(user_id)\r\n\r\n    # Pasta do artista\r\n    artist_dir = os.path.join(base_dir, f\"{safe_name}-{safe_service}-{safe_user_id}\")\r\n    os.makedirs(artist_dir, exist_ok=True)\r\n\r\n    # Processar modo de busca\r\n    today = datetime.now().strftime(\"%Y-%m-%d\")\r\n    \r\n    try:\r\n        offsets = parse_fetch_mode(FETCH_MODE, count)\r\n    except ValueError as e:\r\n        print(e)\r\n        return\r\n\r\n    # Verificar se é busca por ID específico\r\n    id_filter = None\r\n    found_ids = set()\r\n    if isinstance(offsets[0], str) and offsets[0].startswith(\"id:\"):\r\n        # Extrair IDs para filtro\r\n        id_range = offsets[0].split(\":\")[1]\r\n        \r\n        if \"-\" in id_range:\r\n            # Comparar como inteiros para evitar ordenação lexicográfica incorreta (ex.: \"100\" < \"99\")\r\n            id1, id2 = sorted(map(int, id_range.split(\"-\")))\r\n
            id_filter = lambda x: str(x).isdigit() and id1 <= int(x) <= id2\r\n        else:\r\n            # ID único: usar o mesmo ID como início e fim do intervalo\r\n            id1 = id2 = id_range\r\n            id_filter = lambda x: str(x) == id_range\r\n\r\n        # Redefinir offsets para varrer todas as páginas\r\n        offsets = list(range(0, count, 50))\r\n\r\n    # Nome do arquivo JSON com range de offsets\r\n    if len(offsets) > 1:\r\n        file_path = os.path.join(artist_dir, f\"posts-{offsets[0]}-{offsets[-1]}-{today}.json\")\r\n    else:\r\n        file_path = os.path.join(artist_dir, f\"posts-{offsets[0]}-{today}.json\")\r\n\r\n    new_posts = []\r\n    # Processamento principal\r\n    for offset in offsets:\r\n        page_number = (offset // 50) + 1\r\n        post_data = fetch_posts(BASE_API_URL, service, user_id, offset=offset)\r\n        posts = post_data[\"results\"]\r\n        previews = [item for sublist in post_data.get(\"result_previews\", []) for item in sublist]\r\n        attachments = [item for sublist in post_data.get(\"result_attachments\", []) for item in sublist]\r\n\r\n        processed_posts = process_posts(\r\n            posts, \r\n            previews, \r\n            attachments, \r\n            page_number, \r\n            offset, \r\n            BASE_SERVER,\r\n            save_empty_files=SAVE_EMPTY_FILES,\r\n            id_filter=id_filter\r\n        )\r\n        new_posts.extend(processed_posts)\r\n        # Salvar posts incrementais no JSON\r\n        if processed_posts:\r\n            save_json_incrementally(file_path, new_posts, offset, offset+50)\r\n            \r\n            # Verificar se encontrou os IDs desejados\r\n            if id_filter:\r\n                found_ids.update(str(post['id']) for post in processed_posts)\r\n                \r\n                # Verificar se encontrou ambos os IDs\r\n                if str(id1) in found_ids and str(id2) in found_ids:\r\n                    print(f\"Encontrados ambos os IDs: {id1} e {id2}\")\r\n                    break\r\n\r\n    # Imprimir o caminho completo do arquivo JSON gerado\r\n
    print(f\"{os.path.abspath(file_path)}\")\r\n\r\nif __name__ == \"__main__\":\r\n    main()\r\n"
  },
  {
    "path": "codept/config/conf.json",
    "content": "{\r\n    \"get_empty_posts\": false,\r\n    \"process_from_oldest\": false,\r\n    \"post_info\": \"md\",\r\n    \"save_info\": true\r\n}"
  },
  {
    "path": "codept/main.py",
    "content": "import os\nimport sys\nimport subprocess\nimport re\nimport json\nimport time\nimport importlib\n\ndef install_requirements():\n    \"\"\"Verifica e instala as dependências do requirements.txt.\"\"\"\n    requirements_file = \"requirements.txt\"\n\n    if not os.path.exists(requirements_file):\n        print(f\"Erro: Arquivo {requirements_file} não encontrado.\")\n        return\n\n    with open(requirements_file, 'r', encoding='utf-8') as req_file:\n        for line in req_file:\n            # Lê cada linha, ignora vazias ou comentários\n            package = line.strip()\n            if package and not package.startswith(\"#\"):\n                try:\n                    # Tenta importar o pacote para verificar se já está instalado\n                    package_name = package.split(\"==\")[0]  # Ignora versão específica na importação\n                    importlib.import_module(package_name)\n                except ImportError:\n                    # Se falhar, instala o pacote usando pip\n                    print(f\"Instalando o pacote: {package}\")\n                    subprocess.check_call([sys.executable, \"-m\", \"pip\", \"install\", package])\n\ndef clear_screen():\n    \"\"\"Limpa a tela do console de forma compatível com diferentes sistemas operacionais\"\"\"\n    os.system('cls' if os.name == 'nt' else 'clear')\n\ndef display_logo():\n    \"\"\"Exibe o logo do projeto\"\"\"\n    logo = \"\"\"\n _  __                                                   \n| |/ /___ _ __ ___   ___  _ __   ___                     \n| ' // _ \\ '_ ` _ \\ / _ \\| '_ \\ / _ \\                    \n| . 
\\  __/ | | | | | (_) | | | | (_) |                   \n|_|\\_\\___|_| |_| |_|\\___/|_| |_|\\___/                    \n / ___|___   ___  _ __ ___   ___ _ __                    \n| |   / _ \\ / _ \\| '_ ` _ \\ / _ \\ '__|                   \n| |__| (_) | (_) | | | | | |  __/ |                      \n \\____\\___/ \\___/|_| |_| |_|\\___|_|          _           \n|  _ \\  _____      ___ __ | | ___   __ _  __| | ___ _ __ \n| | | |/ _ \\ \\ /\\ / / '_ \\| |/ _ \\ / _` |/ _` |/ _ \\ '__|\n| |_| | (_) \\ V  V /| | | | | (_) | (_| | (_| |  __/ |   \n|____/ \\___/ \\_/\\_/ |_| |_|_|\\___/ \\__,_|\\__,_|\\___|_|   \n\nCriado por E43b\nGitHub: https://github.com/e43b\nDiscord: https://discord.gg/GNJbxzD8bK\nRepositório do Projeto: https://github.com/e43b/Kemono-and-Coomer-Downloader\nFaça uma Doação: https://ko-fi.com/e43bs\n\"\"\"\n    print(logo)\n\ndef normalize_path(path):\n    \"\"\"\n    Normaliza o caminho do arquivo para lidar com caracteres não-ASCII\n    \"\"\"\n    try:\n        # Se o caminho original existir, retorna ele\n        if os.path.exists(path):\n            return path\n            \n        # Extrai o nome do arquivo e os componentes do caminho\n        filename = os.path.basename(path)\n        path_parts = path.split(os.sep)\n        \n        # Identifica se está procurando em kemono ou coomer\n        base_dir = None\n        if 'kemono' in path_parts:\n            base_dir = 'kemono'\n        elif 'coomer' in path_parts:\n            base_dir = 'coomer'\n            \n        if base_dir:\n            # Procura em todos os subdiretórios do diretório base\n            for root, dirs, files in os.walk(base_dir):\n                if filename in files:\n                    return os.path.join(root, filename)\n        \n        # Se ainda não encontrou, tenta o caminho normalizado\n        return os.path.abspath(os.path.normpath(path))\n\n    except Exception as e:\n        print(f\"Erro ao normalizar caminho: {e}\")\n        return path\n\ndef 
run_download_script(json_path):\n    \"\"\"Roda o script de download com o JSON gerado e faz tracking detalhado em tempo real\"\"\"\n    try:\n        # Normalizar o caminho do JSON\n        json_path = normalize_path(json_path)\n\n        # Verificar se o arquivo JSON existe\n        if not os.path.exists(json_path):\n            print(f\"Erro: Arquivo JSON não encontrado: {json_path}\")\n            return\n\n        # Ler configurações\n        config_path = normalize_path(os.path.join('config', 'conf.json'))\n        with open(config_path, 'r', encoding='utf-8') as config_file:\n            config = json.load(config_file)\n\n        # Ler o JSON de posts\n        with open(json_path, 'r', encoding='utf-8') as posts_file:\n            posts_data = json.load(posts_file)\n\n        # Análise inicial\n        total_posts = posts_data['total_posts']\n        post_ids = [post['id'] for post in posts_data['posts']]\n\n        # Contagem de arquivos\n        total_files = sum(len(post['files']) for post in posts_data['posts'])\n\n        # Imprimir informações iniciais\n        print(f\"Extração de posts concluída: {total_posts} posts encontrados\")\n        print(f\"Número total de arquivos a baixar: {total_files}\")\n        print(\"Iniciando downloads de posts\")\n\n        # Determinar ordem de processamento\n        if config['process_from_oldest']:\n            post_ids = sorted(post_ids)  # Ordem do mais antigo ao mais recente\n        else:\n            post_ids = sorted(post_ids, reverse=True)  # Ordem do mais recente ao mais antigo\n\n        # Pasta base para posts usando normalização de caminho\n        posts_folder = normalize_path(os.path.join(os.path.dirname(json_path), 'posts'))\n        os.makedirs(posts_folder, exist_ok=True)\n\n        # Processar cada post\n        for idx, post_id in enumerate(post_ids, 1):\n            # Encontrar dados do post específico\n            post_data = next((p for p in posts_data['posts'] if p['id'] == post_id), 
None)\n\n            if post_data:\n                # Pasta do post específico com normalização\n                post_folder = normalize_path(os.path.join(posts_folder, post_id))\n                os.makedirs(post_folder, exist_ok=True)\n\n                # Contar número de arquivos no JSON para este post\n                expected_files_count = len(post_data['files'])\n\n                # Contar arquivos já existentes na pasta\n                existing_files = [f for f in os.listdir(post_folder) if os.path.isfile(os.path.join(post_folder, f))]\n                existing_files_count = len(existing_files)\n\n                # Se já tem todos os arquivos, pula o download\n                if existing_files_count == expected_files_count:\n                    continue\n                \n                try:\n                    # Normalizar caminho do script de download\n                    download_script = normalize_path(os.path.join('codes', 'down.py'))\n                    \n                    # Use subprocess.Popen com caminho normalizado e suporte a Unicode\n                    download_process = subprocess.Popen(\n                        [sys.executable, download_script, json_path, post_id], \n                        stdout=subprocess.PIPE, \n                        stderr=subprocess.STDOUT, \n                        universal_newlines=True,\n                        encoding='utf-8'\n                    )\n\n                    # Capturar e imprimir output em tempo real\n                    while True:\n                        output = download_process.stdout.readline()\n                        if output == '' and download_process.poll() is not None:\n                            break\n                        if output:\n                            print(output.strip())\n\n                    # Verificar código de retorno\n                    download_process.wait()\n\n                    # Após o download, verificar novamente os arquivos\n                    
current_files = [f for f in os.listdir(post_folder) if os.path.isfile(os.path.join(post_folder, f))]\n                    current_files_count = len(current_files)\n\n                    # Verificar o resultado do download\n                    if current_files_count == expected_files_count:\n                        print(f\"Post {post_id} baixado completamente ({current_files_count}/{expected_files_count} arquivos)\")\n                    else:\n                        print(f\"Post {post_id} parcialmente baixado: {current_files_count}/{expected_files_count} arquivos\")\n\n                except Exception as e:\n                    print(f\"Erro durante o download do post {post_id}: {e}\")\n\n                # Pequeno delay para evitar sobrecarga\n                time.sleep(0.5)\n\n        print(\"\\nTodos os posts foram processados!\")\n\n    except Exception as e:\n        print(f\"Erro inesperado: {e}\")\n        # Adicionar mais detalhes para diagnóstico\n        import traceback\n        traceback.print_exc()\n\ndef download_specific_posts():\n    \"\"\"Opção para baixar posts específicos\"\"\"\n    clear_screen()\n    display_logo()\n    print(\"Baixar 1 post ou alguns posts distintos\")\n    print(\"------------------------------------\")\n    print(\"Escolha o método de entrada:\")\n    print(\"1 - Digitar os links diretamente\")\n    print(\"2 - Carregar os links de um arquivo TXT\")\n    print(\"3 - Voltar para o menu principal\")\n    choice = input(\"\\nDigite sua escolha (1/2/3): \")\n\n    links = []\n\n    if choice == '3':\n        return\n    \n    elif choice == '1':\n        print(\"Cole os links dos posts (separados por vírgula):\")\n        links = input(\"Links: \").split(',')\n    elif choice == '2':\n        file_path = input(\"Digite o caminho para o arquivo TXT: \").strip()\n        if os.path.exists(file_path):\n            with open(file_path, 'r', encoding='utf-8') as file:\n                content = file.read()\n                links = 
content.split(',')\n        else:\n            print(f\"Erro: O arquivo '{file_path}' não foi encontrado.\")\n            input(\"\\nPressione Enter para continuar...\")\n            return\n    else:\n        print(\"Opção inválida. Retornando ao menu anterior.\")\n        input(\"\\nPressione Enter para continuar...\")\n        return\n\n    links = [link.strip() for link in links if link.strip()]\n\n    for link in links:\n        try:\n            domain = link.split('/')[2]\n            if domain in ('kemono.su', 'coomer.su'):\n                # Ambos os domínios usam o mesmo script de download\n                script_path = os.path.join('codes', 'kcposts.py')\n            else:\n                print(f\"Domínio não suportado: {domain}\")\n                continue\n\n            # Executa o script de download com o mesmo interpretador Python\n            subprocess.run([sys.executable, script_path, link], check=True)\n        except IndexError:\n            print(f\"Erro no formato do link: {link}\")\n        except subprocess.CalledProcessError:\n            print(f\"Erro ao baixar o post: {link}\")\n\n    input(\"\\nPressione Enter para continuar...\")\n\ndef download_profile_posts():\n    \"\"\"Opção para baixar posts de um perfil\"\"\"\n    clear_screen()\n    display_logo()\n    print(\"Baixar Posts de um Perfil\")\n    print(\"-----------------------\")\n    print(\"1 - Baixar todos os posts de um perfil\")\n    print(\"2 - Baixar Posts de uma página específica\")\n    print(\"3 - Baixar posts de um intervalo de páginas\")\n    print(\"4 - Baixar posts entre dois posts específicos\")\n    print(\"5 - Voltar para o menu principal\")\n    \n    choice = input(\"\\nDigite sua escolha (1/2/3/4/5): \")\n    \n    if choice == '5':\n        return\n    \n    profile_link = input(\"Cole o link do perfil: \")\n    \n    try:\n        json_path = None\n\n        if choice == '1':\n            posts_process = subprocess.run(\n                [sys.executable, 
os.path.join('codes', 'posts.py'), profile_link, 'all'],\n                capture_output=True,\n                text=True,\n                encoding='utf-8',  # Certifique-se de que a saída é decodificada corretamente\n                check=True\n            )\n\n            # Verificar se stdout contém dados\n            if posts_process.stdout:\n                for line in posts_process.stdout.split('\\n'):\n                    if line.endswith('.json'):\n                        json_path = line.strip()\n                        break\n            else:\n                print(\"Nenhuma saída do subprocesso.\")\n        \n        elif choice == '2':\n            page = input(\"Digite o número da página (0 = primeira página, 50 = segunda, etc.): \")\n            posts_process = subprocess.run([sys.executable, os.path.join('codes', 'posts.py'), profile_link, page], \n                                           capture_output=True, text=True, encoding='utf-8', check=True)\n            for line in posts_process.stdout.split('\\n'):\n                if line.endswith('.json'):\n                    json_path = line.strip()\n                    break\n        \n        elif choice == '3':\n            start_page = input(\"Digite a página inicial (start, 0, 50, 100, etc.): \")\n            end_page = input(\"Digite a página final (ou use end, 300, 350, 400): \")\n            posts_process = subprocess.run([sys.executable, os.path.join('codes', 'posts.py'), profile_link, f\"{start_page}-{end_page}\"], \n                                           capture_output=True, text=True, encoding='utf-8', check=True)\n            for line in posts_process.stdout.split('\\n'):\n                if line.endswith('.json'):\n                    json_path = line.strip()\n                    break\n        \n        elif choice == '4':\n            first_post = input(\"Cole o link ou ID do primeiro post: \")\n            second_post = input(\"Cole o link ou ID do segundo post: \")\n            \n            first_id = 
first_post.split('/')[-1] if '/' in first_post else first_post\n            second_id = second_post.split('/')[-1] if '/' in second_post else second_post\n            \n            posts_process = subprocess.run([sys.executable, os.path.join('codes', 'posts.py'), profile_link, f\"{first_id}-{second_id}\"], \n                                           capture_output=True, text=True, encoding='utf-8', check=True)\n            for line in posts_process.stdout.split('\\n'):\n                if line.endswith('.json'):\n                    json_path = line.strip()\n                    break\n        \n        # Se um JSON foi gerado, roda o script de download\n        if json_path:\n            run_download_script(json_path)\n        else:\n            print(\"Não foi possível encontrar o caminho do JSON.\")\n    \n    except subprocess.CalledProcessError as e:\n        print(f\"Erro ao gerar JSON: {e}\")\n        print(e.stderr)\n    \n    input(\"\\nPressione Enter para continuar...\")\n\ndef customize_settings():\n    \"\"\"Opção para personalizar configurações\"\"\"\n    config_path = os.path.join('config', 'conf.json')\n\n    # Carregar o arquivo de configuração (json já importado no topo do módulo)\n    with open(config_path, 'r', encoding='utf-8') as f:\n        config = json.load(f)\n\n    while True:\n        clear_screen()\n        display_logo()\n        print(\"Personalizar Configurações\")\n        print(\"------------------------\")\n        print(f\"1 - Pegar posts vazios: {config['get_empty_posts']}\")\n        print(f\"2 - Baixar posts mais antigos primeiro: {config['process_from_oldest']}\")\n        print(f\"3 - Para posts individuais, criar arquivo com informações (título, descrição, etc.): {config['save_info']}\")\n        print(f\"4 - Escolha o tipo de arquivo para salvar informações (Markdown ou TXT): {config['post_info']}\")\n        print(\"5 - Voltar ao menu principal\")\n\n        choice = input(\"\\nEscolha uma opção (1/2/3/4/5): \")\n\n        if choice == '1':\n            config['get_empty_posts'] = 
not config['get_empty_posts']\n        elif choice == '2':\n            config['process_from_oldest'] = not config['process_from_oldest']\n        elif choice == '3':\n            config['save_info'] = not config['save_info']\n        elif choice == '4':\n            # Alternar entre \"md\" e \"txt\"\n            config['post_info'] = 'txt' if config['post_info'] == 'md' else 'md'\n        elif choice == '5':\n            # Sair do menu de configurações\n            break\n        else:\n            print(\"Opção inválida. Tente novamente.\")\n\n        # Salvar as configurações no arquivo\n        with open(config_path, 'w') as f:\n            json.dump(config, f, indent=4)\n\n        print(\"\\nConfigurações atualizadas.\")\n        time.sleep(1)\n\ndef main_menu():\n    \"\"\"Menu principal do aplicativo\"\"\"\n    while True:\n        clear_screen()\n        display_logo()\n        print(\"Escolha uma opção:\")\n        print(\"1 - Baixar 1 post ou alguns posts distintos\")\n        print(\"2 - Baixar todos os posts de um perfil\")\n        print(\"3 - Personalizar as configurações do programa\")\n        print(\"4 - Sair do programa\")\n        \n        choice = input(\"\\nDigite sua escolha (1/2/3/4): \")\n        \n        if choice == '1':\n            download_specific_posts()\n        elif choice == '2':\n            download_profile_posts()\n        elif choice == '3':\n            customize_settings()\n        elif choice == '4':\n            print(\"Saindo do programa. Até logo!\")\n            break\n        else:\n            input(\"Opção inválida. Pressione Enter para continuar...\")\n\nif __name__ == \"__main__\":\n    print(\"Verificando dependências...\")\n    install_requirements()\n    print(\"Dependências verificadas.\\n\")\n    main_menu()\n"
  },
  {
    "path": "codept/requirements.txt",
    "content": "requests\n"
  }
]