Repository: jupediaz/chatgpt-prompt-splitter
Branch: main
Commit: 017961be5b16
Files: 18
Total size: 24.4 KB
Directory structure:
gitextract_891tnwvz/
├── .editorconfig
├── .gitattributes
├── .gitignore
├── .vscode/
│ └── settings.json
├── CONTRIBUTING.md
├── LICENSE
├── Procfile
├── README.md
├── api/
│ ├── index.py
│ └── templates/
│ └── index.html
├── requirements.txt
├── static/
│ ├── browserconfig.xml
│ ├── chatgpt_prompt_splitter.psd
│ ├── scripts.js
│ ├── site.webmanifest
│ └── styles.css
├── tests/
│ └── test_chatgpt_prompt_splitter.py
└── vercel.json
================================================
FILE CONTENTS
================================================
================================================
FILE: .editorconfig
================================================
{
"arrowParens": "always",
"bracketSpacing": true,
"endOfLine": "lf",
"htmlWhitespaceSensitivity": "css",
"insertPragma": false,
"singleAttributePerLine": false,
"bracketSameLine": true,
"jsxBracketSameLine": false,
"jsxSingleQuote": false,
"printWidth": 80,
"proseWrap": "preserve",
"quoteProps": "as-needed",
"requirePragma": false,
"semi": true,
"singleQuote": false,
"tabWidth": 4,
"trailingComma": "es5",
"useTabs": false,
"vueIndentScriptAndStyle": false,
"parser": "html"
}
================================================
FILE: .gitattributes
================================================
*.html linguist-detectable=false
*.css linguist-detectable=false
================================================
FILE: .gitignore
================================================
.vercel
*.log
*.pyc
__pycache__
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
================================================
FILE: .vscode/settings.json
================================================
{
"cSpell.words": [
"dotenv",
"upstash",
"vercel"
]
}
================================================
FILE: CONTRIBUTING.md
================================================
# Contributing
I'm thrilled that you are interested in contributing to ChatGPT Prompt Splitter. Your help is valuable to the project's success, and I want you on board! There are many ways to contribute to this project, and I appreciate all of them.
## How to contribute
### Reporting bugs
This section guides you through submitting a bug report for this project. Following these guidelines helps maintainers and the community understand your report, reproduce the behavior, and find related reports.
### Suggesting enhancements
This section guides you through submitting an enhancement suggestion for this project, including completely new features and minor improvements to existing functionality. Following these guidelines helps maintainers and the community understand your suggestion and find related suggestions.
### Your first code contribution
Unsure where to begin contributing to this project? You can start by looking through the "good first issue" and "help wanted" issues.
Thank you!
================================================
FILE: LICENSE
================================================
MIT License
Copyright (c) 2023 Jose Diaz
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
================================================
FILE: Procfile
================================================
web: python api/index.py
================================================
FILE: README.md
================================================
# ChatGPT PROMPTs Splitter
[Deploy with Vercel](https://vercel.com/new/clone?repository-url=https%3A%2F%2Fgithub.com%2Fjupediaz%2Fchatgpt-prompt-splitter)
### ❓ Have you ever received a message from ChatGPT about sending too much data and needing to send a shorter text?
#### **Here's a great alternative to bypass this limitation!** 🚀

## Overview
**ChatGPT PROMPTs Splitter** is an open-source tool designed to help you split long text prompts into smaller chunks, making them suitable for usage with ChatGPT (or other language models with character limitations).
By default, the tool divides the text into safe chunks of up to 15,000 characters per request, although this limit can be changed.
The project includes an easy-to-use web interface for inputting the long text, selecting the maximum length of each chunk, and copying the chunks individually to paste them to ChatGPT.
## Post on Medium
You can read the full article on Medium: [ChatGPT PROMPTs Splitter: Split long text prompts into smaller chunks for ChatGPT](https://medium.com/@josediazmoreno/break-the-limits-send-large-text-blocks-to-chatgpt-with-ease-6824b86d3270)
## How it works
The tool uses a simple algorithm to split the text into smaller chunks. The algorithm is based on the following rules:
1. Divide the prompt into chunks based on the specified maximum length.
2. Add information to the first chunk to instruct the AI on the process of receiving and acknowledging the chunks, and to wait for the completion of chunk transmission before processing subsequent requests.
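The chunking logic above can be sketched as follows. This is a simplified version of `split_prompt` from `api/index.py`: it keeps the `[START PART]`/`[END PART]` framing but omits the acknowledgment instructions the real function prepends and appends to each part.

```python
def split_prompt(text: str, split_length: int) -> list[str]:
    """Split text into ceil(len(text) / split_length) framed chunks."""
    if split_length <= 0:
        raise ValueError("Max length must be greater than 0.")
    num_parts = -(-len(text) // split_length)  # ceiling division
    chunks = []
    for i in range(num_parts):
        body = text[i * split_length:(i + 1) * split_length]
        # Frame each chunk so the model can track its position in the sequence.
        chunks.append(f"[START PART {i + 1}/{num_parts}]\n{body}\n[END PART {i + 1}/{num_parts}]")
    return chunks

parts = split_prompt("abcdefgh", 3)  # 8 chars at 3 per chunk -> "abc", "def", "gh"
```

The `-(-a // b)` idiom is integer ceiling division, so a remainder always gets its own final part.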
## Features
- Python 3.9
- Web interface for splitting text into smaller chunks
- Customizable maximum length for each chunk
- Copy chunks individually to send to ChatGPT
- Instructions for ChatGPT on how to process the chunks
- Tests included
- Easy deployment to Vercel included
## Usage example
Follow these simple steps to use the ChatGPT Prompt Splitter web application, illustrated with screenshots.
### Step 1: Access the application
Open your web browser and navigate to the application URL.
https://chatgpt-prompt-splitter.jjdiaz.dev/
You should see the main screen, displaying the input fields for your long text prompt and maximum chunk length.

### Step 2: Input the long prompt
Enter the text you want to split into smaller chunks for use with ChatGPT.
You can also specify a custom length for each chunk by entering the number of characters in the *"Max chars length..."* field.
In this example, we will split the text into chunks of just 25 characters.

### Step 3: Click "Split"
Click the "Split" button to process the text and divide it into smaller chunks.

### Step 4: Copy the chunks
The application will display the text divided into smaller chunks. You can copy each chunk individually by clicking the "Copy" button next to it.

### Step 5: Paste the chunks into ChatGPT
Now that you have your chunks copied, you can paste them into ChatGPT or any other language model with character limitations.

That's it! You've successfully split a **long PROMPT** into smaller, manageable chunks using the **ChatGPT Prompt Splitter**.
## Getting Started
### Prerequisites
- Python 3.x
- Flask
### Installation
1. Clone the repository:
```bash
git clone https://github.com/jupediaz/chatgpt-prompt-splitter.git
```
2. Change to the project directory:
```bash
cd chatgpt-prompt-splitter
```
3. Install the required dependencies:
```bash
pip install -r requirements.txt
```
### Usage
#### Running the Flask application in development mode
1. Run the Flask application:
```bash
vercel dev
```
2. Open your web browser and navigate to the local URL printed by `vercel dev`.
#### Deploy the Flask application to production
1. Deploy the Flask application:
```bash
vercel --prod
```
2. Open your web browser and navigate to the production URL reported by Vercel.
## Running Tests
This project includes a suite of unit tests to ensure the proper functionality of the tool. To run the tests, follow these steps:
1. Make sure you have the required dependencies installed:
```bash
pip install -r requirements.txt
```
2. Run the tests using the unittest module:
```bash
python3 -m unittest discover tests
```
The test suite will run, and the results will be displayed in the terminal.
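The contents of `tests/test_chatgpt_prompt_splitter.py` are not reproduced in this dump. A minimal, hypothetical test in the same spirit could look like the following; the helper `ceil_parts` mirrors the ceiling-division chunk count used by `split_prompt` and is defined locally so the example does not depend on the Flask app's imports.

```python
import unittest

def ceil_parts(text_len: int, split_length: int) -> int:
    # Mirrors the chunk-count calculation in api/index.py: ceil(text_len / split_length).
    return -(-text_len // split_length)

class TestChunkCount(unittest.TestCase):
    def test_exact_multiple(self):
        self.assertEqual(ceil_parts(30, 10), 3)

    def test_remainder_adds_a_part(self):
        self.assertEqual(ceil_parts(31, 10), 4)

    def test_short_text_fits_default_limit(self):
        self.assertEqual(ceil_parts(1, 15000), 1)

if __name__ == "__main__":
    unittest.main(exit=False)  # exit=False keeps the interpreter alive after the run
```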
## License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## Contributing
Contributions are welcome! Please read the [CONTRIBUTING](CONTRIBUTING.md) file for details on how to contribute to the project.
## Contact
If you have any questions or suggestions, please contact me at [hello@jjdiaz.dev](mailto:hello@jjdiaz.dev).
## Disclaimer
This project is not affiliated with OpenAI, Microsoft, or any other entity. The project is provided "as is" without warranty of any kind, express or implied. The author is not responsible for any damages or losses arising from the use of this project.
## Changelog
### 1.0.0
- Initial release
================================================
FILE: api/index.py
================================================
from flask import Flask, render_template, request
from dotenv import load_dotenv
import os
import random
import redis
import string

# Load environment variables from .env file
load_dotenv()

app = Flask(__name__)

# Set up Redis client (Upstash connection URL is read from the environment)
upstash_redis_url = os.environ.get('UPSTASH_REDIS_URL')
redis_client = redis.from_url(upstash_redis_url)


@app.route("/", methods=["GET", "POST"])
def index():
    prompt = ""
    split_length = ""
    file_data = []
    # Count every page view in Redis
    redis_client.incr("visit_counter")
    visit_count = int(redis_client.get("visit_counter"))
    if request.method == "POST":
        prompt = request.form["prompt"]
        split_length = int(request.form["split_length"])
        file_data = split_prompt(prompt, split_length)
    hash_value = generate_random_hash(8)
    return render_template("index.html", prompt=prompt, split_length=split_length, file_data=file_data, hash=hash_value, visit_count=visit_count)


def split_prompt(text, split_length):
    if split_length <= 0:
        raise ValueError("Max length must be greater than 0.")
    num_parts = -(-len(text) // split_length)  # ceiling division
    file_data = []
    for i in range(num_parts):
        start = i * split_length
        end = min((i + 1) * split_length, len(text))
        if i == num_parts - 1:
            # Last part: tell the model it can start processing.
            content = f'[START PART {i + 1}/{num_parts}]\n' + text[start:end] + f'\n[END PART {i + 1}/{num_parts}]'
            content += '\nALL PARTS SENT. Now you can continue processing the request.'
        else:
            # Intermediate parts: ask the model to acknowledge and wait.
            content = f'Do not answer yet. This is just another part of the text I want to send you. Just receive and acknowledge as "Part {i + 1}/{num_parts} received" and wait for the next part.\n[START PART {i + 1}/{num_parts}]\n' + text[start:end] + f'\n[END PART {i + 1}/{num_parts}]'
            content += f'\nRemember not answering yet. Just acknowledge you received this part with the message "Part {i + 1}/{num_parts} received" and wait for the next part.'
        file_data.append({
            'name': f'split_{str(i + 1).zfill(3)}_of_{str(num_parts).zfill(3)}.txt',
            'content': content
        })
    return file_data


def generate_random_hash(length):
    return ''.join(random.choice(string.ascii_letters + string.digits) for _ in range(length))


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=int(os.environ.get('PORT', 3000)))
================================================
FILE: api/templates/index.html
================================================
Long PROMPTs Splitter
ChatGPT PROMPTs Splitter
Open-source tool for safely processing chunks of up to 15,000 characters per request
The total length of the content that I want to send you is too large to send in only one piece.
To send you that content, I will follow this rule:
[START PART 1/10]
this is the content of the part 1 out of 10 in total
[END PART 1/10]
Then you just answer: "Received part 1/10"
And when I tell you "ALL PARTS SENT", then you can continue processing the data and answering my requests.
This is how we explain to ChatGPT how to process the messages we are going to send.
{% for file in file_data %}
{% set partNumber = file.name[6:9]|int %}
{% set totalParts = file.name[13:16]|int %}
{% endfor %}