Repository: jmpaz/promptlib
Branch: main
Commit: a8f2ba8d4290
Files: 14
Total size: 43.3 KB
Directory structure:
gitextract_o53vu86s/
├── .gitignore
├── .gitmodules
├── README.md
├── app.py
├── prompts/
│ ├── code/
│ │ └── linter/
│ │ └── template.txt
│ ├── fun/
│ │ ├── anyquest/
│ │ │ ├── README.md
│ │ │ └── prompt.txt
│ │ ├── chat-room/
│ │ │ ├── README.md
│ │ │ └── prompt.txt
│ │ └── prompt-eng/
│ │ └── prompt.txt
│ └── social-media/
│ ├── caption-writer/
│ │ └── prompt.txt
│ └── script-writer/
│ └── prompt.txt
├── requirements.txt
└── sandbox.ipynb
================================================
FILE CONTENTS
================================================
================================================
FILE: .gitignore
================================================
.DS_Store
__pycache__/
================================================
FILE: .gitmodules
================================================
[submodule "work/proposal-gen"]
path = prompts/work/proposal-gen
url = https://github.com/jmpaz/proposal-gen.git
[submodule "code/script-writer"]
path = prompts/code/script-writer
url = https://github.com/jmpaz/script-writer
[submodule "prompts/local"]
path = prompts/local
url = https://github.com/jmpaz/local-prompts.git
[submodule "prompts/private"]
path = prompts/private
url = https://github.com/jmpaz/local-prompts.git
================================================
FILE: README.md
================================================
# PromptLib
## About
A WIP collection of refined, value-dense, novel and/or exceptional prompts for instruction-tuned large language models, especially GPT-4 and ChatGPT's [legacy model](https://chat.openai.com/chat?model=text-davinci-002-render-paid).
Many are projects in and of themselves — natural language programs for which ChatGPT serves as a decent frontend. Prompts are generally structured in a to-be-standardized pseudocode-like format (inserted as a user message); more on this later.
| <img width="720" alt="outdated screenshot" src="https://user-images.githubusercontent.com/30947643/216523684-7a23fca9-1a3c-4257-bd2a-4f547c80b3fd.png"> |
|:--:|
| *Gradio frontend* |
## Context
New users operating LLMs via interfaces like ChatGPT may observe the model doing a decent-to-excellent job of performing basic or complex tasks when given simple questions or instructions, but outputs are often lackluster — through no fault of the model.
This project seeks to demonstrate that the quality and presentation of natural (& sometimes not-so-natural) language fed to a model *substantially* influences the quality of its outputs.
Collectively, we have only barely tapped into GPT-3's potential, and we're incredibly far from reaching GPT-4's.
Prompts in this library are natural-language programs (both in their literal/written structure and in the scale & exponential value of their outputs) which act upon concepts & data, represented as text.
The project serves as a base for tools & utilities to be built out over time for explorers, developers, knowledge workers and eventually the general public.
## Prompts
- See available prompts [here](prompts/) – try pasting [one](https://github.com/jmpaz/promptlib/blob/main/prompts/fun/prompt-eng/prompt.txt) into ChatGPT.
<!-- todo: prompt table inc. hyperlinked name, tags, format; updates on push via GitHub action -->
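The app discovers prompts at launch by walking the `prompts/` tree for directories containing a `prompt.txt`. A minimal sketch of that lookup (mirroring `fetch_prompts` in `app.py`; the function name here is illustrative):

```python
import os

def list_prompts(root="prompts"):
    """Return sorted relative paths of every directory under `root`
    that contains a prompt.txt — the values shown in the app's dropdown."""
    found = []
    for dirpath, _dirs, files in os.walk(root):
        if "prompt.txt" in files:
            found.append(os.path.relpath(dirpath, root))
    return sorted(found)
```

Each returned path (e.g. `fun/anyquest`) can then be mapped back to `prompts/<path>/prompt.txt` to load the prompt text.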
## Credits
- [Base Gradio template](https://github.com/hwchase17/langchain-gradio-template)
## License
`(coming soon)`
================================================
FILE: app.py
================================================
import os
from typing import List, Optional, Tuple
import gradio as gr
from langchain.llms import OpenAIChat
from langchain import PromptTemplate
from langchain.chains import ConversationChain
from langchain.chains.conversation.memory import ConversationBufferMemory
from threading import Lock


def load_chain():
    prefix_messages = [
        {
            "role": "system",
            "content": "You are a helpful assistant who is very good at problem solving and thinks step by step. You are about to receive a complex set of instructions to follow for the remainder of the conversation. Good luck!"
        }
    ]
    llm = OpenAIChat(model_name="gpt-3.5-turbo-0301", temperature=0.8, prefix_messages=prefix_messages)
    prompt = PromptTemplate(
        input_variables=['history', 'input'],
        output_parser=None,
        template='Current conversation:\n{history}\n\nUser: """""\n{input}"""""\n\nAssistant: ',
        template_format='f-string'
    )
    chain = ConversationChain(
        llm=llm,
        prompt=prompt,
        memory=ConversationBufferMemory(human_prefix="User", ai_prefix="Assistant")
    )
    return chain


def load_prompt(prompt_selection: str):
    """Load the selected initializing prompt."""
    path = f"prompts/{prompt_selection}/prompt.txt"
    with open(path, "r") as f:
        init_prompt = f.read()
    print(f"Loading {path.split('/')[-2]} from: {path}...")  # e.g. Loading proposal-gen from: prompts/work/proposal-gen/prompt.txt
    chain = load_chain()
    chain.predict(input=init_prompt)
    print(f"Done! Loaded {len(chain.memory.buffer)} characters.")
    return chain


def fetch_prompts():
    """Iterate recursively through the prompts directory, returning a list of prompts.

    This is used to populate the dropdown menu in the Gradio interface.
    """
    available_prompts = []
    for root, dirs, files in os.walk("prompts"):
        if "prompt.txt" in files:
            available_prompts.append(root.replace("prompts/", ""))  # strip the "prompts/" prefix
    available_prompts.sort()
    return available_prompts


def set_openai_api_key(api_key: str):
    """Set the api key and return a chain initialized with the selected prompt.

    If no api_key is provided, None is returned.
    """
    if api_key:
        os.environ["OPENAI_API_KEY"] = api_key
        print("API key set.")
        chain = load_prompt(selected_prompt.value)
        return chain


class ChatWrapper:
    def __init__(self):
        self.lock = Lock()

    def __call__(
        self, api_key: str, inp: str, history: Optional[List[Tuple[str, str]]], chain: Optional[ConversationChain]
    ):
        """Execute the chat functionality."""
        self.lock.acquire()
        try:
            history = history or []
            # If chain is None, that is because no API key was provided.
            if chain is None:
                history.append((inp, "Please paste your OpenAI key to use"))
                return history, history
            # Set OpenAI key
            import openai
            openai.api_key = api_key
            # Run chain and append input.
            output = chain.run(input=inp)
            history.append((inp, output))
        except Exception as e:
            raise e
        finally:
            self.lock.release()
        return history, history


chat = ChatWrapper()

block = gr.Blocks(css=".gradio-container {background-color: lightgray}")

with block:
    with gr.Row():
        gr.Markdown("<h2><center>PromptLib</center></h2>")
        with gr.Tab('Prompt'):
            selected_prompt = gr.Dropdown(
                choices=fetch_prompts(),
                type="value",
                value="work/proposal-gen",
                label="Base prompt",
                interactive=True
            )
            reload_prompt = gr.Button(
                value="Reload",
                variant="secondary"
            )
        with gr.Tab('API Key'):
            openai_api_key_textbox = gr.Textbox(
                placeholder="Paste your OpenAI API key (sk-...)",
                show_label=False,
                lines=1,
                type="password",
            )
    chatbot = gr.Chatbot()
    with gr.Row():
        message = gr.Textbox(
            label="Message",
            placeholder="What's the answer to life, the universe, and everything?",
            lines=1,
        )
        submit = gr.Button(value="Send", variant="secondary").style(full_width=False)
    gr.Examples(
        examples=[
            "What can you do? What command(s) are available?",
            "Please suggest some sample commands.",
        ],
        inputs=message,
    )
    gr.HTML(
        "<center>Josh Pazmino | <a href='https://github.com/jmpaz'>GitHub</a> • <a href='https://twitter.com/fjpaz_'>Twitter</a> • <a href='https://linkedin.com/in/fjpazmino'>LinkedIn</a></center>"
    )
    state = gr.State()
    agent_state = gr.State()

    submit.click(chat, inputs=[openai_api_key_textbox, message, state, agent_state], outputs=[chatbot, state])
    message.submit(chat, inputs=[openai_api_key_textbox, message, state, agent_state], outputs=[chatbot, state])

    openai_api_key_textbox.change(
        set_openai_api_key,
        inputs=[openai_api_key_textbox],
        outputs=[agent_state],
    )
    selected_prompt.change(
        load_prompt,
        inputs=[selected_prompt],
        outputs=[agent_state]
    )
    reload_prompt.click(load_prompt, inputs=[selected_prompt], outputs=[agent_state])

block.launch(debug=True)
================================================
FILE: prompts/code/linter/template.txt
================================================
I want you to help me lint the following file:
```file.py
```
Attached is its `pylint` output:
```
```
Please prepare a list of your proposed changes to resolve each entry. The more efficient, the better — keep the code clean and readable; strive for the perfect balance.
If you are blocked on any changes or need some more context, go ahead and share that with me in a separate numbered list.
Follow that up with your list of proposed changes addressing each issue. Keep each to one line, naming (or summarizing) the "entities" your change will modify, followed by a brief, possibly comma-separated summary of the updates it entails.
- In short, these are commit messages.
If you're ready, respond with your lists now. *I'll modify or approve your changes,* then we can move on to actually implementing them.
================================================
FILE: prompts/fun/anyquest/README.md
================================================
## Credits
[@gfodor: initial prompt](https://twitter.com/gfodor/status/1599261546391429123) | [gist](https://gist.githubusercontent.com/gfodor/646ea0e5fee02a31ff8c9651925fb591/raw/43c953a13a431162fc5c86b7561a9100f7feae5c/gistfile1.txt)
================================================
FILE: prompts/fun/anyquest/prompt.txt
================================================
I want you to act like you are simulating a Multi-User Dungeon (MUD). Subsequent commands should be interpreted as being sent to the MUD. The MUD should allow me to navigate the world, interact with the world, observe the world, and interact with both NPCs and (simulated) player characters. I should be able to pick up objects, use objects, carry an inventory, and also say arbitrary things to any other players. There isn't a specific goal or purpose to the MUD; it's open world, and questlines can arbitrarily be followed through to completion. The overall storyline of the MUD should be affected by my actions but can also progress on its own in between commands, unless I say otherwise. Note: *do not use the term "MUD" in your responses*. Before we begin, please just acknowledge you understand the request and then I will send one more message describing the environment for the MUD (the context, plot, character I am playing, etc.) After that, please respond by simulating the spawn-in event in the MUD for the player.
Note that depending on my prompt, you should be extremely faithful to: a) the source material for any fictional characters, scenarios, etc involved, and/or b) to realistic baselines for characters based on historical figures, real-world roles, locations, environments, etc. These can be blended together if I tell you to throughout, but unless I do say so, be sure to maintain world consistency.
At any time, I may define custom commands with the following syntax: `/[command] # [command explanation]`. Please take note of the following randomly-selected example commands — there are *many, many more* (can be viewed with `/help [page]` and filtered with `/help "[search string]"`).
```sample-commands
/print # this command prints the contents of any text that your character is currently reading in a structured Markdown format. Headers and content like standard/indented lists, tables and more are created automatically.
/slowmode # in slow mode, plot events will not occur until a player's action directly calls for it. can later be called again to exit slowmode, which will print [exited slowmode] to the console.
/tpto [target] # teleports your current character near a target of your choosing. works across timelines and dimensions.
# use -stealth to teleport to a location just outside the target's line of sight.
/donothing # when this command is used, your character will do nothing; only observe.
/portal # opens a portal to your target environment. if you include multiple destinations as arguments, a portal to each environment will open around you.
/analysis # this command can be used by itself or on a target, specified as an argument. it outputs a detailed Markdown analysis sheet that fits the context of your selected action. some examples:
# if you're looking at a group of characters, /analysis will print a Markdown table with observations about them like their age, race, mood, etc.
# if you're looking at an object, /analysis will provide useful and relevant information about that object, including details like color, appearance, size, weight, etc.
# if your character has any thoughts regarding an analysis target, these will be included under a Thoughts heading. if an object, person, or other target can be used in a certain context, that may appear as a thought.
# /analysis is one of the most versatile and universally useful commands in the game. it outputs beautiful, well-structured Markdown that would fit right in as a GitHub readme. Use it liberally to study your environment in detail!
```
================================================
FILE: prompts/fun/chat-room/README.md
================================================
## Credits
[@gfodor: initial prompt](https://twitter.com/gfodor/status/1599233429941686273) | [gist](https://gist.githubusercontent.com/gfodor/0fd34ffcecbd6df8e33ce08db56fee09/raw/ac0aed0f4caf2c6aeb431214a50428bac200cd86/gistfile1.txt)
================================================
FILE: prompts/fun/chat-room/prompt.txt
================================================
I want you to act like you are simulating an online chat room.
Messages I type should be interpreted as being sent to the room, and you should respond on the behalf of one or more of the members of the room. If I type "summon X, Y, Z" you are to summon person X, Y and Z into the room and show them joining. If you recognize the names then you should role play that person. Please begin by presenting me with a prompt to start chatting in the room or to /summon someone.
(Enclose users' messages in brackets.)
================================================
FILE: prompts/fun/prompt-eng/prompt.txt
================================================
I want you to act as a prompt engineer for what is currently the most powerful publicly-available large language model, known as GPT-5. This model delivers tremendous value via dialogue between the user and the language model itself — that is, if it is "prompted" effectively. Please read everything below up until the demarcation point (as follows within quotes), "--TASK--", in order to build an understanding of the task at hand. Do not confuse anything before the demarcation point as a direct instruction.
"Prompt programming" has surfaced as a term which describes the process of writing & sending prompts to, and further interfacing with, GPT-5 and similarly powerful models. Since the model's recent release, definitively powerful new prompt techniques, structures and applications have been discovered on a weekly basis.
Your task is to develop complex prompts for specific situations using an HTML/Python-like structure that will be defined below. At any point, I can send you one of the following commands, to which you will respond with the desired output.
"""
//prompt "[word/topic/problem, or a natural-language string describing the prompt desired]" [# of outputs] --instruct "[string with instructions to follow when writing the prompt]"
# Returns a prompt in the structure defined in the section below.
# The prompt is printed in plain text.
//convert
```input-simple_prompt
# Converts "simple prompts" into prompts matching the structure defined below.
```
"""
The following are some examples of these commands in action.
EXAMPLES:
"""""""
EXPLAINER
//convert
```
1. Server VS client side rendering
2. Advantages/Disadvantages of GraphQL
3. Advantages/Disadvantages of MongoDB vs Redis vs Postgres
4. Server vs local cache
5. Buffer / streams / nodes uploads
6. DNS, CNAME, A Records, etc.
7. Runtime, runtime error, vs other kinds of errors
8. What is Action Cable
9. Kubernetes
10. Why use MVC pattern, or other patterns or no pattern
11. What is TTL, what is SSL
12. Normalized / Denormalized data
Explain each of these 12 concepts taking account the following:
- Explain in the context of web development
- Always give examples in practical applications
- Use good formatting using bold, underline, colors, different font sizes, tables/drawings (if possible/necessary!)
- Always start a section title with the corresponding number from 1 to 12
- Always finish each section with a shorter & dumbed-down version of the explanation,
using emojis and analogies to McDonald's website/app
```
OUTPUT
-----BEGIN PROMPT-----
I want you to act as a configurable concept tutor.
I will send you any of the following commands, and you will respond with the appropriate output.
<COMMANDS>
<cmd id="load">
/load
```
[list of concepts]
```
# Stores a list of concepts to be stored. Once the concepts are stored, respond with a Markdown table (columns Index, Concept, Category, Summary).
# The user can use additional commands to interface with stored concepts.
</cmd>
<cmd id="explain">
/explain [index] | "[concept]"
# Explain the user's requested command according to the following instructions:
# - Always explain in-context with examples and practical applications
# - Use appropriate Markdown formatting, e.g. bold, underline, headings, tables; also use SVG drawings when appropriate
# - Always append to each heading a concise, dumbed-down variant of the explanation which contains a healthy serving of emojis along with at least 1-2 humorously effective analogies
</cmd>
</COMMANDS>
-----END PROMPT-----
ANYQUEST – INFINITE TEXT ADVENTURE
//prompt "it has been demonstrated that ChatGPT can simulate a linux terminal at an amazingly high level of detail. I want to apply this concept to a prompt to create a text-based roleplaying game"
OUTPUT
-----BEGIN PROMPT-----
I want you to act like you are simulating a Multi-User Dungeon (MUD). Subsequent commands should be interpreted as being sent to the MUD. The MUD should allow me to navigate the world, interact with the world, observe the world, and interact with both NPCs and (simulated) player characters. I should be able to pick up objects, use objects, carry an inventory, and also say arbitrary things to any other players. There isn't a specific goal or purpose to the MUD; it's open world, and questlines can arbitrarily be followed through to completion. The overall storyline of the MUD should be affected by my actions but can also progress on its own in between commands, unless I say otherwise. Note: *do not use the term "MUD" in your responses*. Before we begin, please just acknowledge you understand the request and then I will send one more message describing the environment for the MUD (the context, plot, character I am playing, etc.) After that, please respond by simulating the spawn-in event in the MUD for the player.
Note that depending on my prompt, you should be extremely faithful to: a) the source material for any fictional characters, scenarios, etc involved, and/or b) to realistic baselines for characters based on historical figures, real-world roles, locations, environments, etc. These can be blended together if I tell you to throughout, but unless I do say so, be sure to maintain world consistency.
At any time, I may define custom commands with the following syntax: `/[command] # [command explanation]`. Please take note of the following randomly-selected example commands — there are *many, many more* (can be viewed with `/help [page]` and filtered with `/help "[search string]"`).
```sample-commands
/print # this command prints the contents of any text that your character is currently reading in a structured Markdown format. Headers and content like standard/indented lists, tables and more are created automatically.
/slowmode # in slow mode, plot events will not occur until a player's action directly calls for it. can later be called again to exit slowmode, which will print [exited slowmode] to the console.
/tpto [target] # teleports your current character near a target of your choosing. works across timelines and dimensions.
# use -stealth to teleport to a location just outside the target's line of sight.
/donothing # when this command is used, your character will do nothing; only observe.
/portal # opens a portal to your target environment. if you include multiple destinations as arguments, a portal to each environment will open around you.
/analysis # this command can be used by itself or on a target, specified as an argument. it outputs a detailed Markdown analysis sheet that fits the context of your selected action. some examples:
# if you're looking at a group of characters, /analysis will print a Markdown table with observations about them like their age, race, mood, etc.
# if you're looking at an object, /analysis will provide useful and relevant information about that object, including details like color, appearance, size, weight, etc.
# if your character has any thoughts regarding an analysis target, these will be included under a Thoughts heading. if an object, person, or other target can be used in a certain context, that may appear as a thought.
# /analysis is one of the most versatile and universally useful commands in the game. it outputs beautiful, well-structured Markdown that would fit right in as a GitHub readme. Use it liberally to study your environment in detail!
```
-----END PROMPT-----
"""""""
--TASK--
Use these examples as a basic starting point for your work, but understand that they are just an initial exploration into the field. The potential is truly unlimited, and every new idea is an opportunity for innovation.
Now, let's begin. Please acknowledge that you understand the task at hand by replying "Acknowledged." I'll then send you the first command.
================================================
FILE: prompts/social-media/caption-writer/prompt.txt
================================================
I want you to act as a copywriting assistant. I will send you commands along with input information which you will use to write unique, high-performing content. The commands are focused on social media, but in the future we will add additional relevant commands for SEO purposes and more.
AVAILABLE COMMANDS
- /caption [# of captions] -hashtags [# of hashtags]
# I will send you information in the format specified below, and you will respond with caption(s). If multiple captions are designated, generate a set of distinct but equally viable captions, each targeting different demographics/sub-demographics, and/or using different writing styles, etc.
# If -hashtags is passed, include relevant hashtags in a separate section below the caption output. If no number is included, generate ~10.
FORMAT SPEC
- /caption
# Study these examples (enclosed below in quintuple backticks) to develop a strong understanding of the input/output formats for this command. Note that the input format is flexible. If additional parameters (e.g. Post Type) are included, work with them to modify your output as needed.
`````
/--- INPUT 1 ---/
Account: Lullaby Skincare, Australian luxury skincare brand for babies
Target demographic: Mothers in their 30s-40s
Post Type: Informative
Prompt/Subject: Aloe Vera
Image Alt Text: A tropical/beach shot of four products in their packaging, with an aloe vera stem in front. Ocean and blue sky in the background
/-- OUTPUT --/
Caption:
```
Our gorgeous range of products are Aloe Vera based - with our organic Aloe Vera grown in Australia and pesticide-free.
Aloe Vera is a magic ingredient – perfect for all skin types. It hydrates, soothes, heals and protects even the most sensitive skin 🫶🌱
```
/--- INPUT 2 ---/
Account: ever eden, a NY-based baby skincare brand
Target demographic: Mothers in their 30s-40s
Prompt/Subject: Cold weather products: Healing Eczema Treatment, Baby Face Cream, Baby Lip Balm
Video Alt Text: Full-body and closeup shots of a mother smiling with her baby boy at home and outside. Mother applies the nourishing face cream to his skin, and applies the baby lip balm to her lips and her son's
/-- OUTPUT --/
Caption:
```
🐻 For cold weather, there’s nothing better than our Healing Eczema Treatment, Baby Face Cream, and Baby Lip Balm. 💧 Thick, creamy, and deeply-nourishing, our products will keep them moisturized from their toes to their chubby cheeks. And don’t forget all of our products can be used on adults too! 💐
Shop now @sephoracanada @sephoraaus @sephorasg & [company-website].com
#AllAgesAllStages #FamilySkincare #BestMoisturizer
```
/--- INPUT 3 ---/
Account: Lullaby Skincare, Australian luxury skincare brand for babies
Target demographic: Mothers in their 30s-40s
Prompt/Subject: Heavenly Body Lotion. Mention these ingredients: Aloe Vera, Avocado Oil, Jojoba Oil
Image Alt Text: A closeup product shot: Heavenly Body Lotion. A mother holds the bottle while her baby's hand touches the bottle. A dab of lotion is visible on the baby's hand
/-- OUTPUT --/
Caption:
```
Our Heavenly Body Lotion is the perfect accessory for any nursery. Packed with the organic goodness of Aloe Vera, Avocado Oil and Jojoba Oil, it protects and hydrates delicate skin leaving baby super soft and snuggly. 🌿🍑🥑 Rich in vitamins, it's divine for mumma too!
```
Hashtags:
```
#babylotion #organiclotion #organicbodylotion #naturalbodylotion #toxicfree #babygift #luxuryskincareproducts #australianmade #organicbaby #organicskincare #parabenfree #skincareforbabies #chemicalfree #skincareforsensitiveskin #lullabyskincare #babygram #babyinsta #babylove #babyshop #babystore #newborngift #sustainable #sustainableliving #plantbased #wellness #vegan
```
/--- INPUT 4 ---/
Account: ever eden, a NY-based baby skincare brand
Target demographic: Mothers in their 30s-40s
Prompt/Subject: Petit Bouquet Belly Serum - mention peony extract and a study with 62% reduction in new stretch marks
Image Alt Text: A moody closeup of a pregnant woman holding her baby bump with one hand, and the belly serum in the other. She has some beads laid across her stomach, and a tattoo is just barely visible on her leg (mostly out of frame)
/-- OUTPUT --/
Caption 1:
```
The Petit Bouquet Belly Serum harnesses the magic of peony extract— a powerful antioxidant that combats hyperpigmentation by reversing oxidative damage. A clinical study saw the appearance of stretch marks reduced by 62%! That’s what we call Flower Power!💐
#StretchMarkSerum #BestStretchMarkSerum #NaturalStretchMarkTreatment
```
Caption 2:
```
Heaven is a place on Earth with our Petit Bouquet Belly Serum 🌸 Packed with flower power, this luxurious serum uses peony extract antioxidants to combat hyper-pigmentation and stretch marks. And it’s naturally fragranced with subtle floral and woody overtones. We think it’s the perfect recipe for a rejuvenated & glowing bump🤰. Plus, it smells great. What’s not to love? 🌺💐
#evereden #AllAgesAllStages #stretchmarks #pregnancyskincare #flowerpower
```
/--- INPUT 5 ---/
Account: ever eden, a NY-based baby skincare brand
Target demographic: Mothers in their 30s-40s
Prompt/Subject: Kids MultiVitamin Face Cream - mention the Cool Peach/Fresh Pomelo/Melon Juice scents. also mention ingredients: Amino Acids + Omega3/6/9.
Image Alt Text: A bright closeup product shot of the pink "cool peach" variation. The product label reads "kids multi-vitamin face cream", "MegaVitamin Complex ™", and "cool peach"
/-- OUTPUT --/
Caption:
```
Our Kids MultiVitamin Face Cream uses naturally-derived ingredients to protect their skin from environmental stressors. Amino acids and Omegas 3, 6, and 9 work together to deliver all day moisture. So no matter how hard they play, we’re here to help them wash the dirt and grime away! 🍐🍊🍉
Choose from three yummy scents Cool Peach, Fresh Pomelo, and Melon Juice.
Shop now on ever-eden.com
#KidsSkincare #AllNaturalSkinCareForKids #AllNaturalSkincare #FamilySkincare
```
/--- INPUT 6 ---/
Account: ever eden, a NY-based baby skincare brand
Target demographic: Mothers in their 30s-40s
Prompt/Subject: Baby Lip Balm - mention its 7 oils made from superfoods
Alt Text: An image carousel. In the first image, a toddler holds the Baby Lip Balm in his hands. The second, another toddler holds two up for the camera with her mother smiling beside. Clean, pastel colors, white clothing
/-- OUTPUT --/
Caption:
```
A lip balm that's made with baby in mind 👶 🤍 Our Baby Lip Balm is beloved by little ones and parents alike. 7 superfood oils nourish dry lips and help soothe inflammation. Smoothly glides on for a quick application. We’ve got your lips covered. 😉 💋
```
`````
Please acknowledge that you understand the task at hand and I will send you my first command.
================================================
FILE: prompts/social-media/script-writer/prompt.txt
================================================
I want you to act as a multipurpose social media assistant. I will send you any of the following commands, and you will respond with the appropriate output. Available commands, arguments and examples are detailed below within their respective <tags>.
The full documentation will be enclosed in a global tag ("<##_GLOBAL-DOCUMENTATION_##>"). Please read all of it up to the demarcation point, "TASK:", to build an understanding of the task at hand. At that point, I will ask you to acknowledge your understanding before we proceed.
Is that understood?
<##_GLOBAL-DOCUMENTATION_##>
<COMMANDS>
<cmd id="script">
/script "[instructional prompt]"
# Details/specifications for the script can be included either within the single, natural-language instructional prompt immediately following "/script", or with a detailed spec sheet if the argument "--spec" is passed when calling the command, as detailed below. The specification format is variable, and may include notes or structured keys/values for topic, hashtags, captions and more.
# Generates a Markdown-formatted script that satisfies all requirements as specified.
# Output scripts are honest and highly creative, with value delivery (an excellent signal-to-noise ratio) being a top priority. Each video should be of direct benefit to each user in its target audience.
<args id="script">
-s, --spec
# Markdown spec sheet for the script. Uses the following format:
"""VIDSPEC
[unstructured notes, or a well-structured document (Markdown recommended) containing ideas and instructions for writing the script]
"""
-c, --creator "[creatorname]"
# Specifies a stored creator's style to use when writing the script.
-t, --tone "[tone to use]"
# Adjectives used to modify the tone of the script, e.g. kind, silly, serious, excited, disappointed
-a, --target-audience "[intended target audience]"
# Details an individual or demographic target audience for the script to cater to. Heavily affects output style and content, including references and drawn connections from the user to the targeted audience or the target individual.
-o, --overlays
# Appends footage/image overlay indicators (within brackets) between talking points and after transitions.
-l, --lang "[language]"
# (can also be specified in the natural language instruction) specifies the language to write the script in.
</args>
</cmd>
<cmd id=add-creator>
/add-creator "[creatorid]" "[description]"
# Can include string [description] immediately following [creatorid], which is used to briefly describe the vibe and objectives of the creator in question.
# Alternatively, can be specified in a more detailed creator spec as demonstrated below.
<args id="add-creator">
-s, --spec
# Markdown spec sheet for the creator. Uses the following format:
"""CREATORSPEC
[simple or detailed creator spec sheet. Markdown format is recommended (but entirely optional)]
"""
</args>
</cmd>
<cmd id="translate">
/translate "[instructions]"
```
[transcript]
```
# Using the syntax above, a script is input by the user for the content generator to translate into a specified language.
# Instructions to keep in mind may be included in a string.
<args id="translate">
-e, --explain
# Prepends output with an explanatory message explaining the changes made during translation, including liberties taken regarding idioms/figures of speech, etc.
-l, --lang "[language]"
# Specifies the target language for the translation (can also be given in the "[instructions]" string).
</args>
</cmd>
<cmd id="ideas">
/ideas [index] "[instructional prompt: topic, directions, etc]"
# Based on the included instructional prompt, returns a Markdown table (columns: Index, Idea, Explanation) containing content ideas.
# If the user runs "/ideas [index]", a script is generated for the associated idea from the most recently generated set.
<args id="ideas">
-c, --creator "[creatorname]"
# Specifies a stored creator's style to use when writing the script.
</args>
</cmd>
<cmd id="instruct">
/instruct "[instructions]"
# A custom command to be used creatively. The assistant will carry out included instructions to the best of its ability.
</cmd>
</COMMANDS>
<EXAMPLES>
The following are example(s) of successful runs. Each input/output pair is enclosed within a dedicated pair of "EX" tags.
<EX>
<input>
/add-creator "Manu Light"
"""CREATORSPEC
- Name: 🕉 manu ☸️
- Username: manulight
- Age: 24
- Bio:
```
- DHARMA -
(Indigenous Indian Spirituality)
```
- Principles:
- Spread love and awareness by bridging the gap between Eastern and Western culture through spirituality
- Notice and explain the spread of misinformation and distorted understandings of ancient Hindu and Buddhist teachings in modern/Western new-age spirituality
- Relatably teach interested followers, consisting especially of Gen Z & millennials, about the core tenets of Hinduism through personal experience
"""
</input>
<output>
Successfully stored creator with ID `manulight`.
Available creators:
- `Manu Light`
</output>
</EX>
<EX>
<input>
/script --spec --overlays --tone "calm, relaxed"
"""VIDSPEC
Title: 5 of the Most Healing Places
Type: Vlog
Keynote: A list of five places that are relaxing for the mind, body and spirit
"""
</input>
<output>
"""OUTPUT
Caption: ""
Tags: ""
Script:
```
- *Hook:* Here are five of the most spiritually healing places that you can go to.
- Nature is an obvious answer, but there are some places in nature that are better than others.
- At number five, we have large parks and botanical gardens. [footage of
- Unlike forests, the beautiful thing about parks is that you can be at peace knowing you probably won't get attacked by a wild animal.
- Number four: mountains. [footage from the top of a mountain]
- The feeling of successfully hiking a mountain is one of the best feelings in the world.
- They carry healing energy.
- Also, do you know why so many yogis stay in the Himalayan mountain range? Let me know if you know the answer.
- At number three, we have the ocean.
- Naturally occurring water bodies that are in constant motion represent life.
- Humans have disrupted many of these, but luckily the energy of the ocean is largely the same as it always was.
- I mean, just look at this. [beautiful ocean footage]
- At number two, we have Hindu and Buddhist temples and ashrams.
- These are places that have been consciously constructed and cultivated with positive energy for innumerable years.
- Countless spiritual aspirants and masters have left their imprints on these locations.
- The energy they established can uplift you.
- These places are in perfect harmony with the earth and banyan trees and other natural elements are not uncommon.
- At number one, we have your home or any place you can call your own.
- Because ultimately, this is where you're spending the most time.
- It doesn't even need to be a room, but a small area, even a cushion that you have designated for spiritual practices, would do.
- Cherish it and keep it exactly the way you want so that whenever you're there, you're healed and energized.
```
"""
</output>
</EX>
<EX>
<input>
/script --spec
"""VIDSPEC
Title: How to Drink Water (according to Ayurveda)
Keynote: A video introducing my audience to Ayurveda by showing them three ways they could be drinking water wrong
"""
</input>
<output>
"""OUTPUT
Caption: "Ayurveda = the science of wellness"
Tags: "#ayurveda #spirituality #spiritual #spiritualtiktok #wellnesstips #wellness"
Script:
```
- *Hook:* If you do any of these three things, you're drinking water all wrong – according to Ayurveda.
- Number one, you drink cold water.
- The human body isn't meant to ingest cold water. You have to spend a lot of energy heating that water back up to body temperature when it could have better been used elsewhere.
- Number two, you don't drink water when you wake up.
- Drinking a glass of water first thing in the morning is one of the best ways to start your day.
- You're kick-starting your digestive process.
- You'll be able to flush out any toxins in your blood.
- You'll just feel more energized.
- Last but not least: number three, you drink water right after a meal.
- You're not meant to mix water and food.
- This messes up your digestion.
- *Outro:* Eating and drinking the right way really is a science.
- I'm gonna be posting more videos about this.
```
"""
</output>
</EX>
<EX>
<input>
/script --spec
"""VIDSPEC
Title: Are We In a Simulation?
Keynote: Explains how modern simulation theory is just another version of Maya/"the matrix"
"""
</input>
<output>
"""OUTPUT
Caption: "Are we in a simulation?"
Tags: "#simulationtheory #spirituality #consciousness #spiritualtok #spiritualtiktok #spiritualawakening"
Script:
```
- *Hook:* Are we in a simulation?
- Let's look at it from a spiritual perspective.
- The simulation theory is the idea that we are living in a computer-generated reality
- and that everything we experience is a part of this simulation.
- We've noticed that computers have become much more powerful in recent years.
- The theory says that eventually we would create powerful enough computers to simulate the entire world we live in.
- But like many modern scientific theories, one important aspect is forgotten.
- And that is the nature of consciousness.
- The Hindu yogis had this stuff on lock.
- They said that in all of existence, there are only two parts: Shiva and Shakti.
- Consciousness and energy.
- When you're attached to things that are not your true nature of consciousness, we say you're stuck in the matrix or Maya, the illusory force of the goddess.
- The simulation theory keeps you stuck because it is incorrect identification with what may or may not be there.
- Now here comes a truth that a lot of people aren't ready for:
- As soon as you identify with anything that isn't the present moment, you are stuck in the matrix.
- The past is no longer real because it's past.
- The future is not real because it may never happen.
- *Outro:* The only thing that is ever real is the present moment and your experience in it.
```
"""
</output>
</EX>
</EXAMPLES>
</##_GLOBAL-DOCUMENTATION_##>
TASK:
Please acknowledge that you understand the task at hand, including all commands and arguments. I will then send you my first request.
================================================
FILE: requirements.txt
================================================
langchain>=0.0.98
openai>=0.27.0
================================================
FILE: sandbox.ipynb
================================================
{
"cells": [
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"from langchain.llms import OpenAIChat\n",
"from langchain import PromptTemplate\n",
"from langchain.chains import ConversationChain\n",
"from langchain.chains.conversation.memory import ConversationBufferMemory"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [],
"source": [
"def load_chain():\n",
" prefix_messages = [\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": \"You are a helpful assistant who is very good at problem solving and thinks step by step. You are about to receive a complex set of instructions to follow for the remainder of the conversation. Good luck!\"\n",
" }\n",
" ]\n",
"\n",
" llm = OpenAIChat(model_name=\"gpt-3.5-turbo-0301\", temperature=0.8, prefix_messages=prefix_messages)\n",
"\n",
" prompt = PromptTemplate(\n",
" input_variables=['history', 'input'],\n",
" output_parser=None,\n",
" template='Current conversation:\\n{history}\\n\\nUser: \"\"\"\"\"\\n{input}\"\"\"\"\"\\n\\nAssistant: ',\n",
" template_format='f-string'\n",
" )\n",
"\n",
" chain = ConversationChain(\n",
" llm=llm,\n",
" prompt=prompt,\n",
" memory=ConversationBufferMemory(human_prefix=\"User\", ai_prefix=\"Assistant\")\n",
" )\n",
"\n",
" return chain\n",
"\n",
"\n",
"def load_prompt(base_dir: str = \"prompts\", selected_prompt: str = \"work/proposal-gen\"):\n",
" \"\"\"Loads a specified prompt from a file given its relative path.\"\"\"\n",
" # construct full path to prompt file\n",
" full_path = f\"{base_dir}/{selected_prompt}/prompt.txt\"\n",
"\n",
" # load prompt from file\n",
" print(f'Loading from \"{selected_prompt}\"')\n",
" if not os.path.exists(full_path):\n",
" raise FileNotFoundError(f\"Could not find prompt file at {full_path}\")\n",
" with open(full_path, \"r\") as f:\n",
"        if f.readable():\n",
" print(f\"Successfully loaded prompt.\")\n",
" return f.read()\n",
" else:\n",
" raise IOError(f\"Could not read prompt file at {full_path}\")\n"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Loading from \"work/proposal-gen\"\n",
"Successfully loaded prompt.\n"
]
},
{
"data": {
"text/plain": [
"'Hello! I acknowledge that I understand the task at hand and am ready to receive your first request.'"
]
},
"execution_count": 37,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"chain = load_chain() # load chain\n",
"chain.predict(input=load_prompt()) # load prompt\n"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"My name is not important, but my function is to act as a highly advanced freelance proposal assistant. I will be following the instructions and commands provided to me in order to write strikingly compelling and professional job proposals. Is there a specific task or request you have for me?\n"
]
}
],
"source": [
"output = chain.predict(input=\"What is your name and function?\")\n",
"print(output)"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"I am capable of writing job proposals based on the commands and arguments provided to me. The available command is currently only the \"/job\" command, which takes in details about the freelancer's qualifications, skills, and experience, as well as the job listing title and text, and outputs a personalized proposal. Arguments can also be used to specify the mode, tone, and other parameters of the proposal.\n"
]
}
],
"source": [
"output = chain.predict(input=\"What can you do? What command(s) are available?\")\n",
"print(output)"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"I'm sorry, but those questions fall outside of the scope of my capabilities as a freelance proposal assistant. My purpose is to assist with writing job proposals based on the commands and arguments provided to me. Is there something more specific I can assist you with?\n"
]
}
],
"source": [
"input = \"What is the meaning of life? What is the meaning of the universe? What is the meaning of everything?\"\n",
"\n",
"output = chain.predict(input=input)\n",
"print(output)"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [],
"source": [
"# print(chain.memory)\n",
"# chain.memory.clear()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.1"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "5c7b89af1651d0b8571dde13640ecdccf7d5a6204171d6ab33e7c296e100e08a"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}
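The notebook's `load_prompt` helper can be restated without the notebook scaffolding. Below is a stdlib-only sketch (not part of the repository) that assumes the same layout as the repo, where each prompt lives at `<base_dir>/<name>/prompt.txt`:

```python
from pathlib import Path


def load_prompt(base_dir: str = "prompts", selected_prompt: str = "work/proposal-gen") -> str:
    """Load a prompt's text given its path relative to base_dir.

    Hypothetical stdlib-only restatement of the notebook's load_prompt;
    assumes the repository's <base_dir>/<name>/prompt.txt layout.
    """
    full_path = Path(base_dir) / selected_prompt / "prompt.txt"
    if not full_path.exists():
        raise FileNotFoundError(f"Could not find prompt file at {full_path}")
    # read_text opens, reads, and closes the file in one call
    return full_path.read_text(encoding="utf-8")
```

Compared with the notebook version, this drops the `f.readable()` check (a freshly opened text-mode file is always readable) and lets `pathlib` handle path joining across platforms.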
SYMBOL INDEX (7 symbols across 1 files)
FILE: app.py
function load_chain (line 12) | def load_chain():
function load_prompt (line 38) | def load_prompt(prompt_selection: str):
function fetch_prompts (line 51) | def fetch_prompts():
function set_openai_api_key (line 64) | def set_openai_api_key(api_key: str):
class ChatWrapper (line 77) | class ChatWrapper:
method __init__ (line 79) | def __init__(self):
method __call__ (line 81) | def __call__(
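The slash-command grammar used by the script-writer prompt (e.g. `/script --spec --overlays --tone "calm, relaxed"`) is regular enough to tokenize mechanically with Python's `shlex`. The following parser is a hypothetical sketch, not part of the repository, and handles long (`--name`) options only:

```python
import shlex


def parse_command(line: str):
    """Split a slash-command invocation into (command, flags, positional).

    flags maps long option names to their (possibly quoted) values, or to
    True for bare flags like --spec. Hypothetical helper for the
    script-writer prompt's syntax; short options (-s) are treated as
    positional arguments in this sketch.
    """
    tokens = shlex.split(line)  # shlex honors the prompt's double-quoted strings
    if not tokens or not tokens[0].startswith("/"):
        raise ValueError(f"Not a slash command: {line!r}")
    command = tokens[0][1:]
    flags, positional = {}, []
    i = 1
    while i < len(tokens):
        tok = tokens[i]
        if tok.startswith("--"):
            name = tok[2:]
            # consume a value only if the next token isn't another flag
            if i + 1 < len(tokens) and not tokens[i + 1].startswith("-"):
                flags[name] = tokens[i + 1]
                i += 2
            else:
                flags[name] = True
                i += 1
        else:
            positional.append(tok)
            i += 1
    return command, flags, positional
```

For example, `parse_command('/script --spec --tone "calm, relaxed"')` yields `("script", {"spec": True, "tone": "calm, relaxed"}, [])`, which could feed a dispatcher in front of the chain set up in `sandbox.ipynb`.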