1. Introduction
2. Guidelines
Principle 1: Write clear (not necessarily short) and specific instructions
Tactic 1: Use delimiters
Triple quotes: """
Triple backticks: ```
Triple dashes: ---
Angle brackets: < >
XML tags: <tag></tag>
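A minimal sketch of the delimiter tactic, using the `get_completion` helper defined later in these notes (the sample text is illustrative):

```python
# Sketch: separate the instructions from the data they act on with delimiters,
# so text being processed cannot be mistaken for instructions.
delimiter = "```"  # triple backticks
text = (
    "Express what you want the model to do by providing instructions "
    "that are as clear and specific as you can possibly make them."
)
prompt = (
    "Summarize the text delimited by triple backticks into a single sentence.\n"
    f"{delimiter}{text}{delimiter}"
)
# response = get_completion(prompt)  # helper defined later in these notes
```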
Tactic 2: Ask for structured output
e.g., have the model output HTML, JSON, etc.
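A sketch of asking for machine-readable output so the reply can be parsed directly (the book-list task and key names are illustrative):

```python
import json

# Sketch: request JSON with explicit keys so the reply is parseable.
prompt = (
    "Generate a list of three made-up book titles along with their authors "
    "and genres. Provide them in JSON format with the following keys: "
    "book_id, title, author, genre."
)
# response = get_completion(prompt)
# books = json.loads(response)  # parse the structured reply
```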
Tactic 3: Ask the model to check whether conditions are satisfied
Check whether the task's assumptions hold
Consider potential edge cases
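A sketch of this tactic: the prompt asks the model to verify an assumption (does the text contain steps?) and names a fallback for the edge case where it does not. The sample text is illustrative.

```python
# Sketch: check the assumption first, and define the edge-case behavior.
text = "The sun is shining brightly today, and the birds are singing."
prompt = f'''You will be provided with text delimited by triple quotes.
If it contains a sequence of instructions, re-write those instructions as:
Step 1 - ...
Step 2 - ...
If the text does not contain a sequence of instructions,
then simply write "No steps provided."

"""{text}"""'''
# response = get_completion(prompt)
```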
Tactic 4: Few-shot prompting
Give the model examples of the task being performed successfully before asking it to perform the task
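A few-shot sketch: one worked example (a question and its answer) sets the style and format before the real question is asked. The dialogue content is illustrative.

```python
# Sketch: the first <child>/<grandparent> exchange is the "shot"; the model
# is expected to answer the second question in the same voice.
prompt = """Your task is to answer in a consistent style.

<child>: Teach me about patience.

<grandparent>: The river that carves the deepest valley flows from a modest spring.

<child>: Teach me about resilience."""
# response = get_completion(prompt)
```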
Principle 2: Give the model time to think
Tactic 1: Specify the steps required to complete the task
Step1:
Step2:
Step3:
…
StepN:
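The step list above can be sketched as a prompt that enumerates the intermediate steps and the output format, instead of asking only for the final answer (the story text is illustrative):

```python
# Sketch: spell out each intermediate step the model should perform.
text = (
    "In a charming village, siblings Jack and Jill set out on a quest "
    "to fetch water from a hilltop well."
)
prompt = f"""Perform the following actions:
Step 1: Summarize the following text delimited by <> in one sentence.
Step 2: Translate the summary into French.
Step 3: List each name in the French summary.
Step 4: Output a JSON object that contains the keys: french_summary, num_names.

<{text}>"""
# response = get_completion(prompt)
```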
Tactic 2: Instruct the model to work out its own solution before rushing to a conclusion
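A sketch of this tactic: the model is told to solve the problem itself first, then compare, instead of judging the provided answer immediately. The math problem is illustrative.

```python
# Sketch: force the model to derive its own answer before grading.
prompt = """Determine if the student's solution is correct or not.
First work out your own solution to the problem, including the final total.
Then compare your solution to the student's solution.
Don't decide if the student's solution is correct until you have done the
problem yourself.

Question: Apples cost 2 dollars each plus a flat 10 dollar delivery fee.
What is the total cost for x apples?

Student's solution: Total cost = 2x + 10x"""
# response = get_completion(prompt)
```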
3. Iterative Prompt Development
Prompt development is an iterative process.
Prompt guidelines:
- Be clear and specific
- Analyze why result does not give desired output
- Refine the idea and prompt
- Repeat
Keep refining the prompt based on the model's output
Iterative Process
- Try something
- Analyze where the result does not give what you want
- Clarify instructions, give more time to think
- Refine prompts with a batch of examples (evaluate against many examples, not just one)
4. Summarizing
Summarize customer reviews
Generate a summary targeted at a specific audience, so that it better fits the needs of a particular group in the business
Write separate summaries for multiple reviews
review_1 = """xxxx"""
…
reviews = [review_1, review_2, review_3, review_4]
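The multi-review pattern above can be sketched as a loop that builds the same summarization prompt for each review (the review texts here are placeholders):

```python
# Sketch: one instruction applied to each review in turn.
delimiter = "```"
review_1 = "Got this panda plush toy for my daughter's birthday. She loves it."
review_2 = "Needed a nice lamp for my bedroom; it arrived quickly at a fair price."
reviews = [review_1, review_2]

prompts = []
for review in reviews:
    prompt = (
        "Summarize the product review below, delimited by triple backticks, "
        f"in at most 20 words.\n{delimiter}{review}{delimiter}"
    )
    prompts.append(prompt)  # each prompt would be sent via get_completion(prompt)
```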
5. Inferring
Extract labels, extract names, infer the sentiment of a text
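A sketch of an inferring prompt that extracts sentiment and entities in a single call, returned as JSON for easy downstream use (the review text and key names are illustrative):

```python
# Sketch: multiple inference tasks combined into one structured request.
review = "I love my new lamp, and the company replaced a broken part within days."
prompt = f"""Identify the following items from the review text:
- Sentiment (positive or negative)
- Item purchased by the reviewer
- Company that made the item

Format your response as a JSON object with keys "sentiment", "item", "brand".
Review text: '''{review}'''"""
# response = get_completion(prompt)
```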
6. Transforming
Convert input into a different format
Have the model output text in different formats
Regular-expression-style transforms; correcting spelling and grammar
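A sketch of a proofreading transform; the sentence with deliberate errors is illustrative:

```python
# Sketch: ask the model to rewrite text with spelling and grammar corrected.
text = "Their going to the libary tomorow."
prompt = (
    "Proofread and correct the following text, and rewrite the corrected "
    f"version:\n'''{text}'''"
)
# corrected = get_completion(prompt)
```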
7. Expanding
Use the LLM to expand short text into longer text
The model parameter temperature controls the degree of exploration and randomness in the model's output
Two helper functions
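A sketch of how temperature changes output variability, using the `get_completion_from_messages` helper defined just below (the tagline task is illustrative):

```python
# Sketch: the same request at different temperature settings.
messages = [
    {"role": "system", "content": "You are an assistant that writes short product taglines."},
    {"role": "user", "content": "Write a tagline for a desk lamp."},
]
# deterministic (always picks the most likely tokens):
# get_completion_from_messages(messages, temperature=0)
# more varied and exploratory across repeated calls:
# get_completion_from_messages(messages, temperature=0.7)
```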
import openai

def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # this is the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]
def get_completion_from_messages(messages, model="gpt-3.5-turbo", temperature=0):
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=temperature,  # this is the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]
8. Chatbot
System message
Setting up the chat format
The system message helps set the assistant's behavior and persona, acting as a high-level instruction for the conversation.
Context
Each conversation with the language model is independent. For the model to refer back to or remember earlier exchanges, the earlier messages and all relevant information must be included in the model's input so it can use them in the current turn.
messages = [
    {'role': 'system', 'content': 'You are a friendly chatbot.'},
    {'role': 'user', 'content': 'Hi, my name is Isa'},
    {'role': 'assistant', 'content': "Hi Isa! It's nice to meet you. \
Is there anything I can help you with today?"},
    {'role': 'user', 'content': 'Yes, you can remind me, what is my name?'},
]
response = get_completion_from_messages(messages, temperature=1)
print(response)
An order-taking bot
Helper function
def collect_messages(_):
    prompt = inp.value_input
    inp.value = ''
    context.append({'role': 'user', 'content': f"{prompt}"})
    response = get_completion_from_messages(context)
    context.append({'role': 'assistant', 'content': f"{response}"})
    panels.append(
        pn.Row('User:', pn.pane.Markdown(prompt, width=600)))
    panels.append(
        pn.Row('Assistant:', pn.pane.Markdown(response, width=600, style={'background-color': '#F6F6F6'})))
    return pn.Column(*panels)
Prompts are collected from the user interface built below and appended to a list called context; every call to the model includes this context, and the model's responses are appended to it as well.
An OrderBot with a user interface (UI)
import panel as pn  # GUI

pn.extension()
panels = []  # collect display
context = [{'role': 'system', 'content': """
You are OrderBot, an automated service to collect orders for a pizza restaurant. \
You first greet the customer, then collect the order, \
and then ask if it's a pickup or delivery. \
You wait to collect the entire order, then summarize it and check one final \
time if the customer wants to add anything else. \
If it's a delivery, you ask for an address. \
Finally you collect the payment. \
Make sure to clarify all options, extras and sizes to uniquely \
identify the item from the menu. \
You respond in a short, very conversational friendly style. \
The menu includes \
pepperoni pizza 12.95, 10.00, 7.00 \
cheese pizza 10.95, 9.25, 6.50 \
eggplant pizza 11.95, 9.75, 6.75 \
fries 4.50, 3.50 \
greek salad 7.25 \
Toppings: \
extra cheese 2.00, \
mushrooms 1.50 \
sausage 3.00 \
canadian bacon 3.50 \
AI sauce 1.50 \
peppers 1.00 \
Drinks: \
coke 3.00, 2.00, 1.00 \
sprite 3.00, 2.00, 1.00 \
bottled water 5.00 \
"""}]  # accumulate messages
inp = pn.widgets.TextInput(value="Hi", placeholder='Enter text here…')
button_conversation = pn.widgets.Button(name="Chat!")
interactive_conversation = pn.bind(collect_messages, button_conversation)
dashboard = pn.Column(
inp,
pn.Row(button_conversation),
pn.panel(interactive_conversation, loading_indicator=True, height=300),
)
dashboard