ClaimMaster’s +Drafting tools for patent application drafting integrate OpenAI and Microsoft Azure OpenAI GPT services, allowing you to quickly invoke previously defined GPT prompts enriched with inputs from the document. However, due to the way GPT models are trained, certain prompt formats produce better output. The practice of optimizing prompts is called “prompt engineering”: the strategic formulation of prompts or instructions to guide the GPT model toward the desired output. Much of prompt engineering involves trial and error, but the common practices outlined below will help you get the best results out of GPT with carefully crafted prompt templates in ClaimMaster.
1. Separate GPT prompt instructions from context
You can tell GPT where the instruction portion ends and the context begins by using separators such as ### or ---. Doing so ensures that GPT distinguishes the instructions from the rest of the text input, which is especially important when the text section is large. For example, you can configure GPT templates in ClaimMaster with the [INPUT] section separated from the instructions as follows:
Re-write the following text section more elegantly without removing any sentences:
###
[INPUT]
###
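Conceptually, ClaimMaster fills the [INPUT] placeholder at run time before sending the prompt to the model. A minimal Python sketch of that substitution looks like the following (the template text mirrors the example above; the function and variable names are illustrative, not ClaimMaster’s actual code):

```python
# Illustrative sketch: keep the instruction text and the document context
# separated by "###" delimiters, and fill the [INPUT] slot before sending.
PROMPT_TEMPLATE = (
    "Re-write the following text section more elegantly "
    "without removing any sentences:\n"
    "###\n"
    "{input_text}\n"
    "###"
)

def build_prompt(input_text: str) -> str:
    """Substitute the selected document text into the template's input slot."""
    return PROMPT_TEMPLATE.format(input_text=input_text.strip())

prompt = build_prompt("The invention relates to a widget.  ")
print(prompt)
```

Because the delimiters bracket only the context, the model can treat everything outside them as instructions even when the pasted text section itself contains sentences that look like commands.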
2. Provide detailed instructions about the desired output
When crafting GPT prompts, be as specific, descriptive, and detailed as possible about the desired context, outcome, length, format, style, etc. Rather than only saying what not to do, say what to do instead. In addition, avoid “fluffy” and imprecise descriptions, such as “fairly short” or “several sentences.” Instead, provide precise bounds for the number of words, paragraphs, text length, etc.
Here’s an example of a prompt template that provides multiple directions to GPT:
Prepare a Background of the invention section of a patent application for [INPUT]. Draft only the Background section of the application. Do not include other sections, such as the field of the invention, summary or written description. Draft in complete paragraphs and keep the word count below 500 words.
Here’s another prompt template that rewrites the Summary or Abstract section generated with ClaimMaster’s patent drafting tools to sound a little less mechanical, since those sections tend to repeat the claim language:
Re-write the following text to sound more natural with minimal changes. Do not omit sentences and do not change adjectives or nouns.
Text: ###
[INPUT]
###
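Precise bounds also make the output easy to verify. Since GPT models do not always honor a stated word limit exactly, a simple post-check can flag output that needs another pass. The helper below is a hypothetical sketch, not part of ClaimMaster:

```python
# Illustrative sketch: verify that generated text respects the word bound
# stated in the prompt (e.g., "keep the word count below 500 words").
def within_word_limit(text: str, max_words: int = 500) -> bool:
    """Return True if the text stays within the requested word count."""
    return len(text.split()) <= max_words

background = "The invention relates generally to data compression techniques."
ok = within_word_limit(background, max_words=500)
```

If the check fails, you can re-run the same prompt or follow up with a “rewrite more concisely” prompt, as discussed in section 4.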
For the above prompt, you’ll probably want to set “creativity” level close to 0 to decrease output variance, as explained in section 5.
3. Use the latest GPT models
For best output results, we generally recommend using the latest, most capable model, which at this time is GPT-4. Right now ClaimMaster provides free access to GPT-3.5-turbo model for +Drafting subscribers, but to access GPT-4, you’ll need to create an OpenAI or Microsoft Azure OpenAI account and configure ClaimMaster to use your API key/endpoint. Using your own account and private models with OpenAI or Azure OpenAI could also alleviate various security/data privacy concerns you or your clients may have about sending sensitive data to GPT services.
OpenAI states that GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5, especially for more complex tasks or prompts. We’ve tested both models and noticed subtle but noticeable differences between the output of GPT-4 and GPT-3.5. Overall, the output GPT-4 generates is better written, less mechanical, and more precise. For example, we asked GPT-4 and GPT-3.5 to answer the following prompt: “Explain how GPT prompt engineering can help when drafting patent applications. Limit output to 2 paragraphs.” The resulting explanation from GPT-4 directly answered the question:
Prompt engineering for GPT (Generative Pretrained Transformer) in patent application drafting involves the strategic formulation of prompts or instructions to guide the AI model in generating desired outputs. The prompts are designed to provide context and set the direction for the AI model’s responses. For instance, a prompt may instruct the AI model to draft a patent application for a specific invention, detailing its purpose, design, and functionality. The GPT model, trained on a diverse range of internet text, uses these prompts to generate text that is contextually relevant and coherent. It predicts the next word in a sentence by considering all the previous words used in the prompt. This way, it can generate a complete and detailed draft of a patent application based on the initial instructions provided. The quality of the output largely depends on the specificity and clarity of the prompts given.

In the context of patent application drafting, prompt engineering can be further refined by incorporating specific legal and technical language or terminologies relevant to the invention. This helps in generating a more accurate and legally sound patent application. The AI model can also be trained to follow the structural and formatting rules of patent applications to ensure the generated draft meets the required standards.
Generated using GPT-4
On the other hand, when asked the same question, GPT-3.5-turbo’s explanation sounded a little off and did not really explain “prompt engineering” as applied to GPT:
Prompt engineering is a technique used in patent application drafting for GPT (Generative Pre-trained Transformer) technology. GPT is a type of language model that uses deep learning to generate human-like text based on a given prompt. In the context of patent application drafting, prompt engineering involves formulating specific prompts that guide the GPT model to generate patent claims or descriptions. The process begins by creating a prompt that outlines the desired structure and content of the patent application. This includes specifying the technical field, the problem to be solved, and the solution provided by the invention. The prompt may also include additional instructions to ensure the desired level of detail, clarity, and technical accuracy. The GPT model is then fine-tuned using a dataset of existing patent documents, enabling it to generate patent-specific language. During the drafting process, the model is prompted with specific questions or instructions to generate patent claims, descriptions, or other sections of the application. The generated text is subsequently reviewed, refined, and edited by the patent agent or attorney to ensure compliance with legal requirements and accuracy. Overall, prompt engineering allows patent agents to leverage the capabilities of GPT to assist in drafting patent applications, speeding up the process while maintaining the necessary technical specificity and legal compliance
Generated with GPT-3.5-turbo
That said, we’ve found that GPT-3.5 generally provides reasonably good results, especially for less intensive drafting tasks. While GPT-4 may produce better output, it’s also noticeably slower than GPT-3.5 and about 30x more expensive. Compared to the average cost of one attorney hour, however, its usage costs are still low considering the time it could save you.
4. Run GPT output through several fine-tuning prompts
In some cases, you may want to further fine-tune your output with other pre-defined GPT prompts. For example, you could first generate a Background text section for a particular topic, make minor edits to the result in the output window, and finally use another GPT prompt on the edited result to rewrite that section as concisely as possible. To simplify such recursive editing, ClaimMaster has a special input called “Text in the GPT Output window” that lets you use the text shown in the GPT output window as input for the next GPT prompts. This allows you to chain together multiple GPT templates to edit the text shown in the GPT output window.
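The chaining described above can be sketched in a few lines of Python. Here, `call_gpt` is a hypothetical stand-in for whatever function sends a prompt to the model and returns its reply; the template strings and names are illustrative, not ClaimMaster’s internals:

```python
# Illustrative sketch: feed the output of one GPT prompt into the [INPUT]
# slot of the next, mimicking ClaimMaster's "Text in the GPT Output window"
# chaining between templates.
from typing import Callable, List

def run_prompt_chain(
    initial_input: str,
    templates: List[str],
    call_gpt: Callable[[str], str],
) -> str:
    """Apply each template in order, using each result as the next input."""
    text = initial_input
    for template in templates:
        prompt = template.replace("[INPUT]", text)
        text = call_gpt(prompt)
    return text

templates = [
    "Draft a Background section for: [INPUT]",
    "Re-write the following text as concisely as possible:\n###\n[INPUT]\n###",
]
# For demonstration, use a dummy model that simply echoes the prompt it received.
result = run_prompt_chain("a self-cleaning litter box", templates, lambda p: p)
```

In practice you would review and edit the intermediate text between steps, just as the manual workflow above describes.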
Alternatively, you can also try to fine-tune your results by constructing prompts that include multiple instructions to GPT, as explained in section 2 above.
5. Adjust “creativity” level for text output
ClaimMaster also lets you specify a “randomness/creativity” level (a.k.a. ‘temperature’) for each GPT prompt. This level controls how often the GPT model outputs a less likely result. By increasing the “randomness/creativity” level, you increase the likelihood of more random and usually more creative output. However, creativity is not the same as truthfulness, and higher values can lead to GPT “hallucinations.” Therefore, for most technical use cases, such as explaining background technology or terms, we recommend keeping the “randomness/creativity” level low.
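Under the hood, that setting corresponds to the `temperature` field on the request sent to the GPT service. The payload below is a sketch in the shape of an OpenAI Chat Completions request (the prompt text is illustrative); a value of 0 keeps the output close to deterministic, while higher values increase variance:

```python
# Illustrative sketch: a chat-completion request payload with a low
# temperature for a factual, technical drafting task.
request = {
    "model": "gpt-4",
    "messages": [
        {
            "role": "user",
            "content": "Explain the term 'cache coherence' for a patent "
                       "background section. Keep it under 150 words.",
        },
    ],
    # 0.0 = most deterministic; higher values increase randomness/"creativity"
    "temperature": 0.0,
}
```

For the rewrite prompt in section 2, a temperature near 0 similarly minimizes unwanted changes to the claim language.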
How are you using GPT prompts? Please share your favorite and most effective prompts and techniques for GPT prompt engineering with us, and we’ll incorporate them into future releases and patent drafting templates.
Additional Resources
If you’re interested in additional GPT resources related to “prompt engineering”, we recommend: