Configuring document workspaces for GPT/LLM patent drafting


ClaimMaster's patent application drafting tools use a local database of selected document sections (i.e., "document workspaces") to perform Retrieval Augmented Generation (RAG) when creating text with GPT/LLM models. RAG is the process of retrieving relevant contextual information from a database of document snippets and passing that information to the LLM alongside the user's prompt to improve its output quality. The RAG architecture in ClaimMaster uses semantic searching to find contextually relevant data based on conceptual similarity to the input prompt.
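
As a rough illustration of how a RAG retrieval step works in general, the minimal sketch below embeds the prompt as a vector, scores stored snippets by cosine similarity, and prepends the best matches to the prompt. This is a generic, hypothetical example for orientation (the prompt vector and snippet index are assumed placeholders), not ClaimMaster's actual implementation.

```python
# Generic RAG sketch for illustration -- not ClaimMaster's internal code.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(prompt_vector, snippet_index, top_k=3):
    # snippet_index is a list of (text, vector) pairs built ahead of time
    # from the workspace documents, using the same embedding model.
    scored = [(cosine_similarity(prompt_vector, vec), text)
              for text, vec in snippet_index]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [text for _, text in scored[:top_k]]

def build_augmented_prompt(user_prompt, snippets):
    # Prepend the retrieved context to the user's prompt before calling the LLM.
    context = "\n".join(snippets)
    return f"Use the following information to answer:\n### {context} ###\n\n{user_prompt}"
```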

RAG is particularly useful for GenAI-enhanced patent drafting because it allows you to pass text from selected document sections of previous applications (e.g., Background, Abstract, claims, etc.) to the LLM to improve the model's output. For example, if you are drafting a patent application for a particular client in the semiconductor industry, the default GPT/LLM can draft text using its base knowledge of semiconductors, but it might not know enough about the client's niche area, and it doesn't have access to the examples or specific term definitions used in previous applications for the same client. With RAG, you can get much better results by setting up a document workspace that includes sections of your client's previous applications, enabling the GPT/LLM to use that context to generate more relevant output for this client. When you use the document workspaces feature in ClaimMaster, RAG is performed automatically for each prompt that is configured to use that document workspace.


To set up local document workspaces for your GPT/LLM prompts, perform the following steps:

  1. Configure the source for generating vector embeddings of your documents

    Semantic searches convert text queries into their mathematical vector format (i.e., vector embeddings) and use those vectors to search the vector database for the closest conceptual matches. Vector embeddings are generated using a GPT/LLM source. Notably, the same GPT/LLM source should be used to generate embeddings both for the prompts and for the documents stored in the vector database.

    To specify the LLM for generating embeddings, click on the Preferences, Extra Tools, Help menu, then click on Preferences, switch to the Patent Drafting tab, and click the GPT/LLM Settings button. In the settings window that comes up, specify a valid RAG embeddings source along with the LLM/GPT sources configured in GPT/LLM settings. Make sure to use the same service, either OpenAI, Azure OpenAI, or Ollama. If you are using a private GPT model hosted in Azure or OpenAI, you'll need to specify the endpoint for generating embeddings provided to you by that service, as illustrated in the sketch after this step.

    GPT/LLM embedding source
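
    For orientation, here is a minimal sketch of what generating an embedding looks like through the OpenAI Python SDK. The model name is an illustrative assumption, and an Azure OpenAI or Ollama source works the same way conceptually; the key constraint is that one embedding model handles both the stored documents and the incoming prompts.

    ```python
    # Illustrative sketch only -- the model name and client configuration are
    # assumptions, not ClaimMaster's internal code.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    EMBEDDING_MODEL = "text-embedding-3-small"  # must match the model used to index documents

    def embed(text: str) -> list[float]:
        # The same model must embed both the stored document snippets and the
        # incoming prompts; otherwise the vectors are not comparable.
        response = client.embeddings.create(model=EMBEDDING_MODEL, input=text)
        return response.data[0].embedding
    ```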

  2. Configure one or more local document workspaces

    Next, specify which documents you want to include in your local document workspace. A workspace could be a set of documents specific to a particular client, technology, etc.

    local document workspace

    Note – the initial process of generating embeddings is quite slow. To speed up the generation of embeddings, it is recommended that you remove all sections from your documents that would not be useful for text generation. This will also improve search results when looking up relevant text snippets.

    Once you load your document sections into the workspace, you can check its contents by switching to the “Workspace Contents” tab and clicking on the “View stored workspace snippets” button. This will let you preview all of the stored workspace contents (non-editable).
    show workspace contents

  3. Configure default workspace/instructions for any GPT/LLM prompt template (Optional)

    Configure your GPT/LLM prompts to use one of the specified local document workspaces under the Local Documents tab, as shown here. This will be the default workspace for this prompt template. You can also directly set the desired workspace right before you send the drafting prompt to your GPT/LLM source.
    configure_workspace_for_gpt_template

    As part of specifying the workspace settings for the prompt, you can provide specific RAG prompt/instructions that will be used when providing context to the GPT/LLM (along with the original prompt). The [RAGDATA] placeholder field will be populated with the text snippets retrieved from the workspace when the instructions are sent to the LLM. You can further use ### separators to distinguish the returned text snippets from the instructions to the LLM. In particular, you can control how the LLM interprets and uses information from your provided context. For example, if the prompt is "use ONLY the following information to answer the posed question: ### [RAGDATA] ###", then the LLM will use only the information in your snippets to generate the response. On the other hand, if your RAG prompt is "use the following information in addition to your knowledge to answer the posed question: ### [RAGDATA] ###", then the LLM will supplement its response with your provided context, but will not rely on it exclusively. The sketch below illustrates the substitution.
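
    As a rough illustration of the placeholder mechanics, here is a hypothetical sketch; the [RAGDATA] name comes from ClaimMaster's templates, but the substitution logic and example snippets below are assumptions made purely for demonstration.

    ```python
    # Hypothetical sketch of [RAGDATA] expansion -- not ClaimMaster's source code.
    RAG_INSTRUCTIONS = (
        "Use ONLY the following information to answer the posed question: "
        "### [RAGDATA] ###"
    )

    def expand_rag_instructions(instructions: str, snippets: list[str]) -> str:
        # Replace the placeholder with the snippets retrieved from the workspace.
        return instructions.replace("[RAGDATA]", "\n".join(snippets))

    # Example usage with made-up snippets:
    snippets = [
        "The etching process uses a fluorine-based plasma.",
        "As used herein, 'substrate' refers to a silicon wafer.",
    ]
    print(expand_rag_instructions(RAG_INSTRUCTIONS, snippets))
    ```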

  4. Specify the desired workspace before sending the GPT/LLM prompt (Optional)

    When generating text with the LLM, select the local workspace (and other related workspace settings) as part of the configured GPT/LLM prompt. You can use either the default workspace specified in the template or pick another one using the drop-down menu:

    You can also test your currently selected LLM prompt against the contents of the configured workspace. Just click the "Test with current LLM prompt" button to see which snippets from the document workspace are extracted from the vector database and used to supplement your prompt:

    test rag with workspace

  5. Send the prompt to the configured GPT/LLM

    Once you've specified the desired workspace for the prompt (or are relying on the default workspace configured for the prompt template), click "Send prompt to LLM" to send the prompt to the configured GPT/LLM source for processing.

    send gpt/llm prompt

    Notably, by default ClaimMaster will attempt to pull the text snippets from the database that are most contextually similar to your prompt. However, in some cases you may want to retrieve all contents of a particular workspace that match a particular section filter (e.g., all Background or Abstract sections). In this case, simply insert the wildcard character "*" into your GPT/LLM prompt, as shown below.

    wildcard_rag_instruction

    When ClaimMaster sees "*" in the prompt, it will not attempt to find the closest matches to your prompt in the vector database, but will instead return all entries (limited by the configured number of snippets) from the specified workspace that match the specified document section filter. A rough sketch of this logic follows.
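
    The sketch below is an illustrative reconstruction of the wildcard behavior, not ClaimMaster's source code; the Snippet type, function name, and semantic_search callback are all assumptions.

    ```python
    # Illustrative sketch of the "*" wildcard behavior -- all names here are
    # hypothetical, not ClaimMaster's actual implementation.
    from dataclasses import dataclass

    @dataclass
    class Snippet:
        section: str  # e.g., "Background", "Abstract", "Claims"
        text: str

    def fetch_snippets(prompt, workspace, section_filter, max_snippets, semantic_search):
        if "*" in prompt:
            # Wildcard: skip semantic search and return every snippet that
            # passes the section filter, capped at the snippet limit.
            matches = [s for s in workspace if s.section == section_filter]
            return matches[:max_snippets]
        # Default: semantic search for the closest conceptual matches.
        return semantic_search(prompt, workspace, top_k=max_snippets)
    ```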


For more information on this feature, check out the Online Manual.