OpenAI's GPT API Settings
If you have your own account with OpenAI or Azure OpenAI, ClaimMaster lets you pass in your API account information for completing GPT prompts (you'll incur your own costs for the generated text). You may find this option beneficial because it lets you use your own custom endpoints (Azure) or more advanced GPT models (ClaimMaster currently uses GPT-4o by default), as well as raise the maximum number of tokens to generate more text than ClaimMaster's default settings allow.
You can set GPT API settings in ClaimMaster's preferences:
In the following window, you can configure the following items.
- Specifies whether to use OpenAI's GPT service, the Microsoft Azure OpenAI service, or a local/private LLM server. For OpenAI and Azure, ClaimMaster has configured stateless, private OpenAI GPT models for its customers to use by default, and you can also specify your own endpoint.
- Azure GPT endpoint or local LLM server address (if the source is set to "Local LLM"):
- Here you can specify your own Azure endpoint to use with ClaimMaster if you have a separate agreement with Microsoft Azure in place and have configured a private GPT service. For Azure GPT access, the endpoint should be the full address, such as https://{YOUR_RESOURCE_NAME}.openai.azure.com/openai/deployments/{YOUR_DEPLOYMENT_NAME}/chat/completions?api-version=2023-08-01-preview
- For local LLMs, the address could be a localhost address of the LLM server, such as http://localhost:4891/v1
- Note - this entry is not used for the OpenAI GPT service.
- API Key - specifies the GPT API key for your custom OpenAI or Azure OpenAI service.
- Model for API - specifies the GPT model name for your custom OpenAI or Azure OpenAI service.
- Max tokens - specifies the maximum number of tokens allowed by the GPT service (the example request after this list shows where this value fits).
- Enter the secret key (if provided) to unlock access to a more advanced GPT model.
- Click this button to save your information.
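For reference, the settings above correspond to the fields of a standard chat-completions request. The sketch below (Python, using the requests library) shows how an Azure-style endpoint of the format given above can be called directly, which can be handy for confirming that your endpoint, API key, and deployment name work before entering them into ClaimMaster. The resource name, deployment name, key, prompt, and token limit are all placeholder values, not ClaimMaster defaults.

```python
import requests

# Placeholder values -- substitute your own Azure resource, deployment, and key.
ENDPOINT = (
    "https://{YOUR_RESOURCE_NAME}.openai.azure.com/openai/deployments/"
    "{YOUR_DEPLOYMENT_NAME}/chat/completions?api-version=2023-08-01-preview"
)
API_KEY = "YOUR_API_KEY"

response = requests.post(
    ENDPOINT,
    headers={"api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "messages": [
            {"role": "user", "content": "Summarize claim 1 in plain language."}
        ],
        # Corresponds to the "Max tokens" setting above.
        "max_tokens": 1024,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

A local OpenAI-compatible LLM server accepts the same request shape, except that you POST to {server address}/chat/completions (for example, http://localhost:4891/v1/chat/completions), typically pass the key in an Authorization: Bearer header, and include a "model" field in the JSON body instead of a deployment name in the URL.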