Explanation:
from openai import OpenAI

client = OpenAI(
    api_key="...",
    base_url="https://resource1.openai.azure.com/openai/v1/",
)
response = client.responses.create(
    model="my-mini-gpt",
    ...
)
For Azure OpenAI in Microsoft Foundry, the base_url uses the Azure OpenAI resource name in the endpoint format:
https://<resource-name>.openai.azure.com/openai/v1/
In the question, the Azure OpenAI resource is named Resource1, so the first blank must be resource1. Microsoft documentation for Azure OpenAI v1 endpoints confirms that the endpoint must use the ...openai.azure.com/openai/v1/ path.
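As a quick sketch, the endpoint can be built from the resource name alone; here resource1 is the resource name from the question, and the /openai/v1/ path segment is fixed by the v1 endpoint format:

```python
# Sketch: deriving the Azure OpenAI v1 base_url from the resource name.
resource_name = "resource1"  # from the question; lowercase in the hostname
base_url = f"https://{resource_name}.openai.azure.com/openai/v1/"

print(base_url)  # https://resource1.openai.azure.com/openai/v1/
```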
For the model parameter, Azure OpenAI requires the deployment name, not the underlying model name. Microsoft's documentation states that Azure OpenAI always requires the deployment name when calling its APIs, even though the parameter is named model.
The deployed model is gpt-4.1-mini, but the deployment name is my-mini-gpt. Therefore, the second blank must be:
model="my-mini-gpt"
So the correct selections are:
base_url blank = resource1
model blank = my-mini-gpt