From 5c85c80fb9b2e49756a5c34f07b969d86b0bb3b4 Mon Sep 17 00:00:00 2001 From: liamsommer-mx Date: Wed, 21 Jan 2026 12:54:16 +0100 Subject: [PATCH 1/6] SAS-1719: updated openai docs to use Microsoft Foundry as name instead of Azure OpenAI --- .../external-platforms/openai.md | 56 +++++++++---------- 1 file changed, 26 insertions(+), 30 deletions(-) diff --git a/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md b/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md index df90a235bfd..d2614701d0e 100644 --- a/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md +++ b/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md @@ -11,7 +11,7 @@ aliases: ## Introduction {#introduction} -The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) allows you to integrate generative AI into your Mendix app. It is compatible with [OpenAI's platform](https://platform.openai.com/) as well as [Azure's OpenAI service](https://oai.azure.com/). +The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) allows you to integrate generative AI into your Mendix app. It is compatible with [OpenAI's platform](https://platform.openai.com/) and [Microsoft Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/what-is-ai-foundry), where you can access OpenAI models. ### Features {#features} @@ -21,7 +21,7 @@ OpenAI provides market-leading LLM capabilities with GPT-4: * Creativity – Generate, edit, and iterate with end-users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning an end-user’s writing style. * Longer context – GPT-4 can handle over 25,000 words of text, allowing for use cases like long-form content creation, extended conversations, and document search and analysis. -Mendix provides dual-platform support for both [OpenAI](https://platform.openai.com/) and [Azure OpenAI](https://oai.azure.com/). 
+Mendix provides support for [OpenAI](https://platform.openai.com/) and [Microsoft Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/what-is-ai-foundry) (formerly known as Azure OpenAI or Cognitive Services). Microsoft Foundry is Microsoft's unified AI platform that provides access to OpenAI models. With the current version, Mendix supports the Chat Completions API for [text generation](https://platform.openai.com/docs/guides/text-generation), the Image Generations API for [images](https://platform.openai.com/docs/guides/images), the Embeddings API for [vector embeddings](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings), and indexes via [Azure AI Search](https://learn.microsoft.com/en-us/azure/search/) for knowledge base retrieval. @@ -33,7 +33,7 @@ By integrating Azure AI Search, the OpenAI Connector enables knowledge base retr ### Prerequisites {#prerequisites} -To use this connector, you need to either sign up for an [OpenAI account](https://platform.openai.com/) or have access to deployments at [Azure OpenAI](https://oai.azure.com/). +To use this connector, you need to either sign up for an [OpenAI account](https://platform.openai.com/) or have access to a [Microsoft Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/what-is-ai-foundry) project with OpenAI models deployed. ### Dependencies {#dependencies} @@ -74,33 +74,29 @@ The following inputs are required for the OpenAI configuration: | Endpoint | This is the API endpoint (for example, `https://api.openai.com/v1`) | | Token | This is the access token to authorize your API call.
To get an API key, follow these steps:<br>
  1. Create an account and sign in at [OpenAI](https://platform.openai.com/).
  2. Go to the [API key page](https://platform.openai.com/account/api-keys) to create a new secret key.
  3. Copy the API key and save it somewhere safe.<br>
| -#### Azure OpenAI Configuration {#azure-openai-configuration} +#### Microsoft Foundry Configuration {#azure-openai-configuration} -The following inputs are required for the Azure OpenAI configuration: +The following inputs are required for the Microsoft Foundry configuration: | Parameter | Value | | -------------- | ------------------------------------------------------------ | | Display name | This is the name identifier of a configuration (for example, *MyConfiguration*). | -| API type | Select `AzureOpenAI`. | -| Endpoint | This is the API endpoint (for example, `https://your-resource-name.openai.azure.com/openai/deployments/`).
For details on how to obtain `your-resource-name`, see the [Obtaining Azure OpenAI Resource Name](#azure-resource-name) section below. | +| API type | Select `AzureOpenAI` for both Azure OpenAI and Microsoft Foundry deployments. | +| Endpoint | This is the API endpoint (for example, `https://your-resource-name.openai.azure.com/openai/deployments/`).
For details on how to obtain `your-resource-name`, see the [Obtaining Resource Name](#azure-resource-name) section below. | | Azure key type | This is the type of token that is entered in the API key field. For Azure OpenAI, two types of keys are currently supported: Microsoft Entra token and API key.
For details on how to generate a Microsoft Entra access token, see [How to Configure Azure OpenAI Service with Managed Identities](https://learn.microsoft.com/en-gb/azure/ai-services/openai/how-to/managed-identity). Alternatively, if your organization allows it, you could use the Azure `api-key` authentication mechanism. For details on how to obtain an API key, see the [Obtaining Azure OpenAI API keys](#azure-api-keys) section below. For more information, see the [Technical Reference](#technical-reference) section. |
| Token / API key | This is the access token to authorize your API call. |

-##### Obtaining the Azure OpenAI Resource Name {#azure-resource-name}
+##### Obtaining the Resource Name {#azure-resource-name}

-1. Go to the [Azure OpenAI portal](https://oai.azure.com/) and sign in.
-2. In the upper-right corner, next to your Avatar, click on the scope dropdown.
-3. The tab shows your Directory, Subscription, and Azure OpenAI resource.
-4. Make sure the right Azure OpenAI resource is selected.
-5. Use the copy icon ({{% icon name="copy" %}}) and use it as your resource name in the endpoint URL.
+1. Go to the [Microsoft Foundry portal](https://ai.azure.com/) and sign in.
+2. Select the right resource in the upper-right corner.
+3. The home page shows `Resource configuration`, where you can find the `Azure OpenAI endpoint`.
+4. Use the copy icon ({{% icon name="copy" %}}) and use it as your resource name in the endpoint URL.

-##### Obtaining the Azure OpenAI API Keys {#azure-api-keys}
+##### Obtaining API Keys {#azure-api-keys}

-1. Go to the [Azure OpenAI portal](https://oai.azure.com/) and sign in.
-2. In the upper-right corner, next to your Avatar, click on the scope dropdown.
-3. The tab shows your Directory, Subscription, and Azure OpenAI resource.
-4. Make sure the right Azure OpenAI resource is selected.
-5. 
You can now view ({{% icon name="view" %}}) and copy ({{% icon name="copy" %}}) the value of the **key1** or **key2** field as your API key while setting up the configuration. Note that these keys might not be visible for everyone in the Azure OpenAI Portal, depending on your organization's security settings. +1. On the same page where the resource name is located, you can find your API key information. +2. You can now view ({{% icon name="view" %}}) and copy ({{% icon name="copy" %}}) the value of the **key1** or **key2** field as your API key while setting up the configuration. Note that these keys might not be visible for everyone in the portal, depending on your organization's security settings. ##### Adding Azure AI Search Resources {#azure-ai-search} @@ -119,7 +115,7 @@ Currently, the only supported authorization method for Azure AI Search resources #### Configuring the OpenAI Deployed Models -A [Deployed Model](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) represents a GenAI model instance that can be used by the app to generate text, embeddings, or images. For every model you want to invoke from your app, you need to create a `OpenAIDeployedModel` record, a specialization of `DeployedModel`. In addition to the model display name and a technical name/identifier, an OpenAI deployed model contains a reference to the additional connection details as configured in the previous step. For OpenAI, a set of common models will be prepopulated automatically upon saving the configuration. If you want to use additional models that are made available by OpenAI you need to configure additional OpenAI deployed models in your Mendix app. For Azure OpenAI no deployed models are created by default. The technical model names depend on the deployment names that were chosen while deploying the models in the [Azure Portal](https://oai.azure.com/resource/deployments). 
Therefore in this case you always need to configure the deployed models manually in your Mendix app.
+A [Deployed Model](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) represents a GenAI model instance that can be used by the app to generate text, embeddings, or images. For every model you want to invoke from your app, you need to create an `OpenAIDeployedModel` record, a specialization of `DeployedModel`. In addition to the model display name and a technical name/identifier, an OpenAI deployed model contains a reference to the additional connection details as configured in the previous step. For OpenAI, a set of common models can be created automatically using the designated button. If you want to use additional models that are made available by OpenAI, you need to configure additional OpenAI deployed models in your Mendix app. For Microsoft Foundry, the model names can be different. The technical model names depend on the deployment names that were chosen while deploying the models in the [Microsoft Foundry portal](https://ai.azure.com/). Therefore, in this case, you always need to configure the deployed models manually in your Mendix app.

1. If needed, click the three dots for an OpenAI configuration to open the "Manage Deployed Models" pop-up.
2. For every additional model, add a record. The following fields are required:

    | Field | Description |
    | -------------- | ------------------------------------------------------------ |
    | Display name | This is the reference to the model for app users in case they have to select which one is to be used. |
- | Deployment name / Model name | This is the technical reference for the model. For OpenAI this is equal to the [model aliases](https://platform.openai.com/docs/models#current-model-aliases). For Azure OpenAI this is the deployment name from the [Azure Portal](https://oai.azure.com/resource/deployments). 
+ | Deployment name / Model name | This is the technical reference for the model. For OpenAI this is equal to the [model aliases](https://platform.openai.com/docs/models#current-model-aliases). For Microsoft Foundry this is the deployment name from the [Microsoft Foundry portal](https://ai.azure.com/). | Output modality| Describes what the output of the model is. This connector currently supports Text, Embedding, and Image. | Input modality| Describes what input modalities are accepted by the model. This connector currently supports Text and Image. | Azure API version | Azure OpenAI only. This is the API version to use for this operation. It follows the `yyyy-MM-dd` format. For supported versions, see [Azure OpenAI documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference). The supported versions can vary depending on the type of model, so make sure to look for the right section (such as Chat Completions, Image Generation, or Embeddings) on that page. | @@ -196,7 +192,7 @@ For `Chat Completions with History`, `FileCollection` can optionally be added to Use the two OpenAI-specific microflow actions from the toolbox [Files: Initialize Collection with OpenAI File](#initialize-filecollection) and [Files: Add OpenAIFile to Collection](#add-file) to construct the input with either `FileDocuments` (for vision, it needs to be of type `Image`) or `URLs`. There are similar file operations exposed by the GenAI commons module that can be used for vision requests with the OpenAIConnector; however, these generic operations do not support the optional OpenAI-specific `Detail` attribute. {{% alert color="info" %}} -OpenAI and Azure OpenAI for vision do not necessarily all models provide feature parity when it comes to combining functionalities. 
In other words, Azure OpenAI does not support the use of JSON mode and function calling in combination with image (vision) input for certain models, so make sure to check the [Azure Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). +OpenAI and Microsoft Foundry do not necessarily provide feature parity across all models when it comes to combining functionalities. In other words, Microsoft Foundry does not support the use of JSON mode and function calling in combination with image (vision) input for certain models, so make sure to check the [Azure Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). When you use Azure OpenAI, it is recommended to set the optional `MaxTokens` input parameter; otherwise, the return output may be cut off. {{% /alert %}} @@ -212,7 +208,7 @@ For [Chat Completions (without history)](/appstore/modules/genai/genai-for-mx/co You can send up to 100 pages across multiple files, with a maximum combined size of 32 MB per conversation. Currently, processing multiple files with OpenAI is not always guaranteed and can lead to unexpected behavior (for example, only one file being processed). {{% alert color="info" %}} -Azure OpenAI does not currently support file input. +Microsoft Foundry does not currently support file input. Note that the model uses the file name when analyzing documents, which may introduce a potential vulnerability to prompt injection. To reduce this risk, consider modifying the string or not passing it at all. {{% /alert %}} @@ -323,13 +319,13 @@ To check your JDK version and update it if necessary, follow these steps: 2. You may also need to update Gradle. To do this, go to **Edit** > **Preferences** > **Deployment** > **Gradle directory**. Click **Browse** and select the appropriate Gradle version from the Mendix folder. For Mendix 10.10 and above, use Gradle 8.5. For Mendix 10 versions below 10.10, use Gradle 7.6.3. 
Then save your settings by clicking **OK**. 3. Rerun the project. -### Chat Completions with Vision and JSON Mode (Azure OpenAI) +### Chat Completions with Vision and JSON Mode (Microsoft Foundry) -Azure OpenAI does not support the use of JSON mode and function calling in combination with image (vision) input and will return a `400 - model error`. Make sure the optional input parameters `ResponseFormat` and `FunctionCollection` are set to `empty` for all chat completion operations if you want to use vision with Azure OpenAI. +Microsoft Foundry does not support the use of JSON mode and function calling in combination with image (vision) input and will return a `400 - model error`. Make sure the optional input parameters `ResponseFormat` and `ToolCollection` are set to `empty` for all chat completion operations if you want to use vision with Microsoft Foundry. -### Chat Completions with Vision Response is Cut Off (Azure OpenAI) +### Chat Completions with Vision Response is Cut Off (Microsoft Foundry) -When you use Azure OpenAI, it is recommended to set the optional `MaxTokens` input parameter; otherwise, the response may be cut off. For more details, see the [Azure OpenAI Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision?tabs=rest%2Csystem-assigned%2Cresource#call-the-chat-completion-apis). +When you use Microsoft Foundry, it is recommended to set the optional `MaxTokens` input parameter; otherwise, the response may be cut off. For more details, see the [Azure OpenAI Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision?tabs=rest%2Csystem-assigned%2Cresource#call-the-chat-completion-apis). 
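The `MaxTokens` recommendation above maps to the `max_tokens` field of the underlying Chat Completions request body. The following is a minimal Python sketch of what an explicit cap looks like at the request level; the helper function and its default value are illustrative assumptions, not connector internals:

```python
import json


def build_chat_body(prompt, max_tokens=1024):
    """Build a Chat Completions request body with an explicit max_tokens cap.

    Leaving max_tokens unset can let the service apply a low default for
    some vision models, which is what produces cut-off replies.
    """
    return json.dumps({
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    })


# A generous cap leaves room for a complete description of an image.
body = build_chat_body("Describe the attached diagram.", max_tokens=2048)
```

Setting the connector's `MaxTokens` input parameter populates this field for you; the sketch only illustrates why an unset cap can truncate output.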
### Attribute or Reference Required Error Message After Upgrade @@ -342,9 +338,9 @@ If you encounter an error caused by conflicting Java libraries, such as `java.la ## Read More {#read-more} * [Prompt Engineering – OpenAI Documentation](https://platform.openai.com/docs/guides/prompt-engineering) -* [Introduction to Prompt Engineering – Microsoft Azure Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/prompt-engineering) -* [Prompt Engineering Techniques – Microsoft Azure Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions) +* [Introduction to Prompt Engineering – Microsoft Foundry Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/prompt-engineering) +* [Prompt Engineering Techniques – Microsoft Foundry Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions) * [ChatGPT Prompt Engineering for Developers - DeepLearning.AI](https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers) * [Function Calling - OpenAI Documentation](https://platform.openai.com/docs/guides/function-calling) * [Vision - OpenAI Documentation](https://platform.openai.com/docs/guides/vision) -* [Vision - Azure OpenAI Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision) +* [Vision - Microsoft Foundry Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision) From 5ac024a00a2a5a67e7c5d2764bf621dacbe53fe1 Mon Sep 17 00:00:00 2001 From: liamsommer-mx Date: Wed, 21 Jan 2026 14:40:43 +0100 Subject: [PATCH 2/6] added limitation for embeddings and document chat --- .../mendix-cloud-genai/Mx GenAI Connector.md | 16 ++++++++++------ 1 file changed, 10 insertions(+), 6 deletions(-) diff --git 
a/content/en/docs/marketplace/genai/mendix-cloud-genai/Mx GenAI Connector.md b/content/en/docs/marketplace/genai/mendix-cloud-genai/Mx GenAI Connector.md index a38cee15ab3..3650bc81b30 100644 --- a/content/en/docs/marketplace/genai/mendix-cloud-genai/Mx GenAI Connector.md +++ b/content/en/docs/marketplace/genai/mendix-cloud-genai/Mx GenAI Connector.md @@ -95,7 +95,7 @@ To use retrieval and generation in a single operation, an internally predefined {{< figure src="/attachments/appstore/platform-supported-content/modules/genai/mxgenAI-connector/mxgenaiconnector-rag.png" >}} -The returned `Response` includes [References](/appstore/modules/genai/genai-for-mx/commons/#reference) for each retreived chunk from the knowledge base. +The returned `Response` includes [References](/appstore/modules/genai/genai-for-mx/commons/#reference) for each retrieved chunk from the knowledge base. Optionally, you can control both reference creation and the output returned for the model during the insertion step: @@ -134,7 +134,9 @@ Document chat enables the model to interpret and analyze documents, such as PDFs For [Chat Completions (without history)](/appstore/modules/genai/genai-for-mx/commons/#chat-completions-without-history), `OptionalFileCollection` is an optional input parameter. For [Chat completions (with history)](/appstore/modules/genai/genai-for-mx/commons/#chat-completions-with-history), a `FileCollection` can optionally be added to individual user messages using [Add Message to Request](/appstore/modules/genai/genai-for-mx/commons/#chat-add-message-to-request). -In the entire conversation, you can pass up to five documents that are smaller than 4.5 MB each. The following file types are accepted: PDF, CSV, DOC, DOCX, XLS, XLSX, HTML, TXT, and MD. +In the entire conversation, you can pass up to five documents that are smaller than 4.5 MB each. 
Note that there is also a practical, model-dependent limit on the number of pages a document can contain: typically around 100 pages, but this is not fixed and can vary with the selected model and the complexity of the file (for example, images, heavy formatting, or embedded content can reduce the effective page limit). If you expect to work with very large documents, consider splitting them into smaller files or providing summarized extracts to improve reliability.
+
+The following file types are accepted: PDF, CSV, DOC, DOCX, XLS, XLSX, HTML, TXT, and MD.

{{% alert color="info" %}}
The model uses the file name when analyzing documents, which may introduce a potential vulnerability to prompt injection. To reduce this risk, consider modifying file names before including them in the request.
@@ -201,12 +203,14 @@ The knowledge chunks are stored in an AWS OpenSearch Serverless database to ensu

##### Data Chunks

-To add data to the knowledge base, you need discrete pieces of information and create knowledge base chunks for each one. Use the GenAICommons operations to first [initialize a ChunkCollection object](/appstore/modules/genai/genai-for-mx/commons/#chunkcollection-create), and then [add a KnowlegdebaseChunk](/appstore/modules/genai/genai-for-mx/commons/#chunkcollection-add-knowledgebasechunk) object to it for each piece of information. Both can be found in the **Toolbox** inside of the **GenAI Knowledge Base (Content)** category.
+To add data to the knowledge base, you need discrete pieces of information and create knowledge base chunks for each one. Use the GenAICommons operations to first [initialize a ChunkCollection object](/appstore/modules/genai/genai-for-mx/commons/#chunkcollection-create), and then [add a KnowledgeBaseChunk](/appstore/modules/genai/genai-for-mx/commons/#chunkcollection-add-knowledgebasechunk) object to it for each piece of information. Both can be found in the **Toolbox** inside the **GenAI Knowledge Base (Content)** category. 
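The chunk-creation step above happens in Mendix microflows, but the underlying idea is plain text splitting. A minimal Python sketch of fixed-size overlapping chunking follows; the 1500-character window and 200-character overlap are illustrative values, not connector defaults:

```python
def chunk_text(text, size=1500, overlap=200):
    """Split text into fixed-size chunks, with `overlap` shared characters
    between consecutive chunks to preserve context across boundaries."""
    if not 0 <= overlap < size:
        raise ValueError("overlap must be non-negative and smaller than size")
    chunks = []
    step = size - overlap  # how far the window advances each iteration
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):  # last window reached the end
            break
    return chunks
```

Keeping the window well under the 2048-character embedding limit mentioned below avoids failed embedding calls later.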
##### Chunking Strategy

Dividing data into chunks is crucial for model accuracy, as it helps optimize the relevance of the content. The best chunking strategy is to keep a balance between reducing noise by keeping chunks small and retaining enough content within a chunk to get relevant results. Creating overlapping chunks can help preserve more context while maintaining a fixed chunk size. It is recommended to experiment with different chunking strategies to decide the best strategy for your data. In general, if chunks are logical and meaningful to humans, they will also make sense to the model. A chunk size of approximately 1500 characters with overlapping chunks has been proven to be effective for longer texts in the past.
+Since embedding operations have a maximum character limit of 2048 characters per chunk, you must ensure that your chunks do not exceed this limit before submitting them for embedding. Chunks exceeding this limit will cause the embedding operation to fail, so validate your input data accordingly.
+
The chunk collection can then be stored in the knowledge base using one of the following operations:

##### Add Data Chunks to Your Knowledge Base
@@ -231,7 +235,7 @@ The following toolbox actions can be used to retrieve knowledge data from a coll

1. `Retrieve` retrieves knowledge base chunks from the knowledge base. You can use pagination via the `Offset` and `MaxNumberOfResults` parameters or apply filtering via a `MetadataCollection` or `MxObject`.
2. `Retrieve & Associate` is similar to the `Retrieve` but associates the returned chunks with a Mendix object if they were linked at the insertion stage.

-    {{% alert color="info" %}}You must define your entity specialized from `KnowledgeBaseChunk`, which is associated to the entity that was used to pass a MendixObject during the [insertion stage](#knowledge-base-insertion). 
+ {{% alert color="info" %}}You must define your entity specialized from `KnowledgeBaseChunk`, which is associated with the entity that was used to pass a MendixObject during the [insertion stage](#knowledge-base-insertion). {{% /alert %}} 3. `Embed & Retrieve Nearest Neighbors` retrieves a list of type [KnowledgeBaseChunk](/appstore/modules/genai/genai-for-mx/commons/#knowledgebasechunk-entity) from the knowledge base that are most similar to a given `Content` by calculating the cosine similarity of its vectors. @@ -239,11 +243,11 @@ The following toolbox actions can be used to retrieve knowledge data from a coll ### Embedding Operations -If you are working directly with embedding vectors for specific use cases that do not include knowledge base interaction (for example clustering or classification), the below operations are relevant. For practical examples and guidance, consider referring to the [GenAI Showcase Application](https://marketplace.mendix.com/link/component/220475) showcase to see how these embedding-only operations can be used. +If you are working directly with embedding vectors for specific use cases that do not include knowledge base interaction (for example, clustering or classification), the below operations are relevant. For practical examples and guidance, consider referring to the [GenAI Showcase Application](https://marketplace.mendix.com/link/component/220475) showcase to see how these embedding-only operations can be used. To implement embeddings into your Mendix application, you can use the microflows in the **Knowledge Bases & Embeddings** folder inside of the GenAICommons module. Both microflows for embeddings are exposed as microflow actions under the **GenAI (Generate)** category in the **Toolbox** in Mendix Studio Pro. -These microflows require a [DeployedModel](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) that determines the model and endpoint to use. 
Depending on the selected operation, an `InputText` String or a [ChunkCollection](/appstore/modules/genai/genai-for-mx/commons/#chunkcollection) needs to be provided. +These microflows require a [DeployedModel](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) that determines the model and endpoint to use. Depending on the selected operation, an `InputText` String or a [ChunkCollection](/appstore/modules/genai/genai-for-mx/commons/#chunkcollection) needs to be provided. Note that embedding operations enforce a maximum character limit of 2048 characters per chunk; input exceeding this limit will cause the embedding operation to fail, so validate your input before submitting it for embedding. #### Embeddings (String) From 92a7813b3ecb81de824f326357438040c0fd07cd Mon Sep 17 00:00:00 2001 From: liamsommer-mx Date: Wed, 21 Jan 2026 14:58:50 +0100 Subject: [PATCH 3/6] changed other occurrences of "Azure OpenAI" to "Microsoft Foundry" --- content/en/docs/marketplace/genai/_index.md | 6 +++--- .../marketplace/genai/concepts/function-calling.md | 2 +- .../genai/concepts/prompt-engineering.md | 2 +- content/en/docs/marketplace/genai/how-to/_index.md | 2 +- .../genai/how-to/integrate_function_calling.md | 2 +- .../genai/how-to/start_from_a_starter_app.md | 4 ++-- .../genai/how-to/start_from_blank_app.md | 4 ++-- .../reference-guide/external-platforms/openai.md | 14 +++++++------- 8 files changed, 18 insertions(+), 18 deletions(-) diff --git a/content/en/docs/marketplace/genai/_index.md b/content/en/docs/marketplace/genai/_index.md index d2653f7c2b6..f0a56079bdb 100644 --- a/content/en/docs/marketplace/genai/_index.md +++ b/content/en/docs/marketplace/genai/_index.md @@ -16,7 +16,7 @@ These pages cover modules that integrate with generative AI tools. For running p ### Typical Use Cases -Mendix supports a variety of generative AI tasks by integrating with tools such as Amazon Bedrock or Azure OpenAI. 
Typical use cases include the following: +Mendix supports a variety of generative AI tasks by integrating with tools such as Amazon Bedrock or Microsoft Foundry. Typical use cases include the following: * Create conversational UIs for AI-powered chatbots and integrate those UIs into your Mendix applications. * Connect any model through our GenAI connectors, or by integrating your connector into our GenAI commons interface. @@ -58,7 +58,7 @@ Supercharge your applications with Mendix's Agents Kit. This powerful set of com | [MCP Server](/appstore/modules/genai/mcp-modules/mcp-server/) | Make your Mendix business logic available to any agent in your enterprise landscape with the Mendix MCP Server module. Expose reusable prompts, including the ability to use prompt parameters. List and run actions implemented in the application as a tool. | Module | 10.24 | | [Mendix Cloud GenAI Connector](/appstore/modules/genai/mx-cloud-genai/MxGenAI-connector/) | Connect to Mendix Cloud and utilize Mendix Cloud GenAI resource packs directly within your Mendix application. | Connector Module | 10.24 | | [Mistral Connector](/appstore/modules/genai/reference-guide/external-connectors/mistral/) | Connect to Mistral AI. | Connector Module | 10.24 | -| [OpenAI Connector](/appstore/modules/genai/openai/) | Connect to (Azure) OpenAI. | Connector Module | 10.24 | +| [OpenAI Connector](/appstore/modules/genai/openai/) | Connect to OpenAI and Microsoft Foundry. | Connector Module | 10.24 | | [PgVector Knowledge Base](/appstore/modules/genai/pgvector/) | Manage and interact with a PostgreSQL *pgvector* Knowledge Base. | Connector Module | 10.24 | | [RFP Assistant Starter App / Questionnaire Assistant Starter App](https://marketplace.mendix.com/link/component/235917) | The RFP Assistant Starter App and the Questionnaire Assistant Starter App leverage historical question-answer pairs (RFPs) and a continuously updated knowledge base to generate and assist in editing responses to RFPs. 
This offers a time-saving alternative to manually finding similar responses and enhancing the knowledge management process. | Starter App | 10.24 | | [Snowflake Showcase App](https://marketplace.mendix.com/link/component/225845) | Learn how to implement the Cortex functionalities in your app. | Showcase App | 10.24 | @@ -73,7 +73,7 @@ Mendix connectors offer direct support for the following models: | -------------- | --------------------- | --------------------- | ------------------- | ----------- | ----------------------- | | Mendix Cloud GenAI | Anthropic Claude 3.5 Sonnet, Anthropic Claude 3.7 Sonnet, Anthropic Claude 4.0 Sonnet, Anthropic Claude 4.5 Sonnet | Chat Completions | text, image, document | text | Function calling | | | Cohere Embed v3 English and multilangual, Cohere Embed v4 | Embeddings | text | embeddings | | -| Azure / OpenAI | gpt-4, gpt-4-turbo, gpt-4o, gpt-4o mini, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-5.0, gpt-5.0-mini, gpt-5.0-nano, gpt-5.1, gpt-5.2, o1, o1-mini, o3, o3-mini, o4-mini | Chat completions | text, image, document (OpenAI only) | text | Function calling | +| Microsoft Foundry (OpenAI) / OpenAI | gpt-4, gpt-4-turbo, gpt-4o, gpt-4o mini, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-5.0, gpt-5.0-mini, gpt-5.0-nano, gpt-5.1, gpt-5.2, o1, o1-mini, o3, o3-mini, o4-mini | Chat completions | text, image, document (OpenAI only) | text | Function calling | | | DALL·E 2, DALL·E 3, gpt-image-1 | Image generation | text | image | | | | text-embedding-ada-002, text-embedding-3-small, text-embedding-3-large | Embeddings | text | embeddings | | | Mistral | Mistral Large 3, Mistral Medium 3.1, Mistral Small 3.2, Ministral 3 (3B, 8B, 14B), Magistral (Small, Medium) | Chat Completions | text, image | text | Function calling | diff --git a/content/en/docs/marketplace/genai/concepts/function-calling.md b/content/en/docs/marketplace/genai/concepts/function-calling.md index 24b6a8fd390..1867f49b82c 100644 --- 
a/content/en/docs/marketplace/genai/concepts/function-calling.md
+++ b/content/en/docs/marketplace/genai/concepts/function-calling.md
@@ -58,7 +58,7 @@ We also strongly advise that you build user confirmation logic into function mic

OpenAI's latest GPT-3.5 Turbo, GPT-4 Turbo and GPT-4o models are trained with function calling data. Older model versions may not support parallel function calling. For more details view [OpenAI Documentation](https://platform.openai.com/docs/guides/function-calling/supported-models).

-For models used through Azure OpenAI, feature availability is currently different depending on method of input and deployment type. For details view [Azure OpenAI Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#differences-between-openai-and-azure-openai-gpt-4-turbo-ga-models).
+For models used through Microsoft Foundry, feature availability currently differs depending on the method of input and the deployment type. For details, view [Microsoft Foundry Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#differences-between-openai-and-azure-openai-gpt-4-turbo-ga-models). 
### Supported Amazon Bedrock models {#supported-models-bedrock} diff --git a/content/en/docs/marketplace/genai/concepts/prompt-engineering.md b/content/en/docs/marketplace/genai/concepts/prompt-engineering.md index 6f04bcb049f..6805e635428 100644 --- a/content/en/docs/marketplace/genai/concepts/prompt-engineering.md +++ b/content/en/docs/marketplace/genai/concepts/prompt-engineering.md @@ -256,5 +256,5 @@ Check out the [GenAI](https://marketplace.mendix.com/link/component/220475) show * [OpenAI](https://platform.openai.com/docs/guides/prompt-engineering) * [Examples](https://platform.openai.com/docs/examples) -* [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/prompt-engineering) +* [Microsoft Foundry](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/prompt-engineering) * [Prompt Engineering Guide](https://www.promptingguide.ai/) diff --git a/content/en/docs/marketplace/genai/how-to/_index.md b/content/en/docs/marketplace/genai/how-to/_index.md index 6c1a486caa4..71ccbb6f4bc 100644 --- a/content/en/docs/marketplace/genai/how-to/_index.md +++ b/content/en/docs/marketplace/genai/how-to/_index.md @@ -52,7 +52,7 @@ For any additional feedback, send a message in the [#genai-connectors](https://m * [AI Model Training: What it is and How it Works](https://www.mendix.com/blog/ai-model-training/) * [What Are the Different Types of AI Models?](https://www.mendix.com/blog/what-are-the-different-types-of-ai-models/) * [OpenAI Using the ‘GenAI for Mendix’ Module](https://www.mendix.com/blog/openai-using-the-genai-for-mendix-module/) -* [How to Configure Azure OpenAI Models in Mendix](https://www.mendix.com/blog/how-to-configure-azure-openai-models-in-mendix/) +* [How to Configure Microsoft Foundry OpenAI Models in Mendix](https://www.mendix.com/blog/how-to-configure-azure-openai-models-in-mendix/) #### Building your own Connector diff --git a/content/en/docs/marketplace/genai/how-to/integrate_function_calling.md 
b/content/en/docs/marketplace/genai/how-to/integrate_function_calling.md index 62e17b081cf..e7d1614d38b 100644 --- a/content/en/docs/marketplace/genai/how-to/integrate_function_calling.md +++ b/content/en/docs/marketplace/genai/how-to/integrate_function_calling.md @@ -48,7 +48,7 @@ Selecting the infrastructure for integrating GenAI into your Mendix application * [Mendix Cloud GenAI Resource Packs](/appstore/modules/genai/mx-cloud-genai/resource-packs/): The [Mendix Cloud GenAI Connector](https://marketplace.mendix.com/link/component/239449) allows you to utilize Mendix Cloud GenAI Resource Packs directly within your Mendix application. -* [OpenAI](/appstore/modules/genai/openai/): The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) supports both OpenAI’s platform and Azure’s OpenAI service. +* [OpenAI](/appstore/modules/genai/openai/): The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) supports both OpenAI's platform and Microsoft Foundry. * [Amazon Bedrock](/appstore/modules/genai/bedrock/): The [Amazon Bedrock Connector](https://marketplace.mendix.com/link/component/215042) allows you to leverage Amazon Bedrock’s fully managed service to integrate foundation models from Amazon and leading AI providers. diff --git a/content/en/docs/marketplace/genai/how-to/start_from_a_starter_app.md b/content/en/docs/marketplace/genai/how-to/start_from_a_starter_app.md index 9165b4fe1d4..ad54a3a7ffd 100644 --- a/content/en/docs/marketplace/genai/how-to/start_from_a_starter_app.md +++ b/content/en/docs/marketplace/genai/how-to/start_from_a_starter_app.md @@ -46,7 +46,7 @@ Selecting the infrastructure for integrating GenAI into your Mendix application * [Mendix Cloud GenAI Resources Packs](/appstore/modules/genai/mx-cloud-genai/resource-packs/): The [Mendix Cloud GenAI Connector](https://marketplace.mendix.com/link/component/239449) integrates LLMs by dragging and dropping common operations from its toolbox in Studio Pro. 
-* [OpenAI](/appstore/modules/genai/openai/): The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) supports OpenAI’s platform and Azure’s OpenAI service. +* [OpenAI](/appstore/modules/genai/openai/): The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) supports OpenAI's platform and Microsoft Foundry. * [Amazon Bedrock](/appstore/modules/genai/bedrock/): The [Amazon Bedrock Connector](https://marketplace.mendix.com/link/component/215042) allows you to leverage Amazon Bedrock’s fully managed service to integrate foundation models from Amazon and leading AI providers. @@ -90,7 +90,7 @@ Follow the steps below to configure OpenAI for your application. For more inform * **API Type**: Choose between **OpenAI** or **Azure OpenAI**. * **Endpoint**: Enter the endpoint URL for your selected API type. * **API key**: Provide the API key for authentication. - * If using Azure OpenAI, add the **Azure key type** by choosing between **OpenAI** or **Azure OpenAI**. + * If using Microsoft Foundry, add the **Azure key type** by choosing between **OpenAI** or **Azure OpenAI***. * After saving the changes, a new pop-up will appear to add the deployment models. Select **Add deployed model** and provide the following details (optional for the OpenAI API Type): * **Display name**: A reference name for the deployed model (e.g., "GPT-4 Conversational"). diff --git a/content/en/docs/marketplace/genai/how-to/start_from_blank_app.md b/content/en/docs/marketplace/genai/how-to/start_from_blank_app.md index 51222ea4d83..657e659e0a1 100644 --- a/content/en/docs/marketplace/genai/how-to/start_from_blank_app.md +++ b/content/en/docs/marketplace/genai/how-to/start_from_blank_app.md @@ -55,7 +55,7 @@ The [Blank GenAI App Template](https://marketplace.mendix.com/link/component/227 Selecting the infrastructure for integrating GenAI into your Mendix application is the first step. 
Depending on your use case and preferences, you can choose from the following options: * [Mendix Cloud GenAI Resources Packs](/appstore/modules/genai/mx-cloud-genai/resource-packs/): The [Mendix Cloud GenAI Connector](https://marketplace.mendix.com/link/component/239449) integrates LLMs by dragging and dropping common operations from its toolbox in Studio Pro. -* [OpenAI](/appstore/modules/genai/openai/): The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) supports both OpenAI’s platform and Azure’s OpenAI service. +* [OpenAI](/appstore/modules/genai/openai/): The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) supports both OpenAI's platform and Microsoft Foundry. * [Amazon Bedrock](/appstore/modules/genai/bedrock/): The [Amazon Bedrock Connector](https://marketplace.mendix.com/link/component/215042) allows you to leverage Amazon Bedrock’s fully managed service to integrate foundation models from Amazon and leading AI providers. @@ -131,7 +131,7 @@ Follow the steps below to configure OpenAI for your application. For more inform * **API Type**: Choose between **OpenAI** or **Azure OpenAI**. * **Endpoint**: Enter the endpoint URL for your selected API type. * **API key**: Provide the API key for authentication. - * If using Azure OpenAI, add the **Azure key type** by choosing between **OpenAI** or **Azure OpenAI**. + * If using Microsoft Foundry, add the **Azure key type** by choosing between **OpenAI** or **Azure OpenAI***. * After saving the changes, a new pop-up will appear to add the deployment models. Select **Add deployed model** and provide the following details (optional for the OpenAI API Type): * **Display name**: A reference name for the deployed model (e.g., "GPT-4 Conversational"). 
diff --git a/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md b/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md index d2614701d0e..e5e33d388fb 100644 --- a/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md +++ b/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md @@ -21,7 +21,7 @@ OpenAI provides market-leading LLM capabilities with GPT-4: * Creativity – Generate, edit, and iterate with end-users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning an end-user’s writing style. * Longer context – GPT-4 can handle over 25,000 words of text, allowing for use cases like long-form content creation, extended conversations, and document search and analysis. -Mendix provides support for [OpenAI](https://platform.openai.com/) and [Microsoft Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/what-is-ai-foundry) (formerly known as Azure OpenAI or Cognitive Services). Microsoft Foundry is Microsoft's unified AI platform that provides access to OpenAI models. +Mendix provides support for [OpenAI](https://platform.openai.com/) and [Microsoft Foundry](https://learn.microsoft.com/en-us/azure/ai-foundry/what-is-ai-foundry) (formerly known as Azure OpenAI or Cognitive Services). Microsoft Foundry is Microsoft's unified AI platform that streamlines the creation and management of AI agents and models, including the OpenAI models. With the current version, Mendix supports the Chat Completions API for [text generation](https://platform.openai.com/docs/guides/text-generation), the Image Generations API for [images](https://platform.openai.com/docs/guides/images), the Embeddings API for [vector embeddings](https://platform.openai.com/docs/guides/embeddings/what-are-embeddings), and indexes via [Azure AI Search](https://learn.microsoft.com/en-us/azure/search/) for knowledge base retrieval. 
@@ -60,7 +60,7 @@ After you install the OpenAI Connector, you can find it in the **App Explorer**, 1. Add the module role **OpenAIConnector.Administrator** to your Administrator user role in the security settings of your app. 2. Add the **Configuration_Overview** page (**USE_ME > Configuration**) to your navigation, or add the **Snippet_Configurations** to a page that is already part of your navigation. -3. Continue setting up your OpenAI configuration at runtime. Follow the instructions in either [OpenAI Configuration](#openai-configuration) or [Azure OpenAI Configuration](#azure-openai-configuration), depending on which platform you are using. +3. Continue setting up your OpenAI configuration at runtime. Follow the instructions in either [OpenAI Configuration](#openai-configuration) or [Microsoft Foundry Configuration](#azure-openai-configuration), depending on which platform you are using. 4. Configure the models you need to use for your use case. #### OpenAI Configuration {#openai-configuration} @@ -81,7 +81,7 @@ The following inputs are required for the Microsoft Foundry configuration: | Parameter | Value | | -------------- | ------------------------------------------------------------ | | Display name | This is the name identifier of a configuration (for example, *MyConfiguration*). | -| API type | Select `AzureOpenAI` for both Azure OpenAI and Microsoft Foundry deployments. | +| API type | Select `AzureOpenAI` for Microsoft Foundry deployments. | | Endpoint | This is the API endpoint (for example, `https://your-resource-name.openai.azure.com/openai/deployments/`).
For details on how to obtain `your-resource-name`, see the [Obtaining Resource Name](#azure-resource-name) section below. | | Azure key type | This is the type of token that is entered in the API key field. For Azure OpenAI, two types of keys are currently supported: Microsoft Entra token and API key.
For details on how to generate a Microsoft Entra access token, see [How to Configure Azure OpenAI Service with Managed Identities](https://learn.microsoft.com/en-gb/azure/ai-services/openai/how-to/managed-identity). Alternatively, if your organization allows it, you could use the Azure `api-key` authentication mechanism. For details on how to obtain an API key, see the [Obtaining Azure OpenAI API keys](#azure-api-keys) section below. For more information, see the [Technical Reference](#technical-reference) section. | | Token / API key | This is the access token to authorize your API call. | @@ -90,7 +90,7 @@ The following inputs are required for the Microsoft Foundry configuration: 1. Go to the [Microsoft Foundry portal](https://ai.azure.com/) and sign in. 2. Select the right resource in the upper right corner. -3. The home page should show `Resource configuration` where you can find the `Azure OpenAI endpoint` +3. The home page should show `Resource configuration` where you can find the `Microsoft Foundry endpoint` 4. Use the copy icon ({{% icon name="copy" %}}) and use it as your resource name in the endpoint URL. ##### Obtaining API Keys {#azure-api-keys} @@ -194,10 +194,10 @@ Use the two OpenAI-specific microflow actions from the toolbox [Files: Initializ {{% alert color="info" %}} OpenAI and Microsoft Foundry do not necessarily provide feature parity across all models when it comes to combining functionalities. In other words, Microsoft Foundry does not support the use of JSON mode and function calling in combination with image (vision) input for certain models, so make sure to check the [Azure Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). -When you use Azure OpenAI, it is recommended to set the optional `MaxTokens` input parameter; otherwise, the return output may be cut off. +When you use Azure OpenAI, it is recommended to set the optional `MaxTokens` input parameter; otherwise, the response may be cut off. 
{{% /alert %}} -For more information on vision, see [OpenAI](https://platform.openai.com/docs/guides/vision) and [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision) documentation. +For more information on vision, see [OpenAI](https://platform.openai.com/docs/guides/vision) and [Microsoft Foundry](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision) documentation. #### Document Chat {#chatcompletions-document} @@ -325,7 +325,7 @@ Microsoft Foundry does not support the use of JSON mode and function calling in ### Chat Completions with Vision Response is Cut Off (Microsoft Foundry) -When you use Microsoft Foundry, it is recommended to set the optional `MaxTokens` input parameter; otherwise, the response may be cut off. For more details, see the [Azure OpenAI Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision?tabs=rest%2Csystem-assigned%2Cresource#call-the-chat-completion-apis). +When you use Microsoft Foundry, it is recommended to set the optional `MaxTokens` input parameter; otherwise, the response may be cut off. For more details, see the [Microsoft Foundry Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision?tabs=rest%2Csystem-assigned%2Cresource#call-the-chat-completion-apis). 
### Attribute or Reference Required Error Message After Upgrade From 3734cb9733074837a91af39204cbe4f6172f8cbb Mon Sep 17 00:00:00 2001 From: Karuna Vengurlekar Date: Thu, 22 Jan 2026 10:32:00 +0530 Subject: [PATCH 4/6] tw review 1 --- content/en/docs/marketplace/genai/_index.md | 2 +- .../en/docs/marketplace/genai/concepts/function-calling.md | 4 ++-- .../docs/marketplace/genai/how-to/start_from_a_starter_app.md | 2 +- .../en/docs/marketplace/genai/how-to/start_from_blank_app.md | 2 +- 4 files changed, 5 insertions(+), 5 deletions(-) diff --git a/content/en/docs/marketplace/genai/_index.md b/content/en/docs/marketplace/genai/_index.md index f0a56079bdb..0e2df5ccabe 100644 --- a/content/en/docs/marketplace/genai/_index.md +++ b/content/en/docs/marketplace/genai/_index.md @@ -60,7 +60,7 @@ Supercharge your applications with Mendix's Agents Kit. This powerful set of com | [Mistral Connector](/appstore/modules/genai/reference-guide/external-connectors/mistral/) | Connect to Mistral AI. | Connector Module | 10.24 | | [OpenAI Connector](/appstore/modules/genai/openai/) | Connect to OpenAI and Microsoft Foundry. | Connector Module | 10.24 | | [PgVector Knowledge Base](/appstore/modules/genai/pgvector/) | Manage and interact with a PostgreSQL *pgvector* Knowledge Base. | Connector Module | 10.24 | -| [RFP Assistant Starter App / Questionnaire Assistant Starter App](https://marketplace.mendix.com/link/component/235917) | The RFP Assistant Starter App and the Questionnaire Assistant Starter App leverage historical question-answer pairs (RFPs) and a continuously updated knowledge base to generate and assist in editing responses to RFPs. This offers a time-saving alternative to manually finding similar responses and enhancing the knowledge management process. 
| Starter App | 10.24 | +| [RFP Assistant Starter App / Questionnaire Assistant Starter App](https://marketplace.mendix.com/link/component/235917) | The RFP Assistant Starter App and the Questionnaire Assistant Starter App leverage historical question-answer pairs (RFPs) and a continuously updated knowledge base to generate and assist in editing responses to RFPs. This offers a time-saving alternative to manually finding similar responses and enhancing the knowledge management process. | Starter App | 10.24 | | [Snowflake Showcase App](https://marketplace.mendix.com/link/component/225845) | Learn how to implement the Cortex functionalities in your app. | Showcase App | 10.24 | Older versions of the marketplace modules and GenAI Showcase App are available in Studio Pro 9.24.2. diff --git a/content/en/docs/marketplace/genai/concepts/function-calling.md b/content/en/docs/marketplace/genai/concepts/function-calling.md index 1867f49b82c..8797f99edeb 100644 --- a/content/en/docs/marketplace/genai/concepts/function-calling.md +++ b/content/en/docs/marketplace/genai/concepts/function-calling.md @@ -56,9 +56,9 @@ We also strongly advise that you build user confirmation logic into function mic ### Supported OpenAI models {#supported-models-openai} -OpenAI's latest GPT-3.5 Turbo, GPT-4 Turbo and GPT-4o models are trained with function calling data. Older model versions may not support parallel function calling. For more details view [OpenAI Documentation](https://platform.openai.com/docs/guides/function-calling/supported-models). +OpenAI's latest GPT-3.5 Turbo, GPT-4 Turbo and GPT-4o models are trained with function calling data. Older model versions may not support parallel function calling. For more details, see [OpenAI Documentation](https://platform.openai.com/docs/guides/function-calling/supported-models). -For models used through Microsoft Foundry, feature availability is currently different depending on method of input and deployment type. 
For details view [Microsoft Foundry Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#differences-between-openai-and-azure-openai-gpt-4-turbo-ga-models). +For models used through Microsoft Foundry, feature availability is currently different depending on method of input and deployment type. For details, see [Microsoft Foundry Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#differences-between-openai-and-azure-openai-gpt-4-turbo-ga-models). ### Supported Amazon Bedrock models {#supported-models-bedrock} diff --git a/content/en/docs/marketplace/genai/how-to/start_from_a_starter_app.md b/content/en/docs/marketplace/genai/how-to/start_from_a_starter_app.md index ad54a3a7ffd..ac1a0fe409b 100644 --- a/content/en/docs/marketplace/genai/how-to/start_from_a_starter_app.md +++ b/content/en/docs/marketplace/genai/how-to/start_from_a_starter_app.md @@ -90,7 +90,7 @@ Follow the steps below to configure OpenAI for your application. For more inform * **API Type**: Choose between **OpenAI** or **Azure OpenAI**. * **Endpoint**: Enter the endpoint URL for your selected API type. * **API key**: Provide the API key for authentication. - * If using Microsoft Foundry, add the **Azure key type** by choosing between **OpenAI** or **Azure OpenAI***. + * If using Microsoft Foundry, add the **Azure key type** by choosing between **OpenAI** or **Azure OpenAI**. * After saving the changes, a new pop-up will appear to add the deployment models. Select **Add deployed model** and provide the following details (optional for the OpenAI API Type): * **Display name**: A reference name for the deployed model (e.g., "GPT-4 Conversational"). 
diff --git a/content/en/docs/marketplace/genai/how-to/start_from_blank_app.md b/content/en/docs/marketplace/genai/how-to/start_from_blank_app.md index 657e659e0a1..c1108a5b9a4 100644 --- a/content/en/docs/marketplace/genai/how-to/start_from_blank_app.md +++ b/content/en/docs/marketplace/genai/how-to/start_from_blank_app.md @@ -131,7 +131,7 @@ Follow the steps below to configure OpenAI for your application. For more inform * **API Type**: Choose between **OpenAI** or **Azure OpenAI**. * **Endpoint**: Enter the endpoint URL for your selected API type. * **API key**: Provide the API key for authentication. - * If using Microsoft Foundry, add the **Azure key type** by choosing between **OpenAI** or **Azure OpenAI***. + * If using Microsoft Foundry, add the **Azure key type** by choosing between **OpenAI** or **Azure OpenAI**. * After saving the changes, a new pop-up will appear to add the deployment models. Select **Add deployed model** and provide the following details (optional for the OpenAI API Type): * **Display name**: A reference name for the deployed model (e.g., "GPT-4 Conversational"). 
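Between the commits: the **Azure key type** bullets corrected in the commit above determine how the connector authenticates against Microsoft Foundry. As a hedged sketch of the resulting request shape (not the connector's internal code — the function name, endpoint, deployment name, API version, and key values below are illustrative placeholders):

```python
def build_foundry_request(endpoint: str, deployment: str, api_version: str,
                          key: str, key_type: str) -> tuple:
    """Sketch of the URL and auth header for a Microsoft Foundry (Azure
    OpenAI) chat-completions call, depending on the configured key type."""
    url = (f"{endpoint.rstrip('/')}/{deployment}"
           f"/chat/completions?api-version={api_version}")
    if key_type == "AzureOpenAI":
        # Azure key type "Azure OpenAI": the key travels in the api-key header.
        headers = {"api-key": key}
    else:
        # Azure key type "OpenAI": a Microsoft Entra bearer token is sent instead.
        headers = {"Authorization": f"Bearer {key}"}
    headers["Content-Type"] = "application/json"
    return url, headers

url, headers = build_foundry_request(
    "https://your-resource-name.openai.azure.com/openai/deployments/",
    "gpt-4o", "2024-02-01", "<your-key>", "AzureOpenAI")
print(url)
# → https://your-resource-name.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-02-01
```

The same deployment-based URL is why the Foundry configuration needs both an endpoint and per-model deployment names, whereas the plain OpenAI API type only needs a bearer key.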
From 6e9e594c60e44266e545c2dc5b09f91102927ddb Mon Sep 17 00:00:00 2001 From: Karuna Vengurlekar Date: Thu, 22 Jan 2026 16:03:23 +0530 Subject: [PATCH 5/6] TW review2 --- .../external-platforms/openai.md | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md b/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md index e5e33d388fb..722eb1efcbe 100644 --- a/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md +++ b/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md @@ -78,19 +78,19 @@ The following inputs are required for the OpenAI configuration: The following inputs are required for the Microsoft Foundry configuration: -| Parameter | Value | +| Parameter | Value | | -------------- | ------------------------------------------------------------ | | Display name | This is the name identifier of a configuration (for example, *MyConfiguration*). | | API type | Select `AzureOpenAI` for Microsoft Foundry deployments. | | Endpoint | This is the API endpoint (for example, `https://your-resource-name.openai.azure.com/openai/deployments/`).
For details on how to obtain `your-resource-name`, see the [Obtaining Resource Name](#azure-resource-name) section below. | -| Azure key type | This is the type of token that is entered in the API key field. For Azure OpenAI, two types of keys are currently supported: Microsoft Entra token and API key.
For details on how to generate a Microsoft Entra access token, see [How to Configure Azure OpenAI Service with Managed Identities](https://learn.microsoft.com/en-gb/azure/ai-services/openai/how-to/managed-identity). Alternatively, if your organization allows it, you could use the Azure `api-key` authentication mechanism. For details on how to obtain an API key, see the [Obtaining Azure OpenAI API keys](#azure-api-keys) section below. For more information, see the [Technical Reference](#technical-reference) section. | +| Azure key type | This is the type of token that is entered in the API key field. For Azure OpenAI, two types of keys are currently supported: Microsoft Entra token and API key.
For details on how to generate a Microsoft Entra access token, see [How to Configure Azure OpenAI Service with Managed Identities](https://learn.microsoft.com/en-gb/azure/ai-services/openai/how-to/managed-identity). Alternatively, if your organization allows it, you could use the Azure `api-key` authentication mechanism. For details on how to obtain an API key, see the [Obtaining API keys](#azure-api-keys) section below. For more information, see the [Technical Reference](#technical-reference) section. | | Token / API key | This is the access token to authorize your API call. | ##### Obtaining the Resource Name {#azure-resource-name} 1. Go to the [Microsoft Foundry portal](https://ai.azure.com/) and sign in. 2. Select the right resource in the upper right corner. -3. The home page should show `Resource configuration` where you can find the `Microsoft Foundry endpoint` +3. The home page should show **Resource configuration** where you can find the **Microsoft Foundry endpoint**. 4. Use the copy icon ({{% icon name="copy" %}}) and use it as your resource name in the endpoint URL. ##### Obtaining API Keys {#azure-api-keys} @@ -100,7 +100,7 @@ The following inputs are required for the Microsoft Foundry configuration: ##### Adding Azure AI Search Resources {#azure-ai-search} -| Parameter | Value | +| Parameter | Value | | -------------- | ------------------------------------------------------------ | | Display name | This is the name identifier of a Azure AI Search Resource (for example, *MySearchResource*). | | Endpoint URL | This is the API endpoint (for example, `https://your-resource-name.search.windows.net`).
For details on how to obtain `your-resource-name`, see [Azure AI Search service in the Azure portal](https://learn.microsoft.com/en-us/azure/search/search-create-service-portal). | @@ -115,18 +115,18 @@ Currently, the only supported authorization method for Azure AI Search resources #### Configuring the OpenAI Deployed Models -A [Deployed Model](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) represents a GenAI model instance that can be used by the app to generate text, embeddings, or images. For every model you want to invoke from your app, you need to create a `OpenAIDeployedModel` record, a specialization of `DeployedModel`. In addition to the model display name and a technical name/identifier, an OpenAI deployed model contains a reference to the additional connection details as configured in the previous step. For OpenAI, a set of common models can be created automatically using the designated button. If you want to use additional models that are made available by OpenAI you need to configure additional OpenAI deployed models in your Mendix app. For Microsoft Foundry the model names can be different. The technical model names depend on the deployment names that were chosen while deploying the models in the [Microsoft Foundry portal](https://ai.azure.com/). Therefore in this case you always need to configure the deployed models manually in your Mendix app. +A [Deployed Model](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) represents a GenAI model instance that can be used by the app to generate text, embeddings, or images. For every model you want to invoke from your app, you need to create a `OpenAIDeployedModel` record, a specialization of `DeployedModel`. In addition to the model display name and a technical name/identifier, an OpenAI deployed model contains a reference to the additional connection details as configured in the previous step. 
For OpenAI, a set of common models can be created automatically using the designated button. If you want to use additional models that are made available by OpenAI, you need to configure additional OpenAI deployed models in your Mendix app. For Microsoft Foundry, the model names can be different. The technical model names depend on the deployment names that were chosen while deploying the models in the [Microsoft Foundry portal](https://ai.azure.com/). Therefore, in this case, you always need to configure the deployed models manually in your Mendix app. -1. If needed, click the three dots for an OpenAI configuration to open the "Manage Deployed Models" pop-up. +1. If needed, click the three dots ({{% icon name="three-dots-menu-horizontal" %}}) icon for an OpenAI configuration to open the **Manage Deployed Models** pop-up. 2. For every additional model, add a record. The following fields are required: - | Field | Description | + | Field | Description | | -------------- | ------------------------------------------------------------ | | Display name | This is the reference to the model for app users in case they have to select which one is to be used. | | Deployment name / Model name | This is the technical reference for the model. For OpenAI this is equal to the [model aliases](https://platform.openai.com/docs/models#current-model-aliases). For Microsoft Foundry this is the deployment name from the [Microsoft Foundry portal](https://ai.azure.com/).
The supported versions can vary depending on the type of model, so make sure to look for the right section (such as Chat Completions, Image Generation, or Embeddings) on that page. | + | Output modality | Describes what the output of the model is. This connector currently supports Text, Embedding, and Image. + | Input modality | Describes what input modalities are accepted by the model. This connector currently supports Text and Image. + | Azure API version | Azure OpenAI only. This is the API version to use for this operation. It follows the `yyyy-MM-dd` format. For supported versions, see [Azure OpenAI documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/reference). The supported versions can vary depending on the type of model, so make sure to look for the right section (such as Chat Completions, Image Generation, or Embeddings) on that page. | 3. Close the popup and test the configuration with the newly created deployed models. From b6d2b2c22eacf172102df456c76b37b9b78a01b5 Mon Sep 17 00:00:00 2001 From: Karuna Vengurlekar Date: Thu, 22 Jan 2026 16:32:26 +0530 Subject: [PATCH 6/6] one more instance --- .../genai/reference-guide/external-platforms/openai.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md b/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md index 722eb1efcbe..6e1460138ee 100644 --- a/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md +++ b/content/en/docs/marketplace/genai/reference-guide/external-platforms/openai.md @@ -194,7 +194,7 @@ Use the two OpenAI-specific microflow actions from the toolbox [Files: Initializ {{% alert color="info" %}} OpenAI and Microsoft Foundry do not necessarily provide feature parity across all models when it comes to combining functionalities. 
In other words, Microsoft Foundry does not support the use of JSON mode and function calling in combination with image (vision) input for certain models, so make sure to check the [Azure Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models). -When you use Azure OpenAI, it is recommended to set the optional `MaxTokens` input parameter; otherwise, the response may be cut off. +When you use Microsoft Foundry, it is recommended to set the optional `MaxTokens` input parameter; otherwise, the response may be cut off. {{% /alert %}} For more information on vision, see [OpenAI](https://platform.openai.com/docs/guides/vision) and [Microsoft Foundry](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision) documentation.
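The `MaxTokens` recommendation in this final hunk corresponds to the `max_tokens` field of the underlying chat-completions request body: without it, some deployments apply a low default and the response is truncated. A minimal sketch of such a request body (field names follow the public Chat Completions API; the helper function and example values are illustrative, not the connector's actual internals):

```python
import json

def vision_payload(prompt: str, image_url: str, max_tokens: int = 1000) -> str:
    """Sketch of a chat-completions request body for a vision prompt."""
    body = {
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        # Setting max_tokens explicitly mirrors the connector's optional
        # MaxTokens input parameter recommended above; omitting it is what
        # leads to cut-off responses on some Foundry deployments.
        "max_tokens": max_tokens,
    }
    return json.dumps(body)

payload = json.loads(vision_payload("Describe this image.",
                                    "https://example.com/cat.png"))
print(payload["max_tokens"])
# → 1000
```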