Merged
8 changes: 4 additions & 4 deletions content/en/docs/marketplace/genai/_index.md
Original file line number Diff line number Diff line change
@@ -16,7 +16,7 @@ These pages cover modules that integrate with generative AI tools. For running p

### Typical Use Cases

Mendix supports a variety of generative AI tasks by integrating with tools such as Amazon Bedrock or Azure OpenAI. Typical use cases include the following:
Mendix supports a variety of generative AI tasks by integrating with tools such as Amazon Bedrock or Microsoft Foundry. Typical use cases include the following:

* Create conversational UIs for AI-powered chatbots and integrate those UIs into your Mendix applications.
* Connect any model through our GenAI connectors, or by integrating your connector into our GenAI commons interface.
@@ -58,9 +58,9 @@ Supercharge your applications with Mendix's Agents Kit. This powerful set of com
| [MCP Server](/appstore/modules/genai/mcp-modules/mcp-server/) | Make your Mendix business logic available to any agent in your enterprise landscape with the Mendix MCP Server module. Expose reusable prompts, including the ability to use prompt parameters. List and run actions implemented in the application as a tool. | Module | 10.24 |
| [Mendix Cloud GenAI Connector](/appstore/modules/genai/mx-cloud-genai/MxGenAI-connector/) | Connect to Mendix Cloud and utilize Mendix Cloud GenAI resource packs directly within your Mendix application. | Connector Module | 10.24 |
| [Mistral Connector](/appstore/modules/genai/reference-guide/external-connectors/mistral/) | Connect to Mistral AI. | Connector Module | 10.24 |
| [OpenAI Connector](/appstore/modules/genai/openai/) | Connect to (Azure) OpenAI. | Connector Module | 10.24 |
| [OpenAI Connector](/appstore/modules/genai/openai/) | Connect to OpenAI and Microsoft Foundry. | Connector Module | 10.24 |
| [PgVector Knowledge Base](/appstore/modules/genai/pgvector/) | Manage and interact with a PostgreSQL *pgvector* Knowledge Base. | Connector Module | 10.24 |
| [RFP Assistant Starter App / Questionnaire Assistant Starter App](https://marketplace.mendix.com/link/component/235917) | The RFP Assistant Starter App and the Questionnaire Assistant Starter App leverage historical question-answer pairs (RFPs) and a continuously updated knowledge base to generate and assist in editing responses to RFPs. This offers a time-saving alternative to manually finding similar responses and enhancing the knowledge management process. | Starter App | 10.24 |
| [Snowflake Showcase App](https://marketplace.mendix.com/link/component/225845) | Learn how to implement the Cortex functionalities in your app. | Showcase App | 10.24 |

Older versions of the marketplace modules and GenAI Showcase App are available in Studio Pro 9.24.2.
@@ -73,7 +73,7 @@ Mendix connectors offer direct support for the following models:
| -------------- | --------------------- | --------------------- | ------------------- | ----------- | ----------------------- |
| Mendix Cloud GenAI | Anthropic Claude 3.5 Sonnet, Anthropic Claude 3.7 Sonnet, Anthropic Claude 4.0 Sonnet, Anthropic Claude 4.5 Sonnet | Chat Completions | text, image, document | text | Function calling |
| | Cohere Embed v3 English and multilangual, Cohere Embed v4 | Embeddings | text | embeddings | |
| Azure / OpenAI | gpt-4, gpt-4-turbo, gpt-4o, gpt-4o mini, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-5.0, gpt-5.0-mini, gpt-5.0-nano, gpt-5.1, gpt-5.2, o1, o1-mini, o3, o3-mini, o4-mini | Chat completions | text, image, document (OpenAI only) | text | Function calling |
| Microsoft Foundry (OpenAI) / OpenAI | gpt-4, gpt-4-turbo, gpt-4o, gpt-4o mini, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-5.0, gpt-5.0-mini, gpt-5.0-nano, gpt-5.1, gpt-5.2, o1, o1-mini, o3, o3-mini, o4-mini | Chat completions | text, image, document (OpenAI only) | text | Function calling |
| | DALL·E 2, DALL·E 3, gpt-image-1 | Image generation | text | image | |
| | text-embedding-ada-002, text-embedding-3-small, text-embedding-3-large | Embeddings | text | embeddings | |
| Mistral | Mistral Large 3, Mistral Medium 3.1, Mistral Small 3.2, Ministral 3 (3B, 8B, 14B), Magistral (Small, Medium) | Chat Completions | text, image | text | Function calling |
@@ -56,9 +56,9 @@ We also strongly advise that you build user confirmation logic into function mic

### Supported OpenAI models {#supported-models-openai}

OpenAI's latest GPT-3.5 Turbo, GPT-4 Turbo and GPT-4o models are trained with function calling data. Older model versions may not support parallel function calling. For more details view [OpenAI Documentation](https://platform.openai.com/docs/guides/function-calling/supported-models).
OpenAI's latest GPT-3.5 Turbo, GPT-4 Turbo, and GPT-4o models are trained with function calling data. Older model versions may not support parallel function calling. For more details, see [OpenAI Documentation](https://platform.openai.com/docs/guides/function-calling/supported-models).

For models used through Azure OpenAI, feature availability is currently different depending on method of input and deployment type. For details view [Azure OpenAI Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#differences-between-openai-and-azure-openai-gpt-4-turbo-ga-models).
For models used through Microsoft Foundry, feature availability currently differs depending on the input method and deployment type. For details, see [Microsoft Foundry Documentation](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#differences-between-openai-and-azure-openai-gpt-4-turbo-ga-models).

### Supported Amazon Bedrock models {#supported-models-bedrock}

@@ -256,5 +256,5 @@ Check out the [GenAI](https://marketplace.mendix.com/link/component/220475) show

* [OpenAI](https://platform.openai.com/docs/guides/prompt-engineering)
* [Examples](https://platform.openai.com/docs/examples)
* [Azure OpenAI](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/prompt-engineering)
* [Microsoft Foundry](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/prompt-engineering)
* [Prompt Engineering Guide](https://www.promptingguide.ai/)
2 changes: 1 addition & 1 deletion content/en/docs/marketplace/genai/how-to/_index.md
@@ -52,7 +52,7 @@ For any additional feedback, send a message in the [#genai-connectors](https://m
* [AI Model Training: What it is and How it Works](https://www.mendix.com/blog/ai-model-training/)
* [What Are the Different Types of AI Models?](https://www.mendix.com/blog/what-are-the-different-types-of-ai-models/)
* [OpenAI Using the ‘GenAI for Mendix’ Module](https://www.mendix.com/blog/openai-using-the-genai-for-mendix-module/)
* [How to Configure Azure OpenAI Models in Mendix](https://www.mendix.com/blog/how-to-configure-azure-openai-models-in-mendix/)
* [How to Configure Microsoft Foundry OpenAI Models in Mendix](https://www.mendix.com/blog/how-to-configure-azure-openai-models-in-mendix/)

#### Building your own Connector

@@ -48,7 +48,7 @@ Selecting the infrastructure for integrating GenAI into your Mendix application

* [Mendix Cloud GenAI Resource Packs](/appstore/modules/genai/mx-cloud-genai/resource-packs/): The [Mendix Cloud GenAI Connector](https://marketplace.mendix.com/link/component/239449) allows you to utilize Mendix Cloud GenAI Resource Packs directly within your Mendix application.

* [OpenAI](/appstore/modules/genai/openai/): The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) supports both OpenAIs platform and Azure’s OpenAI service.
* [OpenAI](/appstore/modules/genai/openai/): The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) supports both OpenAI's platform and Microsoft Foundry.

* [Amazon Bedrock](/appstore/modules/genai/bedrock/): The [Amazon Bedrock Connector](https://marketplace.mendix.com/link/component/215042) allows you to leverage Amazon Bedrock’s fully managed service to integrate foundation models from Amazon and leading AI providers.

@@ -46,7 +46,7 @@ Selecting the infrastructure for integrating GenAI into your Mendix application

* [Mendix Cloud GenAI Resource Packs](/appstore/modules/genai/mx-cloud-genai/resource-packs/): The [Mendix Cloud GenAI Connector](https://marketplace.mendix.com/link/component/239449) integrates LLMs by dragging and dropping common operations from its toolbox in Studio Pro.

* [OpenAI](/appstore/modules/genai/openai/): The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) supports OpenAIs platform and Azure’s OpenAI service.
* [OpenAI](/appstore/modules/genai/openai/): The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) supports OpenAI's platform and Microsoft Foundry.

* [Amazon Bedrock](/appstore/modules/genai/bedrock/): The [Amazon Bedrock Connector](https://marketplace.mendix.com/link/component/215042) allows you to leverage Amazon Bedrock’s fully managed service to integrate foundation models from Amazon and leading AI providers.

@@ -90,7 +90,7 @@ Follow the steps below to configure OpenAI for your application. For more inform
* **API Type**: Choose between **OpenAI** or **Azure OpenAI**.
* **Endpoint**: Enter the endpoint URL for your selected API type.
* **API key**: Provide the API key for authentication.
* If using Azure OpenAI, add the **Azure key type** by choosing between **OpenAI** or **Azure OpenAI**.
    * If using Microsoft Foundry, set the **Azure key type** by choosing between **OpenAI** and **Azure OpenAI**.

* After saving the changes, a new pop-up will appear to add the deployment models. Select **Add deployed model** and provide the following details (optional for the OpenAI API Type):
* **Display name**: A reference name for the deployed model (e.g., "GPT-4 Conversational").
@@ -55,7 +55,7 @@ The [Blank GenAI App Template](https://marketplace.mendix.com/link/component/227
Selecting the infrastructure for integrating GenAI into your Mendix application is the first step. Depending on your use case and preferences, you can choose from the following options:

* [Mendix Cloud GenAI Resource Packs](/appstore/modules/genai/mx-cloud-genai/resource-packs/): The [Mendix Cloud GenAI Connector](https://marketplace.mendix.com/link/component/239449) integrates LLMs by dragging and dropping common operations from its toolbox in Studio Pro.
* [OpenAI](/appstore/modules/genai/openai/): The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) supports both OpenAIs platform and Azure’s OpenAI service.
* [OpenAI](/appstore/modules/genai/openai/): The [OpenAI Connector](https://marketplace.mendix.com/link/component/220472) supports both OpenAI's platform and Microsoft Foundry.

* [Amazon Bedrock](/appstore/modules/genai/bedrock/): The [Amazon Bedrock Connector](https://marketplace.mendix.com/link/component/215042) allows you to leverage Amazon Bedrock’s fully managed service to integrate foundation models from Amazon and leading AI providers.

@@ -131,7 +131,7 @@ Follow the steps below to configure OpenAI for your application. For more inform
* **API Type**: Choose between **OpenAI** or **Azure OpenAI**.
* **Endpoint**: Enter the endpoint URL for your selected API type.
* **API key**: Provide the API key for authentication.
* If using Azure OpenAI, add the **Azure key type** by choosing between **OpenAI** or **Azure OpenAI**.
    * If using Microsoft Foundry, set the **Azure key type** by choosing between **OpenAI** and **Azure OpenAI**.

* After saving the changes, a new pop-up will appear to add the deployment models. Select **Add deployed model** and provide the following details (optional for the OpenAI API Type):
* **Display name**: A reference name for the deployed model (e.g., "GPT-4 Conversational").
@@ -95,7 +95,7 @@ To use retrieval and generation in a single operation, an internally predefined

{{< figure src="/attachments/appstore/platform-supported-content/modules/genai/mxgenAI-connector/mxgenaiconnector-rag.png" >}}

The returned `Response` includes [References](/appstore/modules/genai/genai-for-mx/commons/#reference) for each retreived chunk from the knowledge base.
The returned `Response` includes [References](/appstore/modules/genai/genai-for-mx/commons/#reference) for each retrieved chunk from the knowledge base.

Optionally, you can control both reference creation and the output returned for the model during the insertion step:

@@ -134,7 +134,9 @@ Document chat enables the model to interpret and analyze documents, such as PDFs

For [Chat Completions (without history)](/appstore/modules/genai/genai-for-mx/commons/#chat-completions-without-history), `OptionalFileCollection` is an optional input parameter. For [Chat completions (with history)](/appstore/modules/genai/genai-for-mx/commons/#chat-completions-with-history), a `FileCollection` can optionally be added to individual user messages using [Add Message to Request](/appstore/modules/genai/genai-for-mx/commons/#chat-add-message-to-request).

In the entire conversation, you can pass up to five documents that are smaller than 4.5 MB each. The following file types are accepted: PDF, CSV, DOC, DOCX, XLS, XLSX, HTML, TXT, and MD.
In the entire conversation, you can pass up to five documents that are smaller than 4.5 MB each. There is also a practical, model-dependent limit on the number of pages a document can contain: typically around 100 pages, although this varies with the selected model and the complexity of the file (for example, images, heavy formatting, or embedded content can reduce the effective page limit). If you expect to work with very large documents, consider splitting them into smaller files or providing summarized extracts to improve reliability.

The following file types are accepted: PDF, CSV, DOC, DOCX, XLS, XLSX, HTML, TXT, and MD.
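These limits can be validated before a request is built. The sketch below is illustrative Python, not part of the connector API: in a Mendix app the equivalent checks would live in a microflow or Java action, and the helper name and error handling are assumptions.

```python
from pathlib import Path

MAX_FILES = 5                              # per conversation
MAX_SIZE_BYTES = int(4.5 * 1024 * 1024)    # 4.5 MB per document
ALLOWED_EXTENSIONS = {".pdf", ".csv", ".doc", ".docx",
                      ".xls", ".xlsx", ".html", ".txt", ".md"}

def validate_documents(paths):
    """Check a conversation's documents against the documented limits."""
    if len(paths) > MAX_FILES:
        raise ValueError(f"At most {MAX_FILES} documents per conversation, got {len(paths)}")
    for p in map(Path, paths):
        if p.suffix.lower() not in ALLOWED_EXTENSIONS:
            raise ValueError(f"Unsupported file type: {p.suffix}")
        if p.stat().st_size > MAX_SIZE_BYTES:
            raise ValueError(f"{p.name} exceeds 4.5 MB")
    return True
```

Validating up front avoids sending a request that the service will reject after upload.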

{{% alert color="info" %}}
The model uses the file name when analyzing documents, which may introduce a potential vulnerability to prompt injection. To reduce this risk, consider modifying file names before including them in the request.
@@ -201,12 +203,14 @@ The knowledge chunks are stored in an AWS OpenSearch Serverless database to ensu

##### Data Chunks

To add data to the knowledge base, you need discrete pieces of information and create knowledge base chunks for each one. Use the GenAICommons operations to first [initialize a ChunkCollection object](/appstore/modules/genai/genai-for-mx/commons/#chunkcollection-create), and then [add a KnowlegdebaseChunk](/appstore/modules/genai/genai-for-mx/commons/#chunkcollection-add-knowledgebasechunk) object to it for each piece of information. Both can be found in the **Toolbox** inside of the **GenAI Knowledge Base (Content)** category.
To add data to the knowledge base, you need discrete pieces of information and create knowledge base chunks for each one. Use the GenAICommons operations to first [initialize a ChunkCollection object](/appstore/modules/genai/genai-for-mx/commons/#chunkcollection-create), and then [add a KnowledgeBaseChunk](/appstore/modules/genai/genai-for-mx/commons/#chunkcollection-add-knowledgebasechunk) object to it for each piece of information. Both can be found in the **Toolbox** inside of the **GenAI Knowledge Base (Content)** category.

##### Chunking Strategy

Dividing data into chunks is crucial for model accuracy, as it helps optimize the relevance of the content. The best chunking strategy is to keep a balance between reducing noise by keeping chunks small and retaining enough content within a chunk to get relevant results. Creating overlapping chunks can help preserve more context while maintaining a fixed chunk size. It is recommended to experiment with different chunking strategies to decide the best strategy for your data. In general, if chunks are logical and meaningful to humans, they will also make sense to the model. A chunk size of approximately 1500 characters with overlapping chunks has been proven to be effective for longer texts in the past.

Embedding operations have a maximum of 2048 characters per chunk, so make sure that your chunks do not exceed this limit before submitting them for embedding. Chunks over the limit cause the embedding operation to fail, so validate your input data accordingly.
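As a rough illustration of such a strategy, the following Python sketch produces fixed-size overlapping chunks while staying under the 2048-character embedding limit. The function and its defaults are hypothetical examples, not part of the Mendix modules.

```python
def chunk_text(text, chunk_size=1500, overlap=200, max_chars=2048):
    """Split text into overlapping fixed-size chunks.

    chunk_size ~1500 with some overlap has worked well for longer texts;
    max_chars mirrors the 2048-character embedding limit.
    """
    if chunk_size > max_chars:
        raise ValueError("chunk_size must not exceed the embedding limit")
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap          # how far each chunk advances
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break                        # last chunk reached the end
    return chunks
```

In practice you would chunk on logical boundaries (paragraphs, sections) rather than raw character offsets whenever the data allows it.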

The chunk collection can then be stored in the knowledge base using one of the following operations:

##### Add Data Chunks to Your Knowledge Base
@@ -231,19 +235,19 @@ The following toolbox actions can be used to retrieve knowledge data from a coll
1. `Retrieve` retrieves knowledge base chunks from the knowledge base. You can use pagination via the `Offset` and `MaxNumberOfResults` parameters or apply filtering via a `MetadataCollection` or `MxObject`.
2. `Retrieve & Associate` is similar to the `Retrieve` but associates the returned chunks with a Mendix object if they were linked at the insertion stage.

{{% alert color="info" %}}You must define your entity specialized from `KnowledgeBaseChunk`, which is associated to the entity that was used to pass a MendixObject during the [insertion stage](#knowledge-base-insertion).
{{% alert color="info" %}}You must define your entity specialized from `KnowledgeBaseChunk`, which is associated with the entity that was used to pass a MendixObject during the [insertion stage](#knowledge-base-insertion).
{{% /alert %}}

3. `Embed & Retrieve Nearest Neighbors` retrieves a list of type [KnowledgeBaseChunk](/appstore/modules/genai/genai-for-mx/commons/#knowledgebasechunk-entity) from the knowledge base that are most similar to a given `Content` by calculating the cosine similarity of its vectors.
4. `Embed & Retrieve Nearest Neighbors & Associate` combines the above actions `Retrieve & Associate` and `Embed & Retrieve Nearest Neighbors`.
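Conceptually, the nearest-neighbor operations rank stored chunks by the cosine similarity between the query embedding and each chunk's vector, then page through the ranked results. The Python sketch below illustrates the idea only; the names are hypothetical, and the real operations are microflow actions backed by the OpenSearch store.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def nearest_neighbors(query_vector, chunks, max_results=5, offset=0):
    """Rank chunks by similarity to the query, with offset/max_results
    pagination as in the Retrieve operations."""
    ranked = sorted(chunks,
                    key=lambda c: cosine_similarity(query_vector, c["vector"]),
                    reverse=True)
    return ranked[offset:offset + max_results]
```

A real vector store computes the same ranking with an approximate index instead of an exhaustive sort, which is what makes retrieval fast at scale.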

### Embedding Operations

If you are working directly with embedding vectors for specific use cases that do not include knowledge base interaction (for example clustering or classification), the below operations are relevant. For practical examples and guidance, consider referring to the [GenAI Showcase Application](https://marketplace.mendix.com/link/component/220475) showcase to see how these embedding-only operations can be used.
If you are working directly with embedding vectors for specific use cases that do not include knowledge base interaction (for example, clustering or classification), the operations below are relevant. For practical examples and guidance, see the [GenAI Showcase Application](https://marketplace.mendix.com/link/component/220475) to learn how these embedding-only operations can be used.

To implement embeddings into your Mendix application, you can use the microflows in the **Knowledge Bases & Embeddings** folder inside of the GenAICommons module. Both microflows for embeddings are exposed as microflow actions under the **GenAI (Generate)** category in the **Toolbox** in Mendix Studio Pro.

These microflows require a [DeployedModel](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) that determines the model and endpoint to use. Depending on the selected operation, an `InputText` String or a [ChunkCollection](/appstore/modules/genai/genai-for-mx/commons/#chunkcollection) needs to be provided.
These microflows require a [DeployedModel](/appstore/modules/genai/genai-for-mx/commons/#deployed-model) that determines the model and endpoint to use. Depending on the selected operation, an `InputText` String or a [ChunkCollection](/appstore/modules/genai/genai-for-mx/commons/#chunkcollection) needs to be provided. Note that embedding operations enforce a maximum of 2048 characters per chunk; longer input causes the operation to fail, so validate your input before submitting it for embedding.

#### Embeddings (String)
