Guide to: Model Mate - Flexible Model Integration

Model Mate enables you to flexibly and easily integrate models into your job designs. Use cases for this feature are many and varied. In this article we provide some example use cases and step-by-step instructions for enabling the feature. Finally, we include some special considerations for image generation and processing.



This feature is available in Quality Flow and can only be configured through the code editor. You will need Quality Flow and LLM enabled for your team, and have at least one team model configured.

Use Case Examples

You can incorporate models into your workflows in many ways. Here are just a few examples; you can use this tool however you like, based on your specific needs.

  1. A/B testing / Response Quality Assessment
    • Evaluating multiple live model responses and gathering feedback
    • Addressing specific questions about each response, such as accuracy and safety
    • Ranking responses
    • Categorizing images or providing feedback on visual characteristics
  2. AI Assistant
    • Contributors can use AI for tasks such as content summarization, content generation and language translation
  3. Content Screening
    • Detecting toxic content and warning contributors before they interact with it
    • Screening contributors' responses and offering suggestions for improvement, or directly applying fixes
  4. Model-Annotation
    • Generating pre-annotations that can be refined by contributors

How to configure the tool

Step 1: Ensure you have models configured for your team; instructions can be found here.

Step 2: In your Job Design page, click "Manage Language Models".


Step 3: Enable the specific model(s) you want to use in your job, and take note of the Model ID number.


Step 4: Switch to Code Editor and configure the cml:model tag:

<cml:model name="NAME" model-id="MODEL ID" prompt="Prompt to send to the model." /> 

For example, the snippet below would send the prompt "What is the capital of the USA?" to the model with ID "115":

<cml:model name="model1" model-id="115" prompt="What is the capital of the USA?" />

Step 5: Choose how to display the model's response in the job.

For example, you can choose to display the response in one of the editable text tools on the platform (i.e. cml:smart_text, cml:text or cml:text_area), with the addition of a model-annotation parameter that references the name attribute from the cml:model tag.
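For instance, a minimal sketch of this approach (the model ID "115" and the prompt are placeholders) that populates a text area with the model's response via model-annotation:

```
<cml:model name="model1" model-id="115" prompt="What is the capital of the USA?" />
<cml:text_area label="Answer:" model-annotation="model1" />
```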

Or you may decide to simply display the response on the page, using Liquid syntax to reference the name attribute within an HTML element such as a <div> or a <p>.

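A minimal sketch of displaying the response directly, assuming the name attribute is referenced with Liquid's double curly braces (the model ID is a placeholder):

```
<cml:model name="model1" model-id="115" prompt="What is the capital of the USA?" />
<div>{{model1}}</div>
```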

You can also apply Liquid logic to the model response. For example, to improve the formatting of longer responses, we recommend a Liquid filter that converts newlines in the response into line breaks.
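One such filter is Liquid's standard newline_to_br, which replaces each newline in the response with a <br /> tag (a sketch; model1 is a placeholder model name):

```
<p>{{ model1 | newline_to_br }}</p>
```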

Note: when using cml:model, your job must also include at least one other CML element (e.g. cml:checkbox, cml:radios, cml:text).

Step 6: Save, preview and launch!


Putting it all together:

<h2>What is the capital of the USA?</h2>
<cml:model name="model1" model-id="115" prompt="What is the capital of the USA? Respond with only the capital. Don't include any other context or explanation."/>
<cml:text label="Capital:" validates="required" model-annotation="model1" />




The cml:model tag is used with the following parameters:

  • name (required)
    • this is the reference name for your model. If you want to present the model response in an HTML element or a text element, you use this name attribute
  • model-id (required)
    • this is the ID of the model you are using. The model ID is assigned when you configure your model and is displayed when you enable a model in your job
  • prompt (required)
    • this is the prompt that will be sent to the model. Use Liquid syntax to access columns in your uploaded data, or to reference other form or text elements in your job's design
      • in the following example 'country' is a column header in your data:

<cml:model name="model1" model-id="115" prompt="What is the capital of {{country}}?" />

      • in the following example, 'country' references a set of inputs to choose from in a <cml:select> tag, but it could equally be a contributor's entry in a free text box.

<cml:select label="Country" name="country" validates="required">
  <cml:option label="France" />
  <cml:option label="Japan" />
  <cml:option label="Brazil" />
</cml:select>
<cml:model name="model1" model-id="115" prompt="What is the capital of {{country}}?" trigger="button-1" />

  • trigger (optional)
    • this is a reference to a button ID. By default the model response loads when the page opens; if you would like the request to be triggered only after a contributor has provided inputs, include this parameter along with the button tag, as in the example above and the preview below.

<a target="_blank" class="btn" id="button-1">Get Response</a>


  • trigger-limit (optional, defaults to "true")
    • when trigger-limit="true", contributors will only be able to click the button to send the model request once
    • if an error occurs, contributors will be able to click the button to send another request until they receive a successful response
    • when trigger-limit="false", contributors can freely click the button and send model requests unlimited times.
  • model-output (optional, defaults to "true")
    • when model-output="true", a column is appended to the output containing the model's response
    • in cases where model-output="true" and trigger-limit="false", only the final response is output.
    • the label of the column in the output corresponds to the name attribute you specified in the tag, e.g. "model1".
  • image (optional)
    • when you are integrating a model in order to process image data, you will need to include the image parameter along with the prompt within cml:model.
    • the image attribute currently accepts one image at a time, passed as an image URL, in the following formats:
      • PNG (.png)
      • JPEG (.jpeg)
      • WEBP (.webp)
      • non-animated GIF (.gif)
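As a hedged sketch combining the optional parameters above (the model ID, the button ID, and the image_url data column are placeholders, not values from the platform):

```
<a target="_blank" class="btn" id="button-1">Get Response</a>
<cml:model name="vision_model" model-id="115"
  prompt="Describe this image."
  image="{{image_url}}"
  trigger="button-1"
  trigger-limit="false"
  model-output="true" />
```

Here the request is only sent when the contributor clicks the button, can be re-sent freely, and the response is appended to the output under the column "vision_model".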

Image Generation

In order to generate images using a model:

1. In the Model Configuration section, set up a model that enables image generation.

2. If you require your image to be in base64, ensure that the following is included in your input schema:

"response_format": "b64_json"

3. You can only display your image in <p>, <div> or <cml:smart_text> tags.
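As an illustrative sketch of displaying a generated image inside a <div> (the model name, the model ID, and the use of a base64 data URI are our assumptions for illustration, not documented platform behavior):

```
<cml:model name="image_gen" model-id="120" prompt="Generate an image of a red square." />
<div>
  <img src="data:image/png;base64,{{image_gen}}" />
</div>
```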

Image Processing

To ensure your model can effectively process images:

  1. Include the image attribute.
    • Make sure to include the image attribute when calling the model. This attribute specifies the image you intend to send to the model.
  2. Update model configuration. Incorporate the following into your input and output schema.

Input schema:

"payload": {
  "model": "gpt-4-turbo",
  "messages": "${message_items}",
  "temperature": 0.7
},
"message_item": {
  "role": "user",
  "content": [
    {
      "type": "text",
      "text": "${message}"
    },
    {
      "type": "image_url",
      "image_url": {
        "url": "${role}"
      }
    }
  ]
}
Output Schema:

"type": "multi-result",
"results": "/choices",
"text": "/message/content",
"role": "/message/role"

