Model Mate enables you to flexibly and easily integrate models into your job designs. The use cases for this feature are many and varied. In this article we provide some example use cases and step-by-step instructions for enabling the feature. Finally, we include some special considerations for image generation and processing.
Note
This feature is available in Quality Flow and can only be configured through the code editor. You will need Quality Flow and LLM features enabled for your team, and at least one team model configured.
Use Case Examples
You can incorporate models into your workflows in many ways. Here are just a few examples; you can use this tool however you like, based on your specific needs.
- A/B Testing / Response Quality Assessment
  - Evaluating multiple live model responses and gathering feedback
  - Addressing specific questions about each response, such as accuracy and safety
  - Ranking responses
  - Categorizing images or providing feedback on visual characteristics
- AI Assistant
  - Contributors can use AI for tasks such as content summarization, content generation, and language translation
- Content Screening
  - Detecting toxic content and warning contributors before they interact with it
  - Screening contributors' responses and offering suggestions for improvement, or directly applying fixes
- Model-Annotation
  - Generating pre-annotations that can be refined by contributors
How to configure the tool
Step 1: Ensure you have models configured for your team; instructions can be found here.
Step 2: On your Job Design page, click "Manage Language Models".
Step 3: Enable the specific model(s) you want to use in your job, and take note of the Model ID.
Step 4: Switch to the Code Editor and configure the cml:model tag:
<cml:model name="NAME" model-id="MODEL ID" prompt="Prompt to send to the model." />
For example, the following snippet would send the prompt "What is the capital of the USA?" to the model with the ID "115".
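A minimal sketch of that snippet (the name "model1" is just a label you choose, and "115" stands in for whichever Model ID you enabled in Step 3):
<cml:model name="model1" model-id="115" prompt="What is the capital of the USA?" />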
Step 5: Choose how to display the model's response in the job.
For example, you can choose to display the response in one of the editable text tools on the platform (i.e. cml:smart_text, cml:text or cml:text_area), with the addition of a model-annotation parameter that references the name attribute from the cml:model tag.
Alternatively, you can display the response directly on the page, using Liquid syntax to reference the name attribute within an HTML element such as a <div> or a <p>.
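For instance, if your cml:model tag is named "model1", the display-only approach might look like the line below. This is only a sketch; it assumes the model's name can be referenced directly as a Liquid variable, so verify the exact reference form in your job's preview.
<p>{{model1}}</p>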
Note that in addition to cml:model, your job must also include at least one other CML element (e.g. cml:checkbox, cml:radios, cml:text).
Step 6: Save, preview and launch!
Example:
<h2>What is the capital of the USA?</h2>
<cml:model name="model1" model-id="115" prompt="What is the capital of the USA? Respond with only the capital. Don't include any other context or explanation."/>
<cml:text label="Capital:" validates="required" model-annotation="model1" />
Parameters
The cml:model
tag is used with the following parameters:
- name (required)
  - This is the reference for your model. If you want to present your model's response in an HTML element or a text element, you use the name attribute.
- model-id (required)
  - This is the ID of the model you are using. The model ID is assigned when you configure your model and is shown when you enable a model in your job.
- prompt (required)
  - This parameter allows you to customize the prompt you want to send to your model. You can include a standard string, or add references to different parts of your CML (see the example after the table below):
Ref Type | Description
---|---
Source Data | This is the prompt you will be sending within the request to your model, and it is highly customizable. You can include references to your source data using Liquid syntax.
Element Content | If you want to use what a contributor selected in a form element, wrote in a text element, or uploaded in the job, you can reference that element's content in your prompt. Alternatively, you can add an ID or Class to the CML element you would like to send to the model, and then reference that ID or Class in your prompt.
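  - For example, a prompt might pull in a source-data column with Liquid syntax. This is a sketch; the column name review_text is a placeholder for one of your own data columns, and the model ID is illustrative:
    <cml:model name="summary_model" model-id="115" prompt="Summarize the following review: {{review_text}}" />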
- trigger (optional)
  - This is a reference to a Button ID. If you want your model request to be triggered when a contributor clicks a button, you must include this attribute as well as a button tag:
  - <a target="_blank" class="btn" id="button-1">Get Response</a>
  - By default, if the trigger attribute is not included, the model request is sent on page load.
  - The trigger value and the Button ID must be the same in order to work correctly.
  - Example syntax: trigger="button-1" (see the sketch below).
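  - Putting it together, a button-triggered request might look like the following sketch (the prompt and model ID are placeholders):
    <a target="_blank" class="btn" id="button-1">Get Response</a>
    <cml:model name="model1" model-id="115" prompt="What is the capital of the USA?" trigger="button-1" />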
- trigger-limit (optional, defaults to unlimited)
  - trigger-limit can only be used with trigger, and allows you to specify the maximum number of times a contributor can click the associated button to send the same request to the model.
  - Example syntax: with trigger-limit="5", contributors can click the button and send model requests five times.
  - If an error occurs, contributors will be able to click the button to send another request until they receive the number of successful responses specified by the limit.
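  - For instance, adding the limit to the button-triggered sketch above (the prompt and model ID remain placeholders):
    <cml:model name="model1" model-id="115" prompt="What is the capital of the USA?" trigger="button-1" trigger-limit="5" />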
- model-output (optional, defaults to "true")
  - When model-output="true", a column containing the model's response is appended to the output.
  - In cases where model-output="true" and trigger-limit="false", only the final response is output.
  - The label of the column in the output corresponds to the name attribute you specified in the tag, e.g. "model1".
- image (optional)
  - When you are integrating a model in order to process image data, you will need to include the image parameter along with the prompt within cml:model.
  - The image attribute currently accepts one image at a time in the following formats:
    - image URL
    - PNG (.png)
    - JPEG (.jpeg)
    - WEBP (.webp)
    - non-animated GIF (.gif)
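  - For example, you might pass an image URL from your source data (a sketch; the column name image_url is a placeholder for one of your own data columns, and the model ID is illustrative):
    <cml:model name="vision_model" model-id="115" prompt="Describe what is shown in this image." image="{{image_url}}" />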
- validates (optional)
  - Defines whether or not the element is required to be answered, e.g. validates="required".
  - Defaults to not required if not present.
In QA or review jobs, you can optionally also use the following parameter:
- model-request (optional)
  - By default, for work jobs, model-request="true".
  - By default, for QA jobs, model-request="false".
  - By default, for rework jobs, model-request="false".
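To override the default behavior in a QA or rework job, you can set the attribute explicitly on the cml:model tag. A sketch (the model ID and prompt are placeholders):
<cml:model name="model1" model-id="115" prompt="What is the capital of the USA?" model-request="true" />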
Image Generation
In order to generate images using a model:
1. In the Model Configuration section, set up a model that enables image generation.
2. If you require your image to be in base64, ensure that the following is included in your input schema (see the sketch after this list):
response_format: "b64_json"
3. You can only display your image in <p>, <div> or <cml:smart_text> tags.
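For step 2, the response_format line sits inside the payload section of your model's input schema. The snippet below is only a sketch of where it might go; the model value and the "prompt": "${message}" placeholder are illustrative and depend on your provider's API and the templating used in your model configuration:
{
  "payload": {
    "model": "YOUR_IMAGE_MODEL",
    "prompt": "${message}",
    "response_format": "b64_json"
  }
}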
Image Processing
To ensure your model can effectively process images:
- Include the image attribute. Make sure to include the image attribute when calling the model. This attribute specifies the image you intend to send to the model.
- Update model configuration. Incorporate the following into your input and output schema.
Input schema:
{
"payload": {
"model": "gpt-4-turbo",
"messages": "${message_items}",
"temperature": 0.7
},
"message_item": {
"role": "user",
"content": [
{
"type": "text",
"text": "${message}"
},
{
"type": "image_url",
"image_url": {
"url": "${role}"
}
}
]
}
}
Output schema:
{
"type: "multi-result",
"results": "/choices",
"text": "/message/content",
"role": "/message/role"
}