Guide to: Configuring and Enabling a Model


ADAP supports bringing your own model. For a use case example, please see this article.

A model belongs to a team and can be found via the Account page, on the MODELS tab: ${TEAM_ID}/models



In order to see the MODELS tab, your team must have the LLM feature flag enabled and you must be a team admin. Please contact your CSM for assistance.


Configure a model

This interface allows an ADAP job to securely store the information needed to interact with your model, define rate limits, and translate the messages sent and received into formats that both sides can interpret. When creating or editing a model, you will be presented with the following fields:

  • name string required
    • Model name
  • endpoint string required
    • Model endpoint
  • description string optional
    • Model description
  • header JSON required
    • Header to be used when calling the model endpoint. If the model requires an API secret, it can be stored as an encrypted value by referencing the ${secret} placeholder in the header instead of the key itself.

Example: {"Content-Type": "application/json","Authorization": "${secret}"}

  • secret key string optional
    • Value substituted for the ${secret} placeholder in the header, if defined.

Example: Bearer 123

  • http method string required
    • HTTP method used when calling the model endpoint.
  • input schema JSON required
    • Schema to translate messages from Appen jobs to your model. Appen jobs always send messages using the internal structure shown below. If your model uses a different structure, the input schema field can be used for this translation.

[{ message: string, role: string }]
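For instance, a hypothetical two-turn exchange sent by an Appen job would take this form (the role names and message text here are purely illustrative):

```json
[
  { "message": "What is the capital of France?", "role": "user" },
  { "message": "The capital of France is Paris.", "role": "assistant" }
]
```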

  • payload JSON optional
    • Defines the structure of the payload sent to your model.
  • message_item JSON optional
    • Defines the structure of each message sent to your model.

Example configuration for an OpenAI Chat Completions model:

{
  "payload": {
    "model": "gpt-3.5-turbo",
    "messages": "${message_items}",
    "temperature": 0.7
  },
  "message_item": {
    "role": "${role}",
    "content": "${message}"
  }
}
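Given an internal message list such as [{ "message": "Hello!", "role": "user" }], this configuration would expand to a request body along these lines (illustrative):

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    { "role": "user", "content": "Hello!" }
  ],
  "temperature": 0.7
}
```

Each incoming message is rendered through the message_item template, and the resulting list replaces ${message_items} in the payload.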
  • output schema string required
    • Schema to translate the response from your model into the Appen internal structure shown below.

[{ text: string, role: string}]

  • type string required
    • Type of response returned by your model: “single-result” or “multi-result”
  • results string optional
    • Path where the list of results can be found (required for the “multi-result” type)
  • text string required
    • Path where the text of the response can be found
  • role string required
    • Path where the role of the response can be found

Example configuration for an OpenAI Chat Completions model:

{
  "type": "multi-result",
  "results": "/choices",
  "text": "/message/content",
  "role": "/message/role"
}
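To see how these paths resolve, consider a trimmed-down Chat Completions response (fields omitted for brevity). The "/choices" path selects the array of results; within each result, "/message/content" and "/message/role" locate the text and role:

```json
{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ]
}
```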
  • method param string required
    • How the request is sent: “REQUEST_BODY” or “REQUEST_PARAM”
  • rate string optional
    • Maximum number of requests allowed per rate interval
  • rateintervalinsec string optional
    • Length of the rate-limit interval, in seconds
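Putting the fields together, a configuration for an OpenAI-compatible chat model might look like the following. This is a single illustrative JSON object; in the UI each field is entered separately, and the endpoint, secret, and rate values shown are placeholders, not working credentials:

```json
{
  "name": "example-chat-model",
  "endpoint": "https://api.openai.com/v1/chat/completions",
  "description": "Illustrative OpenAI-compatible chat model",
  "header": {
    "Content-Type": "application/json",
    "Authorization": "${secret}"
  },
  "secret key": "Bearer 123",
  "http method": "POST",
  "input schema": {
    "payload": {
      "model": "gpt-3.5-turbo",
      "messages": "${message_items}",
      "temperature": 0.7
    },
    "message_item": {
      "role": "${role}",
      "content": "${message}"
    }
  },
  "output schema": {
    "type": "multi-result",
    "results": "/choices",
    "text": "/message/content",
    "role": "/message/role"
  },
  "method param": "REQUEST_BODY",
  "rate": "60",
  "rateintervalinsec": "60"
}
```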

Enable a Model in a Job

Once you have successfully configured a model, you can enable it for use within each individual job. As long as your team has the LLM feature flag enabled (speak to your CSM for assistance), you will see a "Manage Language Models" link, which lets you manage and enable the available models in your current job.




Upon clicking this link, you will be presented with a list of models available to this job's team. Click the checkbox to enable a model for the job.



