
Guide to: Configuring and Enabling a Model

Overview

ADAP supports bringing your own model; for use case examples, please see this article and this article.

A model belongs to a team and can be added and managed by Team Admins in several locations within the platform.

 


Note

To see the Models tab, your team must have the LLM feature flag enabled and you must be a Team Admin. Please contact your CSM or help@appen.com for assistance.

Model Templates

Popular public-use models are provided as model templates to help you get started quickly and easily.


When you select a model template, all required fields are pre-filled except for the secret key. If you already have your secret key, you can enter it here; to obtain one, visit the model provider's API website.

You can customize the model name, model description, and edit any other fields to tailor the model to your specific use case(s).

Configure Your Own Model

This interface allows an ADAP job to securely store information on how to interact with your model, define rate limits, and translate the messages sent and received into formats that both sides can interpret. When creating or editing a model, you will be presented with the following fields:

  • name (string, required): the name you define for your model
  • endpoint (string, required): your model endpoint
  • description (string, optional): the description you define for your model
  • header (JSON, required):
    • Header to be used when calling the model endpoint. If the model requires an API secret, the secret can be stored as an encrypted value by providing the parameter ${secret} in place of the key.
    • Example: {"Content-Type": "application/json","Authorization": "${secret}"}
  • secret key (string, optional):
    • The secret key is a value used to authenticate requests to your API.
    • It is substituted for the ${secret} parameter in the header, if defined.
    • Example: Bearer 123
  • http method (string, required):
    • HTTP method used when calling the model endpoint.
  • input schema (JSON, required):
    • Schema to translate messages from Appen jobs to your model. Appen jobs will always send messages using the internal structure shown below.
    • [{ message: string, role: string }]
    • If your model doesn't use the same structure, the input schema field can be used for this translation; a sketch for the OpenAI Chat Completions API follows below.
    • payload (JSON, optional):
      • Payload refers to the data that is sent in the request / received in the response.
      • This input defines the structure of the payload sent to your model.
    • message_item (JSON, optional):
      • Defines the structure of each message sent to your model.

Example configuration for an OpenAI Chat Completion
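As a sketch, an input schema for the OpenAI Chat Completions API might look like the following (mirroring the full GPT-4 Turbo configuration later in this article; the model name and temperature are illustrative):

{
  "payload": {
    "model": "gpt-4-turbo",
    "messages": "${message_items}",
    "temperature": 0.7
  },
  "message_item": {
    "role": "${role}",
    "content": "${message}"
  }
}

Here ${message_items} expands to the list of translated messages, and ${role} and ${message} are filled in from each internal message item.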

  • output schema (string, required):
    • Schema to translate the response from your model into the Appen internal structure shown below.
    • [{ text: string, role: string }]
    • type (string, required)
      • Type of response returned by the customer model: “single-result” or “multi-result”
    • results (string, optional)
      • Path where the response array can be found, if the type is “multi-result”
    • text (string, required)
      • Path where the text of the response can be found
    • role (string, required)
      • Path where the role of the response can be found

Example configuration for an OpenAI Chat Completion
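As a sketch, the output schema used in the full GPT-4 Turbo configuration later in this article looks like the following, where results points at the array of results in the response and text points at the text field within each result:

{
  "type": "multi-result",
  "results": "/content",
  "text": "/text"
}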

  • method param (string, required): how the payload is passed to the endpoint: “REQUEST_BODY” or “REQUEST_PARAM”
  • rate (string, optional): maximum number of calls to the model per rateintervalinsec
  • rateintervalinsec (string, optional): span of time, in seconds, over which rate applies
    • Example: with rate: 10 and rateintervalinsec: 60, the model will be called at most 10 times per 60 seconds.

 

Enable a Model in a Job

Once you have successfully configured a model, you can enable it for use within each individual job. As long as your team has the LLM feature flag enabled (speak to your CSM or contact help@appen.com for assistance), you will see a "Manage Language Models" link, which lets you manage and enable the available models in your current job.

 


 

Upon clicking this link, you will be presented with a list of models available to this job's team. Click the checkbox to enable a model for the job.


Advanced Configurations

Amazon Bedrock Integration

Bedrock Model Access

    • Make sure you have access to the Bedrock model by checking the Model access page in the Amazon Bedrock console
    • You may need to adjust the region, depending on where you want the model to be hosted

Lambda Function Set-up

    • Once your access to Amazon Bedrock is established, you will need to create a Lambda function to route requests to the Bedrock API using boto3
    • Add a Lambda layer - boto3 does not include bedrock-runtime in Lambda by default, so you will need to add a Lambda layer with an updated version of boto3
    • Set your Lambda function to use a Python runtime, and upload the layer
    • Copy the following code to your Lambda function (note: the example below is specific to Anthropic Claude 3; refer to the relevant AWS documentation for each individual model's inference parameters and adjust the code accordingly)
import json
import boto3
import os

# Initialize the Bedrock client
bedrock = boto3.client(service_name="bedrock-runtime", region_name="us-east-1")

def lambda_handler(event, context):
    print(event)

    # Reject requests that do not carry the expected API key
    api_key = os.environ['API_KEY']
    auth = event.get('headers').get('authorization')
    if auth != f'Bearer {api_key}':
        return {
            'statusCode': 401,
            'body': "Invalid API key."
        }

    # Extract parameters from the event
    input_payload = json.loads(event.get('body'))
    modelId = input_payload['model']
    temperature = float(input_payload.get('temperature', 0.0))
    topP = float(input_payload.get('topP', 0.9))
    maxTokenCount = int(input_payload.get('maxTokenCount', 300))
    input_messages = input_payload['messages']

    # Prepare the request body with the input message
    # NOTE: this is where changes might be required depending on your chosen model's inference parameters
    body = json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": maxTokenCount,
            "messages": input_messages
        }
    )
    accept = "application/json"
    contentType = "application/json"

    # Invoke the Bedrock model
    response = bedrock.invoke_model(
        body=body, modelId=modelId, accept=accept, contentType=contentType
    )

    # Process the response from the model
    response_body = json.loads(response['body'].read().decode())

    # Return the model's response
    return {
        'statusCode': 200,
        'body': json.dumps(response_body)
    }
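
To verify the handler before wiring up API Gateway, you can run a test event in the Lambda console. The sketch below assumes the API Gateway HTTP API proxy payload, which passes header names in lowercase and the request body as a JSON string; the model ID and message are illustrative:

{
  "headers": {
    "authorization": "Bearer YOUR_API_KEY"
  },
  "body": "{\"model\": \"anthropic.claude-3-sonnet-20240229-v1:0\", \"messages\": [{\"role\": \"user\", \"content\": \"Hello\"}], \"maxTokenCount\": 300}"
}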
  • In the Lambda configuration, add an environment variable with your API key to prevent public access to the API once deployed. WARNING: this is not best practice for authentication, especially if the AWS account is shared among many users.


  • Also in the configuration, go to the Permissions tab and click the role name. This will take you to the IAM console, where you can add permissions to access Bedrock. Use the AWS-managed AmazonBedrockFullAccess policy, or use an inline policy if you want more fine-grained access.


  • Back in the Lambda configuration page, under "General Configuration", increase the "Timeout" to at least one minute. LLMs can take some time to generate responses, so this is required to avoid timing out.


API Gateway Configuration

Once the Lambda configuration is complete, it must be linked with an API in API Gateway.

  • In the API Gateway console, create a new HTTP API.


  • Create a new integration to the Lambda function you just created, and give the API a name


  • In Step 2, leave the method as ANY, or set it to POST if you wish to use other methods with the same resource path


  • Step 3 is optional; add stages or skip the step
  • Finally, if you define stages, you will need to deploy the API; if you are using the default stage, it is set to auto-deploy by default
  • Test your API in Postman; first, add an "authorization" header with the value "Bearer {YOUR_API_KEY}"


  • Then format the body of the request and send it to the endpoint you configured in API Gateway; a sample request body is sketched below

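A request body matching the parameters the Lambda function expects might look like this (values are illustrative; temperature, topP, and maxTokenCount fall back to the defaults in the handler if omitted):

{
  "model": "anthropic.claude-3-sonnet-20240229-v1:0",
  "messages": [
    {"role": "user", "content": "Write a haiku about data annotation."}
  ],
  "temperature": 0.0,
  "topP": 0.9,
  "maxTokenCount": 300
}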

  • Now you can use this endpoint to access models in Bedrock from any of our AI tools by configuring them in the model configuration page. Here is an example of what your configuration might look like for an Anthropic Claude 3 model; keep in mind that the input and output schemas will differ according to your model's request and response formats.
{
  "NAME": "bedrock",
  "ENDPOINT": "YOUR_NEW_ENDPOINT",
  "DESCRIPTION": "bedrock-completion",
  "HEADER": {"Content-Type": "application/json", "Authorization": "Bearer ${secret}"},
  "SECRET": "YOUR_API_KEY",
  "HTTPMETHOD": "POST",
  "INPUTSCHEMA": {
    "payload": {
      "model": "anthropic.claude-3-sonnet-20240229-v1:0 | YOUR_MODEL",
      "messages": "${message_items}",
      "temperature": "0.0",
      "topP": "0.9",
      "maxTokenCount": "300"
    },
    "message_item": {
      "role": "${role}",
      "content": "${message}"
    }
  },
  "OUTPUTSCHEMA": {
    "type": "multi-result",
    "results": "/content",
    "text": "/text"
  },
  "METHODPARAM": "REQUEST_BODY"
}
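
For context on the output schema above: the Claude 3 Messages API returns its results in a content array whose items carry a text field, which is why results points to /content and text points to /text. A trimmed response might look roughly like this (values are illustrative):

{
  "id": "msg_...",
  "role": "assistant",
  "content": [
    {"type": "text", "text": "Model response here"}
  ]
}
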
GPT-4 Turbo with Vision
Create an API key on the OpenAI platform (https://platform.openai.com/api-keys). The main difference from the standard model configuration is the inclusion of an image_url parameter in the input schema:
{
  "type": "image_url",
  "image_url": {
    "url": "${image_url}"
  }
}
  • This addition allows you to send both a prompt and an image to the model
  • Navigate to the model configuration page and fill out the following info to successfully configure the model:
{
  "NAME": "gpt-4-turbo",
  "ENDPOINT": "https://api.openai.com/v1/chat/completions",
  "DESCRIPTION": "gpt-4-turbo-completion",
  "HEADER": {"Content-Type": "application/json", "Authorization": "Bearer ${secret}"},
  "SECRET": "YOUR_API_KEY",
  "HTTPMETHOD": "POST",
  "INPUTSCHEMA": {
    "payload": {
      "model": "gpt-4-turbo",
      "messages": "${message_items}",
      "temperature": 0.7
    },
    "message_item": {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "${message}"
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "${image_url}"
          }
        }
      ]
    }
  },
  "OUTPUTSCHEMA": {
    "type": "multi-result",
    "results": "/content",
    "text": "/text"
  },
  "METHODPARAM": "REQUEST_BODY"
}
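
After ADAP substitutes ${message} and ${image_url}, the payload sent to the endpoint would take roughly this shape (the prompt and image URL are illustrative):

{
  "model": "gpt-4-turbo",
  "messages": [
    {
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}}
      ]
    }
  ],
  "temperature": 0.7
}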
