
AWS Amplify Bedrock Integration

Updated: Nov 20, 2024

AI at Tenera

At Tenera, we are constantly exploring ways to integrate AI into our daily workflows to enhance efficiency and innovation. This blog post delves into the practical applications of AI within our operations, highlighting how we leverage AWS Bedrock to transform various aspects of our work. By integrating with AI, we can enable streamlined features such as creating concise summaries of maintenance reports, organizing unstructured notes into actionable data, predicting work and material needs from historical data, and comparing tendering bids to identify key differences.


Our goal is to lower the barriers to AI-enabled workflows, allowing us to quickly test and implement new ideas across different areas of our application. We are committed to providing the tools needed for easy and rapid fine-tuning of models and prompts, so that we can experiment with a variety of concepts and quickly identify the most effective use cases for AI within our operations. AWS Bedrock gives us a powerful platform for this: it integrates well with AWS Amplify, offers multiple models for different use cases, and exposes a uniform interface for text generation, image recognition, and image generation. This post will guide you through our integration with AWS Bedrock, detailing the steps and code snippets involved in setting up this robust AI infrastructure.


AWS Bedrock

AWS Bedrock is a fully managed service that provides access to a range of foundation models from leading AI providers through a single API, making it easy to build and scale generative AI applications. As part of the extensive AWS ecosystem, Bedrock integrates seamlessly with AWS Amplify, enabling us to leverage a cohesive and powerful infrastructure for our AI needs.

 

Our decision to integrate with AWS Bedrock is driven by several key factors. Firstly, being part of the AWS ecosystem ensures smooth compatibility with our existing tools and services, particularly AWS Amplify. Secondly, Bedrock offers a diverse selection of models tailored to various use cases, providing the flexibility needed to address different AI requirements.

 

Additionally, Bedrock provides a uniform interface for interacting with these models, simplifying the integration process across our application. This versatility includes capabilities for text generation, image recognition, and image generation, allowing us to experiment and implement AI solutions effectively across a broad spectrum of tasks.

 

 

AWS Amplify Lambda-to-Bedrock integration

 

Amplify can integrate with Bedrock in a few different ways. Amazon offers several guides for integrating Amplify Gen 2 with Bedrock, but for those of us still on Gen 1, the only documented solution relies on Server-Side Rendering. This section explains how a client-rendered Gen 1 Amplify application can still integrate with Bedrock with minimal effort.

 

Now, we can't simply make requests against Bedrock directly from the client. Or at least, we can't do that without exposing AWS credentials to the public, which is a quick way to have malicious users hijacking your AWS account! This is part of the reason Amazon's guide uses SSR: the credentials never leave the server. However, we can achieve the same thing by calling Bedrock indirectly through a backend function, whose source code clients can never access.

 

The first step is to create a Lambda function that will act as a gateway between the client and Bedrock. On creation, this function is configured with the proper access controls, i.e. a user needs to be logged in to make requests to it. The function is created the same way as any other Lambda function:


amplify add function
? Select which capability you want to add: Lambda function (serverless function)
? Provide a friendly name for your resource to be used as a label for this category in the project: lambdafunction
? Provide the AWS Lambda function name: BedRockLambdaFunction
? Choose the runtime that you want to use: NodeJS
? Choose the function template that you want to use: (Use arrow keys)
> Hello world function


This sets up a fresh Lambda function without any special permissions or logic. Before we go any further, we should make sure the new Lambda has permission to make requests against Bedrock. To do this, we configure the Lambda's IAM permissions in its CloudFormation template file by adding the following execution policy to its Resources section:

    "CustomLambdaExecutionPolicy": {

      "Type": "AWS::IAM::Policy",

      "Properties": {

        "PolicyName": "custom-lambda-execution-policy",

        "PolicyDocument": {

          "Version": "2012-10-17",

          "Statement": [

            {

              "Action": [

                "bedrock:InvokeModel",

                "bedrock:GetModel"

              ],

              "Resource": [

                "*"

              ],

              "Effect": "Allow"

            }

          ]

        },

        "Roles": [

          {

            "Ref": "LambdaExecutionRole"

          }

        ]

      },

      "DependsOn": "LambdaExecutionRole"

    }

 

This allows it to get a requested model from Bedrock, and to invoke requests against said model.
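The wildcard Resource keeps the example simple, but the statement can be scoped down to only the foundation models you actually call. A hedged sketch of a tighter statement follows; the region and model ARN here are placeholders, not our actual configuration:

```json
{
  "Action": [
    "bedrock:InvokeModel",
    "bedrock:GetModel"
  ],
  "Resource": [
    "arn:aws:bedrock:eu-central-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"
  ],
  "Effect": "Allow"
}
```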

 

Next up, we set up the Lambda's business logic to call Bedrock. See the following code snippet for detail:

 

 

import {
    BedrockRuntimeClient,
    ConverseCommand,
    ConverseCommandOutput
} from "@aws-sdk/client-bedrock-runtime";

// The Lambda runtime provides AWS_REGION as an environment variable.
const client = new BedrockRuntimeClient({ region: process.env.AWS_REGION });

export const handler = async (event: any) => {
    // The ConverseCommand input arrives as a JSON string from the GraphQL layer.
    const converseCommandJson = event.arguments.input.converseCommand;
    const converseCommand = new ConverseCommand(JSON.parse(converseCommandJson));
    const converseResponse: ConverseCommandOutput = await client.send(
        converseCommand
    );
    return JSON.stringify(converseResponse);
};

 

This code simply creates a Bedrock Runtime client, forwards the incoming ConverseCommand to Bedrock, and returns Bedrock's response. It leaves it up to the caller of this Lambda to assemble the ConverseCommand itself, as if it were interfacing directly with Bedrock.
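As a sketch of what such a caller might send, the following assembles the JSON-serialized ConverseCommand input the handler above expects. The field names inside the command follow the Bedrock Converse API; the model ID and prompt text are illustrative examples, and the `converseCommand` wrapper is our own convention:

```typescript
// Sketch: assemble the argument object for the invokeBedrock query.
// The handler above reads arguments.input.converseCommand as a
// JSON-serialized ConverseCommand input.
const converseInput = {
    modelId: "anthropic.claude-3-haiku-20240307-v1:0", // example model ID
    system: [{ text: "Summarize the user's message in one sentence." }],
    messages: [
        {
            role: "user",
            content: [{ text: "The boiler in building A is leaking again." }],
        },
    ],
};

const queryArguments = {
    input: { converseCommand: JSON.stringify(converseInput) },
};

console.log(queryArguments.input.converseCommand);
```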

 

Finally, we configure the GraphQL signature of the Lambda in schema.graphql:

 

type Query {
  invokeBedrock(input: String): String @function(name: "BedRockLambdaFunction-${env}")
}

 

This is now ready to be pushed to the cloud with amplify push.
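Once deployed, the operation can be called like any other Amplify GraphQL query. A minimal query document for the schema above might look like this (the operation name is our choice):

```graphql
query InvokeBedrock($input: String) {
  invokeBedrock(input: $input)
}
```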

 

Once complete, we can make requests against this lambda as we would any other, using the ConverseCommand request format. You can read about this in depth on AWS's documentation, but in short, this allows you to define which model you would like to use from Bedrock's roster of models, the system prompt through which the request is interpreted by the model, and any message or binary content that the model is able to handle.

 

await invokeBedrockClient.invokeBedrock({
    modelId: model,
    system: [{ text: prompt }],
    messages: [messages]
});

 

And there you have it: you can now make ConverseCommand requests from your front end as if you were interfacing with Bedrock directly. This lets you quickly integrate AI into any relevant feature in your application, with the ability to configure the model and prompt without any backend changes at all.
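Since the Lambda returns the raw ConverseCommandOutput as a JSON string, the front end still has to unpack it. A minimal sketch, assuming the response shape of the Bedrock Converse API; the sample payload is hand-written for illustration:

```typescript
// Sketch: extract the assistant's text from a JSON-serialized
// ConverseCommandOutput, as returned by the Lambda above.
function extractText(responseJson: string): string {
    const response = JSON.parse(responseJson);
    // Converse responses carry the reply under output.message.content,
    // an array of content blocks that may each hold a text field.
    const blocks: Array<{ text?: string }> = response.output?.message?.content ?? [];
    return blocks.map((block) => block.text ?? "").join("");
}

// Example with a hand-written response payload:
const sample = JSON.stringify({
    output: {
        message: {
            role: "assistant",
            content: [{ text: "Boiler leak in building A." }],
        },
    },
    stopReason: "end_turn",
});
console.log(extractText(sample)); // → "Boiler leak in building A."
```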

 

Example - AI Summarizing Location Descriptions from Live Inspections

Here we show how this setup can be used to build quick and simple AI-enabled features by indirectly invoking Bedrock models from the client. In the example below, we use audio transcripts from inspections:
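A hedged sketch of what such a feature might send through the Lambda above; the prompt wording and model ID are illustrative, not our production values:

```typescript
// Sketch: build a ConverseCommand input that asks a model to condense
// an inspection audio transcript into a short location description.
function buildSummaryRequest(transcript: string) {
    return {
        modelId: "anthropic.claude-3-haiku-20240307-v1:0", // example model
        system: [
            {
                text: "You summarize inspection transcripts into a concise location description.",
            },
        ],
        messages: [
            { role: "user", content: [{ text: transcript }] },
        ],
    };
}

// The request is serialized and sent through the invokeBedrock query:
const summaryRequest = buildSummaryRequest(
    "Okay, we're on the third floor, east wing, next to stairwell B..."
);
console.log(JSON.stringify({ converseCommand: JSON.stringify(summaryRequest) }));
```

Because the model and system prompt live in the request, tweaking the summarization behavior is purely a front-end change; no redeploy of the Lambda is needed.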

 




Conclusion

Integrating AI into our workflows at Tenera, especially through the robust capabilities of AWS Bedrock, represents a significant leap forward in our technological journey. By leveraging the seamless integration with AWS Amplify, we have created a flexible and scalable AI infrastructure that can support a wide array of applications, from text generation to image recognition. The process of setting up and utilizing Bedrock via Lambda functions allows us to quickly and efficiently deploy AI-driven solutions across our platform. As we continue to explore and refine our AI capabilities, we are excited about the potential innovations and efficiencies that will emerge, ultimately enhancing the value we deliver to our users. Stay tuned for more updates as we delve deeper into the transformative power of AI at Tenera!
