Customers are finding several advantages to using generative AI within their applications. However, generative AI adds new considerations when reviewing the threat model of an application, whether you're using it to improve the customer experience or operational efficiency, to generate more tailored or specific results, or for other reasons.
Generative AI models are inherently non-deterministic, meaning that even when given the same input, the output they generate can vary because of the probabilistic nature of the models. When using managed services such as Amazon Bedrock in your workloads, there are additional security considerations to help ensure protection of data that’s accessed by Amazon Bedrock.
In this blog post, we discuss the current challenges that you may face regarding data controls when using generative AI services and how to overcome them using native solutions within Amazon Bedrock and layered authorization.
Definitions
Before we get started, let’s review some definitions.
Amazon Bedrock Agents: You can use Amazon Bedrock Agents to autonomously complete multistep tasks across company systems and data sources. Agents can be used to enrich user input with additional data to provide more accurate results or to automate repetitive tasks. Generative AI agents can make decisions based on their input and the environmental data they have access to.
Layered authorization: Layered authorization is the practice of implementing multiple authorization checks between the application components beyond the initial point of ingress. This includes service-to-service authorization, carrying the true end-user identity through application components, and adding end-user authorization for each operation in addition to the service authorization.
Trusted identity propagation: Trusted identity propagation makes user access to AWS resources simpler to define, grant, and log. It is built on the OAuth 2.0 authorization framework, which allows applications to access and share user data securely without the need to share passwords.
Amazon Verified Permissions: Amazon Verified Permissions is a fully managed authorization service that uses the provably correct Cedar policy language, so you can build more secure applications.
Challenge
As you build on AWS, there are several services and features that you can use to help keep your data or your customers' data secure. These might include encryption at rest with Amazon Simple Storage Service (Amazon S3) default encryption or AWS Key Management Service (AWS KMS) keys, or the use of prefixes in Amazon S3 or partition keys in Amazon DynamoDB to separate tenants' data. These mechanisms work well for data at rest and for separating data partitions, but when a generative AI-powered application lets customers access a variety of data (data of different sensitivity levels, multiple tenants' data, and so on) based on user input, the risk of disclosing sensitive data increases (see the data privacy FAQ for more information about data privacy at AWS). This is because access to the data is now delegated to an untrusted identity within the workload (the model) operating on behalf of the calling principal.
Many customers are using Amazon Bedrock Agents in their architecture to augment user input with additional information to improve responses. Agents might also be used to automate repetitive tasks and streamline workflows. For example, chatbots can be useful tools for improving user experiences, such as summarizing patient test results for healthcare providers. However, it’s important to understand the potential security risks and mitigation strategies when implementing chatbot solutions.
A common architecture involves invoking a chatbot agent through an Amazon API Gateway. The API gateway validates the API call using an Amazon Cognito or AWS Lambda authorizer and then passes the request to the chatbot agent to perform its function.
A potential risk arises when users can provide input prompts to the chatbot agent. This input could lead to prompt injection (OWASP LLM:01) or sensitive data disclosure (OWASP LLM:06) vulnerabilities. The root cause is that the chatbot agent often requires broad access permissions through an AWS Identity and Access Management (IAM) service role with access to various data stores (such as S3 buckets or databases), to fulfill its function. Without proper security controls, a threat actor from one tenant could potentially access or manipulate data belonging to another tenant.
Solution
While no single solution can mitigate every risk, having a proper threat model of your consumer application to identify risks (such as unauthorized access to data) is critical. AWS offers several generative AI security strategies to assist you in building appropriate threat models. In this post, we focus on layered authorization throughout the application, with a solution that supports a consumer application.
Note: This can also be accomplished using Trusted identity propagation (TIP) and Amazon S3 Access Grants for a workforce application.
By using a strong authentication process for your consumers, such as an OpenID Connect (OIDC) identity provider (IdP) enhanced with multi-factor authentication (MFA), you can govern access to invoke the agents at the API gateway. We recommend that you also pass custom parameters to the agent, as shown in Figure 1, using the JWT from the header of the request. With this configuration, the agent's Lambda function evaluates an isAuthorized request with Amazon Verified Permissions to confirm that the calling user has access to the requested data before the agent runs its described function. This architecture is shown in Figure 1:
The steps of the architecture are as follows:
- The client connects to the application frontend.
- The client is redirected to the Amazon Cognito user pool UI for authentication.
- The client receives a JWT token from Amazon Cognito.
- The application frontend uses the JWT token presented by the client to authorize a request to the Amazon Bedrock agent. The application frontend adds the JWT token to the InvokeAgent API call (a sketch of this call follows this list).
- The agent reviews the request, calls the knowledge base if required, and calls the Lambda function. The agent includes the JWT token provided by the application frontend in the Lambda invocation context.
- The Lambda function uses the JWT token details to authorize subsequent calls to DynamoDB tables using Verified Permissions (6a), and calls the DynamoDB table only if the call is authorized (6b).
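The following is a minimal sketch (in Python with boto3) of step 4: the application frontend invokes the agent and forwards the caller's JWT as a session attribute so that the action group Lambda function can use it in step 6a. The agent ID, agent alias ID, and the jwt attribute key are placeholders for illustration.

```
import uuid
import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")


def invoke_chatbot_agent(user_jwt: str, prompt: str) -> str:
    """Invoke the Amazon Bedrock agent, forwarding the end user's JWT."""
    response = bedrock_agent_runtime.invoke_agent(
        agentId="AGENT_ID",              # placeholder agent ID
        agentAliasId="AGENT_ALIAS_ID",   # placeholder agent alias ID
        sessionId=str(uuid.uuid4()),
        inputText=prompt,
        sessionState={
            # Session attributes are surfaced to the action group Lambda function,
            # which can use the token for its Verified Permissions check (step 6a).
            "sessionAttributes": {"jwt": user_jwt}
        },
    )
    # The InvokeAgent response is streamed; concatenate the completion chunks.
    completion = ""
    for event in response["completion"]:
        chunk = event.get("chunk")
        if chunk:
            completion += chunk["bytes"].decode("utf-8")
    return completion
```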
Deep dive
When you design an application behind an API gateway that triggers Amazon Bedrock agents, you must create an IAM service role for your agent with a trust policy that grants AssumeRole access to Amazon Bedrock. This role should allow Amazon Bedrock to get the OpenAPI schema for your agent action group Lambda function from the S3 bucket and allow the bedrock:InvokeModel action on the specified model. If you did not select the default KMS key to encrypt your agent session data, you must also grant the IAM service role access to the customer managed KMS key. Example policies and the trust relationship are shown in the following examples.
The following policy grants permission to invoke an Amazon Bedrock model and is attached to the agent's service role. In the resource element, we specifically target an approved foundation model (FM).
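A policy along these lines, sketched here as a Python dictionary for readability, could express that statement; the Region and model ID are placeholders for your approved FM.

```
# A sketch of the agent role's model-invocation policy. Replace the Region and
# model ID with the approved foundation model for your workload.
invoke_model_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowInvokeApprovedModel",
            "Effect": "Allow",
            "Action": "bedrock:InvokeModel",
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        }
    ],
}
```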
Next, we add a policy statement that allows the Amazon Bedrock agent s3:GetObject access to a specific S3 bucket, with a condition that the account number matches one within our organization.
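Sketched again as a Python dictionary, that statement might look like the following; the bucket name and account ID are placeholders.

```
# A sketch of the statement granting the agent read access to the bucket that
# holds the action group's OpenAPI schema. The condition restricts access to
# objects owned by the expected account.
s3_schema_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowGetAgentSchema",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*",
            "Condition": {
                "StringEquals": {"aws:ResourceAccount": "111122223333"}
            },
        }
    ],
}
```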
Finally, we add a trust policy that grants Amazon Bedrock permissions to assume the defined role. We have also added conditional statements to make sure that the service is calling on behalf of our account to help prevent the confused deputy problem.
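A trust policy along these lines, sketched as a Python dictionary, illustrates the idea; the account ID, Region, and agent ARN pattern are placeholders.

```
# A sketch of the trust policy on the agent's service role. The conditions pin
# the calling account and the agent ARN to help prevent the confused deputy
# problem.
agent_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowBedrockAssumeRole",
            "Effect": "Allow",
            "Principal": {"Service": "bedrock.amazonaws.com"},
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringEquals": {"aws:SourceAccount": "111122223333"},
                "ArnLike": {
                    "aws:SourceArn": "arn:aws:bedrock:us-east-1:111122223333:agent/*"
                },
            },
        }
    ],
}
```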
Amazon Bedrock agents use a service role and don’t propagate the consumer’s identity natively. This is where the underlying problem of protecting tenants’ data might exist. If the agent is accessing unclassified data, then there’s no need to add layered authorization because there’s no additional segregation of access needed based on the authorization caller. But if the application has access to sensitive data, you must carry authorization into processing the agent’s function.
You can do this by adding an additional authorization layer to the Lambda function that the agent invokes. The Lambda function first makes an isAuthorized call to Verified Permissions and performs the rest of its function only upon an Allow response. If the response from Verified Permissions is Deny, the function should return a status 403 or a friendly error message to the user.
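A minimal sketch of that gate inside the action group Lambda function follows. The policy store ID, the Clinic entity types, the recordId parameter, and the jwt session attribute are assumptions for illustration, and a production function must verify the token's signature, issuer, audience, and expiry before trusting its claims.

```
# A minimal sketch of the authorization gate in the action group Lambda function.
# Entity type names, the "recordId" parameter, and the "jwt" session attribute
# are assumptions for illustration; adapt them to your Verified Permissions schema.
import json
import os

import boto3
import jwt  # PyJWT; a real function must verify the token against the IdP's JWKS

avp = boto3.client("verifiedpermissions")
POLICY_STORE_ID = os.environ["POLICY_STORE_ID"]


def read_patient_record(record_id: str) -> dict:
    # Placeholder for the agent's real work (for example, the DynamoDB query
    # sketched later in this post).
    return {"recordId": record_id, "summary": "..."}


def lambda_handler(event, context):
    # The JWT was forwarded by the application frontend as a session attribute.
    token = event["sessionAttributes"]["jwt"]
    # Decoding without verification keeps the sketch short; verify in production.
    claims = jwt.decode(token, options={"verify_signature": False})

    params = {p["name"]: p["value"] for p in event.get("parameters", [])}
    record_id = params["recordId"]

    decision = avp.is_authorized(
        policyStoreId=POLICY_STORE_ID,
        principal={"entityType": "Clinic::User", "entityId": claims["sub"]},
        action={"actionType": "Clinic::Action", "actionId": "ViewRecord"},
        resource={"entityType": "Clinic::Record", "entityId": record_id},
    )

    if decision["decision"] == "ALLOW":
        status, body = 200, read_patient_record(record_id)
    else:
        # Deny: return a 403 with a friendly message instead of the data.
        status, body = 403, {"message": "You are not authorized to view this record."}

    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event["actionGroup"],
            "apiPath": event["apiPath"],
            "httpMethod": event["httpMethod"],
            "httpStatusCode": status,
            "responseBody": {"application/json": {"body": json.dumps(body)}},
        },
    }
```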
Verified Permissions must have pre-built policies to dictate how authorization should occur when data is being accessed. For example, you might have a policy like the following to grant access to patient records if the calling principal is a doctor.
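For example, a Cedar policy along the following lines, created here as a static policy through boto3, could permit callers in a hypothetical Doctor role to view patient records. The Clinic entity types and the policy store ID are assumptions that would need to match your own schema, and you could further scope the policy with a when clause if doctors should only see their own patients.

```
# A sketch of a Cedar policy permitting users in the Doctor role to view records,
# created as a static policy in the Verified Permissions policy store.
import boto3

avp = boto3.client("verifiedpermissions")

doctor_policy_statement = """
permit (
    principal in Clinic::Role::"Doctor",
    action == Clinic::Action::"ViewRecord",
    resource
);
"""

avp.create_policy(
    policyStoreId="POLICY_STORE_ID",  # placeholder policy store ID
    definition={
        "static": {
            "description": "Doctors can view patient records",
            "statement": doctor_policy_statement,
        }
    },
)
```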
In this example, the authorization logic that handles this decision lives in the agent Lambda function. The Lambda function first builds the entities structure by decoding the JWT passed as a custom parameter to the Amazon Bedrock agent, in order to assess the calling principal's access. The requested data should also be included in the isAuthorized call. After this data is passed to Verified Permissions, it assesses the access decision based on the context provided and the policies in the policy store. Because Verified Permissions is a policy decision point (PDP), the allow or deny decision must be enforced at the application level: based on this decision, access to the data is allowed or denied. The resources being accessed should also be categorized to help the application evaluate access control. For example, if the data is stored in DynamoDB, patients might be separated by partition keys that are defined in the Verified Permissions schema and referenced hierarchically.
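The following sketch shows one way the Lambda function could build that entities structure from the decoded JWT claims and, only on an Allow decision, query the authorized patient's partition in DynamoDB. The claim names, entity types, table name, and key schema are assumptions for illustration.

```
# A sketch of mapping JWT claims to Verified Permissions entities and enforcing
# the decision before touching DynamoDB. Claim names, entity types, the table
# name, and the partition key are assumptions for illustration.
import boto3
from boto3.dynamodb.conditions import Key

avp = boto3.client("verifiedpermissions")
dynamodb = boto3.resource("dynamodb")
records_table = dynamodb.Table("PatientRecords")  # hypothetical table name


def build_entities(claims: dict) -> dict:
    """Map JWT claims to Verified Permissions entities, including the caller's role."""
    return {
        "entityList": [
            {
                "identifier": {"entityType": "Clinic::User", "entityId": claims["sub"]},
                # Group membership drives the "is a doctor" policy shown earlier.
                "parents": [
                    {"entityType": "Clinic::Role", "entityId": claims["custom:role"]}
                ],
            }
        ]
    }


def read_record_if_authorized(claims: dict, patient_id: str, policy_store_id: str):
    decision = avp.is_authorized(
        policyStoreId=policy_store_id,
        principal={"entityType": "Clinic::User", "entityId": claims["sub"]},
        action={"actionType": "Clinic::Action", "actionId": "ViewRecord"},
        resource={"entityType": "Clinic::Record", "entityId": patient_id},
        entities=build_entities(claims),
    )
    if decision["decision"] != "ALLOW":
        return None  # enforcement happens here, at the application level

    # The partition key mirrors the resource ID used in the Verified Permissions
    # schema, so the query only reads the authorized patient's partition.
    response = records_table.query(
        KeyConditionExpression=Key("patientId").eq(patient_id)
    )
    return response["Items"]
```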
Conclusion
In this post, you learned how to improve data protection by using AWS native services to enforce layered authorization throughout a consumer application that uses Amazon Bedrock Agents, and how to strengthen access control enforcement by carrying the end user's identity through the application. This approach can help you build applications with Amazon Bedrock Agents while maintaining strong data isolation to mitigate unintended disclosure of sensitive data.
We recommend the Secure Generative AI Solutions using OWASP Framework workshop to learn more about using Verified Permissions and Amazon Bedrock Agents to enforce layered authorization throughout an application.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.