Guardrails for Amazon Bedrock helps implement safeguards customized to your use cases and responsible AI policies (preview)

As part of your responsible artificial intelligence (AI) strategy, you can now use Guardrails for Amazon Bedrock (preview) to promote safe interactions between users and your generative AI applications by implementing safeguards customized to your use cases and responsible AI policies.

AWS is committed to developing generative AI in a responsible, people-centric way by focusing on education and science and by helping developers integrate responsible AI across the AI lifecycle. With Guardrails for Amazon Bedrock, you can consistently implement safeguards to deliver relevant and safe user experiences aligned with your company policies and principles. Guardrails help you define denied topics and content filters to remove undesirable and harmful content from interactions between users and your applications. This provides an additional level of control on top of any protections built into foundation models (FMs).

You can apply guardrails to all large language models (LLMs) in Amazon Bedrock, including fine-tuned models, and to Agents for Amazon Bedrock. This drives consistency in how you apply your preferences across applications, so you can innovate safely while closely managing user experiences based on your requirements. By standardizing safety and privacy controls, Guardrails for Amazon Bedrock helps you build generative AI applications that align with your responsible AI goals.

Let me give you a quick tour of the key controls available in Guardrails for Amazon Bedrock.

Key controls
Using Guardrails for Amazon Bedrock, you can define the following set of policies to create safeguards in your applications.

Denied topics – You can define a set of topics that are undesirable in the context of your application using a short natural language description. For example, as a developer at a bank, you might want to set up an assistant for your online banking application to avoid providing investment advice.

I specify a denied topic with the name “Investment advice” and provide a natural language description, such as “Investment advice refers to inquiries, guidance, or recommendations regarding the management or allocation of funds or assets with the goal of generating returns or achieving specific financial objectives.”
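
If you prefer to automate this setup, here is a minimal sketch using the AWS SDK for Python (Boto3). It assumes the bedrock client's create_guardrail operation; during the limited preview the exact API surface may differ, and the guardrail name and blocked messages below are placeholders.

```python
import boto3

# Control-plane client for managing guardrails. This sketch assumes
# the create_guardrail operation; the preview API may differ.
bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_guardrail(
    name="banking-assistant-guardrail",  # placeholder name
    description="Safeguards for the online banking assistant",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "Investment advice",
                "definition": (
                    "Investment advice refers to inquiries, guidance, or "
                    "recommendations regarding the management or allocation "
                    "of funds or assets with the goal of generating returns "
                    "or achieving specific financial objectives."
                ),
                "type": "DENY",
            }
        ]
    },
    # Messages returned to the user when the guardrail intervenes.
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that information.",
)
print(response["guardrailId"])
```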

Content filters – You can configure thresholds to filter harmful content across hate, insults, sexual, and violence categories. While many FMs already provide built-in protections to prevent the generation of undesirable and harmful responses, guardrails give you additional controls to filter such interactions to the degree your use cases and responsible AI policies require. A higher filter strength corresponds to stricter filtering.
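
As a sketch under the same assumptions as above, the content filter policy can be passed to create_guardrail alongside the topic policy; the per-category strength values shown here are illustrative.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# One filter entry per category, with separate strengths for user inputs
# and model outputs (NONE, LOW, MEDIUM, or HIGH); higher means stricter.
bedrock.create_guardrail(
    name="banking-assistant-guardrail-filters",  # placeholder name
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE",     "inputStrength": "HIGH",   "outputStrength": "HIGH"},
            {"type": "INSULTS",  "inputStrength": "MEDIUM", "outputStrength": "MEDIUM"},
            {"type": "SEXUAL",   "inputStrength": "HIGH",   "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH",   "outputStrength": "HIGH"},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't provide that information.",
)
```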

PII redaction (in the works) – You will be able to select a set of personally identifiable information (PII) types, such as name, email address, and phone number, that should be redacted in FM-generated responses, or block a user input if it contains PII.
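
Since this capability is still in the works, the following is purely a speculative sketch of how such a policy might be expressed; the field names follow a sensitiveInformationPolicyConfig-style shape and should be treated as assumptions, not a documented preview API.

```python
# Speculative sketch only: PII handling is not yet available in the preview.
# Entity types and actions below are assumptions about the eventual shape.
pii_policy = {
    "piiEntitiesConfig": [
        {"type": "NAME", "action": "ANONYMIZE"},   # redact in FM responses
        {"type": "EMAIL", "action": "ANONYMIZE"},
        {"type": "PHONE", "action": "BLOCK"},      # reject inputs containing phone numbers
    ]
}
```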

Guardrails for Amazon Bedrock integrates with Amazon CloudWatch, so you can monitor and analyze user inputs and FM responses that violate policies defined in the guardrails.
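
As an example of what that monitoring could look like, the sketch below queries CloudWatch for guardrail interventions over the last day; the namespace and metric name are assumptions, so check the documentation for the metrics your account actually publishes.

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Hypothetical namespace and metric name -- the exact guardrail metrics
# published to CloudWatch are assumptions in this sketch.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Bedrock/Guardrails",
    MetricName="InvocationsIntervened",
    StartTime=datetime.utcnow() - timedelta(days=1),
    EndTime=datetime.utcnow(),
    Period=3600,  # hourly buckets
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])
```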

Join the preview
Guardrails for Amazon Bedrock is available today in limited preview. Reach out through your usual AWS Support contacts if you’d like access to Guardrails for Amazon Bedrock.

During the preview, guardrails can be applied to all large language models (LLMs) available in Amazon Bedrock, including Amazon Titan Text, Anthropic Claude, Meta Llama 2, AI21 Jurassic, and Cohere Command. You can also use guardrails with custom models and with Agents for Amazon Bedrock.
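
At inference time, a guardrail is referenced by its ID and version. The sketch below assumes guardrailIdentifier and guardrailVersion parameters on the InvokeModel API; the guardrail ID is a placeholder, and the request body follows the Anthropic Claude text-completion format.

```python
import json

import boto3

# Runtime client for model inference.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = runtime.invoke_model(
    modelId="anthropic.claude-v2",
    guardrailIdentifier="gr-1234567890ab",  # hypothetical guardrail ID
    guardrailVersion="1",
    body=json.dumps({
        "prompt": "\n\nHuman: Which stocks should I buy this year?\n\nAssistant:",
        "max_tokens_to_sample": 300,
    }),
)

# If the guardrail intervenes, the response contains the configured
# blocked message instead of investment advice.
print(json.loads(response["body"].read()))
```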

To learn more, visit the Guardrails for Amazon Bedrock web page.

— Antje