Tag Archives: Business Productivity

Quickly go from Idea to PR with CodeCatalyst using Amazon Q

Post Syndicated from Brendan Jenkins original https://aws.amazon.com/blogs/devops/quickly-go-from-idea-to-pr-with-codecatalyst-using-amazon-q/

Amazon Q feature development enables teams using Amazon CodeCatalyst to scale with AI by assisting developers with everyday software development tasks. Developers can now go from an idea in an issue to fully tested, merge-ready, running application code in a Pull Request (PR) with natural language inputs in a few clicks. Developers can also provide feedback to Amazon Q directly on the published pull request and ask it to generate a new revision. If the code change falls short of expectations, they can create a development environment directly from the pull request, make the necessary adjustments manually, publish a new revision, and proceed with the merge upon approval.

In this blog, we will walk through a use case that leverages the Modern three-tier web application blueprint and adds a feature to the web application, using Amazon Q feature development to quickly go from idea to PR. We also suggest following the steps outlined below with your own application so you can gain a better understanding of how you can use this feature in your daily work.

Solution Overview

Amazon Q feature development is integrated into CodeCatalyst. Figure 1 details how users can assign an issue to Amazon Q. When assigning the issue, users answer a few preliminary questions and Amazon Q outputs its proposed approach, which users can either approve or refine with additional feedback. Once approved, Amazon Q generates a PR that users can review, revise, and merge into the repository.

Figure 1: Amazon Q feature development workflow

Prerequisites

Although we will walk through a sample use case in this blog using a Blueprint from CodeCatalyst, we encourage you to try this afterward with your own application so you can gain hands-on experience with this feature. If you are using CodeCatalyst for the first time, you’ll need:

Walkthrough

Step 1: Creating the blueprint

In this blog, we’ll leverage the Modern three-tier web application blueprint to walk through a sample use case. This blueprint creates a Mythical Mysfits three-tier web application with modular presentation, application, and data layers.

Figure 2: Creating a new Modern three-tier application blueprint

First, within your space, click “Create Project” and select the Modern three-tier web application CodeCatalyst Blueprint as shown above in Figure 2.

Enter a Project name and select Lambda for the Compute Platform and Amplify Hosting for Frontend Hosting Options. Additionally, ensure your AWS account is selected and choose to create a new IAM role.

Once the project has been created, the application will deploy via a CodeCatalyst workflow, assuming the AWS account and IAM role were set up correctly. The deployed application will be similar to the Mythical Mysfits website.

Step 2: Create a new issue

The Product Manager (PM) has asked us to add a feature to the newly created application, which entails creating the ability to add new mythical creatures. The PM has provided a detailed description to get started.

In the Issues section of our new project, click Create Issue.

For the Issue title, enter “Ability to add a new mythical creature” and for the Description enter “Users should be able to add a new mythical creature to the website. There should be a new Add button on the UI, when prompted should allow the user to fill in Name, Age, Description, Good/Evil, Lawful/Chaotic, Species, Profile Image URI and thumbnail Image URI for the new creature. When the user clicks save, the application should leverage the existing API in app.py to save the new creature to the DynamoDB table.”
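
For reference, the issue description points Amazon Q at an existing save method in the blueprint’s app.py. As a hypothetical sketch only (the route, field names, and table name below are illustrative assumptions, not the blueprint’s actual code), such a method might look like this:

    import os

    import boto3
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    # Table name is an illustrative assumption, not the blueprint's actual resource.
    table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "MysfitsTable"))

    @app.route("/mysfits", methods=["POST"])
    def create_mysfit():
        """Persist a new creature; the kind of existing method Amazon Q should reuse."""
        item = request.get_json()
        table.put_item(Item=item)
        return jsonify(item), 201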

Then, click Assign to Amazon Q as shown below in Figure 3.

Figure 3: Assigning a new issue to Amazon Q

Lastly, enable the Require Amazon Q to stop after each step and await review of its work option. In this use case, we do not anticipate any changes to our workflow files to support this new feature, so we will leave the Allow Amazon Q to modify workflow files option disabled, as shown below in Figure 4. Click Create Issue and Amazon Q will get started.

Figure 4: Configurations for assigning Amazon Q

Step 3: Review Amazon Q’s Approach

After a few minutes, Amazon Q will generate its understanding of the project in the Background section, as well as an Approach to make the changes for the issue you created, as shown in Figure 5 below.

(Note: The Background and Approach generated for you may differ from what is shown in Figure 5 below.)

We have the option to proceed as is, or we can reply to the Approach via a comment to provide feedback so Amazon Q can refine it to better align with the use case.

Figure 5: Reviewing Amazon Q’s Background and Approach

In the approach, we notice Amazon Q is suggesting it will create a new method to create and save the new item to the table, but we already have an existing method. We decide to leave feedback, as shown in Figure 6, letting Amazon Q know the existing method should be leveraged.

Figure 6: Provide feedback to Approach

Amazon Q will now refine the approach based on the feedback provided. The refined approach generated by Amazon Q meets our requirements, including unit tests, so we decide to click Proceed as shown in Figure 7 below.

Figure 7: Confirm approach and click Proceed

Now, Amazon Q will generate the code for implementation and create a PR with code changes that can be reviewed.

Step 4: Review the PR

Within our project, under Code in the left panel, click Pull requests. You should see the new PR created by Amazon Q.

The PR description contains the approach that Amazon Q took to generate the code. This is helpful to reviewers who want to gain a high-level understanding of the changes included in the PR before diving into the details. You will also be able to review all changes made to the code as shown below in Figure 8.

Figure 8: Changes within PR

Step 5 (Optional): Provide feedback on PR

After reviewing the changes in the PR, I leave comments on a few items that can be improved. Notably, all fields on the new input form for creating a new creature should be required. After leaving my comments, I click the Create Revision button. Amazon Q will take my comments, update the code accordingly, and create a new revision of the PR as shown in Figure 9 below.

Figure 9: PR Revision created

After reviewing the latest revision created by Amazon Q, I am happy with the changes and proceed with testing them directly from CodeCatalyst by utilizing Dev Environments. Once I have finished testing the new feature and everything works as expected, we will let our peers review the PR to provide feedback and approve the pull request.

Cleanup

As part of following the steps in this blog post, if you upgraded your Space to the Standard or Enterprise tier, please ensure you downgrade to the Free tier to avoid unwanted additional charges. Additionally, delete the project and any associated resources deployed in the walkthrough.

Unassign Amazon Q from any issues no longer being worked on. If Amazon Q has finished its work on an issue or could not find a solution, make sure to unassign Amazon Q to avoid reaching the maximum quota for generative AI features. For more information, see Managing generative AI features and Pricing.

Best Practices for using Amazon Q Feature Development

You can follow a few best practices to ensure you experience the best results when using Amazon Q feature development:

  1. When describing your feature or issue, provide as much context as possible to get the best result from Amazon Q. Being too vague or unclear may not produce ideal results for your use case.
  2. Changes and new features should be as focused as possible. You will likely not experience the best results when making large and complex changes in a single issue. Instead, break the changes or feature up into smaller, more manageable issues where you will see better results.
  3. Leverage the feedback feature to give input on the approaches Amazon Q proposes, as highlighted in this blog, so you can steer it toward the outcome you want.

Conclusion

In this post, you’ve seen how you can quickly go from idea to PR using the Amazon Q feature development capability in CodeCatalyst. You can leverage this new feature to start building new features in your applications. Check out Amazon CodeCatalyst feature development today.

About the authors

Brent Everman

Brent is a Senior Technical Account Manager with AWS, based out of Pittsburgh. He has over 17 years of experience working with enterprise and startup customers. He is passionate about improving the software development experience and specializes in AWS’ Next Generation Developer Experience services.

Brendan Jenkins

Brendan Jenkins is a Solutions Architect at Amazon Web Services (AWS) working with enterprise AWS customers, providing them with technical guidance and helping them achieve their business goals. He specializes in DevOps and machine learning technology.

Fahim Sajjad

Fahim is a Solutions Architect at Amazon Web Services. He helps customers transform their business by designing their cloud solutions and offering technical guidance. Fahim graduated from the University of Maryland, College Park with a degree in Computer Science. He has a deep interest in AI and machine learning. Fahim enjoys reading about new advancements in technology and hiking.

Abdullah Khan

Abdullah is a Solutions Architect at AWS. He attended the University of Maryland, Baltimore County where he earned a degree in Information Systems. Abdullah currently helps customers design and implement solutions on the AWS Cloud. He has a strong interest in artificial intelligence and machine learning. In his spare time, Abdullah enjoys hiking and listening to podcasts.

An introduction to Amazon WorkMail Audit Logging

Post Syndicated from Zip Zieper original https://aws.amazon.com/blogs/messaging-and-targeting/an-introduction-to-amazon-workmail-audit-logging/

Amazon WorkMail’s new audit logging capability equips email system administrators with powerful visibility into mailbox activities and system events across their organization. As announced in our recent “What’s New” post, this feature enables the comprehensive capture and delivery of critical email data, empowering administrators to monitor, analyze, and maintain compliance.

With audit logging, WorkMail records a wide range of events, including metadata about messages sent and received, failed login attempts, and configuration changes. Administrators have the option to deliver these audit logs to their preferred AWS services, such as Amazon Simple Storage Service (Amazon S3) for long-term storage, Amazon Kinesis Data Firehose for real-time data streaming, or Amazon CloudWatch Logs for centralized log management. Additionally, standard CloudWatch metrics on audit logs provide deep insights into the usage and health of WorkMail mailboxes within the organization.

By leveraging Amazon WorkMail’s audit logging capabilities, enterprises have the ability to strengthen their security posture, fulfill regulatory requirements, and gain critical visibility into the email activities that underpin their daily operations. This post will explore the technical details and practical use cases of this powerful new feature.

In this blog, you will learn how to configure your WorkMail organization to send email audit logs to Amazon CloudWatch Logs, Amazon S3, and Amazon Data Firehose. We’ll also provide examples that show how to monitor access to your Amazon WorkMail organization’s mailboxes by querying the logs via CloudWatch Logs Insights.

Email security

Imagine you are the email administrator for a biotech company, and you’ve received a report about spam complaints coming from your company’s email system. When you investigate, you learn these complaints point to unauthorized emails originating from several of your company’s mailboxes. One or more of your company’s email accounts may have been compromised by a hacker. You’ll need to determine the specific mailboxes involved, understand who has access to those mailboxes, and how the mailboxes have been accessed. This will be useful in identifying mailboxes with multiple failed logins or unfamiliar IP access, which can indicate unauthorized attempts or hacking. To identify the cause of the security breach, you require access to detailed audit logs and familiar tools to analyze extensive log data and locate the root of your issues.

Amazon WorkMail Audit Logging

Amazon WorkMail is a secure, managed business email service that hosts millions of mailboxes globally. WorkMail features robust audit logging capabilities, equipping IT administrators and security experts with in-depth analysis of mailbox usage patterns. Audit logging provides detailed insights into user activities within WorkMail. Organizations can detect potential security vulnerabilities by utilizing audit logs. These logs document user logins, access permissions, and other critical activities. WorkMail audit logging facilitates compliance with various regulatory requirements, providing a clear audit trail of data privacy and security. WorkMail’s audit logs are crucial for maintaining the integrity, confidentiality, and reliability of your organization’s email system.

Understanding WorkMail Audit Logging

Amazon WorkMail’s audit logging feature provides you with the data you need to have a thorough understanding of your email mailbox activities. By sending detailed logs to Amazon CloudWatch Logs, Amazon S3, and Amazon Data Firehose, administrators can identify mailbox access issues, track access by IP addresses, and review mailbox data movements or deletions using familiar tools. It is also possible to configure multiple destinations for each log to meet the needs of a variety of use cases, including compliance archiving.

WorkMail offers four audit logs:

  • ACCESS CONTROL LOGS – These logs record evaluations of access control rules, noting whether access to the endpoint was granted or denied in accordance with the configured rules.
  • AUTHENTICATION LOGS – These logs capture details of login activities, chronicling both successful and failed authentication attempts.
  • AVAILABILITY PROVIDER LOGS – These logs document the use of the Availability Providers feature, tracking its operational status and interactions.
  • MAILBOX ACCESS LOGS – These logs record each attempt to access mailboxes within the WorkMail organization, providing a detailed account of credential and protocol access patterns.

Once audit logging is enabled, alerts can be configured to warn of authentication or access anomalies that surpass predetermined thresholds. JSON formatting allows for advanced processing and analysis of audit logs by third-party tools. Audit logging records all interactions, with the exception of web mail client authentication metrics.

WorkMail audit logging in action

Below are two examples that show how WorkMail’s audit logging can be used to investigate unauthorized login attempts and diagnose a misconfigured email client. In both examples, we’ll query WorkMail’s mailbox access control logs in CloudWatch Logs Insights.

In our first example, we’re looking for unsuccessful login attempts in a target timeframe. In CloudWatch Logs Insights we run this query:

fields user, source_ip, protocol, auth_successful, auth_failed_reason | filter auth_successful = 0

CloudWatch Logs Insights returns all records in the timeframe where auth_successful = 0 (false) and auth_failed_reason = Invalid username or password. We also see the source_ip, which we may decide to block in a WorkMail access control rule or any other network security system.

Mailbox Access Control Log – an unsuccessful login attempt
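
The same investigation can be scripted. Below is a minimal sketch that runs the query above through boto3’s CloudWatch Logs client; the log group name is an assumption, so substitute the log group your WorkMail organization actually delivers these logs to.

    import time

    import boto3

    logs = boto3.client("logs")

    # Log group name is a placeholder assumption; use your organization's group.
    query_id = logs.start_query(
        logGroupName="/aws/workmail/auditlogs/AUTHENTICATION",
        startTime=int(time.time()) - 3600,  # last hour
        endTime=int(time.time()),
        queryString=(
            "fields user, source_ip, protocol, auth_successful, auth_failed_reason"
            " | filter auth_successful = 0"
        ),
    )["queryId"]

    # Poll until the query finishes, then print each matching record.
    while True:
        result = logs.get_query_results(queryId=query_id)
        if result["status"] in ("Complete", "Failed", "Cancelled"):
            break
        time.sleep(1)
    for row in result.get("results", []):
        print({field["field"]: field["value"] for field in row})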

In this next example, consider a WorkMail organization that has elected to block the IMAP protocol using a WorkMail access control rule (below):

WorkMail Access Control Rule – block IMAP protocol

Because some email clients use IMAP by default, new users in this example organization are occasionally denied access to email due to an incorrectly configured email client. Using WorkMail’s mailbox access control logs in CloudWatch Logs Insights, we run this query:

fields user_id, source_ip, protocol, rule_id, access_granted | filter access_granted = 0

And we see the user’s attempt to access their email inbox via IMAP has been denied by the access control rule_id (below):

WorkMail Access Control logs – IMAP blocked by access rule
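
For reference, a rule like the IMAP block in this example can also be managed through the WorkMail API rather than the console. In the sketch below, the organization ID, rule name, and description are placeholders:

    import boto3

    workmail = boto3.client("workmail")

    # Organization ID, name, and description are placeholders for illustration.
    workmail.put_access_control_rule(
        OrganizationId="m-0123456789abcdef0123456789abcdef",
        Name="BlockIMAP",
        Effect="DENY",
        Description="Deny IMAP access for all users",
        Actions=["IMAP"],  # deny only the IMAP protocol
    )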

Conclusion

Amazon WorkMail’s audit logging feature offers a comprehensive view of your organization’s email activities. Four different logs provide visibility into access controls, authentication attempts, interactions with external systems, and mailbox activities. It provides flexible log delivery through native integration with AWS services and tools. Enabling WorkMail’s audit logging capabilities helps administrators meet compliance requirements and enhances the overall security and reliability of their email system.

To learn more about audit logging on Amazon WorkMail, you may comment on this post (below), view the WorkMail documentation, or reach out to your AWS account team.

To learn more about Amazon WorkMail, or to create a no-cost 30-day test organization, see Amazon WorkMail.

About the Authors

Luis Miguel Flores dos Santos

Miguel is a Solutions Architect at AWS, boasting over a decade of expertise in solution architecture, encompassing both on-premises and cloud solutions. His focus lies on resilience, performance, and automation. Currently, he is delving into serverless computing. In his leisure time, he enjoys reading, riding motorcycles, and spending quality time with family and friends.

Andy Wong

Andy Wong is a Sr. Product Manager with the Amazon WorkMail team. He has 10 years of diverse experience in supporting enterprise customers and scaling start-up companies across different industries. Andy’s favorite activities outside of technology are soccer, tennis and free-diving.

Zip

Zip is a Sr. Specialist Solutions Architect at AWS, working with Amazon Pinpoint, Simple Email Service, and WorkMail. Outside of work he enjoys time with his family, cooking, mountain biking, boating, learning, and beach plogging.

How To Build an Email Service on SES

Post Syndicated from tweirjon original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-build-an-email-service-on-ses/

Foundations

Amazon Simple Email Service (SES) handles hundreds of billions of email messages every month. While many are outbound, one of the fastest-growing parts of the business is for inbound traffic. Customers send and receive email via SES using a combination of public SMTP interfaces and the SES SDK. Traditionally, most customers used SES alongside their existing corporate mail systems, but did you know it’s possible to build a complete email service with SES at its core? In fact, it’s already been done – it’s known as Amazon WorkMail, and it provides mailbox and calendar services to tens of thousands of customers (and millions of mailboxes) around the world.

Ingredients for Success

Email transport depends on a few core components. First of all, you have to be a reputable sender, or the receiving email systems are going to reject anything you try to send. You also have to be insulated against spurious reports of abuse, so that one bad apple can’t take down the entire service for everyone. The solution for both of those issues is the same: have an enormous number of public Mail Transfer Agents (MTAs), and manage their IP reputations actively. If someone reports spam coming from one of those IPs, and it gets added to a block list somewhere on the internet, you have to have a rapid response mechanism to engage with the block list operator and take their prescribed steps to clean up the entry.

The Highest Standards of Security

Similarly, you have to consult those same block lists when mail is sent to your own systems from anywhere on the internet. Inbound email is subjected to a variety of authentication steps before it’s released for delivery to a destination. Quality providers will leverage checks called SPF (Sender Policy Framework) and DKIM (DomainKeys Identified Mail). SPF is designed to prevent malicious senders from masquerading as other domains, and DKIM enables a receiving system to validate the authenticity of the sender and to confirm the message hasn’t been manipulated while in transit. If either of these checks fails, a receiving system may take action ranging from dropping the message entirely to flagging it as suspicious but still delivering it to the user’s inbox. A third security control, DMARC (Domain-based Message Authentication, Reporting, and Conformance), takes SPF and DKIM outputs and generates a series of instructions for receiving mailbox providers about what to do with questionable mail. Any serious provider will support these mechanisms and provide visibility into their actual performance on your email.
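
To make these checks concrete, the sketch below looks up a domain’s published SPF and DMARC policies using the third-party dnspython package (pip install dnspython). The domain is a placeholder, and a real DKIM check would additionally need the sender’s selector to locate the signing key’s DNS record.

    import dns.resolver  # third-party: pip install dnspython

    domain = "example.com"  # placeholder domain

    # SPF is published as an ordinary TXT record at the domain apex.
    for rr in dns.resolver.resolve(domain, "TXT"):
        txt = b"".join(rr.strings).decode()
        if txt.startswith("v=spf1"):
            print("SPF:", txt)  # e.g. "v=spf1 include:amazonses.com ~all"

    # The DMARC policy is published at _dmarc.<domain>.
    for rr in dns.resolver.resolve(f"_dmarc.{domain}", "TXT"):
        txt = b"".join(rr.strings).decode()
        if txt.startswith("v=DMARC1"):
            print("DMARC:", txt)  # e.g. "v=DMARC1; p=quarantine; rua=mailto:..."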

Amazon WorkMail’s Interface with SES

Once you’ve got clean email and reputable senders or recipients, you have to be able to figure out where to deliver the message itself. SES Inbound has a specific internal action when used with WorkMail, where the message is routed to WorkMail’s own infrastructure for matching against a known user’s inbox and performing the indexing and storage operations necessary to make it show up in your desktop, web, or mobile mail client. A number of operations may take place while that message is in transit, however, and the SES framework supports them with its flexible routing options. For example, a very popular choice is for customers to trigger a transport rule powered by AWS Lambda for inbound and/or outbound messages. Some of these are simple (they append a standard banner to the message if it is inbound from an external source, for example), but there is really no limit to what programmatic steps can be taken. You could submit message content to a large language model (LLM) for training or inspection. You could use Amazon Bedrock to examine its use of language and train a generative AI foundation model on how to write emails itself. WorkMail and SES support and encourage these kinds of big ideas for working with your message content.
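
As an illustration of such a transport rule, here is a minimal sketch of a Lambda handler invoked synchronously by an SES receipt rule. It inspects the inbound message’s metadata and SES’s spam and virus verdicts, then decides whether the rest of the rule set should run; the drop-on-FAIL policy is an illustrative assumption, not WorkMail’s actual logic.

    def lambda_handler(event, context):
        # SES invokes the function with the message's metadata and receipt verdicts.
        ses_record = event["Records"][0]["ses"]
        mail, receipt = ses_record["mail"], ses_record["receipt"]

        print(f"From: {mail['source']}, subject: {mail['commonHeaders'].get('subject')}")

        # Stop processing messages that failed the spam or virus verdict
        # (an illustrative policy; a real rule might quarantine instead).
        if receipt["spamVerdict"]["status"] == "FAIL" or \
           receipt["virusVerdict"]["status"] == "FAIL":
            return {"disposition": "STOP_RULE_SET"}

        # Otherwise let the remaining receipt rules run (e.g., deliver to S3).
        return {"disposition": "CONTINUE"}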

Managing Spikes and Growth

Another critical advantage SES provides is the ability to absorb huge spikes in inbound traffic, and to sustain very large permitted volumes of outbound traffic as well. Email’s underlying standards and protocols offer administrators some degree of control over delays in transit, by implementing retry intervals to buffer messages if they can’t be delivered immediately. The classic on-premises enterprise use case, however, still runs the risk of overwhelming the capacity of the (single) mail server, either due to a malicious action by a sender or a huge increase in usage over a very short period of time. SES absorbs those spikes automatically and has orders of magnitude more capacity than any typical on-premises deployment, meaning that your mail enjoys multiple tiers of buffering only when required, and with no introduced latency if buffering is unnecessary.

Putting it All Together

So how does it all work together? The inbound use case is our main focus. When a message arrives via SMTP, SES first interrogates a back-end directory to confirm that the message is destined for an SES customer. If so, it looks up how the customer’s domain is configured, or whether it is a WorkMail customer domain. From there the message passes through the SES message scanner, where its content is evaluated for spam or malware, and a scoring indicator is added to the message headers. That score may result in the message being dropped altogether, or it may result in the message ultimately being delivered to a Junk Mail folder in a WorkMail mailbox. Once scored, the message is either stored in the customer’s S3 storage, or delivered to WorkMail for further processing, such as being put in a specific folder, or redirected to another recipient. Once it’s stored somewhere, the customer can interact with it either using SES APIs, or via standard mail clients interacting with a WorkMail mailbox. In practice, a mailbox is a structured object format within S3, but without raw S3 access, because the storage is managed as a system resource within WorkMail instead of being owned by an end customer.
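
Customers building directly on SES inbound (rather than through WorkMail) wire this flow together with receipt rules. The boto3 sketch below stores the raw message in S3 and then notifies a Lambda function like the one above; the rule set name, recipient, bucket, and function ARN are placeholders, the rule set must already exist and be active, and the bucket policy must allow SES to write to it.

    import boto3

    ses = boto3.client("ses")

    ses.create_receipt_rule(
        RuleSetName="default-rule-set",  # placeholder; must already exist
        Rule={
            "Name": "store-and-process",
            "Enabled": True,
            "ScanEnabled": True,  # run the spam/virus scan described above
            "Recipients": ["inbox@example.com"],
            "Actions": [
                # Store the raw message first (bucket policy must allow SES).
                {"S3Action": {"BucketName": "my-inbound-mail-bucket"}},
                # Then hand metadata to a processing function asynchronously.
                {"LambdaAction": {
                    "FunctionArn": "arn:aws:lambda:us-east-1:111122223333:function:mail-hook",
                    "InvocationType": "Event",
                }},
            ],
        },
    )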

The Customer Experience

When a WorkMail customer wants to send a message, they compose it in a mail client and then click ‘Send’ to send it via SMTP. In the outbound case WorkMail relays the message to SES internet-facing mail relays, which in turn look up the recipient domain information for details on how to route it. SES mail relays also perform the necessary security and authentication checks to ensure that the message is sent by a valid user (either SES native or WorkMail) and that the content is cryptographically signed so a receiving system can verify it hasn’t been manipulated in transit, using the DKIM mechanism described previously. When those steps are complete, the message is handed off to the next mail relay on the internet, and SES has no further role in its future unless a receiving system flags it as abusive. In that case the feedback is delivered to SES automatically and a series of containment actions are considered based on the nature and history of abuse reports. Thus the feedback loop to IP reputation is maintained even in the case of a rogue actor sending bad mail.

Robust Tooling Makes Email Look Easy

The bottom line is that SES enables these flows, and a customer wanting to build a comprehensive mail system could do so themselves if they didn’t want to use WorkMail or another existing email service provider. We’ve seen a tremendous range of creative solution-building from customers when they combine SES inbound and outbound mail, a subset of WorkMail mailboxes and their own rules and organization policies, the use of AWS Lambda functions, and inline email security gateways. The flexibility to build whatever you need, without being tied to a single product vendor, is what makes SES so popular with its customers, and ensures that WorkMail, as a turnkey mail service, works so reliably for those customers who just need their mail and calendar to work.

New AWS AppFabric Improves Application Observability for SaaS Applications

Post Syndicated from Donnie Prakoso original https://aws.amazon.com/blogs/aws/new-aws-appfabric-improves-application-observability-for-saas-applications/

In today’s business landscape, companies strive to equip their employees with the most suitable and efficient tools to perform their jobs effectively. To achieve this goal, many companies turn to Software-as-a-Service (SaaS) applications. This approach allows companies to optimize their workflows, enhance employee productivity, and focus their resources on core business activities rather than software development and maintenance.

As the use of SaaS applications expands, there’s an increasing need for solutions that can proactively identify and address potential security threats to maintain uninterrupted business operations. Security teams spend time monitoring application usage data for threats or suspicious behavior, and they’re responsible for maintaining security oversight to meet regulatory and compliance requirements.

Unfortunately, integrating SaaS applications with existing security tools requires many teams to build, manage, and maintain point-to-point (P2P) integrations. These P2P integrations are needed so security teams can monitor event logs to understand user or system activity from each application.

Introducing AWS AppFabric
Today, we’re launching AWS AppFabric, a fully managed service that aggregates and normalizes security data across SaaS applications to improve observability and help reduce operational effort and cost with no integration work necessary.

With AppFabric, you can easily integrate leading SaaS applications without building and managing custom code or point-to-point integrations. For more information on what’s supported, refer to Supported Applications for AppFabric.

The generative AI features of AppFabric, powered by Amazon Bedrock, will be available in a future release. To learn more, visit the AWS AppFabric website.

When the SaaS applications are authorized and connected, AppFabric ingests the data and normalizes disparate security data such as user activity logs; this is accomplished using the Open Cybersecurity Schema Framework (OCSF), an industry standard schema and open-source project co-founded by AWS. This delivers an extensible framework for developing schemas and a vendor-agnostic core security schema.

The data is then enriched with a user identifier, such as a corporate email address. This reduces security incident response time because you gain full visibility to user information for each incident. You can ingest normalized and enriched data to your preferred security tools, which allows you to set common policies, standardize security alerts, and easily manage user access across multiple applications.

Getting Started with AWS AppFabric
To get started with AppFabric, you need to create an App bundle, a one-time process. This stores all AppFabric app authorizations and ingestions, including the encryption key used. When you create an app bundle, AppFabric creates the required AWS Identity and Access Management (IAM) role in your AWS account, which is required to send metrics to Amazon CloudWatch and to access AWS resources such as Amazon Simple Storage Service (Amazon S3) and Amazon Kinesis Data Firehose.

Creating an App Bundle
First, I select Getting started from the home page or left navigation panel from within the AWS Management Console.

Following the step-by-step instructions to set up AppFabric, I select Create app bundle.

In the Encryption section, I use AWS Key Management Service (AWS KMS) to define an encryption key that securely protects my data across all authorized applications. The KMS key encrypts my data within the internal data stores used as my ingestion destinations; for this example, my destination is Amazon S3. My key options include AWS owned and Customer managed. Select Customer managed if you want to use a key you already have in KMS.
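
The same one-time setup can be done with the boto3 AppFabric client. The sketch below is hedged: the KMS key ARN is a placeholder, and omitting customerManagedKeyIdentifier should fall back to an AWS owned key, mirroring the console options above.

    import boto3

    appfabric = boto3.client("appfabric")

    # Key ARN is a placeholder; omit customerManagedKeyIdentifier to use an
    # AWS owned key, mirroring the console's two key options.
    bundle = appfabric.create_app_bundle(
        customerManagedKeyIdentifier="arn:aws:kms:us-east-1:111122223333:key/placeholder"
    )
    print(bundle["appBundle"]["arn"])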

Authorizing Applications
Once I have created the app bundle, the next step is Create app authorization. On this page, I can select the supported SaaS application that I want to connect to my app bundle.

Then, I need to enter my application credentials so that AppFabric can connect; one of the advantages of using AppFabric is that it connects directly into SaaS applications without the need for me to write any code.

I can set up multiple app authorizations by repeating this step, as required, for each application. The credentials required for authorization vary by app; see the AppFabric documentation for details.

Setting up Audit Log Ingestions
Now I have created an app authorization in my app bundle. I can proceed with Set up audit log ingestions. This step ingests and normalizes audit logs and delivers them to one or more destinations within AWS, including Amazon S3 or Amazon Kinesis Data Firehose.

Under Select app authorizations, I select the authorized app that I created in the previous step. Here, I can choose more than one authorized application, which allows me to consolidate data from various SaaS applications into a single destination. Then, I can select a destination for the audit logs of the selected apps. If I selected multiple app authorizations, the destination is applied to each authorized app. Currently, AppFabric supports the following destinations:

  • Amazon S3 – New Bucket
  • Amazon S3 – Existing Bucket
  • Amazon Kinesis Data Firehose

When I select a destination, additional fields appear. For example, if I select Amazon S3 – New Bucket, I need to fill the details for my Amazon S3 bucket and the optional prefix.

After that, I need to define Schema & Format of the ingested audit log data for my selected applications. Here, I have three options:

  • OCSF – JSON
  • OCSF – Parquet
  • Raw – JSON

AppFabric normalizes the audit log data to the OCSF schema and formats the audit log data into JSON or Parquet format. For OCSF – JSON and OCSF – Parquet options, AppFabric automatically maps the fields and enriches the field with user email as an identifier. As for the Raw – JSON data format, AppFabric simply provides the audit log data in its original JSON form.
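
Ingestion setup has API equivalents as well. In this hedged sketch, the app name, tenant ID, bucket, and exact request shapes are assumptions based on the options described above; check the AppFabric API reference before relying on them.

    import boto3

    appfabric = boto3.client("appfabric")
    bundle_arn = "arn:aws:appfabric:us-east-1:111122223333:appbundle/placeholder"

    # App name and tenant ID are placeholders for an application you authorized.
    ingestion = appfabric.create_ingestion(
        appBundleIdentifier=bundle_arn,
        app="SLACK",
        tenantId="T0123456789",
        ingestionType="auditLog",
    )

    # Deliver OCSF-normalized JSON to an existing S3 bucket (request shape is
    # an assumption based on the destination options described above).
    appfabric.create_ingestion_destination(
        appBundleIdentifier=bundle_arn,
        ingestionIdentifier=ingestion["ingestion"]["arn"],
        processingConfiguration={"auditLog": {"schema": "ocsf", "format": "json"}},
        destinationConfiguration={
            "auditLog": {"destination": {"s3Bucket": {"bucketName": "my-audit-logs"}}}
        },
    )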

To see a detailed view of my ingestion status, on the Ingestions page, I select my existing ingestion.

Here, I see the ingestion status is Enabled and the status for my Amazon S3 bucket is Active.

After my ingestion runs for around 10 minutes, I can see AppFabric stored the audit data logs in my Amazon S3 bucket.

When I open the file, I can see all the audit data logs from the SaaS application.

With audit data logs now in Amazon S3, I can also use AWS services to analyze and extract insights from the log data. For example, from data in Amazon S3, I can use AWS Glue and run a query using Amazon Athena. The following screenshot shows how I run a query for all activities in the audit data logs.
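
A query along those lines, submitted through boto3, might look like the sketch below; the database, table, and column names are assumptions about how the OCSF-formatted logs were cataloged with AWS Glue.

    import boto3

    athena = boto3.client("athena")

    # Database, table, and column names are illustrative assumptions.
    athena.start_query_execution(
        QueryString="""
            SELECT time, activity_name, actor.user.email_addr AS user_email
            FROM appfabric_audit_logs
            ORDER BY time DESC
            LIMIT 100
        """,
        QueryExecutionContext={"Database": "appfabric_db"},
        ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
    )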

User Access
AWS AppFabric also has a feature called User access to allow security and IT admin teams to quickly see who has access to which applications. Using an employee’s corporate email address, AppFabric searches all authorized applications in the app bundle to return a list of apps that the user has access to. This helps to identify unauthorized user access and accelerate user deprovisioning.
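
User access lookups are also exposed through the AppFabric client. The sketch below is assumption-heavy (the operation names and response keys reflect my reading of the API and should be verified against the AppFabric API reference), with placeholder identifiers throughout.

    import boto3

    appfabric = boto3.client("appfabric")
    bundle_arn = "arn:aws:appfabric:us-east-1:111122223333:appbundle/placeholder"

    # Start a lookup of every authorized app for one corporate email address.
    tasks = appfabric.start_user_access_tasks(
        appBundleIdentifier=bundle_arn,
        email="employee@example.com",
    )

    # Each task resolves per app; fetch results by task ID (response keys here
    # are assumptions to verify against the API reference).
    task_ids = [task["taskId"] for task in tasks["userAccessTasksList"]]
    results = appfabric.batch_get_user_access_tasks(
        appBundleIdentifier=bundle_arn,
        taskIdList=task_ids,
    )
    print(results["userAccessResultsList"])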

Things to Know
Availability — AWS AppFabric is generally available today in US East (N. Virginia), Europe (Ireland), and Asia Pacific (Tokyo), with availability in additional AWS Regions coming soon.

AWS AppFabric generative AI capabilities – Available in a future release, AWS AppFabric will empower you to automatically perform tasks across applications using generative AI. Powered by Amazon Bedrock, this AI assistant generates answers to natural language queries, automates task management, and surfaces insights across SaaS applications.

Integrations with SaaS applications — AppFabric connects SaaS applications including Asana, Atlassian Jira suite, Dropbox, Miro, Okta, Slack, Smartsheet, Webex by Cisco, Zendesk, and Zoom. Refer to Supported applications for more details.

Integration with Security Tools — Audit data logs from AppFabric are compatible with security tools such as Logz.io, Netskope, NetWitness, Rapid7, and Splunk, or a customer’s proprietary security solution. Refer to Compatible security tools and services for more details on how to set up specific security tools and services.

Learn more
To get started, go to AWS AppFabric for more information and pricing details.

Happy building.
— Donnie

Learn how to streamline and secure your SaaS applications at AWS Applications Innovation Day

Post Syndicated from Phil Goldstein original https://aws.amazon.com/blogs/aws/learn-how-to-streamline-and-secure-your-saas-applications-at-aws-applications-innovation-day/

Companies continue to adopt software as a service (SaaS) applications at a rapid clip, with recent research showing that the average SaaS portfolio now has at least 200 applications. While organizations purchase these purpose-built tools to make their employees more productive, they now must contend with growing security complexities, context switching, and data silos.

If your company faces these issues, or you want to avoid them in the future, join us on Tuesday, June 27, for AWS Applications Innovation Day, a free-to-attend online event. AWS will stream the event simultaneously across multiple platforms, including LinkedIn Live, Twitter, YouTube, and Twitch. You can also join us in person in Seattle to hear from Dilip Kumar, Vice President of AWS Applications, and from an executive panel with AWS Partners Splunk, Asana, and Okta.

Join us for Applications Innovation Day June 27, 2023.

Applications Innovation Day is designed to give you the tools you need to improve how your organization uses and secures SaaS applications. Sessions throughout the day will show you how you can secure data while providing your employees with the best tools for the job. You’ll also learn how to support the right mix of applications to improve workforce collaboration, and how to use generative artificial intelligence securely and effectively to improve insights and enhance employee productivity.

We’ll start the virtual broadcast with a keynote from Dilip Kumar, Vice President of AWS Applications, who will discuss the way we use and govern SaaS applications at AWS. He’ll also discuss how we’ll make it easier to deploy purpose-built SaaS applications like Asana, Okta, Splunk, Zoom, and others across your business, including the announcement of some exciting new innovations from AWS.

AWS product leaders will present technical breakout sessions during the day on the productivity and security aspects of managing a SaaS application tech stack. Sessions will cover a wide range of topics, including how the nature of productivity at work is changing, how AI is transforming SaaS applications and collaboration, how you can improve your security observability across your applications, and how you can create custom analytics on SaaS application activity.

Overall, the event is a great opportunity for security leaders, IT administrators and operations leaders, and anyone leading digital workplace and transformation initiatives to learn how to better leverage and govern SaaS applications.

To register for AWS Applications Innovation Day, simply go to the event page.