ISO 27001 Certification: What it is and why it matters

Post Syndicated from Drew Burton original https://blog.rapid7.com/2022/12/06/iso-27001-certification-what-it-is-and-why-it-matters/


Did you know that Rapid7's information security management system (ISMS) is ISO 27001 certified? This certification validates that our security strategy and processes meet very high standards. It underscores our commitment to corporate and customer data security.

What is ISO 27001?

ISO 27001 is an internationally recognized standard for information security management published by the International Organization for Standardization (ISO). It details requirements for establishing, implementing, maintaining, and continually improving an ISMS.

ISO 27001 is focused on risk management and taking a holistic approach to security. Unlike some standards and frameworks, ISO 27001 does not require the implementation of specific technical controls. Instead, it provides a framework and checklist of controls that can be used to develop and maintain a comprehensive ISMS.

It is one of more than ten published standards in the ISO 27000 family. It is the only standard among them that an organization can be certified against.

To become ISO 27001 certified, an organization must:

  • Systematically examine its information security risks, taking account of the threats, vulnerabilities, and impacts.
  • Design and implement a coherent and comprehensive suite of information security controls and risk avoidance measures.
  • Adopt an overarching management process that ensures the information security controls continue to meet the organization’s information security needs over time.

Then, the ISMS must be audited by a third party. This is a rigorous process, which determines whether the organization has implemented applicable best practices as defined in the standard. Certified organizations must undergo annual audits to maintain compliance. Rapid7’s ISMS was audited by Schellman.

Why does ISO 27001 certification matter?

Rapid7 is committed to helping our customers reduce risk to their organizations. ISO 27001 certification is one way that we demonstrate that commitment. It is worth noting that certification is not a legal requirement; rather, it is proof that an organization’s security strategy and processes meet very high standards. Rapid7 believes that maintaining the highest standards of information security for ourselves and our clients is essential.

As noted above, ISO 27001 provides a framework to meet those standards. That framework is based on three guiding principles to help organizations build their security strategy and develop effective policies and controls: Confidentiality, Integrity, and Availability.

  • Confidentiality means that data should be kept private, secure, and accessible only by authorized individuals.
  • Integrity requires that organizations ensure consistent, accurate, reliable, and secure data.
  • Availability means systems, applications, and data are available and accessible to satisfy business needs.

Rapid7’s security strategy reflects these principles. Our platform and products are designed to fit securely into your environment and your data is accessible when you need it—with full visibility into where it lives, who has access to it, and how it is used. When you partner with Rapid7, your data stays safe. Period.

For more information about the policies and procedures Rapid7 has in place to keep our data, platform, and products secure, visit the Trust section of our website.

Object Lock 101: Protecting Data From Ransomware

Post Syndicated from Molly Clancy original https://www.backblaze.com/blog/object-lock-101-protecting-data-from-ransomware/

Cybercriminals are good at what they do. It’s an unfortunate reality, but one that you should be prepared for if you are in charge of keeping data safe. A study of penetration testing projects from Positive Technologies found that, “In 93% of cases, an external attacker can breach an organization’s network perimeter and gain access to local network resources.”

With this knowledge, smart companies prepare in advance rather than hoping to avoid being attacked. Recovering from a ransomware attack is much easier when you maintain safe, reliable backups—especially if you implement a 3-2-1 backup strategy. But even with a strong backup strategy in place, you’re not fully protected. Anything that’s connected to a compromised network is vulnerable, including backups. Cybercriminals are savvy, and they’ve shown they can target backups to gain leverage and force companies to pay—something that’s increasingly going to put you on the wrong side of the law.

That doesn’t have to be your story. With advances in backup protection like Object Lock, you can add one more layer of defense between cybercriminals and your valuable, irreplaceable data.

In this post, we’ll explain:

  • What Object Lock is.
  • What Object Lock does.
  • Why you should use it.
  • When you should use it.

More On Protecting Your Business From Ransomware Attacks

This post is a part of our ongoing series on ransomware. Take a look at our other posts for more information on how businesses can defend themselves against a ransomware attack, the latest patterns in ransomware attacks, and more.

➔ Download The Complete Guide to Ransomware

What Is Object Lock?

Object Lock is a powerful backup protection tool that prevents a file from being altered or deleted until a given date. When you set the lock, you can specify the length of time an object should be locked. Any attempts to manipulate, copy, encrypt, change, or delete the file will fail during that time. (NOTE: At Backblaze, the Object Lock feature was previously referred to as “File Lock,” and you may see the term from time to time in documentation. They are one and the same.)

Reminder: What Is an Object?

An object is a unit of data that contains all of the bytes that constitute what you would typically think of as a file. That file could be an image, video, document, audio recording, etc. An object also includes metadata so that it can be easily analyzed.

What Does Object Lock Do?

Object Lock allows you to store objects using a Write Once, Read Many (WORM) model, meaning after it’s written, data cannot be modified or deleted for a defined period of time. The files may be accessed, but no one can change them, including the file owner or whoever set the Object Lock.

What is Object Lock Legal Hold?

Object Lock Legal Hold also prevents a file from being changed or deleted, but the lock does not have a defined retention period—a file is immutable until Object Lock Legal Hold is removed.
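To make the difference concrete, here is a minimal Python sketch using an S3-compatible SDK, assuming your provider’s S3-compatible endpoint supports the S3 Object Lock operations; the endpoint, credentials, bucket, and key below are placeholders, not values from this post. A retention period locks the object until a fixed date, while a legal hold stays in place until it is explicitly removed.

    # Hypothetical sketch: retention period vs. legal hold on an existing object.
    # Endpoint, credentials, bucket, and key are placeholders.
    from datetime import datetime, timezone
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.us-west-004.backblazeb2.com",  # example endpoint
        aws_access_key_id="<applicationKeyId>",
        aws_secret_access_key="<applicationKey>",
    )

    # Retention period: the object is immutable until the date passes.
    s3.put_object_retention(
        Bucket="my-backups",
        Key="backup-2022-12-06.tar",
        Retention={
            "Mode": "COMPLIANCE",
            "RetainUntilDate": datetime(2023, 12, 6, tzinfo=timezone.utc),
        },
    )

    # Legal hold: the object is immutable until the hold is switched OFF.
    s3.put_object_legal_hold(
        Bucket="my-backups",
        Key="backup-2022-12-06.tar",
        LegalHold={"Status": "ON"},
    )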

What Is an Air Gap, and How Does Object Lock Provide One?

Object Lock creates a virtual air gap for your data. The term comes from the world of LTO tape. When backups are written to tape, the tapes are then physically removed from the network, creating a gap of air between backups and production systems. In the event of a ransomware attack, you can just pull the tapes from the previous day to restore systems.

Object Lock does the same thing, but it all happens in the cloud. Instead of physically isolating data, Object Lock virtually isolates the data.

What Is Immutable Data? Is It the Same as Object Lock?

In object storage, immutability is a characteristic of an object that cannot be modified or changed. It is different from Object Lock in that Object Lock is a function offered by object storage providers that allows you to create immutable or unchangeable objects. Immutability is the characteristic you want to achieve, and Object Lock is the way you achieve it.

How Does Object Lock Work With Veeam Ransomware Protection?

Veeam, a backup software provider, offers immutability as a feature to protect your data. The immutability feature in Veeam works hand-in-hand with the Object Lock functionality offered by cloud providers like Backblaze. If you’re using a cloud storage provider to store backups and they support Object Lock (which we think all should, not that we’re biased), you can configure your backup software to save your immutable backups to a storage bucket with Object Lock enabled. As a certified Veeam Ready-Object and Veeam Ready-Object with Immutability partner, utilizing this feature with Backblaze is as simple as checking a box in your settings.

For a step-by-step video on how to back up Veeam to Backblaze B2 Cloud Storage with Object Lock functionality, check out the video below.

Does Object Lock Work With Other Integrations?

Object Lock works with many Backblaze B2 integrations in addition to Veeam, including MSP360, Commvault, Rubrik, and more. You can also enable Object Lock using the Backblaze S3 Compatible API, the B2 Native API, the Backblaze B2 SDKs, and the CLI.
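As a rough illustration of enabling Object Lock through an S3-compatible API, the following Python sketch creates a bucket with Object Lock enabled and then uploads a backup that cannot be modified or deleted for 30 days. The endpoint, credentials, bucket, and file names are placeholders, and the exact options your backup tool exposes may differ.

    # Hypothetical sketch: create an Object Lock-enabled bucket and upload an
    # immutable object through an S3-compatible API. All names are placeholders.
    from datetime import datetime, timedelta, timezone
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.us-west-004.backblazeb2.com",  # example endpoint
        aws_access_key_id="<applicationKeyId>",
        aws_secret_access_key="<applicationKey>",
    )

    # Object Lock is typically enabled when the bucket is created.
    s3.create_bucket(Bucket="my-immutable-backups", ObjectLockEnabledForBucket=True)

    # Upload a backup that stays immutable for 30 days.
    with open("backup-001.vbk", "rb") as f:
        s3.put_object(
            Bucket="my-immutable-backups",
            Key="veeam/backup-001.vbk",
            Body=f,
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
        )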

Why Should You Use Object Lock?

Using Object Lock to protect your data means no one—not cybercriminals, not ransomware viruses, not even you—can edit or delete your files. If your systems are compromised by ransomware, you can trust that your backup data stored with Object Lock hasn’t been deleted or altered. There’s no added cost to use Object Lock with Backblaze B2 beyond what you would pay to store the data anyway (but other cloud providers charge for Object Lock, so you should be sure to check fees when comparing cloud storage providers).

Finally, data security experts strongly recommend using Object Lock to protect your critical backups. Not only is it recommended, but in some industries Object Lock is necessary to maintain data protection standards required by compliance agencies. One other thing to consider: Many companies are adopting cyber insurance, and often those companies require immutable backups for you to be fully covered.

The question really isn’t, “Why should you use Object Lock?” but rather “Why aren’t you?”

When Should You Use Object Lock?

The immutability achieved by Object Lock is useful for protecting against ransomware, but there are some additional use cases that make it valuable to businesses as well.

What Are the Different Use Cases for Object Lock?

Object Lock comes in handy in a few different use cases:

  1. To replace an LTO tape system: Most folks looking to migrate from tape are concerned about maintaining the security of the air gap that tape provides. With Object Lock you can create a backup that’s just as secure as air-gapped tape without the need for expensive physical infrastructure.
  2. To protect and retain sensitive data: If you work in an industry subject to HIPAA regulations or if you need to retain and protect data for legal reasons, Object Lock allows you to easily set appropriate retention periods for regulatory compliance.
  3. As part of a disaster recovery and business continuity plan: The last thing you want to worry about in the event you are attacked by ransomware is whether your backups are safe. Being able to restore systems from backups stored with Object Lock can help you minimize downtime and interruptions, comply with cybersecurity insurance requirements, and achieve recovery time objectives more easily.

Protecting Your Data With Object Lock

To summarize, here are a few key points to remember about Object Lock:

  • Object Lock creates a virtual air gap using a WORM model.
  • Data that is protected using Object Lock is immutable, meaning it’s unchangeable.
  • With Object Lock enabled, no one can encrypt, tamper with, or delete your locked data.
  • Object Lock can be used to replace tapes, protect sensitive data, and defend against ransomware.

Ransomware attacks can be disruptive, but your story doesn’t have to end with you feeling forced to pay against your better judgment or facing extended downtime. As cybercriminals become bolder and more advanced, creating immutable, air-gapped backups using Object Lock functionality puts a manageable recovery in closer reach.

Have questions about Object Lock functionality and ransomware? Let us know in the comments.

The post Object Lock 101: Protecting Data From Ransomware appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Enable federation to Amazon QuickSight with automatic provisioning of users between AWS IAM Identity Center and Microsoft Azure AD

Post Syndicated from Aditya Ravikumar original https://aws.amazon.com/blogs/big-data/enable-federation-to-amazon-quicksight-with-automatic-provisioning-of-users-between-aws-iam-identity-center-and-microsoft-azure-ad/

Organizations are working towards centralizing their identity and access strategy across all their applications, including on-premises, third-party, and applications on AWS. Many organizations use identity providers (IdPs) that support OIDC or SAML-based protocols, such as Microsoft Azure Active Directory (Azure AD), and manage user authentication along with authorization centrally. This authorizes users to access Amazon QuickSight assets (analyses, dashboards, folders, and datasets) through centrally managed Azure AD and AWS IAM Identity Center (successor to AWS Single Sign-On).

IAM Identity Center is an authentication service that allows users to sign in to multiple applications with a single set of credentials. IAM Identity Center makes it easy to centrally manage access to multiple AWS accounts and business applications. It provides your workforce with single sign-on (SSO) access to all assigned accounts and applications from one place.

In this post, we walk you through the steps required to configure federated SSO along with automated email sync between QuickSight and Azure AD via IAM Identity Center. We also demonstrate ways System for Cross-domain Identity Management (SCIM) keeps your IAM Identity Center identities in sync with identities from your IdP.

Solution overview

The following is the reference architecture for configuring IAM Identity Center with Azure AD for automated federation to QuickSight and the AWS Management Console.

The following are the steps involved to set up federated SSO from Azure to QuickSight:

  1. Configure Azure as an IdP in IAM Identity Center.
  2. Register an IAM Identity Center application in Azure AD.
  3. Configure the application in Azure AD.
  4. Enable automatic provisioning of users and groups.
  5. Enable email syncing for federated users in QuickSight console.
  6. Create a QuickSight application in IAM Identity Center.
  7. Add the IAM Identity Center application as a SAML IdP.
  8. Configure AWS Identity and Access Management (IAM) policies and roles.
  9. Configure attribute mappings in IAM Identity Center.
  10. Validate federation to QuickSight from IAM Identity Center.

Prerequisites

To complete this walkthrough, you must have the following prerequisites:

  • An Azure AD subscription with Administrator permission.
  • QuickSight account subscription with Administrator permission.
  • IAM Administrator account.
  • IAM Identity Center Administrator account.

Configure Azure as IdP in IAM Identity Center

To configure Azure as an IdP, complete the following steps:

  1. On the IAM Identity Center console, choose Enable.
  2. Choose Choose your identity source.
  3. Select External identity provider to manage all users and groups.
  4. Choose Next.
  5. In the Configure external identity provider section, download the service provider metadata file.
  6. Save the AWS access portal sign-in URL, IAM Identity Center Assertion Consumer Service (ACS) URL, and IAM Identity Center issuer URL.
    These are used later in this post.
  7. Leave this tab open in your browser while proceeding to the next steps.

Register an IAM Identity Center application in Azure AD

To register an IAM Identity Center application in Azure AD, complete the following steps:

  1. Sign in to your Azure portal using an administrator account.
  2. Under Azure Services, choose Azure AD and under Manage, choose Enterprise applications.
  3. Choose New application.
  4. Choose Create your own application.
  5. Enter a name for the application.
  6. Select the option Integrate any other application you don’t find in the gallery (Non-gallery).
  7. Choose Create.

Configure the application in Azure AD

To configure your application, complete the following steps:

  1. Under Enterprise applications, choose All applications and select the application created in the previous step.
  2. Under Manage, choose Single Sign-on.
  3. Choose SAML.
  4. Choose Single Sign-on to set up SSO with SAML.
  5. Choose Upload metadata file, and upload the file you downloaded from IAM Identity Center.
  6. Choose Edit to edit the Basic SAML Configuration section.
  • For Identifier (Entity ID), enter the IAM Identity Center issuer URL.
  • For Reply URL (Assertion Consumer Service URL), enter the IAM Identity Center ACS URL.
  7. Under SAML Signing Certificate, choose Download next to Federation Metadata XML.

We use this XML document in later steps when setting up the SAML provider in IAM and in IAM Identity Center.

  8. Leave this tab open in your browser while moving to the next steps.
  9. Switch to the IAM Identity Center tab to complete its setup.
  10. Under Identity provider metadata, choose IdP SAML metadata and upload the federation metadata XML file you downloaded.
  11. Review and confirm the changes.

Enable automatic provisioning of users and groups

IAM Identity Center supports System for Cross-domain Identity Management (SCIM) v2.0 standard. SCIM keeps your IAM Identity Center identities in sync with external IdPs. This includes any provisioning, updates, and deprovisioning of users between IdP and IAM Identity Center. To enable SCIM, complete the following steps:

  1. On the IAM Identity Center console, choose Settings in the navigation pane.
  2. Next to Automatic provisioning, choose Enable.
  3. Copy the SCIM endpoint and Access token.
  4. Switch to the Azure AD tab.
  5. On the Default Directory Overview page, under Manage, choose Users.
  6. Choose New user and Create new user(s).
    Make sure the user profile has valid information under First name, Last name, and Email attribute.
  7. Under Enterprise applications, choose All applications and select the application you created earlier.
  8. Under Manage, choose Users and groups.
  9. Choose Add user/group, and select the users you created earlier.
  10. Choose Assign.
  11. Under Manage, choose Provisioning and Get started.
  12. Choose Provisioning Mode as Automatic.
  13. For Tenant URL, enter the SCIM endpoint.
  14. For Secret Token, enter the Access token.
  15. Choose Test Connection and Save.
  16. Under Provisioning, choose Start provisioning.

Make sure the user profile has valid information under First name, Last name, and Email attribute. This is the key value for email sync with QuickSight.

On the IAM Identity Center console, under Users, you can now see all the users provisioned from Azure AD.
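If you want to confirm provisioning programmatically rather than in the console, a sketch like the following queries the SCIM endpoint you copied in step 3. The endpoint, access token, and user name shown are placeholders for your own values; this is an illustrative check, not part of the original walkthrough.

    # Hypothetical verification sketch: look up a provisioned user through the
    # IAM Identity Center SCIM API. Endpoint, token, and user name are placeholders.
    import requests

    SCIM_ENDPOINT = "https://scim.us-east-1.amazonaws.com/<tenant-id>/scim/v2"  # from step 3
    ACCESS_TOKEN = "<access-token-from-step-3>"

    resp = requests.get(
        f"{SCIM_ENDPOINT}/Users",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params={"filter": 'userName eq "jane.doe@example.com"'},  # hypothetical user
        timeout=10,
    )
    resp.raise_for_status()
    for user in resp.json().get("Resources", []):
        print(user.get("userName"), [e.get("value") for e in user.get("emails", [])])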

Enable email syncing for federated users in QuickSight console

Complete the following steps to enable email syncing for federated users:

  1. Sign in as an admin user to the QuickSight console and choose Manage QuickSight from the user name menu.
  2. Choose Single sign-on (SSO) in the navigation pane.
  3. Under Email Syncing for Federated Users, select ON.

Create a QuickSight application in IAM Identity Center

Complete the following steps to create a custom SAML 2.0 application in IAM Identity Center.

  1. On the IAM Identity Center console, choose Applications in the navigation pane.
  2. Choose Add application.
  3. Under Preintegrated applications, search for and choose Amazon QuickSight.
  4. Choose Next.
  5. For Display name, enter a name, such as Amazon QuickSight.
  6. For Description, enter a description.
  7. Download the IAM Identity Center SAML metadata file to use later in this post.
  8. For Application start URL, leave as is.
  9. For Relay state, enter https://quicksight.aws.amazon.com.
  10. For Session duration, choose your session duration. The recommended value is 8 hours.
  11. For Application ACS URL, enter https://signin.aws.amazon.com/saml.
  12. For Application SAML audience, enter urn:amazon:webservices.
  13. Choose Submit.
    After your settings are saved, your application configuration should look similar to the following screenshot.

You can now assign your users to this application, so that the application appears in their IAM Identity Center portal after login.

  14. On the application page, under Assigned users, choose Assign Users.
  15. Select your users.
  16. Optionally, if you want to enable multiple users in your organization to use QuickSight, the fastest and easiest way is to use IAM Identity Center groups.
  17. Choose Assign Users.

Add the IAM Identity Center application as a SAML IdP

Complete the following steps to configure IAM Identity Center as your SAML IdP:

  1. Open a new tab in your browser.
  2. Sign in to the IAM console in your AWS account with admin permissions.
  3. Choose Identity providers in the navigation pane.
  4. Choose Add provider.
  5. Select SAML for Provider type.
  6. For Provider name, enter IAM_Identity_Center.
  7. Choose Choose File to upload the metadata document you downloaded earlier from the Amazon QuickSight application.
  8. Choose Add Provider.
  9. On the summary page, record the value for the provider ARN (arn:aws:iam::<AccountID>:saml-provider/IAM_Identity_Center).

You will use this ARN while configuring claims rules later in this post.

Configure IAM policies

In this step, you create three IAM policies for different role permissions in QuickSight:

  • QuickSight-Federated-Admin
  • QuickSight-Federated-Author
  • QuickSight-Federated-Reader

Use the following steps to set up the QuickSight-Federated-Admin policy. This policy grants admin privileges in QuickSight to the federated user:

  1. On the IAM console, choose Policies in the navigation pane.
  2. Choose Create policy.
  3. Choose JSON and replace the existing text with the following code:

    {
        "Statement": [
            {
                "Action": [
                    "quicksight:CreateAdmin"
                ],
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:quicksight::<yourAWSAccountID>:user/${aws:userid}"
                ]
            }
        ],
        "Version": "2012-10-17"
    }

Ignore the “Missing ARN Region: Add a Region to the quicksight resource ARN” error and continue. Optionally, you could also add a specific AWS region in the ARN.

  4. Choose Review policy.
  5. For Name, enter QuickSight-Federated-Admin.
  6. Choose Create policy.
  7. Repeat these steps to create the QuickSight-Federated-Author policy using the following JSON code to grant author privileges in QuickSight to the federated user:
    {
        "Statement": [
            {
                "Action": [
                    "quicksight:CreateUser"
                ],
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:quicksight::<yourAWSAccountID>:user/${aws:userid}"
                ]
            }
        ],
        "Version": "2012-10-17"
    }

Ignore the “Missing ARN Region: Add a Region to the quicksight resource ARN” error and continue. Optionally, you could also add a specific AWS region in the ARN.

  8. Repeat these steps to create the QuickSight-Federated-Reader policy using the following JSON code to grant reader privileges in QuickSight to the federated user:
    {
        "Statement": [
            {
                "Action": [
                    "quicksight:CreateReader"
                ],
                "Effect": "Allow",
                "Resource": [
                    "arn:aws:quicksight::<yourAWSAccountID>:user/${aws:userid}"
                ]
            }
        ],
        "Version": "2012-10-17"
    }

Ignore the “Missing ARN Region: Add a Region to the quicksight resource ARN” error and continue. Optionally, you could also add a specific AWS region in the ARN.

Configure IAM roles

Next, create roles that your Azure AD and IAM Identity Center users assume when federating into QuickSight. The following steps set up the admin role:

  1. On the IAM console, choose Roles in the navigation pane.
  2. Choose Create role.
  3. For Select type of trusted entity, choose SAML 2.0 federation.
  4. For SAML provider, choose the provider you created earlier (IAM_Identity_Center).
  5. Select Allow programmatic and AWS Management Console access.
  6. For Attribute, make sure SAML:aud is selected.
  7. For Value, make sure https://signin.aws.amazon.com/saml is selected.
  8. Choose Next.
  9. Choose the QuickSight-Federated-Admin IAM policy you created earlier.
  10. Choose Next: Tags.
  11. Choose Next: Review.
  12. For Role name, enter QuickSight-Admin-Role.
  13. For Role description, enter a description.
  14. Choose Create role.
  15. On the IAM console, in the navigation pane, choose Roles.
  16. Choose the QuickSight-Admin-Role role you created to open the role’s properties.
  17. Record the role ARN to use later.
  18. On the Trust relationships tab, choose Edit trust policy.
  19. For the policy details, enter the following JSON:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Federated": "arn:aws:iam::<yourAWSAccountID>:saml-provider/IAM_Identity_Center"
                },
                "Action": "sts:AssumeRoleWithSAML",
                "Condition": {
                    "StringEquals": {
                        "SAML:aud": "https://signin.aws.amazon.com/saml"
                    }
                }
            },
            {
                "Effect": "Allow",
                "Principal": {
                    "Federated": "arn:aws:iam::<yourAWSAccountID>:saml-provider/IAM_Identity_Center"
                },
                "Action": "sts:TagSession",
                "Condition": {
                    "StringLike": {
                        "aws:RequestTag/Email": "*"
                    }
                }
            }
        ]
    }

  20. Choose Update Policy.
  21. Repeat these steps to create the roles QuickSight-Author-Role and QuickSight-Reader-Role. Attach the QuickSight-Federated-Author and QuickSight-Federated-Reader policies to their respective roles.

Configure attribute mappings in IAM Identity Center

The final step is to configure the attribute mappings in IAM Identity Center. The attributes you map here become part of the SAML assertion that is sent to the QuickSight application. You can choose which user attributes in your application map to corresponding user attributes in your connected directory. For more information, refer to Attribute mappings.

  1. On the IAM Identity Center console, choose Applications in the navigation pane.
  2. Select the Amazon QuickSight application you created earlier.
  3. On the Actions menu, choose Edit attribute mappings.
  4. Configure the following mappings:
Map each user attribute in the application to the following string value or user attribute in IAM Identity Center, with the given format:
  • Subject → ${user:email} (Format: emailAddress)
  • https://aws.amazon.com/SAML/Attributes/Role → arn:aws:iam::<YourAWSAccountID>:saml-provider/IAM_Identity_Center,arn:aws:iam::<YourAWSAccountID>:role/QuickSight-Admin-Role (Format: unspecified)
  • https://aws.amazon.com/SAML/Attributes/RoleSessionName → ${user:email} (Format: unspecified)
  • https://aws.amazon.com/SAML/Attributes/PrincipalTag:Email → ${user:email} (Format: url)
  5. Choose Save changes.

Validate federation to QuickSight from IAM Identity Center

On the IAM Identity Center console, note down the user portal URL available on the Settings page. We suggest you log out of your AWS account first, or open an incognito browser window. Navigate to the user portal URL, sign in with the credentials of an AD user, and choose your QuickSight application.

You’re automatically redirected to the QuickSight console.
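As an optional programmatic check (not part of the original steps), you can also list the QuickSight users with the AWS SDK to confirm that the federated user was registered with the email address synced from Azure AD. The account ID and Region below are placeholders; run this with credentials that have QuickSight admin permissions.

    # Optional verification sketch: confirm the federated user and synced email.
    # Account ID and Region are placeholders.
    import boto3

    quicksight = boto3.client("quicksight", region_name="us-east-1")

    resp = quicksight.list_users(
        AwsAccountId="<yourAWSAccountID>",
        Namespace="default",
    )
    for user in resp["UserList"]:
        print(user["UserName"], user.get("Email"), user["Role"])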

Summary

This post provided step-by-step instructions to configure federated SSO with Azure AD as IdP through IAM Identity Center. We also discussed how SCIM keeps your IAM Identity Center identities in sync with identities from your IdP. This includes any provisioning, updating, and deprovisioning of users between your IdP and IAM Identity Center.

If you have any questions or feedback, please leave a comment.

For additional discussions and help getting answers to your questions, check out the QuickSight Community.


About the author

Aditya Ravikumar is a Solutions Architect at Amazon Web Services. He is based in Seattle, USA. Aditya’s core interests include software development, databases, data analytics and machine learning. He works with AWS customers/partners to provide guidance and technical assistance to transform their business through innovative use of cloud technologies.

Srikanth Baheti is a Specialized World Wide Sr. Solution Architect for Amazon QuickSight. He started his career as a consultant and worked for multiple private and government organizations. Later he worked for PerkinElmer Health and Sciences & eResearch Technology Inc, where he was responsible for designing and developing high traffic web applications, highly scalable and maintainable data pipelines for reporting platforms using AWS services and Serverless computing.

Raji Sivasubramaniam is a Sr. Solutions Architect at AWS, focusing on Analytics. Raji is specialized in architecting end-to-end Enterprise Data Management, Business Intelligence and Analytics solutions for Fortune 500 and Fortune 100 companies across the globe. She has in-depth experience in integrated healthcare data and analytics with wide variety of healthcare datasets including managed market, physician targeting and patient analytics.

Get your head in the cloud(s)

Post Syndicated from Dina Durutlic original https://blog.rapid7.com/2022/12/06/get-your-head-in-the-cloud-s/


Many organizations are in the midst of adopting the cloud faster than ever before; it’s arguably mission critical for their success and longevity. Just look at initiatives like digital transformation, or even the digital twin, which aims to bridge the gap between the physical and the digital by leveraging IoT. Organizations are realizing the endless possibilities that the cloud provides, such as optimization of their processes, data accessibility, and unlocked collaboration and innovation. By definition, the cloud enables integrated data continuity, and by 2025, the world will store 200 zettabytes of data, according to Cybersecurity Ventures. A huge percentage of that data will be in the cloud.

However, the promise of the cloud isn’t just lucrative for companies; it also opens up new opportunities for attackers. Many threats that impact a cloud environment are not contained there: they can originate elsewhere or start in the cloud, and they can move depending on the attacker’s motive. As organizations continue to go beyond on-prem, security teams need support.

Enter, automation.

The resource and bandwidth constraints that teams face have been well documented across the industry, so we won’t rehash that here. But it is important to emphasize it when it pertains to priorities around cloud security. In order to stay ahead of evolving threats, security teams need to prioritize cloud detection and response. Automation is a means to do just that.

Automation provides a way to cut down the time it would take to address malicious activity, especially when compared to a manual approach. It can also enable more effective and efficient communication with important stakeholders who may have a hand to play in alert validation and response.

At Rapid7, we’re constantly innovating new ways to inject highly customizable automation into our cloud offerings, all with the aim of making your team — and by extension, your cloud security — stronger and more efficient.

Achieving security at speed

Rapid7 provides security professionals with the centralized monitoring, comprehensive context, and automation necessary to confidently take action against threats. One of the primary challenges security teams face when responding to threats in the cloud is being able to answer simple questions like:

  • What is this cloud resource?
  • Who owns it?
  • Is this normal behavior for this resource, or is it abnormal?

Some of these questions can be answered with data, but some may require stakeholders outside the security team to weigh in, such as the Cloud Infrastructure or DevOps team. The traditional process of engaging these teams might mean that you spend precious time locating or opening a new channel in your ChatOps platform, and copying & pasting alert data alongside a manually-typed message asking for help. This works, but can quickly become inefficient and untenable with higher alert volumes. Rapid7 offers customers a solution to this challenge; what if that process could be automated?

Instead of forcing customers to manually pass data back and forth, Rapid7’s solutions provide a way to orchestrate the routing of cloud threat detections to the right communication channel, after gathering as much context as possible regarding the associated cloud resources automatically. This way, those responsible for responding to these threats can jump right into decision-making with all the data they need in a centralized place.

Despite the security challenges, the future is very much still going to be in the cloud. As security professionals, we work to ensure that cloud operations are as secure as they can be, while providing tools and workflows that make the work your security team does day in and day out more efficient and effective. Automation is just such an innovation. Request a demo of our Cloud Risk Complete and Threat Complete offerings to learn how Rapid7 can help your organization today!

[$] Checking page-cache status with cachestat()

Post Syndicated from original https://lwn.net/Articles/917096/

The kernel’s page cache holds pages from files in RAM, allowing those
pages to be accessed without expensive trips to persistent storage.
Applications are normally entirely unaware of the page cache’s operation;
it speeds things up and that is all that matters. Some applications,
though, can benefit from knowledge about how much of a given file is
present in the page cache at any given time; the proposed
cachestat() system call
from Nhat Pham is the latest in a long
series of attempts to make that information available.

CryWiper Data Wiper Targeting Russian Sites

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/12/crywiper-data-wiper-targeting-russian-sites.html

Kaspersky is reporting on a data wiper masquerading as ransomware that is targeting local Russian government networks.

The Trojan corrupts any data that’s not vital for the functioning of the operating system. It doesn’t affect files with extensions .exe, .dll, .lnk, .sys or .msi, and ignores several system folders in the C:\Windows directory. The malware focuses on databases, archives, and user documents.

So far, our experts have seen only pinpoint attacks on targets in the Russian Federation. However, as usual, no one can guarantee that the same code won’t be used against other targets.

Nothing leading to an attribution.

News article.

Slashdot thread.

Introducing Serverlesspresso Extensions

Post Syndicated from Benjamin Smith original https://aws.amazon.com/blogs/compute/introducing-serverlesspresso-extensions/

Today the Serverless DA team is launching Serverlesspresso Extensions, a new program that lets you contribute to Serverlesspresso. The best extensions will be added to the Serverlesspresso application running in production and featured on the AWS Compute Blog.

What is Serverlesspresso?

Serverlesspresso is a multi-tenant event-driven serverless application for a pop-up coffee bar that allows you to order from your phone. In 2022, Serverlesspresso processed over 20,000 orders at technology events around the world. At this year’s re:Invent, it was featured in the keynote of Amazon CTO Dr. Werner Vogels. It was showcased as an example of an event-driven application that can be easily evolved.

The architecture comprises several serverless apps and has been open-source and freely available since it was launched at re:Invent 2021.

What is extensibility?

Extensibility is the ability to add new functionality to an existing piece of software without modifying the core code already in place. Extensions for web browsers are an example of how useful extensibility can be. The core web browser code is not changed or affected when third parties write extensions, but end users can gain new, rich functionality not envisioned or intended by the original browser authors.

In many production business applications, extensibility can help you keep up with the pace of your users’ requests. It allows you to create new and useful functionality without having to rearchitect the core, original part of your code. Choosing an architectural style that supports this concept can help you retain flexibility as your users’ needs change.

How EDA supports extensibility

Serverlesspresso is built on an event-driven architecture (EDA). This is an architecture style that uses events to decouple an application’s components. Event-driven architecture offers an effective way to create loosely coupled communication between microservices. This makes it a good architectural choice when you are designing workloads that will require extensibility.

Loosely coupled microservices are able to scale and fail independently, increasing the resilience of the application. Development teams can build and release features for their team’s microservice quickly, without needing to worry about the behavior of other microservices in the application. In addition, new features can be added on top of existing events without making changes to the rest of the application.

Choreography and orchestration are two different models for how distributed services can communicate with one another. In orchestration, communication is more tightly controlled. A central service coordinates the interaction and order in which services are invoked.

Choreography achieves communication without tight control. Events flow between services without any centralized coordination. Many applications, including Serverlesspresso use both choreography and orchestration for different use cases. Event buses such as Amazon EventBridge can be used for choreography, and workflow orchestration services like AWS Step Functions can help build for orchestration.

New functional requirements come up all the time in production applications. We can address new requirements for an event-driven application by creating new rules for events in the event bus. These rules can add new functionality to the application without having any impact on the existing application stack.
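To make that concrete, here is a minimal Python sketch of attaching a new rule to the existing bus with the AWS SDK. This is not code from the Serverlesspresso repository: the detail-type and target function ARN are hypothetical placeholders, and the real event names are documented in the Serverlesspresso events catalog described later in this post.

    # Hypothetical extension sketch: route an existing Serverlesspresso event to a
    # new Lambda function by adding a rule to the "Serverlesspresso" event bus.
    # The detail-type and target ARN are placeholders.
    import boto3

    events = boto3.client("events")

    events.put_rule(
        Name="my-extension-order-completed",
        EventBusName="Serverlesspresso",
        EventPattern='{"detail-type": ["OrderCompleted"]}',  # placeholder event name
        State="ENABLED",
    )

    events.put_targets(
        Rule="my-extension-order-completed",
        EventBusName="Serverlesspresso",
        Targets=[{
            "Id": "extension-function",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-extension",  # placeholder
        }],
    )
    # The target Lambda also needs a resource-based permission allowing
    # events.amazonaws.com to invoke it (for example, via lambda add-permission).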

Characteristics of a Serverlesspresso EDA extension

  1. Extension resources do not have permission to interact with resources outside the extension definition (including core app resources).
  2. Extensions must contain at least one new EventBridge rule that routes existing Serverlesspresso events.
  3. Extensions can be deployed and deleted independently of other extensions and the core application.

Building a Serverlesspresso extension

This section shows how to build an extension for Serverlesspresso that adds new functionality while remaining decoupled from the core application. Anyone can contribute an extension to Serverlesspresso. Use the Serverlesspresso extensions GitHub repository to host your extension:

  1. Complete the GitHub issue template.
  2. Clone the repository. Duplicate and rename the example _extension_model directory.
  3. Add the associated extension template and source files.
  4. Add the required meta information to `README.md`.
  5. Make a pull request to the repository with the new extension files.

Additional guidance can be found in the repository’s PUBLISHING.md file.

Tools and resources to help you build

Event decoupling introduces a new set of challenges. Finding events and their schema can be a difficult process. Developers must coordinate with the team responsible for publishing an event, or look through documentation to find its schema, and then manually create an object for the event in order to use it in their code.

The Amazon EventBridge schema registry helps solve this challenge. It automatically finds events and their structure, or schema, and stores them in a shared central location. For Serverlesspresso Extensions, we have created the Serverlesspresso events catalog and filled it with events from the EventBridge schema registry. Here, all Serverlesspresso events have been documented to help you understand how to use them in your extensions. This includes the services that produce and consume each event, as well as example schemas for each event.

The event player

The event player is a Step Functions workflow that simulates 15 minutes of operation at the Serverlesspresso bar. It does this by replaying an array of realistic events. Use the event player to generate Serverlesspresso events, when building and testing your extensions. Each event is emitted onto an event bus named Serverlesspresso.

  1. Clone this repository: git clone https://github.com/aws-samples/serverless-coffee.git
  2. Change directory to the event player: cd extensibility/EventPlayer
  3. Deploy the EventPlayer using the AWS SAM CLI:
    sam build && sam deploy --guided

This deploys a Step Functions workflow and a custom event bus called “Serverlesspresso”.

Running the events player

  1. Open the event player from the AWS Management Console.
  2. Choose Start execution, leave the default input payload and choose Start execution.

The player takes approximately 15 minutes to complete.

About your extension submission

Extensions will be reviewed by the Serverless DA team within 14 days of submission. When you submit your extension, it becomes part of the open source offering and is covered by the existing license in the repo. It may be used by any customer under the same license. For additional guidance and ideas to help build your Serverlesspresso extensions, use the following resources:

Conclusion

You can now build extensions for Serverlesspresso, and potentially be featured on the AWS Compute Blog by submitting a Serverlesspresso extension. The best extensions will be added to Serverlesspresso in production.

Some demo extensions have been built and documented at https://github.com/aws-samples/serverless-coffee/tree/main/extensions. You can download and install these extensions to see how they are constructed before creating your own.

Visit the Serverless Workflows Collection to browse the many deployable workflows to help build your serverless applications.

A 10-minute guide to the Linux ABI (opensource.com)

Post Syndicated from original https://lwn.net/Articles/917052/

Alison Chaiken provides an
overview
of Linux ABI concerns on opensource.com.

Understanding the stable ABI is a bit subtle. Consider that, while
most of sysfs is stable ABI, the debug interfaces are guaranteed to
be unstable since they expose kernel internals to userspace. In
general, Linus Torvalds has pronounced that by “don’t break
userspace,” he means to protect ordinary users who “just want it to
work” rather than system programmers and kernel engineers, who
should be able to read the kernel documentation and source code to
figure out what has changed between releases.

The Melting-Away (Drama About the Assimilation of the Bulgarian) Nation

Post Syndicated from original https://yurukov.net/blog/2022/nsi-ethical-distribution/

After the first data from the National Statistical Institute (NSI) came out, there were, of course, plenty of exclamations about how much the nation has shrunk over the past 10 years. There is indeed a decline, and unfortunately it follows naturally from processes that began back in the late 1970s, deepened during the 1990s, and whose effects we see clearly today. The changed age structure of the people having children, the many children born and mostly remaining abroad, and the increased mortality during the pandemic all make the situation worse.

Of course, there is also a great deal of misunderstanding of the subject, which we saw put on loud, flamboyant display in recent months. It covers topics such as emigration, birth rates, mortality, and especially the influence of sexual health, education, and abortion.

So we can hardly be surprised by the apocalyptic, sensational headlines. What we did not see, however, were loud headlines after the NSI released the data on ethnic distribution from the 2021 census last week. Instead, it confirmed an observation, based on data from certain regions, that I have been repeating for years.

For years there was a lot of speculation, including about how valid the question is and even whether people should be allowed to state their own ethnicity at all; some insisted that census takers should judge by eye what people look like, with the argument that "well, it's obvious anyway." I have discussed at length in the comments here how absurd these and other claims are. It is hard to talk about the topic without touching on the theses so dear to our more extreme nationalist-minded, but historically rather unencumbered, compatriots. That is why I want to tell you why these data are in fact just as worrying as much of the rest of the census results.

What we saw, and what the media unanimously decided was not worth covering because it is not a scandalous sensation, is that the two largest minority groups in Bulgaria are in fact shrinking. True, the entire population is declining, but among ethnic Turks and Roma the decline is much more pronounced. While the decrease among ethnic Bulgarians is 9.6%, among those identifying as ethnic Turks it is 13.6%, and among Roma it is nearly double that, at 18%. So the apocalyptic forecasts over the years about "assimilation" actually point in the opposite direction: the share of the Roma population has fallen by 8.5%, while that of ethnic Bulgarians has increased.

This is exactly what I have been explaining for years, almost always to ridicule: emigration, especially among the Roma population, is significantly higher. The reasons are economic as well as social, discriminatory, and even punitive. Quite a few towns and villages in the country are effectively under the control of local feudal lords protected by the prosecution and by political deals. This is especially true in the Rhodopes. Segregation and the serious discrimination against Roma in particular create real obstacles to their advancement. That, along with the particularly severe poverty in some regions, makes emigration the only choice.

While one or more of these factors apply to most Bulgarian citizens, we now have clear data proving something that has long been observed. It was perhaps best illustrated by a man I met while volunteering at the polling station in Frankfurt years ago. While waiting, we got to talking about the difficulties in Germany and the attitude toward foreigners in general. He was of Roma origin and had worked as a builder in Bulgaria. He shared that "in Bulgaria they treat me like a gypsy, and in Germany they treat me like a gypsy, but here they at least give me a chance to do something meaningful." At that point he had a construction company, was putting bread on the table for 10 families, and was paying his taxes in Germany.

The examples are as varied as our emigration itself. I have seen far too many of them, but I prefer to talk about this phenomenon with data. The fact is that, as with the refugees, we are missing a great opportunity to integrate and work with all kinds of communities and cultures. All of this is because of some deep-seated bitterness and a misconceived model of homogeneity that did not exist until three generations ago.

So let's not talk about demography with emotions and impressions based on what someone saw in a hospital corridor or on some street, but with the understanding that the world is far more complex and colorful. And also with the understanding that we are doing all of this to ourselves, and it is not helping us at all.

The post The Melting-Away (Drama About the Assimilation of the Bulgarian) Nation first appeared on Блогът на Юруков.

Renewal of AWS CyberGRX assessment to enhance customers’ third-party due diligence process

Post Syndicated from Naranjan Goklani original https://aws.amazon.com/blogs/security/renewal-of-aws-cybergrx-assessment-to-enhance-customers-third-party-due-diligence-process/


Amazon Web Services (AWS) is pleased to announce the renewal of the AWS CyberGRX cyber risk assessment report. This third-party validated report helps customers perform effective cloud supplier due diligence on AWS and enhances their third-party risk management process.

With the increase in adoption of cloud products and services across multiple sectors and industries, AWS has become a critical component of customers’ third-party environments. Regulated customers are held to high standards by regulators and auditors when it comes to exercising effective due diligence on third parties.

Many customers use third-party cyber risk management (TPCRM) services such as CyberGRX to better manage risks from their evolving third-party environments and to drive operational efficiencies. To help with such efforts, AWS has completed the CyberGRX assessment of its security posture. CyberGRX security analysts perform the assessment and validate the results annually.

The CyberGRX assessment applies a dynamic approach to third-party risk assessment. This approach integrates advanced analytics, threat intelligence, and sophisticated risk models with vendors’ responses to provide an in-depth view of how a vendor’s security controls help protect against potential threats.

Vendor profiles are continuously updated as the risk level of cloud service providers changes, or as AWS updates its security posture and controls. This approach eliminates outdated static spreadsheets for third-party risk assessments, in which the risk matrices are not updated in near real time.

In addition, AWS customers can use the CyberGRX Framework Mapper to map AWS assessment controls and responses to well-known industry standards and frameworks, such as National Institute of Standards and Technology (NIST) 800-53, NIST Cybersecurity Framework, International Organization for Standardization (ISO) 27001, Payment Card Industry Data Security Standard (PCI DSS), and the U.S. Health Insurance Portability and Accountability Act (HIPAA). This mapping can reduce customers’ third-party supplier due-diligence burden.

Customers can access the AWS CyberGRX report at no additional cost. Customers can request access to the report by completing an access request form, available on the AWS CyberGRX page.

As always, we value your feedback and questions. Reach out to the AWS Compliance team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below. To learn more about our other compliance and security programs, see AWS Compliance Programs.

Want more AWS Security news? Follow us on Twitter.

Naranjan Goklani

Naranjan is a Security Audit Manager at AWS, based in Toronto (Canada). He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has more than 13 years of experience in risk management, security assurance, and performing technology audits. Naranjan previously worked in one of the Big 4 accounting firms and supported clients from the financial services, technology, retail, ecommerce, and utilities industries.

Email delta cost usage report in a multi-account organization using AWS Lambda

Post Syndicated from Ashutosh Dubey original https://aws.amazon.com/blogs/architecture/email-delta-cost-usage-report-in-a-multi-account-organization-using-aws-lambda/

Overview of solution

AWS Organizations gives customers the ability to consolidate their billing across accounts. This reduces billing complexity and centralizes cost reporting to a single account. These reports and cost information are available only to users with billing access to the primary AWS account.

In many cases, there are members of senior leadership or finance decision makers who don’t have access to AWS accounts, and therefore depend on individuals or additional custom processes to share billing information. This task becomes specifically complicated when there is a complex account organization structure in place.

In such cases, you can email cost reports periodically and automatically to these groups or individuals using AWS Lambda. In this blog post, you’ll learn how to send automated emails for AWS billing usage and consumption drifts from previous days.

Solution architecture


Figure 1. Account structure and architecture diagram

AWS provides the Cost Explorer API to enable you to programmatically query data for cost and usage of AWS services. This solution uses a Lambda function to query aggregated data from the API, format that data and send it to a defined list of recipients.

  1. Amazon EventBridge (Amazon CloudWatch Events) is configured to invoke the Lambda function at a specific time.
  2. The function uses the AWS Cost Explorer API to fetch the cost details for each account.
  3. The Lambda function calculates the change in cost over time and formats the information to be sent in an email.
  4. The formatted information is passed to Amazon Simple Email Service (Amazon SES).
  5. The report is emailed to the recipients configured in the environment variables of the function.
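As a rough sketch of steps 2 through 5 (simplified, and not the exact code shipped in the CloudFormation template used later in this post), the Lambda handler could look something like the following. The sender and recipient addresses are placeholders; in the actual solution they come from environment variables.

    # Simplified sketch: fetch two days of per-account cost from Cost Explorer,
    # compute the day-over-day delta, and email the result with Amazon SES.
    # Sender and recipient addresses are placeholders.
    import datetime
    import boto3

    ce = boto3.client("ce")
    ses = boto3.client("ses")

    def lambda_handler(event, context):
        today = datetime.date.today()
        resp = ce.get_cost_and_usage(
            TimePeriod={
                "Start": (today - datetime.timedelta(days=2)).isoformat(),
                "End": today.isoformat(),
            },
            Granularity="DAILY",
            Metrics=["UnblendedCost"],
            GroupBy=[{"Type": "DIMENSION", "Key": "LINKED_ACCOUNT"}],
        )

        def by_account(result):
            return {
                g["Keys"][0]: float(g["Metrics"]["UnblendedCost"]["Amount"])
                for g in result["Groups"]
            }

        prev = by_account(resp["ResultsByTime"][0])
        curr = by_account(resp["ResultsByTime"][1])
        lines = [
            f"{acct}: {curr[acct]:.2f} USD (delta {curr[acct] - prev.get(acct, 0.0):+.2f})"
            for acct in sorted(curr)
        ]

        ses.send_email(
            Source="cost-reports@example.com",                      # placeholder sender
            Destination={"ToAddresses": ["finance@example.com"]},   # placeholder recipients
            Message={
                "Subject": {"Data": "Daily AWS cost delta report"},
                "Body": {"Text": {"Data": "\n".join(lines)}},
            },
        )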

Prerequisites

For this walkthrough, you should have the following prerequisites:

Walkthrough

  • Download the AWS CloudFormation template from this link: AWS CloudFormation template
  • Once downloaded, open the template in your favorite text editor
  • Update account-specific variables in the template. You need to update the tuple, dictionary, display list, and display list monthly sections of the script for all the accounts which you want to appear in the daily report email. Refer to Figure 2 for an example of some dummy account IDs and email IDs.

Figure 2. Account IDs in AWS Lambda

  • Optionally, locate “def send_report_email” in the template. The subject variable controls the subject line of the email. This can be modified to something meaningful to the recipients.

After these changes are made according to your requirements, you can deploy the CloudFormation template:

  1. Log in to the CloudFormation console.
  2. Choose Create Stack. From the dropdown, choose With new resources (standard).
  3. On the next screen under Specify Template, choose Upload a template file.
  4. Click Choose file. Choose the local template you modified earlier, then choose Next.
  5. Fill out the parameter fields with valid email addresses. For ScheduleExpression, use a valid cron expression for when you would like the report sent. Choose Next.
    Here is an example for a cron schedule:  18 11 * * ? *
    (This example cron expression sets the schedule to send every day at 11:18 UTC time.)
    This creates the Lambda function and needed AWS Identity and Access Management (AWS IAM) roles.

You will now need to make a few modifications to the created resources.

  1. Log in to the IAM console.
  2. Choose Roles.
  3. Locate the role created by the CloudFormation template, called “daily-services-usage-lambdarole”.
  4. Under the Permissions tab, choose Add Permissions. From the dropdown, choose Attach Policy.
  5. In the search bar, search for “Billing”.
  6. Select the check box next to the AWS Managed Billing Policy and then choose Attach Policy.
  7. Log in to the AWS Lambda console.
  8. Choose the DailyServicesUsage function.
  9. Choose the Configuration tab.
  10. In the options that appear, choose General Configuration.
  11. Choose the Edit button.
  12. Change the timeout option to 10 seconds, because the default of three seconds may not be enough time to run the function to retrieve the cost details from multiple accounts.
  13. Choose Save.
  14. Still under the General Configuration tab, choose the Permissions option and validate the execution role.
    The edited IAM execution role should display the resources to which access has been granted. Figure 3 shows that the allow actions to aws-portal for Billing, Usage, PaymentMethods, and ViewBilling are enabled. If the Resource summary does not show these permissions, the IAM role is likely not correct. Go back to the IAM console and confirm that you updated the correct role with billing access.

Figure 3. Lambda role permissions

  • Optionally, in the left navigation pane, choose Environment variables. Here you will see the email recipients you configured in the CloudFormation template. If changes are needed to the list in the future, you can add or remove recipients by editing the environment variables. You can skip this step if you’re satisfied with the parameters you specified earlier.

Next, you will create a few Amazon SES identities for the email addresses that were provided as environment variables for the sender and recipients:

  1. Log in to the SES console.
  2. Under Configuration, choose Verified Identities.
  3. Choose Create Identity.
  4. Choose the identity type Email Address, fill out the Email address field with the sender email, and choose Create Identity.
  5. Repeat these steps for all recipient email addresses.

Each email address will receive a confirmation email. Once confirmed, the status shows as Verified on the Verified identities page of the SES console, and the verified addresses will start receiving the email with the cost reports.
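If you prefer to script this step, the following is a minimal boto3 sketch of verifying identities and sending an HTML report with the classic SES API; the addresses are placeholders.

```python
# Illustrative sketch only: verify the sender and recipient identities and send an
# HTML report through the classic SES API. The addresses are placeholders.
import boto3

ses = boto3.client("ses")

# Each call sends a verification email that the address owner must confirm.
for address in ["sender@example.com", "finance-team@example.com"]:
    ses.verify_email_identity(EmailAddress=address)

# After the identities show as Verified, the report can be sent.
ses.send_email(
    Source="sender@example.com",
    Destination={"ToAddresses": ["finance-team@example.com"]},
    Message={
        "Subject": {"Data": "AWS Daily Cost Report for Selected Accounts"},
        "Body": {"Html": {"Data": "<table>...formatted cost report...</table>"}},
    },
)
```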

Amazon EventBridge (CloudWatch) event configuration

To configure events:

    1. Go to the Amazon EventBridge console.
    2. Choose Create rule.
    3. Fill out the rule details with meaningful descriptions.
    4. Under Rule Type, choose Schedule.
    5. Set the cron pattern for when you would like the report to run (a boto3 sketch of an equivalent schedule appears after Figure 4).

Figure 4 shows that the highlighted rule is configured to run the Lambda function every 24 hours.


Figure 4. EventBridge rule
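If you prefer to create the schedule programmatically, the following is a minimal boto3 sketch of an equivalent rule; the Lambda ARN is a placeholder, and EventBridge still needs permission to invoke the function.

```python
# Illustrative sketch only: an equivalent schedule created through the EventBridge API.
# The Lambda ARN is a placeholder for the DailyServicesUsage function created earlier.
import boto3

events = boto3.client("events")
lambda_arn = "arn:aws:lambda:us-east-1:111122223333:function:DailyServicesUsage"

events.put_rule(
    Name="daily-cost-report",
    ScheduleExpression="cron(18 11 * * ? *)",   # every day at 11:18 UTC
    State="ENABLED",
)
events.put_targets(
    Rule="daily-cost-report",
    Targets=[{"Id": "daily-cost-report-lambda", "Arn": lambda_arn}],
)
# EventBridge also needs permission to invoke the function, for example via
# the Lambda add_permission API with principal events.amazonaws.com.
```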

An example AWS Daily Cost Report email

From: [email protected] (the email ID mentioned as “sender”)
Sent: Tuesday, April 12, 2022 1:43 PM
To: [email protected] (the email ID mentioned as “receiver”)
Subject: AWS Daily Cost Report for Selected Accounts (the subject of the email as set in the Lambda function)

Figure 5 shows the first part of the cost report. It provides the cost summary and the percentage change in cost compared to the previous day. You can also see the trend based on the last seven days in the same table, which helps in understanding cost and usage patterns.

This summary is broken down per account, and then totaled, in order to help you understand the accounts contributing to the cost changes. The daily change percentages are also color coded to highlight significant variations.


Figure 5. AWS Daily Cost Report email body part 1

The second part of the report in the email provides the service-level cost breakdown for each account configured in the account dictionary section of the function. This is a further drill-down report; you will receive one for every configured account.


Figure 6. AWS Daily Cost Report email body part 2

Cleanup

  • Delete the Amazon CloudFormation stack.
  • Delete the identities on Amazon SES.
  • Delete the Amazon EventBridge (CloudWatch) event rule.

Conclusion

This blog post demonstrates how you can automatically and seamlessly share your AWS accounts’ billing and change information with your leadership and finance teams daily (or on any schedule you choose). While the solution was designed for accounts that are part of an organization in AWS Organizations, it can also be deployed in a standalone account without any changes. This allows information sharing without the need to provide account access to the recipients, and it avoids any dependency on other manual processes. As a next step, you can also store these reports in Amazon Simple Storage Service (Amazon S3), generate a historical trend summary for consumption, and continue making informed decisions.

Additional reading

How to investigate and take action on security issues in Amazon EKS clusters with Amazon Detective – Part 2

Post Syndicated from Marshall Jones original https://aws.amazon.com/blogs/security/how-to-investigate-and-take-action-on-security-issues-in-amazon-eks-clusters-with-amazon-detective-part-2/

In part 1 of this two-part series, How to detect security issues in Amazon EKS cluster using Amazon GuardDuty, we walked through a real-world observed security issue in an Amazon Elastic Kubernetes Service (Amazon EKS) cluster and saw how Amazon GuardDuty detected each phase by following MITRE ATT&CK tactics.

In this blog post, we’ll walk you through investigative techniques to use with Amazon Detective, paired with the GuardDuty EKS and malware findings from the security issue. After we have identified impacted resources through our investigation, we’ll provide example remediation tactics and preventative controls to address and help prevent security issues in EKS clusters.

Amazon Detective can help you investigate security issues and related resources in your account. Detective provides EKS coverage that you can enable within your accounts. When this coverage is enabled, Detective can help investigate and remediate potentially unauthorized EKS activity that results from misconfiguration of the control plane nodes or application. Although GuardDuty is not a prerequisite to enable Detective, it is recommended that you enable GuardDuty to enhance the visualization capabilities in Detective with GuardDuty findings.

Prerequisites

You must have the following services enabled in your AWS account to generate and investigate findings associated with EKS security events in a similar manner as outlined in this blog. If you do not have GuardDuty enabled, you can still investigate with Detective, but in a limited capacity.

Investigate with Amazon Detective

In the five phases we walked through in part 1, we discussed GuardDuty findings and MITRE ATT&CK tactics that can help you detect and understand each phase of the unauthorized activity, from the initial misconfiguration to the impact on our application when the EKS cluster is used for crypto mining.

The next recommended step is to investigate the EKS cluster and any associated resources. Amazon Detective can help you to investigate whether there was any other related unauthorized activity in the environment. We will walk through Detective capabilities for visualizing and gathering important information to effectively respond to the security issue. If you’re interested in creating detailed incident response playbooks for your security team to follow in your own environment, refer to these sample AWS incident response playbooks.

Depending on your scenario, there are various resources you can use to start your investigation, such as Security Hub findings, GuardDuty findings, related Kubernetes subjects, or an AWS account’s AWS CloudTrail activity. For our walkthrough, we’ll start our investigation from the GuardDuty finding and use the EKS cluster resource to pivot to the Detective console, as shown in Figure 7. Although we initially focus on the EKS cluster, you could start from any entities that are supported in the Detective behavior graph structure in the Amazon Detective User Guide. For example, we could start directly with the Kubernetes subject system:anonymous and find activity associated with the anonymous user.


Figure 7: Example Detective popup from GuardDuty finding for EKS cluster

We’ll now go over the information that you would need to gather from Detective in order to investigate the example security issue.

To investigate EKS cluster findings with Detective

  1. In the GuardDuty console, navigate to an individual finding and hover over Investigate with Detective. Choose one of the specific resources to start. In the image below, we selected the EKS cluster resource to investigate with Detective. You will need to gather some preliminary information about the IAM roles associated with the EKS cluster.
    • Questions: When was the cluster created? What IAM role created the cluster? What IAM role is assigned to the cluster?
    • Why it matters: If you are an incident responder, these details can potentially help you identify the owner of the cluster and help you determine what IAM principals are involved.
    • What next: Start looking into each IAM principal’s activity, as seen in CloudTrail, to investigate whether the IAM entity itself is potentially compromised or what other resources may have been impacted.

    Figure 8: Detective summary page for EKS cluster metadata details

  2. Next, on the EKS cluster overview page, you can see the container details associated with the cluster.
    • Question: What are some of the other container details for the cluster? Does anything look out of the ordinary? Is it using a public image? Is it missing a network policy?
    • Why it matters: Based on the architecture related to this cluster, you might be able to use this information to determine whether there are unauthorized containers. The contents of unauthorized containers will depend on your organization but typically consist of public images or unauthorized RBAC, pod security policies, or network policy configurations. It’s important to keep in mind that when you look at data in Detective, the scope time is very important. When you pivot from a GuardDuty finding, the scope time will be set to the first time the GuardDuty finding was seen to the last time the finding was seen. The container details reflect the containers that were running during the selected scope time. Changing the scope time might change the containers that are listed in the table shown in Figure 9.
    • What next: Information found on this page can help to highlight unauthorized resources or configurations that will need to be remediated. You will also need to look at how these resources were initially created and if there are missing guardrails that should have been created during the provisioning of the cluster.

    Figure 9: Detective summary page for EKS container metadata details

  3. Finally, you will see associated security findings with this specific EKS cluster, similar to Figure 10, at the bottom of the EKS cluster overview page in Detective.
    • Question: Are there any other security findings associated with this cluster that I previously was not aware of?
    • Why it matters: In our example scenario, we walked through the findings that were initially detected and the events that unfolded from those findings. After further investigation, you might see other findings that were not part of the original investigation. This can occur if your security team is only investigating specific findings or severity values. The finding for PrivilegeEscalation:Kubernetes/PrivilegedContainer informs you that a privileged container was launched on your Kubernetes cluster by using an image that has never before been used to launch privileged containers in your cluster. A privileged container has root level access to the host. The other finding, Persistence:Kubernetes/ContainerWithSensitiveMount, informs you that a container was launched with a configuration that included a sensitive host path with write access in the volumeMounts section. This makes the sensitive host path accessible and writable from inside the container. Any finding associated to the suspicious or compromised cluster is valuable because it provides additional insight into what the unauthorized entity was trying to accomplish after the initial detection.
    • What next: With Detective, you might want to continue your investigation by selecting each of these findings and reviewing all details related to the finding. Depending on the findings, you could bring in additional team members to help investigate further. For this example, we will move on to the next step.

    Figure 10: Example Detective summary of security findings associated with the EKS cluster

  4. Shift from the EKS cluster overview section to the Kubernetes API activity section, similar to Figure 11 below. This will give you the opportunity to dig into the API activity associated with this cluster.
    1. Question: What other Kubernetes API activity was attempted from the cluster? Which API calls were successful? Which API calls failed? What was the unauthorized user trying to do?
    2. Why it matters: It’s important to determine which actions were successfully invoked by the unauthorized user so that appropriate remediation actions can be taken. You can look at trends of successful and failed API calls, and can even search by Subject, IP address, or Kubernetes API call.
    3. What next: You might want to look at all cluster role binding from days before the first GuardDuty finding was seen to determine if there was any other suspicious activity you should be investigating regarding the cluster.

    Figure 11: Example Detective summary page for Kubernetes API activity on the EKS cluster

  5. Next, you will want to look at the Newly observed Kubernetes API calls section, similar to Figure 12 below.
    • Question: What are some of the more recent Kubernetes API calls? What are they trying to access right now and are they successful? Do I need to start taking action for other resources outside of EKS?
    • Why it matters: This data shows Kubernetes subjects who were observed issuing API calls to this cluster for the first time during our scope time. Detective provides you this information by keeping a baseline of the activity associated with supported AWS resources. This can help you more quickly determine whether activity might be suspicious and worth looking into. In our example, we used the search functionality to look at API calls associated with the built-in Kubernetes secrets management. A common way to start your search is to see if an unauthorized user has successfully accessed any secrets, which can help you determine what information you might want to search in the overall API call volume section discussed in step 4.
    • What next: If the unauthorized user has successfully accessed any secret, those secrets should be marked as compromised, and they should be rotated immediately.

    Figure 12: Example Detective summary for newly observed Kubernetes API calls from the EKS cluster

  6. You can also consider the following question when you look at the Newly observed Kubernetes API calls section.
    • Question: Has the IP address associated with the finding been communicating with any other resources in our environment, and if so, what are the details of that communication?
    • Why it matters: To answer this question, you can use Detective’s search functionality and the ability to use wild cards to search for IP addresses with the same first three octets. Also note that you can use CIDR notation to search, as well. Based on the results in the example in Figure 13, you can see that there are a number of related IP addresses associated with the environment. With this information, you now can look at the traffic associated with these different IPs and what resources they were communicating with.

    Figure 13: Example Detective results page from a query against IP addresses associated with the EKS cluster

  7. You can select one of the IP addresses in the search results to get more information related to it, similar to Figure 14 below.
    1. Question: What was the first time an IP address was observed in the environment? When was the last time it was observed?
    2. Why it matters: You can use this information to start isolating where unauthorized activity is coming from and what actions are being taken. You can also start creating a time series of unauthorized activity and scope.
    3. What next: You can repeat some of the previous investigation steps for each IP address, like looking at the different tabs to review New behavior, Resource interaction, and Kubernetes activity.

    Figure 14: Example Detective results page for specific IP address and associated metadata details

In summary, we began our investigation with a GuardDuty finding about an anonymous API request that successfully used system:anonymous on one of our EKS clusters. We then used Detective to investigate and visualize activity associated with that EKS cluster, such as the volume of successful and unsuccessful API requests, where and when those actions were attempted, and other security findings associated with the resource. Having completed the investigation, we can confirm the scope and impact of the security event and start moving toward taking action.

Remediation techniques for Amazon EKS

In this section, we will focus on how to remediate the security issue in our example. Your actions will vary based on your organization and the resources affected. It’s important to note that these actions will impact the EKS cluster and associated workloads, and should accordingly be performed by or coordinated with the cluster operator.

Before you take action on the EKS cluster, you will need to preserve forensic artifacts and evidence for the impacted EKS resources. The order of operations for these actions matters, because you want to get all the data from forensic artifacts in order to determine the overall impact to the resources affected. If you quarantine resources before you capture forensic artifacts, there is a risk that running processes will be interrupted or that the malware attempts to destroy resources that are valuable to a forensics investigation, to cover its tracks.

To preserve forensic evidence

  1. Enable termination protection on the impacted worker node and change the shutdown behavior to Stop.
  2. Label the offending pod or node with a label indicating that it is part of an active investigation.
  3. Cordon the worker node (steps 2 and 3 are illustrated in the sketch after this list).
  4. Capture both volatile (temporary memory) and non-volatile (Amazon EBS snapshots) artifacts on the worker node.
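The following is a minimal sketch of steps 2 and 3, assuming the Kubernetes Python client; the node, pod, and namespace names are placeholders for the impacted resources identified during the investigation.

```python
# Illustrative sketch of steps 2 and 3 with the Kubernetes Python client.
# The pod, namespace, and node names are placeholders.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside the cluster
core = client.CoreV1Api()

# Step 2: label the offending pod as part of an active investigation
core.patch_namespaced_pod(
    name="suspicious-pod",
    namespace="default",
    body={"metadata": {"labels": {"investigation": "active"}}},
)

# Step 3: cordon the worker node so that no new pods are scheduled on it
core.patch_node("ip-10-0-1-20.ec2.internal", {"spec": {"unschedulable": True}})
```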

Now that you have the forensic evidence, you can start to quarantine your EKS resources to restrict unauthorized network communication. The main objective is to prevent the affected EKS pods from communicating with internal resources or exfiltrating data externally.

To quarantine EKS resources

  1. Isolate the pod by creating a network policy that denies ingress and egress traffic to the pod (see the sketch after this list).
  2. Attach a security group to the host and remove inbound and outbound rules. Take this action if you believe the underlying host has been compromised.

    Depending on existing inbound and outbound rules on the security group, the connections will either be tracked or untracked. Applying an isolation security group will drop untracked connections. For tracked connections, new connections with the host will not be allowed from the isolation security group, but existing tracked connections will not be interrupted.

    Important: This action will affect all containers running on the host.

  3. Attach a deny rule for the EKS resources in a network access control list (network ACL). Because network ACLs are stateless firewalls, all connections will be interrupted, whether they are tracked or untracked connections.

    Important: This action will affect all subnets using the network ACL and all resources within those subnets.
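The following is a minimal sketch of step 1, assuming the Kubernetes Python client, a CNI plugin that enforces network policies, and the placeholder label applied during evidence preservation.

```python
# Illustrative sketch of step 1: a deny-all NetworkPolicy selecting the labeled pod.
# The namespace and label are placeholders; enforcement requires a policy-capable CNI plugin.
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

quarantine_policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="quarantine-investigation", namespace="default"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"investigation": "active"}),
        policy_types=["Ingress", "Egress"],   # no ingress or egress rules listed, so all traffic is denied
    ),
)
networking.create_namespaced_network_policy(namespace="default", body=quarantine_policy)
```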

At this point, the affected EKS resources are quarantined, but the cluster is still configured to allow anonymous, unauthenticated access. You will need to remove all unauthorized permissions that were created or added.

To remove unauthorized permissions

  1. Update the RBAC configuration to remove system:anonymous access.
  2. Revoke temporary security credentials that are assigned to the pod or worker node, if necessary. You can also remove the IAM role associated with the EKS resources.

    Note: Removing IAM policies or attaching IAM policies to restrict permissions will affect the resources that are using the IAM role.

  3. Remove any unauthorized ClusterRoleBinding created by the system:anonymous user (see the sketch after this list).
  4. Redeploy the compromised pod or workload resource.
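The following is a minimal sketch of finding and deleting such bindings, assuming the Kubernetes Python client; review each match before deleting anything in your own environment.

```python
# Illustrative sketch of steps 1 and 3: find ClusterRoleBindings whose subjects include
# the anonymous user or the unauthenticated group, and delete them.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

blocked_subjects = {"system:anonymous", "system:unauthenticated"}

for binding in rbac.list_cluster_role_binding().items:
    subjects = binding.subjects or []
    if any(subject.name in blocked_subjects for subject in subjects):
        print(f"Deleting ClusterRoleBinding {binding.metadata.name}")
        rbac.delete_cluster_role_binding(name=binding.metadata.name)
```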

The actions taken so far primarily target the EKS resource, but based on our Detective investigation, there are other actions you might need to take. Because secrets were involved that could be used outside of the EKS cluster, those secrets will need to be rotated wherever they are referenced. Detective will also suggest additional areas where you can investigate and remediate additional unauthorized activity in your AWS account.

It is important that your team go through game days or run-throughs for investigating and responding to different scenarios in order to make sure the team is prepared. You can run through the EKS security workshop to get your security team more familiar with remediation for EKS.

For more information about responding to EKS cluster related security issues, refer to GuardDuty EKS remediation in the GuardDuty User Guide and the EKS Best Practices Guide.

Preventative controls for EKS

This section covers several preventative controls that you can use to protect EKS clusters.

How can I prevent external access to the EKS cluster?

To help prevent external access to your EKS clusters, limit the exposure of your API server. You can achieve that in two ways:

  1. Set the API server endpoint access to Private. This will effectively forbid anyone outside of the VPC to send Kubernetes API requests to your EKS cluster.
  2. Set an IP address allow list for the EKS cluster public access endpoint (both options are shown in the sketch after this list).
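Both options can be applied with the EKS API; the following is a minimal boto3 sketch with a placeholder cluster name and CIDR range.

```python
# Illustrative sketch only; the cluster name and CIDR range are placeholders.
import boto3

eks = boto3.client("eks")

# Option 1: make the API server endpoint private only
eks.update_cluster_config(
    name="example-cluster",
    resourcesVpcConfig={"endpointPublicAccess": False, "endpointPrivateAccess": True},
)

# Option 2: keep the public endpoint but restrict it to an allow list of CIDR ranges
# (only one cluster configuration update can be in progress at a time)
# eks.update_cluster_config(
#     name="example-cluster",
#     resourcesVpcConfig={
#         "endpointPublicAccess": True,
#         "endpointPrivateAccess": True,
#         "publicAccessCidrs": ["203.0.113.0/24"],
#     },
# )
```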

How can I prevent giving admin access to the EKS cluster?

To help prevent an EKS cluster user from granting any type of access to anonymous or unauthenticated users, you can set up a ValidatingAdmissionWebhook. This is a special type of Kubernetes admission controller that can be configured in the Kubernetes API. (To learn how to build serverless admission webhooks, see the blog post Building serverless admission webhooks for Kubernetes with AWS SAM.)

The ValidatingAdmissionWebhook will deny a Kubernetes API request that matches all of the following checks (a sketch of this logic appears after the list):

  1. The request is creating or modifying a ClusterRoleBinding or RoleBinding.
  2. The subjects section contains either of the following:
    • The user system:anonymous
    • The group system:unauthenticated
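As an illustration only (not the implementation from the referenced blog post), the following is a minimal sketch of that decision logic as a Flask handler; TLS configuration, deployment, and the webhook registration itself are omitted.

```python
# Illustrative sketch of the webhook's decision logic as a small Flask handler.
from flask import Flask, request, jsonify

app = Flask(__name__)
BLOCKED_SUBJECTS = {"system:anonymous", "system:unauthenticated"}

@app.route("/validate", methods=["POST"])
def validate():
    req = request.get_json()["request"]
    allowed = True
    # Check 1: the request creates or modifies a ClusterRoleBinding or RoleBinding
    if req["kind"]["kind"] in ("ClusterRoleBinding", "RoleBinding"):
        # Check 2: the subjects section names the anonymous user or unauthenticated group
        subjects = req["object"].get("subjects") or []
        if any(s.get("name") in BLOCKED_SUBJECTS for s in subjects):
            allowed = False
    return jsonify({
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {"uid": req["uid"], "allowed": allowed},
    })
```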

How can I prevent malicious images from being deployed?

Now that you have set controls to prevent external access to the EKS cluster and prevent granting access to anonymous users, you can focus on preventing the deployment of potentially malicious images.

Malicious container images can have different origins, including:

  1. Images stored in public or unauthorized registries
  2. Images replacing the ones that are stored in authorized registries
  3. Authorized images that contain software with existing or newly discovered vulnerabilities

You can address these sources of malicious images by doing the following:

  1. Use admission controllers to verify that images meet your organization’s requirements, including for the image origin. You can also refer to this blog post to implement a solution with a webhook and admission controllers.
  2. Enable tag immutability in your registry, a control that prevents an actor from maliciously replacing container images without changing the image’s tags. Additionally, you can enable an AWS Config rule to check tag immutability.
  3. Configure another ValidatingAdmissionWebhook that will only accept images if they meet all of the following criteria.
    1. Images that come from approved registries.
    2. Images that pass the vulnerability scan during deployment time.
    3. Images that are signed by a trusted party. Amazon Elastic Container Registry (Amazon ECR) is working on a product enhancement to store image signatures. Currently, you can use an open-source cosign tool to verify and store image signatures.

      Note: These criteria can vary based on your use case and internal security and compliance standards.

The above controls will help prevent the deployment of a vulnerable, unauthorized, or potentially malicious container image.

How can I prevent lateral movement inside the cluster?

To prevent lateral movement inside the cluster, it is recommended to use network policies, as follows:

  • Enforce Kubernetes network policies to enforce ingress and egress controls within the cluster. You can implement these policies by following the steps in the Securing your cluster with network policies EKS workshop.

It’s important to note that you could use security groups for the same purpose, but pod security groups should only be used if the cluster is compromised and when you want to control the traffic between a pod and a resource that resides in the VPC, not inter-pod traffic.

In this section, we’ve reviewed different preventative controls that could have helped mitigate our example security incident. With the first preventative control, we could have prevented external actors from connecting to the API server. The second control could have prevented granting access to anonymous users. The third control could have prevented the deployment of an unauthorized or vulnerable container image. Finally, the fourth control could have helped limit the impact of the deployed vulnerable images to only the pods where the images were deployed, making it harder to laterally move to other pods in the cluster.

Conclusion

In this post, we walked you through how to investigate an EKS cluster related security issue with Amazon Detective. We also provided some recommended remediation and preventative controls to put in place for EKS cluster specific security issues. When you pair GuardDuty’s continuous threat detection and monitoring with Detective’s organization and visualization capabilities, you enable your security team to conduct faster and more effective investigations. By giving the security team the ability to quickly view an organized set of data associated with security events in your AWS account, you reduce the overall mean time to respond (MTTR).

Now that you understand the investigative capabilities of Detective, it’s time to try things out! It is important that you provide a mechanism for your security team to practice detection, investigation, and remediation techniques using security incident response simulations. By periodically running simulations, your security team will be prepared to quickly respond to possible security events. For more detailed incident response playbooks that can assist you in preparing for events in your environment, see these sample AWS incident response playbooks.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a thread on Amazon GuardDuty re:Post.

Want more AWS Security news? Follow us on Twitter.

Author

Marshall Jones

Marshall is a worldwide senior security specialist solutions architect at AWS. His background is in AWS consulting and security architecture, focused on a variety of security domains including edge, threat detection, and compliance. Today, he helps enterprise customers adopt and operationalize AWS security services to increase security effectiveness and reduce risk.

Jonathan Nguyen


Jonathan is a shared delivery team senior security consultant at AWS. His background is in AWS security, with a focus on threat detection and incident response. He helps enterprise customers develop a comprehensive AWS security strategy, deploy security solutions at scale, and train customers on AWS security best practices.

Manuel Martinez Arizmendi


Manuel works as a Security Engineer at Amazon Detective, providing new security investigation capabilities to AWS customers. Based in Boston, MA, and originally from Madrid, Spain, when he’s not at work he enjoys playing and watching soccer, playing video games, and hanging out with his friends.

Perform multi-cloud analytics using Amazon QuickSight, Amazon Athena Federated Query, and Microsoft Azure Synapse

Post Syndicated from Harish Rajagopalan original https://aws.amazon.com/blogs/big-data/perform-multi-cloud-analytics-using-amazon-quicksight-amazon-athena-federated-query-and-microsoft-azure-synapse/

In this post, we show how to use Amazon QuickSight and Amazon Athena Federated Query to build dashboards and visualizations on data that is stored in Microsoft Azure Synapse databases.

Organizations today use data stores that are best suited for the applications they build. Additionally, they may also continue to use some of their legacy data stores as they modernize and migrate to the cloud. These disparate data stores might be spread across on-premises data centers and different cloud providers. This presents a challenge for analysts to be able to access, visualize, and derive insights from the disparate data stores.

QuickSight is a fast, cloud-powered business analytics service that enables employees within an organization to build visualizations, perform ad hoc analysis, and quickly get business insights from their data on their devices anytime. Amazon Athena is a serverless interactive query service that provides full ANSI SQL support to query a variety of standard data formats, including CSV, JSON, ORC, Avro, and Parquet, that are stored on Amazon Simple Storage Service (Amazon S3). For data that isn’t stored on Amazon S3, you can use Athena Federated Query to query the data in place or build pipelines that extract data from multiple data sources and store it in Amazon S3.

Athena uses data source connectors that run on AWS Lambda to run federated queries. A data source connector is a piece of code that can translate between your target data source and Athena. You can think of a connector as an extension of Athena’s query engine. In this post, we use the Athena connector for Azure Synapse Analytics, which enables Athena to run SQL queries on your Azure Synapse databases using JDBC.

Solution overview

Consider the following reference architecture for visualizing data from Azure Synapse Analytics.

In order to explain this architecture, let’s walk through a sample use case of analyzing fitness data of users. Our sample dataset contains users’ fitness information like age, height, and weight, and daily run stats like miles, calories, average heart rate, and average speed, along with hours of sleep.

We run queries on this dataset to derive insights using visualizations in QuickSight. With QuickSight, you can create trends of daily miles run, keep track of the average heart rate over a period of time, and detect anomalies, if any. You can also track your daily sleep patterns and compare how rest impacts your daily activities. The out-of-the-box insights feature gives vital weekly insights that can help you be on top of your fitness goals. The following screenshot shows sample rows of our dataset stored in Azure Synapse.


Prerequisites

Make sure you have the following prerequisites:

  • An AWS account set up with QuickSight enabled. If you don’t have a QuickSight account, you can sign up for one. You can access the QuickSight free trial as part of the AWS Free Tier option.
  • An Azure account with data pre-loaded in Synapse. We use a sample fitness dataset in this post. We used a data generator to generate this data.
  • A virtual private network (VPN) connection between AWS and Azure.

Note that the AWS resources for the steps in this post need to be in the same Region.

Configure a Lambda connector

To configure your Lambda connector, complete the following steps:

  1. Load the data.
    In the Azure account, the sample data for fitness devices is stored and accessed in an Azure Synapse Analytics workspace using a dedicated SQL pool table. The firewall settings for Synapse should allow access from your VPC through the VPN. You can use your own data or the tables that you need to connect to QuickSight in this step.
  2. On the Amazon S3 console, create a spillover bucket and note the name to use in a later step.
    This bucket is used for storing the spillover data for the Synapse connector.
  3. On the AWS Serverless Application Repository console, choose Available applications in the navigation pane.
  4. On the Public applications tab, search for synapse and choose AthenaSynapseConnector with the AWS verified author tag.
  5. Create the Lambda function with the following configuration:
    1. For Name, enter AthenaSynapseConnector.
    2. For SecretNamePrefix, enter AthenaJdbcFederation.
    3. For SpillBucket, enter the name of the S3 bucket you created.
    4. For DefaultConnectionString, enter the JDBC connection string from the Azure SQL pool connection strings property.
    5. For LambdaFunctionName, enter a function name.
    6. For SecurityGroupIds and SubnetIds, enter the security group and subnet for your VPC (this is needed for the template to run successfully).
    7. Leave the remaining values as their default.
  6. Choose Deploy.
  7. After the function is deployed successfully, navigate to the athena_hybrid_azure function.
  8. On the Configurations tab, choose Environment variables in the navigation pane.
  9. Add the key azure_synapse_demo_connection_string with the same value as the default key (the JDBC connection string from the Azure SQL pool connection strings property).

    For this post, we removed the VPC configuration.
  10. Choose VPC in the navigation pane and choose None to remove the VPC configuration.
    Now you’re ready to configure the data source.
  11. On the Athena console, choose Data sources in the navigation pane.
  12. Choose Create data source.
  13. Choose Microsoft Azure Synapse as your data source.
  14. Choose Next.
  15. Create a data source with the following parameters:
    1. For Data source name, enter azure_synapse_demo.
    2. For Connection details, choose the Lambda function athena_hybrid_azure.
  16. Choose Next.
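Optionally, before moving on to QuickSight, you can confirm that the new catalog works by running a federated query through the Athena API. The following is a minimal boto3 sketch; the table name and S3 output location are placeholders.

```python
# Illustrative sketch only: run a federated query against the new catalog and print the row count.
import time
import boto3

athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString='SELECT * FROM "azure_synapse_demo"."dbo"."fitness_data" LIMIT 10',
    ResultConfiguration={"OutputLocation": "s3://example-athena-results-bucket/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"Returned {len(rows) - 1} data rows")   # the first row contains column headers
```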

Create a dataset on QuickSight to read the data from Azure Synapse

Now that the configuration on the Athena side is complete, let’s configure QuickSight.

  1. On the QuickSight console, on the user name menu, choose Manage QuickSight.
  2. Choose Security & permissions in the navigation pane.
  3. Under QuickSight access to AWS services, choose Manage.
  4. Choose Amazon Athena and in the pop-up permissions box, choose Next.
  5. On the S3 Bucket tab, select the spill bucket you created earlier.
  6. On the Lambda tab, select the athena_hybrid_azure function.
  7. Choose Finish.
  8. If the QuickSight access to AWS services window appears, choose Save.
  9. Choose the QuickSight icon on the top left and choose New dataset.
  10. Choose Athena as the data source.
  11. For Data source name, enter a name.
  12. Check the Athena workgroup settings where the Athena data source was created.
  13. Choose Create data source.
  14. Choose the catalog azure_synapse_demo and the database dbo.
  15. Choose Edit/Preview data.
  16. Change the query mode to SPICE.
  17. Choose Publish & Visualize.
  18. Create an analysis in QuickSight.
  19. Publish a QuickSight dashboard.

If you’re new to QuickSight or looking to build stunning dashboards, this workshop provides step-by-step instructions to grow your dashboard building skills from basic to advanced level. The following screenshot is an example dashboard to give you some inspiration.

Clean up

To avoid ongoing charges, complete the following steps:

  1. Delete the S3 bucket created for the Athena spill data.
  2. Delete the Athena data source.
  3. On the AWS CloudFormation console, select the stack you created for AthenaSynapseConnector and choose Delete.
    This will delete the created resources, such as the Lambda function. Check the stack’s Events tab to track the progress of the deletion, and wait for the stack status to change to DELETE_COMPLETE.
  4. Delete the QuickSight datasets.
  5. Delete the QuickSight analysis.
  6. Delete your QuickSight subscription and close the account.

Conclusion

In this post, we showed you how to overcome the challenges of connecting to and analyzing data in other clouds by using AWS analytics services to connect to Azure Synapse Analytics with Athena Federated Query and QuickSight. We also showed you how to visualize and derive insights from the fitness data using QuickSight. With QuickSight and Athena Federated Query, organizations can now access additional data sources beyond those already supported natively by QuickSight. If you have data in sources other than Amazon S3, you can use Athena Federated Query to analyze the data in place or build pipelines that extract and store data in Amazon S3.

For more information and resources for QuickSight and Athena, visit Analytics on AWS.


About the authors

Harish Rajagopalan is a Senior Solutions Architect at Amazon Web Services. Harish works with enterprise customers and helps them with their cloud journey.

Salim Khan is a Specialist Solutions Architect for Amazon QuickSight. Salim has over 16 years of experience implementing enterprise business intelligence (BI) solutions. Prior to AWS, Salim worked as a BI consultant catering to industry verticals like Automotive, Healthcare, Entertainment, Consumer, Publishing and Financial Services. He has delivered business intelligence, data warehousing, data integration and master data management solutions across enterprises.

Sriram Vasantha is a Senior Solutions Architect at AWS in Central US helping customers innovate on the cloud. Sriram focuses on application and data modernization, DevSecOps, and digital transformation. In his spare time, Sriram enjoys playing different musical instruments like Piano, Organ, and Guitar.

Adarsha Nagappasetty is a Senior Solutions Architect at Amazon Web Services. Adarsha works with enterprise customers in Central US and helps them with their cloud journey. In his spare time, Adarsha enjoys spending time outdoors with his family!
