Covering Up Problems in the Urban Environment

Post Syndicated from original https://yurukov.net/blog/2021/gradska-sreda/

Much can be said about the problems in our cities and the lack of solutions to them. Sofia stands out in this respect not only as the largest and busiest city, but also as the one with the worst infrastructure. The failure is disproportionate to the money poured into the city for repairs and renovation, and is largely due to a mix of negligence, lack of organization and oversight, and deliberate turning of a blind eye in favor of connected interests.

Here I will show you one example. Two weeks ago, on February 13, I was passing by the park in front of the national stadium and noticed that one of the billboards was fastened strangely. Instead of the anchor bolts for which there was room in the base, someone had welded on small plates and bolted those down with bolts similar to the ones I used to fasten my daughter's swing. The whole assembly was badly rusted, the welds were visibly flaking, and the plates had thinned considerably. I photographed the problem, intending to file a report later.

24 hours later, on Sunday, a child was electrocuted in front of the UMBAL „Свети Иван Рилски“ hospital because of an exposed cable tied to a fence in a similarly amateurish, irresponsible way. I opened my photos of the billboard again and noticed the same problem there: a cable simply set in concrete in the ground, hanging loose at the base of the column. By then I had already submitted a report through the Sofia Municipality's portal. I posted the photos on Facebook, and in the comments it came up that the cable is not even the right type for this use.

I took the trouble to search the Google Street View history to see how the billboard in question has looked over the years.

Here you can see the billboard in 2012, properly bolted to the ground.
In 2015 the billboard was replaced, but again bolted down.
By 2018 the same billboard already has a concreted base.
By August 2019 it is in its current state, with the cable sticking out.

This means the billboard stood welded to the ground like this for at least a year and a half. Even in August 2019, serious rust is visible on the plates.

I found no installation permit or any documentation for the structure in the registers of the Sofia Municipality's Architecture and Urban Planning directorate. There do not appear to be any orders issued against it, or a removal order either. There is no register of movable structures in Sofia, something we have long been calling for. In light of the recent tragic case, it is clear how urgently this is needed, along with a register of all similar elements, such as manholes, and a simplified procedure for removing structures.

After a short search online, I found that the owner of the billboard, at least according to their website, is Джей Си Деко Имидж АД, trading as JCDecaux Image. They have over 400 such billboards in Sofia alone. Whether they have owned this one for the past year and a half I cannot say, but they are definitely selling advertising space on it. While searching for the owner, I actually came across a surprisingly large number of such structures in the city. We see them all the time, but when you see them plotted on a map, you realize how absurdly overcrowded our cities are visually. This, of course, is just one more example of the poor management of the city and its infrastructure.

This particular billboard, however, is not just an eyesore; it is dangerous. So I filed a report hoping that it would be secured, properly installed or, in the best case, removed. Four days after my report, however, the result is something else entirely.

They simply hid what is underneath. I do not know whether the municipality or the owner did it. It still stands like this. The cable still sticks out. The plates remain half-welded and badly corroded, and all of it is shielded from the eyes of annoying citizens by a few riveted sheets of metal. The expectation, apparently, is that the case will be forgotten, at least until the billboard falls over.

Many problems in the urban environment are "solved" this way: plastered over, covered up, hastily concreted, just to hide them from view, until the street cracks or caves in, a facade or roof collapses, and someone dies in the process. Then come the declarations again, the inspections, and the silence about who is responsible and who failed to do their job.

The truth is that the problem, as in any organization, public or private, comes from the very top. When the leadership does not set an example, not only with its actions but with its attitude toward such problems, there is no way those below will pay attention to the critical things, let alone the finer details. Then there is the lack of consequences. After years of petty dealings, servicing of schemes, parachuting in unqualified appointees, and no overall vision for what is happening to our cities, local administrations have practically given up on such problems. This demoralizes people and leaves ever less hope that such reports will get attention, let alone that the urban environment will improve. Especially in Sofia.


VMware vCenter Server CVE-2021-21972 Remote Code Execution Vulnerability: What You Need to Know

Post Syndicated from boB Rudis original https://blog.rapid7.com/2021/02/24/vmware-vcenter-server-cve-2021-21972-remote-code-execution-vulnerability-what-you-need-to-know/


This blog post was co-authored by Bob Rudis and Caitlin Condon.

What’s up?

On Feb. 23, 2021, VMware published an advisory (VMSA-2021-0002) describing three weaknesses affecting VMware ESXi, VMware vCenter Server, and VMware Cloud Foundation.

Before digging into the individual vulnerabilities: it is vital that all organizations that use the HTML5 VMware vSphere Client, i.e., VMware vCenter Server (7.x before 7.0 U1c, 6.7 before 6.7 U3l, and 6.5 before 6.5 U3n) and VMware Cloud Foundation (4.x before 4.2 and 3.x before 3.10.1.2), immediately restrict network access to those clients (especially if they are not segmented off on a management network), implement the mitigation noted below, and consider performing accelerated/immediate patching on those systems.

Vulnerability details and recommendations

CVE-2021-21972 is a critical (CVSSv3 base 9.8) unauthenticated remote code execution vulnerability in the HTML5 vSphere client. Any malicious actor with access to port 443 can exploit this weakness and execute commands with unrestricted privileges.

PT Swarm has provided a detailed walkthrough of this weakness and how to exploit it.

Rapid7 researchers have independently analyzed, tested, and confirmed the exploitability of this weakness and have provided a full technical analysis.

Proof-of-concept working exploits are beginning to appear on public code-sharing sites.

Organizations should restrict access to this plugin and patch affected systems immediately (i.e., not wait for standard patch change windows).
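As a quick, hedged way to gauge exposure, the following sketch probes the unauthenticated plugin endpoint that public analyses of CVE-2021-21972 describe. The endpoint path is taken from those reports and should be treated as an assumption; the script only checks reachability, it does not exploit anything:

import sys
import requests
import urllib3

urllib3.disable_warnings()  # vCenter commonly presents a self-signed certificate

host = sys.argv[1]  # for example: vcenter.example.internal
url = f"https://{host}/ui/vropspluginui/rest/services/getstatus"
resp = requests.get(url, verify=False, timeout=10)

# An HTTP 200 without authentication suggests the vulnerable plugin endpoint
# is reachable; a 404/405 suggests the plugin is disabled or the system patched.
print(url, "->", resp.status_code)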

VMware has provided steps for a temporary mitigation, which involves disabling the plugin.

CVE-2021-21974 is an important (CVSSv3 base 8.8) heap-overflow-based remote code execution vulnerability in VMware ESXi OpenSLP. Attackers with same-segment network access to port 427 on affected systems may be able to use the heap-overflow weakness to perform remote code execution.

VMware has provided steps for a temporary mitigation, which involves disabling the SLP service on affected systems.

Rapid7 recommends applying the vendor-provided patches as soon as possible after performing the vendor-recommended mitigation.

CVE-2021-21973 is a moderate (CVSSv3 base 5.3) server-side request forgery vulnerability affecting the HTML5 vSphere Client. Attackers with access to port 443 of affected systems can use this weakness to gain access to underlying system information.

VMware has provided steps for a temporary mitigation, which involves disabling the plugin.

Since attackers will already be focusing on VMware systems due to the other high-severity weaknesses, Rapid7 recommends applying the vendor-provided patches as soon as possible after performing the vendor-recommended mitigation.

Attacker activity

Rapid7 Labs has not detected broad scanning for internet-facing VMware vCenter servers, but Bad Packets has reported that they’ve detected opportunistic scanning. We will continue to monitor Project Heisenberg for attacker activity and update this blog post as we have more information.


[$] A pair of Python vulnerabilities

Post Syndicated from original https://lwn.net/Articles/846847/rss

Two separate vulnerabilities led to the fast-tracked release of Python 3.9.2 and 3.8.8 on February 19, though source-only releases of 3.7.10 and 3.6.13 came a few days earlier. The vulnerabilities may be problematic for some Python users and workloads; one could potentially lead to remote code execution. The other is, arguably, not exactly a flaw in the Python standard library (it simply also follows an older standard), but it can lead to web cache poisoning attacks.
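The cache-poisoning issue stems from urllib.parse splitting query strings on both "&" and ";", while most proxies and caches split only on "&". The fixed releases split only on "&" by default and add a separator argument; a short illustration of the behavior change:

from urllib.parse import parse_qsl

query = "a=1;b=2"

# Releases before the fix split on both '&' and ';': [('a', '1'), ('b', '2')]
# Fixed releases split only on '&' by default:       [('a', '1;b=2')]
print(parse_qsl(query))

# The fixed releases accept an explicit separator to opt back in:
print(parse_qsl(query, separator=";"))  # [('a', '1'), ('b', '2')]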

Run usage analytics on Amazon QuickSight using AWS CloudTrail

Post Syndicated from Sunil Salunkhe original https://aws.amazon.com/blogs/big-data/run-usage-analytics-on-amazon-quicksight-using-aws-cloudtrail/

Amazon QuickSight is a cloud-native BI service that allows end users to create and publish dashboards in minutes, without provisioning any servers or requiring complex licensing. You can view these dashboards on the QuickSight product console or embed them into applications and websites. After you deploy a dashboard, it’s important to assess how it and other assets are being adopted, accessed, and used across various departments or customers.

In this post, we use a QuickSight dashboard to present the following insights:

  • Most viewed and accessed dashboards
  • Most updated dashboards and analyses
  • Most popular datasets
  • Active users vs. idle users
  • Idle authors
  • Unused datasets (wasted SPICE capacity)

You can use these insights to reduce costs and create operational efficiencies in a deployment. The following diagram illustrates this architecture.


Solution components

The following table summarizes the AWS services and resources that this solution uses.

Resource Type | Name | Purpose
AWS CloudTrail logs | CloudTrailMultiAccount | Captures all API calls for all AWS services across all AWS Regions for this account. You can use AWS Organizations to consolidate trails across multiple AWS accounts.
AWS Glue crawlers | QSCloudTrailLogsCrawler, QSProcessedDataCrawler | Ensure that all CloudTrail data is crawled periodically and that partitions are updated in the AWS Glue Data Catalog.
AWS Glue ETL job | QuickSightCloudTrailProcessing | Reads catalogued data from the crawler, processes, transforms, and stores it in an S3 output bucket.
AWS Lambda function | ExtractQSMetadata_func | Extracts event data using the AWS SDK for Python, Boto3. The event data is enriched with QuickSight metadata objects like user, analysis, datasets, and dashboards.
Amazon Simple Storage Service (Amazon S3) | CloudTrailLogsBucket, QuickSight-BIonBI-processed | One bucket stores CloudTrail data. The other stores processed data.
Amazon QuickSight | Quicksight_BI_On_BO_Analysis | Visualizes the processed data.

Solution walkthrough

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. You can use CloudTrail to log, continuously monitor, and retain account activity related to actions across your AWS infrastructure. You can define a trail to collect API actions across all AWS Regions. Although we have enabled a trail for all Regions in our solution, the dashboard shows the data for a single Region only.

After you enable CloudTrail, it starts capturing all API actions and then, at 15-minute intervals, delivers logs in JSON format to a configured Amazon Simple Storage Service (Amazon S3) bucket. Before the logs are made available to our ad hoc query engine, Amazon Athena, they must be parsed, transformed, and processed by the AWS Glue crawler and ETL job.


The AWS Glue crawler crawls through the data every day and populates new partitions in the Data Catalog. The data is later made available as a table on the Athena console for processing by the AWS Glue ETL job. The Glue ETL job (QuickSightCloudtrail_GlueJob.txt) filters the logs and processes only those events where the event source is QuickSight (for example, eventSource = ‘quicksight.amazonaws.com’).

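As a rough sketch of what that filtering step could look like inside a Glue ETL job (the database, table, and bucket names here are assumptions based on this post, not the job's actual code):

from awsglue.context import GlueContext
from awsglue.transforms import Filter
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read the CloudTrail table that the crawler catalogued
logs = glue_context.create_dynamic_frame.from_catalog(
    database="cloudtrail_logs", table_name="cloudtraillogsbucket")

# Keep only QuickSight events (the crawler lowercases column names)
quicksight_events = Filter.apply(
    frame=logs,
    f=lambda row: row["eventsource"] == "quicksight.amazonaws.com")

# Write the result to S3 as Parquet
glue_context.write_dynamic_frame.from_options(
    frame=quicksight_events,
    connection_type="s3",
    connection_options={"path": "s3://<BucketName>/processedlogs/"},
    format="parquet")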

The following screenshot shows the sample JSON for the QuickSight API calls.


The job processes those events and creates a Parquet file. The following table summarizes the file’s data points.

Quicksightlogs
Field Name | Data Type
eventtime | Datetime
eventname | String
awsregion | String
accountid | String
username | String
analysisname | String
date | Date

The processed data is stored in an S3 folder at s3://<BucketName>/processedlogs/. For performance optimization during querying, and for connecting this data to QuickSight for visualization, these logs are partitioned by the date field. For this reason, we recommend that you configure the AWS Glue crawler to detect the new data and partitions and update the Data Catalog for subsequent analysis. We have configured the crawler to run one time a day.
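Once the crawler has registered the partitions, queries can prune on the date partition. A hedged Boto3 sketch, reusing the database and table names that appear later in this post (the output location is an assumption):

import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="""
        SELECT eventname, username, eventtime
        FROM quicksightoutput_aggregatedoutput
        WHERE date = '20210201'  -- partition pruning keeps the scan small
    """,
    QueryExecutionContext={"Database": "quicksightbionbi"},
    ResultConfiguration={"OutputLocation": "s3://<BucketName>/athena-results/"},
)
print(response["QueryExecutionId"])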

We need to enrich this log data with metadata from QuickSight, such as a list of analyses, users, and datasets. This metadata can be extracted using describe_analysis, describe_user, and describe_data_set in the AWS SDK for Python (Boto3).

We provide an AWS Lambda function that is ideal for this extraction. We configured it to be triggered once a day through Amazon EventBridge. The extracted metadata is stored in the S3 folder at s3://<BucketName>/metadata/.
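A minimal sketch of that extraction using Boto3 (shown here with the list_* calls for brevity; the bucket name and key layout are assumptions, not the provided function's actual code):

import json
import boto3

qs = boto3.client("quicksight")
s3 = boto3.client("s3")
account_id = boto3.client("sts").get_caller_identity()["Account"]

def lambda_handler(event, context):
    users = qs.list_users(AwsAccountId=account_id, Namespace="default")["UserList"]
    analyses = qs.list_analyses(AwsAccountId=account_id)["AnalysisSummaryList"]
    datasets = qs.list_data_sets(AwsAccountId=account_id)["DataSetSummaries"]

    # Store each metadata collection as a JSON object under the metadata/ prefix
    for name, payload in [("users", users), ("analysis", analyses), ("datasets", datasets)]:
        s3.put_object(
            Bucket="<BucketName>",
            Key=f"metadata/{name}.json",
            Body=json.dumps({name: payload}, default=str),
        )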

Now that we have processed logs and metadata for enrichment, we need to prepare the data visualization in QuickSight. Athena allows us to build views that can be imported into QuickSight as datasets.

We build the following views based on the tables populated by the Lambda function and the ETL job:

CREATE VIEW vw_quicksight_bionbi 
AS 
  SELECT Date_parse(eventtime, '%Y-%m-%dT%H:%i:%SZ') AS "Event Time", 
         eventname  AS "Event Name", 
         awsregion  AS "AWS Region", 
         accountid  AS "Account ID", 
         username   AS "User Name", 
         analysisname AS "Analysis Name", 
         dashboardname AS "Dashboard Name", 
         Date_parse(date, '%Y%m%d') AS "Event Date" 
  FROM   "quicksightbionbi"."quicksightoutput_aggregatedoutput" 

CREATE VIEW vw_users 
AS 
  SELECT usr.username "User Name", 
         usr.role     AS "Role", 
         usr.active   AS "Active" 
  FROM   (quicksightbionbi.users 
          CROSS JOIN Unnest("users") t (usr)) 

CREATE VIEW vw_analysis 
AS 
  SELECT aly.analysisname "Analysis Name", 
         aly.analysisid   AS "Analysis ID" 
  FROM   (quicksightbionbi.analysis 
          CROSS JOIN Unnest("analysis") t (aly)) 

CREATE VIEW vw_analysisdatasets 
AS 
  SELECT alyds.analysesname "Analysis Name", 
         alyds.analysisid   AS "Analysis ID", 
         alyds.datasetid    AS "Dataset ID", 
         alyds.datasetname  AS "Dataset Name" 
  FROM   (quicksightbionbi.analysisdatasets 
          CROSS JOIN Unnest("analysisdatasets") t (alyds)) 

CREATE VIEW vw_datasets 
AS 
  SELECT ds.datasetname AS "Dataset Name", 
         ds.importmode  AS "Import Mode" 
  FROM   (quicksightbionbi.datasets 
          CROSS JOIN Unnest("datasets") t (ds))

QuickSight visualization

Follow these steps to connect the prepared data with QuickSight and start building the BI visualization.

  1. Sign in to the AWS Management Console and open the QuickSight console.

You can set up QuickSight access for end users through SSO providers such as AWS Single Sign-On (AWS SSO), Okta, Ping, and Azure AD so they don’t need to open the console.


  2. On the QuickSight console, choose Datasets.
  3. Choose New dataset to create a dataset for our analysis.


  4. For Create a Data Set, choose Athena.

In the previous steps, we prepared all our data in the form of Athena views.

  5. Configure permission for QuickSight to access AWS services, including Athena and its S3 buckets. For information, see Accessing Data Sources.


  6. For Data source name, enter QuickSightBIbBI.
  7. Choose Create data source.


  8. On Choose your table, for Database, choose quicksightbionbi.
  9. For Tables, select vw_quicksight_bionbi.
  10. Choose Select.


  11. For Finish data set creation, there are two options to choose from:
    1. Import to SPICE for quicker analytics – Built from the ground up for the cloud, SPICE uses a combination of columnar storage, in-memory technologies enabled through the latest hardware innovations, and machine code generation to run interactive queries on large datasets and get rapid responses. We use this option for this post.
    2. Directly query your data – You can connect to the data source in real time, but if the data query is expected to bring bulky results, this option might slow down the dashboard refresh.
  12. Choose Visualize to complete the data source creation process.


Now you can build your visualizations sheets. QuickSight refreshes the data source first. You can also schedule a periodic refresh of your data source.


The following screenshot shows some examples of visualizations we built from the data source.


 

This dashboard presents us with two main areas for cost optimization:

  • Usage analysis – We can see how analyses and dashboards are being consumed by users. This area highlights an opportunity for cost savings: datasets that have not been used in any analysis for the last 90 days but still hold a major chunk of SPICE capacity.
  • Account governance – Because author subscriptions are charged on a fixed-fee basis, it’s important to monitor whether they are actively used. The dashboard helps us identify authors who have been idle for the last 60 days.

Based on the information in the dashboard, we could remove datasets that are no longer used to reclaim SPICE capacity, and review idle author subscriptions to save costs.

Conclusion

In this post, we showed how you can use CloudTrail logs to review the use of QuickSight objects, including analyses, dashboards, datasets, and users. You can use the information available in the dashboards to save money on storage and subscriptions, understand the maturity of QuickSight adoption, and more.


About the Author

Sunil Salunkhe is a Senior Solutions Architect working with Strategic Accounts on their vision to leverage the cloud to drive aggressive growth strategies. He practices customer obsession by solving their complex challenges in all aspects of the cloud journey, including scale, security, and reliability. When not working, he enjoys playing cricket and cycling with his wife and son.

Field Notes: Stopping an Automatically Started Database Instance with Amazon RDS

Post Syndicated from Islam Ghanim original https://aws.amazon.com/blogs/architecture/field-notes-stopping-an-automatically-started-database-instance-with-amazon-rds/

Customers who need to keep an Amazon Relational Database Service (Amazon RDS) instance stopped for more than 7 days look for ways to efficiently re-stop the database after it is automatically started by Amazon RDS. If the database is started and there is no mechanism to stop it, customers start to pay for the instance’s hourly cost. Moreover, customers with database licensing agreements could incur penalties for running beyond their licensed cores/users.

Stopping and starting a DB instance is faster than creating a DB snapshot, and then restoring the snapshot. However, if you plan to keep the Amazon RDS instance stopped for an extended period of time, it is advised to terminate your Amazon RDS instance and recreate it from a snapshot when needed.

This blog provides a step-by-step approach to automatically stop an RDS instance once the auto-restart activity is complete. This saves any costs incurred once the instance is turned on. The proposed architecture is fully serverless and requires no management overhead. It relies on AWS Step Functions and a set of Lambda functions to monitor RDS instance state and stop the instance when required.

Architecture overview

Given the autonomous nature of the architecture and to avoid management overhead, the architecture leverages serverless components.

  • The architecture relies on RDS event notifications. Once a stopped RDS instance is started by AWS due to exceeding the maximum time in the stopped state, an event (RDS-EVENT-0154) is generated by RDS.
  • The RDS event is pushed to a dedicated SNS topic rds-event-notifications-topic.
  • The Lambda function start-statemachine-execution-lambda is subscribed to the SNS topic rds-event-notifications-topic.
    • The function filters messages with event code: RDS-EVENT-0154. In order to restrict the ‘force shutdown’ activity further, the function validates that the RDS instance is tagged with auto-restart-protection and that the tag value is set to ‘yes’.
    • Once all conditions are met, the Lambda function starts the AWS Step Functions state machine execution.
  • The AWS Step Functions state machine integrates with two Lambda functions in order to retrieve the instance state, as well as attempt to stop the RDS instance.
    • In case the instance state is not ‘available’, the state machine waits for 5 minutes and then re-checks the state.
    • Finally, when the Amazon RDS instance state is ‘available’; the state machine will attempt to stop the Amazon RDS instance.

Prerequisites

In order to implement the steps in this post, you need an AWS account as well as an IAM user with permissions to provision and delete resources of the following AWS services:

  • Amazon RDS
  • AWS Lambda
  • AWS Step Functions
  • AWS CloudFormation
  • AWS SNS
  • AWS IAM

Architecture implementation

You can implement the architecture using the AWS Management Console or AWS CLI. For faster deployment, the architecture is available on GitHub.

The steps below explain how to build the end-to-end architecture from within the AWS Management Console:

Create an SNS topic

  • Open the Amazon SNS console.
  • On the Amazon SNS dashboard, under Common actions, choose Create Topic.
  • In the Create new topic dialog box, for Topic name, enter a name for the topic (rds-event-notifications-topic).
  • Choose Create topic.
  • Note the Topic ARN for the next task (for example, arn:aws:sns:us-east-1:111122223333:my-topic).
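If you prefer the AWS SDK for Python over the console, an equivalent sketch:

import boto3

sns = boto3.client("sns")
topic = sns.create_topic(Name="rds-event-notifications-topic")
print(topic["TopicArn"])  # note this ARN for the event subscription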

Configure RDS event notifications

Amazon RDS uses Amazon Simple Notification Service (Amazon SNS) to provide notification when an Amazon RDS event occurs. These notifications can be in any notification form supported by Amazon SNS for an AWS Region, such as an email, a text message, or a call to an HTTP endpoint.

For this architecture, RDS generates an event indicating that an instance has automatically restarted because it exceeded the maximum duration to remain stopped. This specific RDS event (RDS-EVENT-0154) belongs to the ‘notification’ category. For more information, visit Using Amazon RDS Event Notification.

To subscribe to an RDS event notification

  • Sign in to the AWS Management Console and open the Amazon RDS console.
  • In the navigation pane, choose Event subscriptions.
  • In the Event subscriptions pane, choose Create event subscription.
  • In the Create event subscription dialog box, do the following:
    • For Name, enter a name for the event notification subscription (RdsAutoRestartEventSubscription).
    • For Send notifications to, choose the SNS topic created in the previous step (rds-event-notifications-topic).
    • For Source type, choose ‘Instances’, since our source will be RDS instances.
    • For Instances to include, choose ‘All instances’. Instances are included or excluded based on the tag, auto-restart-protection. This is to keep the architecture generic and to avoid regular configurations moving forward.
    • For Event categories to include, choose ‘Select specific event categories’.
    • For Specific event, choose ‘notification’. This is the category under which the RDS event of interest falls. For more information, review Using Amazon RDS Event Notification.
    •  Choose Create.
    • The Amazon RDS console indicates that the subscription is being created.
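The same subscription can also be created with Boto3; a sketch (replace the topic ARN placeholder with the ARN noted earlier):

import boto3

rds = boto3.client("rds")
rds.create_event_subscription(
    SubscriptionName="RdsAutoRestartEventSubscription",
    SnsTopicArn="arn:aws:sns:<region>:<account-id>:rds-event-notifications-topic",
    SourceType="db-instance",
    EventCategories=["notification"],
    Enabled=True,
)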

Create Lambda functions

Following are the three Lambda functions required for the architecture to work:

  1. start-statemachine-execution-lambda – subscribes to the newly created SNS topic (rds-event-notifications-topic) and starts the AWS Step Functions state machine execution.
  2. retrieve-rds-instance-state-lambda – triggered by the AWS Step Functions state machine to retrieve an RDS instance’s state (for example, available or stopped).
  3. stop-rds-instance-lambda – triggered by the AWS Step Functions state machine in order to attempt to stop an RDS instance.

First, create the Lambda functions’ execution role.

To create an execution role

  • Open the roles page in the IAM console.
  • Choose Create role.
  • Create a role with the following properties.
    • Trusted entity – Lambda.
    • Permissions – AWSLambdaBasicExecutionRole.
    • Role name – rds-auto-restart-lambda-role.
    • The AWSLambdaBasicExecutionRole policy has the permissions that the function needs to write logs to CloudWatch Logs.

Now, create a new policy and attach it to the role in order to allow the Lambda function to start an AWS Step Functions state machine execution, stop an Amazon RDS instance, retrieve RDS instance status, list tags, and add tags.

Use the JSON policy editor to create a policy

  • Sign in to the AWS Management Console and open the IAM console.
  • In the navigation pane on the left, choose Policies.
  • Choose Create policy.
  • Choose the JSON tab.
  • Paste the following JSON policy document:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "rds:AddTagsToResource",
                "rds:ListTagsForResource",
                "rds:DescribeDBInstances",
                "states:StartExecution",
                "rds:StopDBInstance"
            ],
            "Resource": "*"
        }
    ]
}
  • When you are finished, choose Review policy. The Policy Validator reports any syntax errors.
  • On the Review policy page, type a Name (rds-auto-restart-lambda-policy) and a Description (optional) for the policy that you are creating. Review the policy Summary to see the permissions that are granted by your policy. Then choose Create policy to save your work.

To link the new policy to the AWS Lambda execution role

  • Sign in to the AWS Management Console and open the IAM console.
  • In the navigation pane, choose Policies.
  • In the list of policies, select the check box next to the name of the policy to attach. You can use the Filter menu and the search box to filter the list of policies.
  • Choose Policy actions, and then choose Attach.
  • Select the IAM role created for the three Lambda functions. After selecting the identities, choose Attach policy.

Given the principle of least privilege, it is recommended to create three different roles, each restricting a function’s access to only the resources it needs.

Repeat the following steps three times to create the three Lambda functions. The only differences between them are their code and triggers:

  • Open the Lambda console.
  • Choose Create function.
  • Configure the following settings:
    • Name
      • start-statemachine-execution-lambda
      • retrieve-rds-instance-state-lambda
      • stop-rds-instance-lambda
    • Runtime – Python 3.8.
    • Role – Choose an existing role.
    • Existing role – rds-auto-restart-lambda-role.
    • Choose Create function.
    • To configure a test event, choose Test.
    • For Event name, enter test.
  • Choose Create.
  • For the Lambda function —  start-statemachine-execution-lambda, use the following Python 3.8 sample code:
import json
import boto3
import logging
import os

#Logging
LOGGER = logging.getLogger()
LOGGER.setLevel(logging.INFO)

#Initialise Boto3 for RDS
rdsClient = boto3.client('rds')

def lambda_handler(event, context):

    #log input event
    LOGGER.info("RdsAutoRestart Event Received, now checking if event is eligible. Event Details ==> ", event)

    #Input event from the SNS topic originated from RDS event notifications
    snsMessage = json.loads(event['Records'][0]['Sns']['Message'])
    rdsInstanceId = snsMessage['Source ID']
    stepFunctionInput = {"rdsInstanceId": rdsInstanceId}
    rdsEventId = snsMessage['Event ID']

    #Retrieve RDS instance ARN
    db_instances = rdsClient.describe_db_instances(DBInstanceIdentifier=rdsInstanceId)['DBInstances']
    db_instance = db_instances[0]
    rdsInstanceArn = db_instance['DBInstanceArn']

    # Filter on the Auto Restart RDS Event. Event code: RDS-EVENT-0154. 

    if 'RDS-EVENT-0154' in rdsEventId:

        #log input event
        LOGGER.info("RdsAutoRestart Event detected, now verifying that instance was tagged with auto-restart-protection == yes")

        #Verify that instance is tagged with auto-restart-protection tag. The tag is used to classify instances that are required to be terminated once started. 

        tagCheckPass = 'false'
        rdsInstanceTags = rdsClient.list_tags_for_resource(ResourceName=rdsInstanceArn)
        for rdsInstanceTag in rdsInstanceTags["TagList"]:
            if 'auto-restart-protection' in rdsInstanceTag["Key"]:
                if 'yes' in rdsInstanceTag["Value"]:
                    tagCheckPass = 'true'
                    #log instance tags
                    LOGGER.info("RdsAutoRestart verified that the instance is tagged auto-restart-protection = yes, now starting the Step Functions Flow")
                else:
                    tagCheckPass = 'false'


        #log the tag check outcome
        LOGGER.info("RdsAutoRestart tag check result: %s", tagCheckPass)

        if 'true' in tagCheckPass:

            #Initialise StepFunctions Client
            stepFunctionsClient = boto3.client('stepfunctions')

            # Start StepFunctions WorkFlow
            # StepFunctionsArn is stored in an environment variable
            stepFunctionsArn = os.environ['STEPFUNCTION_ARN']
            stepFunctionsResponse = stepFunctionsClient.start_execution(
                stateMachineArn=stepFunctionsArn,
                name=event['Records'][0]['Sns']['MessageId'],
                input=json.dumps(stepFunctionInput)
            )

    else:

        LOGGER.info("RdsAutoRestart Event detected, and event is not eligible")

    return {
            'statusCode': 200
        }

Then configure an SNS trigger for the function start-statemachine-execution-lambda. RDS event notifications will be published to this SNS topic:

  • In the Designer pane, choose Add trigger.
  • In the Trigger configurations pane, select SNS as a trigger.
  • For SNS topic, choose the SNS topic previously created (rds-event-notifications-topic)
  • For Enable trigger, keep it checked.
  • Choose Add.
  • Choose Save.

For the Lambda function — retrieve-rds-instance-state-lambda, use the following Python 3.8 sample code:

import json
import logging
import boto3

#Logging
LOGGER = logging.getLogger()
LOGGER.setLevel(logging.INFO)

#Initialise Boto3 for RDS
rdsClient = boto3.client('rds')


def lambda_handler(event, context):
    

    #log input event
    LOGGER.info(event)
    
    #rdsInstanceId is passed as input to the lambda function from the AWS StepFunctions state machine.  
    rdsInstanceId = event['rdsInstanceId']
    db_instances = rdsClient.describe_db_instances(DBInstanceIdentifier=rdsInstanceId)['DBInstances']
    db_instance = db_instances[0]
    rdsInstanceState = db_instance['DBInstanceStatus']
    return {
        'statusCode': 200,
        'rdsInstanceState': rdsInstanceState,
        'rdsInstanceId': rdsInstanceId
    }

Choose Save.

For the Lambda function, stop-rds-instance-lambda, use the following Python 3.8 sample code:

import json
import logging
import boto3

#Logging
LOGGER = logging.getLogger()
LOGGER.setLevel(logging.INFO)

#Initialise Boto3 for RDS
rdsClient = boto3.client('rds')


def lambda_handler(event, context):
    
    #log input event
    LOGGER.info(event)
    
    rdsInstanceId = event['rdsInstanceId']
    
    #Stop RDS instance
    rdsClient.stop_db_instance(DBInstanceIdentifier=rdsInstanceId)
    
    return {
        'statusCode': 200,
        'rdsInstanceId': rdsInstanceId
    }

Choose Save.

Create a Step Functions state machine

AWS Step Functions will execute the following service logic:

  1. Retrieve RDS instance state by calling Lambda function, retrieve-rds-instance-state-lambda. The Lambda function then returns the parameter, rdsInstanceState.
  2. If the rdsInstanceState parameter value is ‘available’, then the state machine will step into the next action calling the Lambda function, stop-rds-instance-lambda. If the rdsInstanceState is not ‘available’, the state machine will then wait for 5 minutes and then re-check the RDS instance state again.
  3. Stopping an RDS instance is an asynchronous operation and accordingly the state machine will keep polling the instance state once every 5 minutes until the rdsInstanceState parameter value becomes ‘stopped’. Only then, the state machine execution will complete successfully.

  • An RDS instance path to ‘available’ state may vary depending on the various maintenance activities scheduled for the instance.
  • Once the RDS notification event is generated, the instance will go through multiple states till it becomes ‘available’.
  • The 5-minute timer makes sure that the automation flow keeps attempting to stop the instance once it becomes available.
  • The second polling loop makes sure that the flow doesn’t end until the instance status changes to ‘stopped’, at which point the system administrator is notified.

To create an AWS Step Functions state machine

  • Sign in to the AWS Management Console and open the AWS Step Functions console.
  • In the navigation pane, choose State machines.
  • In the State machines pane, choose Create state machine.
  • On the Define state machine page, choose Author with code snippets. For Type, choose Standard.
  • Enter a Name for your state machine, stop-rds-instance-statemachine.
  • In the State machine definition pane, add the following state machine definition using the ARNs of the two Lambda functions created earlier, as shown in the following code sample:
{
  "Comment": "stop-rds-instance-statemachine: Automatically shutting down RDS instance after a forced Auto-Restart",
  "StartAt": "retrieveRdsInstanceState",
  "States": {
    "retrieveRdsInstanceState": {
      "Type": "Task",
      "Resource": "retrieve-rds-instance-state-lambda Arn",
      "Next": "isInstanceAvailable"
    },
    "isInstanceAvailable": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.rdsInstanceState",
          "StringEquals": "available",
          "Next": "stopRdsInstance"
        }
      ],
      "Default": "waitFiveMinutes"
    },
    "waitFiveMinutes": {
      "Type": "Wait",
      "Seconds": 300,
      "Next": "retrieveRdsInstanceState"
    },
    "stopRdsInstance": {
      "Type": "Task",
      "Resource": "stop-rds-instance-lambda Arn",
      "Next": "retrieveRDSInstanceStateStopping"
    },
    "retrieveRDSInstanceStateStopping": {
      "Type": "Task",
      "Resource": "retrieve-rds-instance-state-lambda Arn",
      "Next": "isInstanceStopped"
    },
    "isInstanceStopped": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.rdsInstanceState",
          "StringEquals": "stopped",
          "Next": "notifyDatabaseAdmin"
        }
      ],
      "Default": "waitFiveMinutesStopping"
    },
    "waitFiveMinutesStopping": {
      "Type": "Wait",
      "Seconds": 300,
      "Next": "retrieveRDSInstanceStateStopping"
    },
    "notifyDatabaseAdmin": {
      "Type": "Pass",
      "Result": "World",
      "End": true
    }
  }
}

This state machine definition is written in Amazon States Language, which is used to describe the execution flow of an AWS Step Functions state machine.

Choose Next.

  • In the Name pane, enter a name for your state machine, stop-rds-instance-statemachine.
  • In the Permissions pane, choose Create new role. Take note of the new role’s name displayed at the bottom of the page (example, StepFunctions-stop-rds-instance-statemachine-role-231ffecd).
  • Choose Create state machine.
  • By default, the created role only grants the state machine access to CloudWatch Logs. Since the state machine will have to invoke Lambda functions, another IAM policy has to be associated with the new role.

Use the JSON policy editor to create a policy

  • Sign in to the AWS Management Console and open the IAM console.
  • In the navigation pane on the left, choose Policies.
  • Choose Create policy.
  • Choose the JSON tab.
  • Paste the following JSON policy document:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "lambda:InvokeFunction",
            "Resource": "*"
        }
    ]
}
  • When you are finished, choose Review policy. The Policy Validator reports any syntax errors.
  • On the Review policy page, type a Name rds-auto-restart-stepfunctions-policy and a Description (optional) for the policy that you are creating. Review the policy Summary to see the permissions that are granted by your policy.
  • Choose Create policy to save your work.

To link the new policy to the AWS Step Functions execution role

  • Sign in to the AWS Management Console and open the IAM console.
  • In the navigation pane, choose Policies.
  • In the list of policies, select the check box next to the name of the policy to attach. You can use the Filter menu and the search box to filter the list of policies.
  • Choose Policy actions, and then choose Attach.
  • Select the IAM role created for the state machine (example, StepFunctions-stop-rds-instance-statemachine-role-231ffecd). After selecting the identities, choose Attach policy.

 

Testing the architecture

In order to test the architecture, create a test RDS instance, tag it with the auto-restart-protection tag, and set the tag value to yes. While the RDS instance is still in the creation process, test the Lambda function start-statemachine-execution-lambda with a sample event that simulates that the instance was started because it exceeded the maximum time to remain stopped (RDS-EVENT-0154).

To invoke a function

  • Sign in to the AWS Management Console and open the Lambda console.
  • In navigation pane, choose Functions.
  • In Functions pane, choose start-statemachine-execution-lambda.
  • In the upper right corner, choose Test.
  • On the Configure test event page, choose Create new test event. In Event template, leave the default Hello World option and replace the event body with the following JSON:
    {
    "Records": [
        {
        "EventSource": "aws:sns",
        "EventVersion": "1.0",
        "EventSubscriptionArn": "<RDS Event Subscription Arn>",
        "Sns": {
            "Type": "Notification",
            "MessageId": "10001-2d55da-9a73-5e42d46748c0",
            "TopicArn": "<SNS Topic Arn>",
            "Subject": "RDS Notification Message",
            "Message": "{\"Event Source\":\"db-instance\",\"Event Time\":\"2020-07-09 15:15:03.031\",\"Identifier Link\":\"https://console.aws.amazon.com/rds/home?region=<region>#dbinstance:id=<RDS instance id>\",\"Source ID\":\"<RDS instance id>\",\"Event ID\":\"http://docs.amazonwebservices.com/AmazonRDS/latest/UserGuide/USER_Events.html#RDS-EVENT-0154\",\"Event Message\":\"DB instance started\"}",
            "Timestamp": "2020-07-09T15:15:03.991Z",
            "SignatureVersion": "1",
            "Signature": "YsuM+L6N8rk+pBPBWoWeRcSuYqo/BN5v9D2lyoSg0B0uS46Q8NZZSoZWaIQi25TXfHY3RYXCXF9WbVGXiWa4dJs2Mjg46anM+2j6z9R7BDz0vt25qCrCyWhmWtc7yeETrlwa0jCtR/wxXFFexRwynqlZeDfvQpf/x+KNLrnJlT61WZ2FMTHYs124RwWU8NY3pm1Os0XOIvm8rfv3ywm1ccZfP4rF7Lfn+2EK6a0635Z/5aiyIlldNZxbgRYTODJYroO9INTlF7NPzVV1Y/K0E9aaL/wQgLZNquXQGCAxPFWy5lxJKeyUocOWcG48KJGIBUC36JJaqVdIilbZ9HvxTg==",
            "SigningCertUrl": "https://sns.<region>.amazonaws.com/SimpleNotificationService-a86cb10b4e1f29c941702d737128f7b6.pem",
            "UnsubscribeUrl": "https://sns.<region>.amazonaws.com/?Action=Unsubscribe&SubscriptionArn=<arn>",
            "MessageAttributes": {}
        }
        }
    ]
    }
start-statemachine-execution-lambda uses the SNS MessageId parameter as the name for the AWS Step Functions execution. Execution names must be unique for a period of time (90 days), so the MessageId parameter value must be changed with every test run.
  • Choose Create and then choose Test. Each user can create up to 10 test events per function. Those test events are not available to other users.
  • AWS Lambda executes your function on your behalf. The handler in your Lambda function receives and then processes the sample event.
  • Upon successful execution, view results in the console.
  • The Execution result section shows the execution status as succeeded and also shows the function execution results returned by the return statement.

Now, verify the execution of the AWS Step Functions state machine:

  • Sign in to the AWS Management Console and open the AWS Step Functions console.
  • In navigation pane, choose State machines.
  • In the State machine pane, choose stop-rds-instance-statemachine.
  • In the Executions pane, choose the execution with the Name value passed in the test event MessageId parameter.
  • In the Visual workflow pane, the real-time execution status is displayed:

  • Under the Step details tab, all details related to inputs, outputs and exceptions are displayed:

Monitoring

It is recommended to use Amazon CloudWatch to monitor all the components in this architecture. AWS Step Functions logs the state of each execution, along with the inputs and outputs of each step in the flow, so when things go wrong you can diagnose and debug problems quickly.

Cost

When you build the architecture using serverless components, you pay for what you use with no upfront infrastructure costs. Cost will depend on the number of RDS instances tagged to be protected against an automatic start.

Architectural considerations

This architecture has to be deployed per AWS Account per Region.

Conclusion

This blog post demonstrated how to build a fully serverless architecture that monitors and stops RDS instances automatically restarted by AWS, while still letting the automatic restart apply any required maintenance updates. This architecture helps you save the costs incurred by a started instance’s running hours and avoid licensing implications. Feel free to submit enhancements to the GitHub repository or provide feedback in the comments.

Field Notes provides hands-on technical guidance from AWS Solutions Architects, consultants, and technical account managers, based on their experiences in the field solving real-world business problems for customers.

A new Debian debuginfod service

Post Syndicated from original https://lwn.net/Articles/847256/rss

Sergio Durigan Junior has announced the availability of a debuginfod server for Debian systems. “In a nutshell, by using a debuginfod service you will not need to install debuginfo (a.k.a. dbgsym) files anymore; the symbols will be served to GDB (or any other debuginfo consumer that supports debuginfod) over the network. Ultimately, this makes the debugging experience much smoother (I myself never remember the full URL of our debuginfo repository when I need it).”

Building a serverless multi-player game that scales

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-a-serverless-multiplayer-game-that-scales/

This post is written by Tim Bruce, Sr. Solutions Architect, Developer Acceleration.

Game development is a highly iterative process with rapidly changing requirements. Many game developers want to maximize the time spent building features and minimize the time spent configuring servers, managing infrastructure, and mastering scale.

AWS Serverless provides four key benefits for customers. First, it can help move from idea to market faster, by reducing operational overhead. Second, customers may realize lower costs with serverless by not over-provisioning hardware and software to operate. Third, serverless scales with user activity. Finally, serverless services provide built-in integration, allowing you to focus on your game instead of connecting pieces together.

For AWS Gaming customers, these benefits allow your teams to spend more time focusing on gameplay and content, instead of undifferentiated tasks such as setting up and maintaining servers and software. This can result in better gameplay and content, and a faster time-to-market.

This blog post introduces a game with a serverless-first architecture. Simple Trivia Service is a web-based game showing architectural patterns that you can apply in your own games.

Introducing the Simple Trivia Service

The Simple Trivia Service offers single- and multi-player trivia games with content created by players. Simple Trivia Service includes many features commonly found in games, such as user registration, chat, content creation, leaderboards, game play, and a marketplace.

Simple Trivia Service UI

Authenticated players can chat with other players, create and manage quizzes, and update their profile. They can play single- and multi-player quizzes, host quizzes, and buy and sell quizzes on the marketplace. The single- and multi-player game modes show how games with different connectivity and technical requirements can be delivered with serverless-first architectures. The game modes and architecture solutions are covered in the Simple Trivia Service backend architecture section.

Simple Trivia Service front end

The Simple Trivia Service front end is a Vue.js single page application (SPA) that accesses backend services. The SPA, accessed via a web browser, allows users to make requests to the game endpoints using HTTPS, secure WebSockets, and WebSockets over MQTT. These requests use integrations to access the serverless backend services.

Vue.js helps make this reference architecture more accessible. The front end uses AWS Amplify to build, deploy, and host the SPA without the need to provision and manage any resources.

Simple Trivia Service backend architecture

The backend architecture for Simple Trivia Service is defined in a set of AWS Serverless Application Model (AWS SAM) templates for portions of the game. A deployment guide is included in the README.md file in the GitHub repository. Here is a visual depiction of the backend architecture.

Reference architecture

Services used

Simple Trivia Service is built using AWS Lambda, Amazon API Gateway, AWS IoT, Amazon DynamoDB, Amazon Simple Notification Service (SNS), AWS Step Functions, Amazon Kinesis, Amazon S3, Amazon Athena, and Amazon Cognito:

  • Lambda enables serverless microservice features in Simple Trivia Service.
  • API Gateway provides serverless endpoints for HTTP/RESTful and WebSocket communication while IoT delivers a serverless endpoint for WebSockets over MQTT communication.
  • DynamoDB enables data storage and retrieval for internet-scale applications.
  • SNS provides microservice communications via publish/subscribe functionality.
  • Step Functions coordinates complex tasks to ensure appropriate outcomes.
  • Analytics for Simple Trivia Service are delivered via Kinesis and S3 with Athena providing a query/visualization capability.
  • Amazon Cognito provides secure, standards-based login and a user directory.

Two managed services that are not serverless, Amazon VPC NAT Gateway and Amazon ElastiCache for Redis, are also used. VPC NAT Gateway is required by VPC-enabled Lambda functions to reach services outside of the VPC, such as DynamoDB. ElastiCache provides an in-memory database suited for applications with submillisecond latency requirements.

User security and enabling communications to backend services

Players are required to register and log in before playing. Registration and login credentials are sent to Amazon Cognito using the Secure Remote Password protocol. Upon successfully logging in, Amazon Cognito returns a JSON Web Token (JWT) and an Amazon Cognito user session.

The JWT is included within requests to API Gateway, which validates the token before allowing the request to be forwarded to Lambda.

IoT requires additional security for users by using an AWS Identity and Access Management (IAM) policy. A policy attached to the Amazon Cognito user allows the player to connect, subscribe, and send messages to the IoT endpoint.

Game types and supporting architectures

Simple Trivia Service’s three game modes define how players interact with the backend services. These modes align to different architectures used within the game.

“Single Player” quiz architecture


Single player quizzes have simple rules, short play sessions, and appeal to wide audiences. Single player game communication is player-to-endpoint only. This is accomplished with API Gateway via an HTTP API.

Four Lambda functions (ActiveGamesList, GamePlay, GameAnswer, and LeaderboardGet) enable single player games. These functions are integrated with API Gateway and respond to specific requests from the client. API Gateway forwards the request, including URI, body, and query string, to the appropriate Lambda function.

When a player chooses “Play”, a request is sent to API Gateway, which invokes the ActiveGamesList function. This function queries the ActiveGames DynamoDB table and returns the list of active games to the user.

The player selects a game, resulting in another request triggering the GamePlay function. GamePlay retrieves the game’s questions from the GamesDetail DynamoDB table. The front end maintains the state for the user during the game.

When all questions are answered, the SPA sends the player’s responses to API Gateway, invoking the GameAnswer function. This function scores the player’s responses against the GameDetails table. The score and answers are sent to the user.

Additionally, this function sends the player score for the leaderboard and player experience to two SNS topics (LeaderboardTopic and PlayerProgressTopic). The ScorePut and PlayerProgressPut functions subscribe to these topics. These two functions write the details to the Leaderboard and Player Progress DynamoDB tables.

This architecture processes these two actions asynchronously, resulting in the player receiving their score and answers without having to wait. This also allows for increased security for player progress, as only the PlayerProgressPut function is allowed to write to this table.

Finally, the player can view the game’s leaderboard, which is returned as the response from the LeaderboardGet function. The function retrieves the top 10 scores and the current player’s score from the Leaderboard table.
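The post doesn’t show the Leaderboard table’s schema, but assuming a partition key of gameId with the score as the sort key, the top-10 read could look like this hedged sketch:

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("Leaderboard")

def top_scores(game_id, limit=10):
    resp = table.query(
        KeyConditionExpression=Key("gameId").eq(game_id),
        ScanIndexForward=False,  # sort key descending: highest scores first
        Limit=limit,
    )
    return resp["Items"]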

“Multi-player – Casual and Competitive” architecture


These game types require player-to-player and service-to-player communication. This is typically performed using TCP/UDP sockets or the WebSocket protocol. API Gateway WebSockets provides WebSocket communication and enables Lambda functions to send messages to and receive messages from game hosts and players.

Game hosts start games via the “Host” button, messaging the LiveAdmin function via API Gateway. The function adds the game to the LiveGames table, which allows players to find and join the game. A list of questions for the game is sent to the game host from the LiveAdmin function at this time. Additionally, the game host is added to the GameConnections table, which keeps track of which connections are related to a game. Players, via the LivePlayer function, are also added to this table when they join a game.

The game host client manages the state of the game for all players and controls the flow of the game, sending questions, correct answers, and leaderboards to players via API Gateway and the LiveAdmin function. The function only sends game messages to the players in the GameConnections table. Player answers are sent to the host via the LivePlayer function.

To end the game, the game host sends a message with the final leaderboard to all players via the LiveAdmin function. This function also stores the leaderboard in the Leaderboard table, removes the game from the ActiveGames table, and sends player progression messages to the Player Progress topic.

“Multi-player – Live Scoreboard” architecture


This is an extension of other multi-player game types requiring similar communications. This uses IoT with WebSockets over MQTT as the transport. It enables the client to subscribe to a topic and act on messages it receives. IoT manages routing messages to clients based on their subscriptions.

This architecture moves the state management from the game host client to a data store on the backend. This change requires a database that can respond quickly to user actions. Simple Trivia Service uses ElastiCache for Redis for this database. Game questions, player responses, and the leaderboard are all stored and updated in Redis during the quiz. The ElastiCache instance is blocked from internet traffic by placing it in a VPC. A security group configures access for the Lambda functions in the same VPC.

Game hosts for this type of game start the game by hosting it, which sends a message to IoT, triggering the CacheGame function. This function adds the game to the ActiveGames table and caches the quiz details from DynamoDB into Redis. Players join the game by sending a message, which is delivered to the JoinGame function. This adds the user record to Redis and alerts the game host that a player has joined.

Game hosts can send questions to the players via a message that invokes the AskQuestion function. This function updates the current question number in Redis and sends the question to subscribed players. The ReceiveAnswer function processes player responses. It validates the response, stores it in Redis, updates the scoreboard, and replies to all players with the updated scoreboard after the first correct answer. The game scoreboard is updated for players in real time.
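A hedged sketch of that scoreboard pattern using a Redis sorted set (the key names and endpoint are illustrative, not the game’s actual code):

import redis

r = redis.Redis(host="<elasticache-endpoint>", port=6379)

def record_answer(game_id, player, points):
    # Sorted sets keep members ordered by score, so the increment
    # also re-ranks the player in the same operation
    r.zincrby(f"game:{game_id}:scoreboard", points, player)

def scoreboard(game_id, top=10):
    # Highest scores first, returned with their values
    return r.zrevrange(f"game:{game_id}:scoreboard", 0, top - 1, withscores=True)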

When the game is over, the game host sends a message to the EndGame function via IoT. This function writes the game leaderboard to the Leaderboard table, sends player progress to the Player Progress SNS topic, deletes the game from cache, and removes the game from the ActiveGames table.

Conclusion

This post introduces the Simple Trivia Service, a single- and multi-player game built using a serverless-first architecture on AWS. I cover different solutions that you can use to enable connectivity from your game client to a serverless-first backend for both single- and multi-player games. I also include a walkthrough of the architecture for each of these solutions.

You can deploy the code for this solution to your own AWS account via instructions in the Simple Trivia Service GitHub repository.

For more serverless learning resources, visit Serverless Land.

How to set up a recurring Security Hub summary email

Post Syndicated from Justin Criswell original https://aws.amazon.com/blogs/security/how-to-set-up-a-recurring-security-hub-summary-email/

AWS Security Hub provides a comprehensive view of your security posture in Amazon Web Services (AWS) and helps you check your environment against security standards and best practices. In this post, we’ll show you how to set up weekly email notifications using Security Hub to provide account owners with a summary of the existing security findings to prioritize, new findings, and links to the Security Hub console for more information.

When you enable Security Hub, it collects and consolidates findings from AWS security services that you’re using, such as intrusion detection findings from Amazon GuardDuty, vulnerability scans from Amazon Inspector, Amazon Simple Storage Service (Amazon S3) bucket policy findings from Amazon Macie, publicly accessible and cross-account resources from IAM Access Analyzer, and resources lacking AWS WAF coverage from AWS Firewall Manager. Security Hub also consolidates findings from integrated AWS Partner Network (APN) security solutions.

Cloud security processes can differ from traditional on-premises security in that security is often decentralized in the cloud. With traditional on-premises security operations, security alerts are typically routed to centralized security teams operating out of security operations centers (SOCs). With cloud security operations, it’s often the application builders or DevOps engineers who are best situated to triage, investigate, and remediate the security alerts. This integration of security into DevOps processes is referred to as DevSecOps, and as part of this approach, centralized security teams look for additional ways to proactively engage application account owners in improving the security posture of AWS accounts.

This solution uses Security Hub custom insights, AWS Lambda, and the Security Hub API. A custom insight is a collection of findings that are aggregated by a grouping attribute, such as severity or status. Insights help you identify common security issues that might require remediation action. Security Hub includes several managed insights, or you can create your own custom insights. Amazon SNS topic subscribers will receive an email, similar to the one shown in Figure 1, that summarizes the results of the Security Hub custom insights.

Figure 1: Example email with a summary of security findings for an account

Solution overview

This solution assumes that Security Hub is enabled in your AWS account. If it isn’t enabled, set up the service so that you can start seeing a comprehensive view of security findings across your AWS accounts.

A recurring Security Hub summary email provides recipients with a proactive communication that summarizes the security posture and any recent improvements within their AWS accounts. The email message contains one section per custom insight, each summarizing the current finding counts for that insight.

Here’s how the solution works:

  1. Seven Security Hub custom insights are created when you first deploy the solution.
  2. An Amazon CloudWatch time-based event invokes a Lambda function for processing.
  3. The Lambda function gets the results of the custom insights from Security Hub, formats the results for email, and sends a message to Amazon SNS.
  4. Amazon SNS sends the email notification to the address you provided during deployment.
  5. The email includes the summary and links to the Security Hub UI so that the recipient can follow the remediation workflow.

Figure 2 shows the solution workflow.

Figure 2: Solution overview, deployed through AWS CloudFormation
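
As a rough sketch of steps 3 and 4, the sending function could query each insight and publish a plain-text summary to SNS along the following lines; the insight ARNs are placeholders and the message format is an assumption, not the deployed function's exact code.

    import boto3

    securityhub = boto3.client("securityhub")
    sns = boto3.client("sns")

    # ARNs of the custom insights created at deployment (placeholders).
    INSIGHT_ARNS = ["arn:aws:securityhub:REGION:ACCOUNT:insight/..."]
    TOPIC_ARN = "arn:aws:sns:REGION:ACCOUNT:SecurityHubRecurringSummary"

    def handler(event, context):
        lines = []
        for arn in INSIGHT_ARNS:
            result = securityhub.get_insight_results(InsightArn=arn)["InsightResults"]
            lines.append(result["GroupByAttribute"])
            for row in result["ResultValues"]:
                lines.append(f"  {row['GroupByAttributeValue']}: {row['Count']}")
        sns.publish(TopicArn=TOPIC_ARN, Subject="Security Hub summary",
                    Message="\n".join(lines))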

Security Hub custom insight

The finding results presented in the email are summarized by Security Hub custom insights. A Security Hub insight is a collection of related findings. Each insight is defined by a group by statement and optional filters. The group by statement indicates how to group the matching findings, and identifies the type of item that the insight applies to. For example, if an insight is grouped by resource identifier, then the insight produces a list of resource identifiers. The optional filters narrow down the matching findings for the insight. For example, you might want to see only the findings from specific providers or findings associated with specific types of resources. Figure 3 shows the seven custom insights that are created as part of deploying this solution.

Figure 3: Custom insights created by the solution

Sample custom insight

Security Hub offers several built-in managed (default) insights. You can’t modify or delete managed insights. You can view the custom insights created as part of this solution in the Security Hub console under Insights, by selecting the Custom Insights filter. From the email, follow the link for “Summary Email – 02 – Failed AWS Foundational Security Best Practices” to see the summarized finding counts, as well as graphs with related data, as shown in Figure 4.

Figure 4: Detail view of the email titled “Summary Email – 02 – Failed AWS Foundational Security Best Practices”

Let’s evaluate the filters that create this custom insight:

Filter setting: Type is “Software and Configuration Checks/Industry and Regulatory Standards/AWS-Foundational-Security-Best-Practices”
Filter result: Captures all current and future findings created by the security standard AWS Foundational Security Best Practices.

Filter setting: Status is FAILED
Filter result: Captures findings where the compliance status of the resource doesn’t pass the assessment.

Filter setting: Workflow Status is not SUPPRESSED
Filter result: Captures findings where Security Hub users haven’t updated the finding to the SUPPRESSED status.

Filter setting: Record State is ACTIVE
Filter result: Captures findings that represent the latest assessment of the resource. Security Hub automatically archives control-based findings if the associated resource is deleted, the resource does not exist, or the control is disabled.

Filter setting: Group by SeverityLabel
Filter result: Creates the insight and populates the counts.
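
These filters map directly onto the Security Hub CreateInsight API. A minimal boto3 sketch of recreating this insight might look like the following; the name is taken from the email shown earlier, with plain hyphens used here.

    import boto3

    securityhub = boto3.client("securityhub")

    # Recreates the filters from the table above as a custom insight.
    response = securityhub.create_insight(
        Name="Summary Email - 02 - Failed AWS Foundational Security Best Practices",
        GroupByAttribute="SeverityLabel",
        Filters={
            "Type": [{
                "Value": ("Software and Configuration Checks/Industry and Regulatory "
                          "Standards/AWS-Foundational-Security-Best-Practices"),
                "Comparison": "EQUALS",
            }],
            "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
            "WorkflowStatus": [{"Value": "SUPPRESSED", "Comparison": "NOT_EQUALS"}],
            "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
        },
    )
    print(response["InsightArn"])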

Solution artifacts

The solution provided with this blog post consists of two files:

  1. An AWS CloudFormation template named security-hub-email-summary-cf-template.json.
  2. A zip file named sec-hub-email.zip for the Lambda function that generates the Security Hub summary email.

In addition to the Security Hub custom insights as discussed in the previous section, the solution also deploys the following artifacts:

  1. An Amazon Simple Notification Service (Amazon SNS) topic named SecurityHubRecurringSummary and an email subscription to the topic.
    Figure 5: SNS topic created by the solution

    The email address that subscribes to the topic is captured through a CloudFormation template input parameter. The subscriber is notified by email to confirm the subscription, and after confirmation, the subscription to the SNS topic is created.

    Figure 6: SNS email subscription

  2. Two Lambda functions:
    1. A Lambda function named *-CustomInsightsFunction-* is used only by the CloudFormation template to create the custom insights.
    2. A Lambda function named SendSecurityHubSummaryEmail queries the custom insights from the Security Hub API and uses the insights’ data to create the summary email message. The function then sends the email message to the SNS topic.

      Figure 7: Example of Lambda functions created by the solution

  3. Two IAM roles for the Lambda functions provide the following rights, respectively:
    1. The minimum rights required to create insights and to create CloudWatch log groups and logs.
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Action": [
                      "logs:CreateLogGroup",
                      "logs:CreateLogStream",
                      "logs:PutLogEvents"
                  ],
                  "Resource": "arn:aws:logs:*:*:*",
                  "Effect": "Allow"
              },
              {
                  "Action": [
                      "securityhub:CreateInsight"
                  ],
                  "Resource": "*",
                  "Effect": "Allow"
              }
          ]
      }
      

    2. The minimum rights required to query Security Hub insights and to send email messages to the SNS topic named SecurityHubRecurringSummary.
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": "sns:Publish",
                  "Resource": "arn:aws:sns:[REGION]:[ACCOUNT-ID]:SecurityHubRecurringSummary"
              },
              {
                  "Effect": "Allow",
                  "Action": [
                      "securityhub:Get*",
                      "securityhub:List*",
                      "securityhub:Describe*"
                  ],
                  "Resource": "*"
              }
          ]
      }
      

  4. A CloudWatch scheduled event named SecurityHubSummaryEmailSchedule for invoking the Lambda function that generates the summary email. The default schedule is every Monday at 8:00 AM GMT; it can be overridden with a CloudFormation input parameter (the default expression is shown after the figure below). Learn more about creating Cron expressions.

    Figure 8: Example of CloudWatch schedule created by the solution
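
For reference, the default schedule of every Monday at 8:00 AM GMT corresponds to this CloudWatch Events cron expression (fields: minute, hour, day-of-month, month, day-of-week, year):

    cron(0 8 ? * MON *)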

Deploy the solution

The following steps demonstrate the deployment of this solution in a single AWS account and Region. Repeat these steps in each of the AWS accounts that are active with Security Hub, so that the respective application owners can receive the relevant data from their accounts.

To deploy the solution

  1. Download the CloudFormation template security-hub-email-summary-cf-template.json and the .zip file sec-hub-email.zip from https://github.com/aws-samples/aws-security-hub-summary-email.
  2. Copy security-hub-email-summary-cf-template.json and sec-hub-email.zip to an S3 bucket within your target AWS account and Region. Copy the object URL for the CloudFormation template .json file.
  3. In the AWS Management Console, open the CloudFormation service and choose Create stack with new resources.

    Figure 9: Create stack with new resources

  4. Under Specify template, in the Amazon S3 URL textbox, enter the S3 object URL for the file security-hub-email-summary-cf-template.json that you uploaded in step 2.

    Figure 10: Specify S3 URL for CloudFormation template

  5. Choose Next. On the next page, under Stack name, enter a name for the stack.

    Figure 11: Enter stack name

  6. On the same page, enter values for the input parameters. These are the input parameters that are required for this CloudFormation template:
    1. S3 Bucket Name: The S3 bucket where the .zip file for the Lambda function (sec-hub-email.zip) is stored.
    2. S3 key name (with prefixes): The S3 key name (with prefixes) for the .zip file for the Lambda function.
    3. Email address: The email address of the subscriber to the Security Hub summary email.
    4. CloudWatch Cron Expression: The Cron expression for scheduling the Security Hub summary email. The default is every Monday 8:00 AM GMT. Learn more about creating Cron expressions.
    5. Additional Footer Text: Text that will appear at the bottom of the email message. This can be useful to guide the recipient on next steps or provide internal resource links. This is an optional parameter; leave it blank for no text.
    Figure 12: Enter CloudFormation parameters

  7. Choose Next.
  8. Keep all defaults in the screens that follow, and choose Next.
  9. Select the check box I acknowledge that AWS CloudFormation might create IAM resources, and then choose Create stack.

Test the solution

You can send a test email after the deployment is complete. To do this, navigate to the Lambda console and locate the Lambda function named SendSecurityHubSummaryEmail. Perform a manual invocation with any event payload to receive an email within a few minutes. You can repeat this procedure as many times as you wish.
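
If you prefer to trigger the test from code rather than the console, a one-off synchronous invocation with boto3 looks like the following; any payload works because the function does not inspect the event.

    import boto3

    lambda_client = boto3.client("lambda")

    response = lambda_client.invoke(
        FunctionName="SendSecurityHubSummaryEmail",
        Payload=b"{}",  # any event payload is accepted
    )
    print(response["StatusCode"])  # 200 indicates a successful synchronous call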

Conclusion

We’ve outlined an approach for rapidly building a solution for sending a weekly summary of the security posture of your AWS account as evaluated by Security Hub. This solution makes it easier for you to be diligent in reviewing any outstanding findings and to remediate findings in a timely way based on their severity. You can extend the solution in many ways, including:

  1. Add links in the footer text to the remediation workflows, such as creating a ticket for ServiceNow or any Security Information and Event Management (SIEM) that you use.
  2. Add links to internal wikis for workflows like organizational exceptions to vulnerabilities or other internal processes.
  3. Extend the solution by modifying the custom insights content, email content, and delivery frequency.

To learn more about how to set up and customize Security Hub, see these additional blog posts.

If you have feedback about this post, submit comments in the Comments section below. If you have any questions about this post, start a thread on the AWS Security Hub forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Justin Criswell

Justin is a Senior Security Specialist Solutions Architect at AWS. His background is in cloud security and customer success. Today, he is focused on helping enterprise AWS customers adopt and operationalize AWS Security Services to increase visibility and reduce risk.

Author

Kavita Mahajan

Kavita is a Senior Partner Solutions Architect at AWS. She has a background in building software systems for the financial sector, specifically insurance and banking. Currently, she is focused on helping AWS partners develop capabilities, grow their practices, and innovate on the AWS platform to deliver the best possible customer experience.

Software Engineering, Vulnerability and Risk Management: Revolutionizing the Security Landscape at Rapid7

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/02/24/software-engineering-vulnerability-and-risk-management-revolutionizing-the-security-landscape-at-rapid7/


At Rapid7, our software engineers defend the digital world and design the future of security. With a supportive, collaborative team; ample opportunities to learn, develop, and hone skills and knowledge; innovative technology to work with; and a drive for continuous innovation to achieve secure advancement for all, joining our team of Vulnerability and Risk Management software engineers is a no-brainer.

As we continue to build this team, we are looking for engineers who exemplify our core values and are passionate about making a positive impact on our customers.

Read on to meet and learn more about our North America VRM Software Engineering team, why they chose to bring their talents to Rapid7, and why you should, too!

Courtney Wood: Software Engineer II, VRM (Los Angeles)


Rapid7 is an amazing company to learn and grow your career. As someone who began my career at Rapid7, I was intimidated by my lack of cybersecurity knowledge. Fortunately, I joined a team full of passionate engineers who were more than willing to teach me about the cybersecurity landscape. The people at Rapid7 truly make this an amazing place to work. The VRM software engineering team is a bright, enthusiastic, and determined group of people who consistently exhibit a “never done” attitude. On top of that, they are a team that loves to have fun! Whether it’s KBBQ dinners, team-building activities, or just a competitive game of ping pong, there is always something exciting going on in the office.

David Castellanos: Manager, Engineering, VRM (Toronto)


Cybersecurity is a dynamic and ever-changing field that takes on the problems of an ever-connected world. Sophisticated cyber-actors and nation-states exploit vulnerabilities to steal information and money and are developing capabilities to disrupt, destroy, or threaten the delivery of essential services. At Rapid7, we are tackling those challenges head on. We develop software solutions to help private companies as well as public institutions secure their information infrastructure. We develop a range of software solutions, from cloud-based services to on-premises software. We need engaged and committed software engineers who enjoy solving complex and often difficult problems to simplify security practices for our customers. We need creative people that can collaborate, challenge us, and help us grow and innovate. We offer a collaborative environment that will both challenge and help you (and us) grow together.

Pearce Barry: Manager, Engineering, VRM (Austin)


Enjoy “taking the offensive” with your work? Our Offensive Security teams build and improve a number of well-known, top-tier security applications, such as Metasploit Framework and Metasploit Pro. These teams also create exciting new security apps, like AttackerKB (The Attacker Knowledge Base). Working alongside the bright and curious software developers on these teams provides an amazing opportunity to learn and grow while helping make our customers more secure with your contributions. In addition, your impact has the potential to be felt even beyond our customers with the open source (Metasploit Framework) and open data (AttackerKB) nature of some of our projects!

Jimmy Cancilla: Lead Software Engineer, VRM (Toronto)


We are a full-stack team, and our work spans the technological spectrum. We offer opportunities that include UI development, building and managing cloud-based web services, as well as working with low-level network scanning technologies. We are a diverse team made up of an exceptional group of brilliant engineers who are eager and willing to share their knowledge. By joining the team, not only would you be bringing a unique perspective to the table, but you would also be able to expand your expertise and skills. Also, we have beer on tap in the office!

Richard Tsang: Manager, Engineering, VRM (Toronto)


Do you know about CVE-2013-4866? No? It details a hardcoded PIN in a Smart Bidet that gives attackers access to the functionality of the toilet, which is discomforting to know. Unfortunately, InsightVM doesn’t scan for this one, but for the hundreds of thousands of other vulnerabilities out there, we work to understand and distill the information into actionable steps that give our customers peace of mind, knowing which risks hide within their environment and what can be done to secure it. If you’re curious about all the various products in existence and the ways we can harden (and weaken) security, join our InsightVM Coverage Team and learn about the craziness that is the reality of cybersecurity.

Interested in learning more and joining the herd? Check out our Software Engineer, VRM roles in North America today and read more about our technology in our blog!

Through the eyes of a Cloudflare Technical Support Engineer

Post Syndicated from Justina Wong original https://blog.cloudflare.com/through-the-eyes-tech-support-engineer/


This post originally appeared on Landing Jobs under the title Mission: Protect the Internet, where you can find open positions at Cloudflare Lisbon.

Justina Wong, Technical Support Team Lead in Lisbon, talks about what it’s like working at Cloudflare, and everything you need to know if you want to join us.


Justina joined Cloudflare about three years ago in London as a Technical Support Engineer. Currently, she’s part of their Customer Support team working in Lisbon as a team lead.

I can’t speak for others, but I love the things you can learn from the others. There are so many talented individuals who are willing and ready to teach/share. They are my inspiration and I want to become them!

On a Mission to Protect the Internet

Justina’s favourite Cloudflare products are the firewall-related ones. The company’s primary concern is its customers, and it wants to make attack mitigation as easy as possible. As she puts it, “the fact that these protections are on multiple layers, like L7, L3/4, is very important, and I’m proud to be someone who can help our customers when they face certain attacks.”

Cloudflare is constantly releasing new products to help build a better Internet, so product managers are always on top of tool updates to facilitate that. The company believes that it’s not only important to help customers from the product side, but it’s also as important to teach them how to help themselves so that they can fix their issues promptly without having to wait for an answer.

Company culture and Office vibes

According to Justina, one of the amazing things about Cloudflare is the unified company culture. As their SVP of Engineering, Usman, said in a recent meeting with the team, “Be helpful, look around for problems and help find solutions”.

Every Cloudflare office has its own little “flare”: London’s love of mince pies; Singapore’s super fun cultural richness in one location (they have four new years in one year, officially); and Lisbon’s forever love (and fight) for pastéis de nata.

Each office also has its own function or focus, so people working at Cloudflare get to meet very diverse individuals. For Justina, the things she has loved most are learning from all of the engineers in London, picking up new customer service skills in Singapore, and helping to build the new Lisbon office. She says that every time she goes to a different office, it has grown at least 50% in headcount compared to when she was last there. Talk about growth!

As a hiring manager, she also says that the company is mindful of diversity.


Working remote

Like everywhere else, remote work has become the current normal at Cloudflare. As someone who enjoyed being in the office, Justina says “all the countless times I just walked over to someone to ask a question, now all turned into a chat message; or the random coffee chat when we waited for our coffee to be done.”

Funnily enough, the EMEA CSUP team is working more closely together than before the pandemic. Previously, each office was somewhat in its own communication bubble; now it has turned into a collective conversation. This is great for getting to know colleagues during and beyond work hours.

What you need to know if you want to land a job at Cloudflare in Lisbon

For Cloudflare, growing the team is a continuous challenge, and Justina has never needed to do as many interviews as she has done in the Lisbon office. Although it’s a huge challenge for her, it’s also fun. Since the company is hiring aggressively despite the pandemic, their teams are eager to welcome anyone who’s ready to be part of Lisbon Cloudflare.

One of the things you can expect if you work at Cloudflare is for your manager to care and for your feedback to be heard. We know these are valuable things when considering where to work. So if you’re someone who’s willing to learn and is excited about their technologies, this call is for you. The company is expanding in different markets, so they’re looking for tech candidates who can speak multiple languages.

Currently, Cloudflare has over 25 open positions for their offices in Lisbon. Categories include Security Engineers, Full-Stack Developers, Data Scientists, and more.

Universal design for learning in computing | Hello World #15

Post Syndicated from Hayley Leonard original https://www.raspberrypi.org/blog/universal-design-for-learning-in-computing-hello-world-15/

In our brand-new issue of Hello World magazine, Hayley Leonard from our team gives a primer on how computing educators can apply the Universal Design for Learning framework in their lessons.

Cover of issue 15 of Hello World magazine

Universal Design for Learning (UDL) is a framework for considering how tools and resources can be used to reduce barriers and support all learners. Based on findings from neuroscience, it has been developed over the last 30 years by the Center for Applied Special Technology (CAST), a nonprofit education research and development organisation based in the US. UDL is currently used across the globe, with research showing it can be an efficient approach for designing flexible learning environments and accessible content.


Engaging a wider range of learners is an important issue in computer science, which is often not chosen as an optional subject by girls and those from some minority ethnic groups. Researchers at the Creative Technology Research Lab in the US have been investigating how UDL principles can be applied to computer science, to improve learning and engagement for all students. They have adapted the UDL guidelines to a computer science education context and begun to explore how teachers use the framework in their own practice. The hope is that understanding and adapting how the subject is taught could help to increase the representation of all groups in computing.

The UDL guidelines help educators anticipate barriers to learning and plan activities to overcome them.

A scientific approach

The UDL framework is based on neuroscientific evidence which highlights how different areas or networks in the brain work together to process information during learning. Importantly, there is variation across individuals in how each of these networks functions and how they interact with each other. This means that a traditional approach to teaching, in which a main task is differentiated for certain students with special educational needs, may miss out on the variation in learning between all students across different tasks.


The UDL guidelines highlight different opportunities to take learner differences into account when planning lessons. The framework is structured according to three main principles, which are directly related to three networks in the brain that play a central role in learning. It encourages educators to plan multiple, flexible methods of engagement in learning (affective networks), representation of the teaching materials (recognition networks), and opportunities for action and expression of what has been learnt (strategic networks).

The three principles of UDL are each expanded into guidelines and checkpoints that allow educators to identify the different methods of engagement, representation, and expression to be used in a particular lesson. Each principle is also broken down into activities that allow learners to access the learning goals, remain engaged and build on their learning, and begin to internalise the approaches to learning so that they are empowered for the future.

Examples of UDL guidelines for computer science education from the Creative Technology Research Lab

Multiple means of engagement

Provide options for recruiting interests
* Give students choice (software, project, topic)
* Allow students to make projects relevant to culture and age

Provide options for sustaining effort and persistence
* Utilise pair programming and group work with clearly defined roles
* Discuss the integral role of perseverance and problem-solving in computer science

Provide options for self-regulation
* Break up coding activities with opportunities for reflection, such as ‘turn and talk’ or written questions
* Model different strategies for dealing with frustration appropriately

Multiple means of representation

Provide options for perception
* Model computing through physical representations as well as through interactive whiteboard/videos etc.
* Select coding apps and websites that allow adjustment of visual settings (e.g. font size/contrast) and that are compatible with screen readers

Provide options for language, mathematical expressions, and symbols
* Teach and review computing vocabulary (e.g. code, animations, algorithms)
* Provide reference sheets with images of blocks, or with common syntax when using text

Provide options for comprehension
* Encourage students to ask questions as comprehension checkpoints
* Use relevant analogies and make cross-curricular connections explicit

Multiple means of action and expression

Provide options for physical action
* Include CS unplugged activities that show physical relationships of abstract computing concepts
* Use assistive technology, including a larger or smaller mouse or touchscreen devices

Provide options for expression and communication
* Provide sentence starters or checklists for communicating in order to collaborate, give feedback, and explain work
* Provide options that include starter code

Provide options for executive function
* Embed prompts to stop and plan, test, or debug throughout a lesson or project
* Demonstrate debugging with think-alouds

Each principle of the UDL framework is associated with three areas of activity that may be considered when planning lessons or units of work. Not every area of activity needs to be covered in every lesson, and some may prove more important in particular contexts than others. The full table and explanation can be found on the Creative Technology Research Lab website at ctrl.education.ufl.edu/projects/tactic.

Applying UDL to computer science education

While an advantage of UDL is that its principles can be applied across different subjects, it is important to think carefully about what activities addressing these principles could look like in the case of computer science.

Researcher Maya Israel will speak at our April seminar

Researchers at the Creative Technology Research Lab, led by Maya Israel, have identified key activities, some of which are presented in the table above. These guidelines will help educators anticipate potential barriers to learning and plan activities that can overcome them, or adapt activities from existing schemes of work, to help engage the widest possible range of students in the lesson.

UDL in the classroom

As well as suggesting approaches to applying UDL to computer science education, the research team at the Creative Technology Research Lab has also investigated how teachers are using UDL in practice. Israel and colleagues worked with four novice computer science teachers in US elementary schools to train them in the use of UDL and understand how they applied the framework in their teaching.


The research found that the teachers were most likely to include multiple means of engagement in their teaching, followed by multiple means of representation. For example, they all offered choice in their students’ activities and provided materials in different formats (such as oral and visual presentations and demonstrations). They were less likely to provide multiple means of action and expression, and mainly addressed this principle by supporting students in planning work and checking their progress against their goals.

Although the study included only four teachers, it highlighted the flexibility of the UDL approach in catering for different needs within variable teaching contexts. More research will be needed in future, with larger samples, to understand how successful the approach is in helping a wide range of students to achieve good learning outcomes.

Find out more about using UDL

There are numerous resources designed to help teachers learn more about the UDL framework and how to apply it to teaching computing. The CAST website (helloworld.cc/cast) includes an explainer video and the detailed UDL guidelines. The Creative Technology Research Lab website has computing-specific ideas and lesson plans using UDL (helloworld.cc/udl).

Maya Israel will be presenting her research at our computing education research seminar series, on 20 April 2021. Our seminars are free to attend and open to anyone from anywhere around the world. Find out more about the current seminar series, which focuses on diversity and inclusion, and sign up to attend for free.

Further reading

The post Universal design for learning in computing | Hello World #15 appeared first on Raspberry Pi.

Twelve-Year-Old Vulnerability Found in Windows Defender

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/02/twelve-year-old-vulnerability-found-in-windows-defender.html

Researchers found, and Microsoft has patched, a vulnerability in Windows Defender that has been around for twelve years. There is no evidence that anyone has used the vulnerability during that time.

The flaw, discovered by researchers at the security firm SentinelOne, showed up in a driver that Windows Defender — renamed Microsoft Defender last year — uses to delete the invasive files and infrastructure that malware can create. When the driver removes a malicious file, it replaces it with a new, benign one as a sort of placeholder during remediation. But the researchers discovered that the system doesn’t specifically verify that new file. As a result, an attacker could insert strategic system links that direct the driver to overwrite the wrong file or even run malicious code.

It isn’t unusual that vulnerabilities lie around for this long. They can’t be fixed until someone finds them, and people aren’t always looking.
