
Canadian Centre for Cyber Security Assessment Summary report now available in AWS Artifact

Post Syndicated from Rob Samuel original https://aws.amazon.com/blogs/security/canadian-centre-for-cyber-security-assessment-summary-report-now-available-in-aws-artifact/

French version

At Amazon Web Services (AWS), we are committed to providing continued assurance to our customers through assessments, certifications, and attestations that support the adoption of AWS services. We are pleased to announce the availability of the Canadian Centre for Cyber Security (CCCS) assessment summary report for AWS, which you can view and download on demand through AWS Artifact.

The CCCS is Canada’s authoritative source of cyber security expert guidance for the Canadian government, industry, and the general public. Public and commercial sector organizations across Canada rely on CCCS’s rigorous Cloud Service Provider (CSP) IT Security (ITS) assessment in their decision to use CSP services. In addition, CCCS’s ITS assessment process is a mandatory requirement for AWS to provide cloud services to Canadian federal government departments and agencies.

The CCCS Cloud Service Provider Information Technology Security Assessment Process determines if the Government of Canada (GC) ITS requirements for the CCCS Medium Cloud Security Profile (previously referred to as GC’s PROTECTED B/Medium Integrity/Medium Availability [PBMM] profile) are met as described in ITSG-33 (IT Security Risk Management: A Lifecycle Approach, Annex 3 – Security Control Catalogue). As of September 2021, 120 AWS services in the Canada (Central) Region have been assessed by the CCCS and meet the requirements for the Medium cloud security profile. Meeting the Medium cloud security profile is required to host workloads that are classified up to and including Medium categorization. On a periodic basis, CCCS assesses new or previously unassessed services and re-assesses the AWS services that were previously assessed to verify that they continue to meet the GC’s requirements. CCCS prioritizes the assessment of new AWS services based on their availability in Canada and customer demand for the AWS services. The full list of AWS services that have been assessed by CCCS is available on our Services in Scope by Compliance Program page.

To learn more about the CCCS assessment or our other compliance and security programs, visit AWS Compliance Programs. If you have questions about this blog post, please start a new thread on the AWS Artifact forum or contact AWS Support.

If you have feedback about this post, submit comments in the Comments section below. Want more AWS Security news? Follow us on Twitter.

Rob Samuel

Rob Samuel is a Principal technical leader for AWS Security Assurance. He partners with teams across AWS to translate data protection principles into technical requirements, aligns technical direction and priorities, orchestrates new technical solutions, helps integrate security and privacy solutions into AWS services and features, and addresses cross-cutting security and privacy requirements and expectations. Rob has more than 20 years of experience in the technology industry, and has previously held leadership roles, including Head of Security Assurance for AWS Canada, Chief Information Security Officer (CISO) for the Province of Nova Scotia, various security leadership roles as a public servant, and served as a Communications and Electronics Engineering Officer in the Canadian Armed Forces.

Naranjan Goklani

Naranjan Goklani is a Security Audit Manager at AWS, based in Toronto (Canada). He leads audits, attestations, certifications, and assessments across North America and Europe. Naranjan has more than 12 years of experience in risk management, security assurance, and performing technology audits. Naranjan previously worked in one of the Big 4 accounting firms and supported clients from the retail, ecommerce, and utilities industries.

Brian Mycroft

Brian Mycroft is a Chief Technologist at AWS, based in Ottawa (Canada), specializing in national security, intelligence, and the Canadian federal government. Brian is the lead architect of the AWS Secure Environment Accelerator (ASEA) and focuses on removing public sector barriers to cloud adoption.


Rapport sommaire de l’évaluation du Centre canadien pour la cybersécurité disponible sur AWS Artifact

Par Robert Samuel, Naranjan Goklani et Brian Mycroft
Amazon Web Services (AWS) s’engage à fournir à ses clients une assurance continue à travers des évaluations, des certifications et des attestations qui appuient l’adoption des services proposés par AWS. Nous avons le plaisir d’annoncer la mise à disposition du rapport sommaire de l’évaluation du Centre canadien pour la cybersécurité (CCCS) pour AWS, que vous pouvez dès à présent consulter et télécharger à la demande sur AWS Artifact.

Le CCC est l’autorité canadienne qui met son expertise en matière de cybersécurité au service du gouvernement canadien, du secteur privé et du grand public. Les organisations des secteurs public et privé établies au Canada dépendent de la rigoureuse évaluation de la sécurité des technologies de l’information s’appliquant aux fournisseurs de services infonuagiques conduite par le CCC pour leur décision relative à l’utilisation de ces services infonuagiques. De plus, le processus d’évaluation de la sécurité des technologies de l’information est une étape obligatoire pour permettre à AWS de fournir des services infonuagiques aux agences et aux ministères du gouvernement fédéral canadien.

Le Processus d’évaluation de la sécurité des technologies de l’information s’appliquant aux fournisseurs de services infonuagiques détermine si les exigences en matière de technologie de l’information du Gouvernement du Canada (GC) pour le profil de contrôle de la sécurité infonuagique moyen (précédemment connu sous le nom de Protégé B/Intégrité moyenne/Disponibilité moyenne) sont satisfaites conformément à l’ITSG-33 (Gestion des risques liés à la sécurité des TI : Une méthode axée sur le cycle de vie, Annexe 3 – Catalogue des contrôles de sécurité). En date de septembre 2021, 120 services AWS de la région (centrale) du Canada ont été évalués par le CCC et satisfont aux exigences du profil de sécurité moyen du nuage. Satisfaire les exigences du niveau moyen du nuage est nécessaire pour héberger des applications classées jusqu’à la catégorie moyenne incluse. Le CCC évalue périodiquement les nouveaux services, ou les services qui n’ont pas encore été évalués, et réévalue les services AWS précédemment évalués pour s’assurer qu’ils continuent de satisfaire aux exigences du Gouvernement du Canada. Le CCC priorise l’évaluation des nouveaux services AWS selon leur disponibilité au Canada et en fonction de la demande des clients pour les services AWS. La liste complète des services AWS évalués par le CCC est consultable sur notre page Services AWS concernés par le programme de conformité.

Pour en savoir plus sur l’évaluation du CCC ainsi que sur nos autres programmes de conformité et de sécurité, visitez la page Programmes de conformité AWS. Comme toujours, nous accordons beaucoup de valeur à vos commentaires et à vos questions; vous pouvez communiquer avec l’équipe Conformité AWS via la page Communiquer avec nous.

Si vous avez des commentaires sur cette publication, n’hésitez pas à les partager dans la section Commentaires ci-dessous. Vous souhaitez en savoir plus sur AWS Security? Retrouvez-nous sur Twitter.

Biographies des auteurs :

Rob Samuel : Rob Samuel est responsable technique principal d’AWS Security Assurance. Il collabore avec les équipes AWS pour traduire les principes de protection des données en recommandations techniques, aligne la direction technique et les priorités, met en œuvre les nouvelles solutions techniques, aide à intégrer les solutions de sécurité et de confidentialité aux services et fonctionnalités proposés par AWS et répond aux exigences et aux attentes en matière de confidentialité et de sécurité transversale. Rob a plus de 20 ans d’expérience dans le secteur de la technologie et a déjà occupé des fonctions dirigeantes, comme directeur de l’assurance sécurité pour AWS Canada, responsable de la cybersécurité et des systèmes d’information (RSSI) pour la province de la Nouvelle-Écosse, divers postes à responsabilités en tant que fonctionnaire et a servi dans les Forces armées canadiennes en tant qu’officier du génie électronique et des communications.

Naranjan Goklani : Naranjan Goklani est responsable des audits de sécurité pour AWS, il est basé à Toronto (Canada). Il est responsable des audits, des attestations, des certifications et des évaluations pour l’Amérique du Nord et l’Europe. Naranjan a plus de 12 ans d’expérience dans la gestion des risques, l’assurance de la sécurité et la réalisation d’audits de technologie. Naranjan a exercé dans l’une des quatre plus grandes sociétés de comptabilité et accompagné des clients des industries de la distribution, du commerce en ligne et des services publics.

Brian Mycroft : Brian Mycroft est technologue en chef pour AWS, il est basé à Ottawa (Canada) et se spécialise dans la sécurité nationale, le renseignement et le gouvernement fédéral du Canada. Brian est l’architecte principal de l’AWS Secure Environment Accelerator (ASEA) et s’intéresse principalement à la suppression des barrières à l’adoption du nuage pour le secteur public.

How to protect HMACs inside AWS KMS

Post Syndicated from Jeremy Stieglitz original https://aws.amazon.com/blogs/security/how-to-protect-hmacs-inside-aws-kms/

Today AWS Key Management Service (AWS KMS) is introducing new APIs to generate and verify hash-based message authentication codes (HMACs) using the Federal Information Processing Standard (FIPS) 140-2 validated hardware security modules (HSMs) in AWS KMS. HMACs are a powerful cryptographic building block that incorporate secret key material in a hash function to create a unique, keyed message authentication code.

In this post, you will learn the basics of the HMAC algorithm as a cryptographic building block, including how HMACs are used. In the second part of this post, you will see a few real-world use cases that show an application builder’s perspective on using the AWS KMS HMAC APIs.

HMACs provide a fast way to tokenize or sign data such as web API requests, credit cards, bank routing information, or personally identifiable information (PII). They are commonly used in several internet standards and communication protocols such as JSON Web Tokens (JWT), and are even an important security component of how you sign AWS API requests.

HMAC as a cryptographic building block

You can consider an HMAC, sometimes referred to as a keyed hash, to be a combination function that fuses the following elements:

  • A standard hash function such as SHA-256 to produce a message authentication code (MAC).
  • A secret key that binds this MAC to that key’s unique value.

Combining these two elements creates a unique, authenticated version of the digest of a message. Because the HMAC construction allows interchangeable hash functions as well as different secret key sizes, one of the benefits of HMACs is the easy replaceability of the underlying hash function (in case faster or more secure hash functions are required), as well as the ability to add more security by lengthening the size of the secret key used in the HMAC over time. The AWS KMS HMAC API is launching with support for SHA-224, SHA-256, SHA-384, and SHA-512 algorithms to provide a good balance of key sizes and performance trade-offs in the implementation. For more information about HMAC algorithms supported by AWS KMS, see HMAC keys in AWS KMS in the AWS KMS Developer Guide.

HMACs offer two distinct benefits:

  1. Message integrity: As with all hash functions, the output of an HMAC is a single, deterministic digest of the message’s content. If there is any change to the data object (for example, if you modify the purchase price in a contract by just one digit, from “$350,000” to “$950,000”), then verification of the original digest will fail.
  2. Message authenticity: What distinguishes HMAC from other hash methods is the use of a secret key to provide message authenticity. Only message hashes that were created with the specific secret key material will produce the same HMAC output. This dependence on secret key material ensures that no third party can substitute their own message content and create a valid HMAC without the intended verifier detecting the change. The short sketch after this list illustrates both properties.
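To make these two properties concrete, here is a minimal local sketch that uses Python’s standard hmac and hashlib modules (not the AWS KMS APIs discussed later in this post); the key and messages are illustrative placeholders only:

import hashlib
import hmac

secret_key = b"replace-with-a-strong-random-key"  # illustrative key only
message = b"purchase price: $350,000"

# Compute the keyed digest (integrity and authenticity in one value)
tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()

# Any change to the message produces a different digest
tampered = hmac.new(secret_key, b"purchase price: $950,000", hashlib.sha256).hexdigest()
assert tag != tampered

# A different key also produces a different digest, so a third party without
# the secret key material cannot forge a valid HMAC
forged = hmac.new(b"some-other-key", message, hashlib.sha256).hexdigest()
assert tag != forged

# Verification should use a constant-time comparison
expected = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)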

HMAC in the real world

HMACs have widespread applications and industry adoption because they are fast, high performance, and simple to use. HMACs are particularly popular in the JSON Web Token (JWT) open standard as a means of securing web applications, and have replaced older technologies such as cookies and sessions. In fact, Amazon implements a custom authentication scheme, Signature Version 4 (SigV4), to sign AWS API requests based on a keyed-HMAC. To authenticate a request, you first concatenate selected elements of the request to form a string. You then use your AWS secret key material to calculate the HMAC of that string. Informally, this process is called signing the request, and the output of the HMAC algorithm is informally known as the signature, because it simulates the security properties of a real signature in that it represents your identity and your intent.
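To illustrate the keyed signing idea behind SigV4, the following simplified sketch mirrors the documented signing-key derivation chain; it omits canonical request construction, and the secret key, date, and string to sign are placeholders:

import datetime
import hashlib
import hmac

def hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

# Placeholder credential and request details (illustrative only)
secret_key = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
date_stamp = datetime.datetime.utcnow().strftime("%Y%m%d")
region, service = "us-west-2", "kms"
string_to_sign = "...digest of the canonical request and metadata..."  # elided

# SigV4 derives a signing key through a chain of HMAC operations
k_date = hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
k_region = hmac_sha256(k_date, region)
k_service = hmac_sha256(k_region, service)
k_signing = hmac_sha256(k_service, "aws4_request")

# The final HMAC over the string to sign is the request "signature"
signature = hmac.new(k_signing, string_to_sign.encode("utf-8"), hashlib.sha256).hexdigest()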

Advantages of using HMACs in AWS KMS

AWS KMS HMAC APIs provide several advantages over implementing HMACs in application software because the key material for the HMACs is generated in AWS KMS hardware security modules (HSMs) that are certified under the FIPS 140-2 program and never leave AWS KMS unencrypted. In addition, the HMAC keys in AWS KMS can be managed with the same access control mechanisms and auditing features that AWS KMS provides on all AWS KMS keys. These security controls ensure that any HMAC created in AWS KMS can only ever be verified in AWS KMS using the same KMS key. Lastly, the HMAC keys and the HMAC algorithms that AWS KMS uses conform to industry standards defined in RFC 2104 HMAC: Keyed-Hashing for Message Authentication.

Use HMAC keys in AWS KMS to create JSON Web Tokens

The JSON Web Token (JWT) open standard is a common use of HMAC. The standard defines a portable and secure means to communicate a set of statements, known as claims, between parties. HMAC is useful for applications that need an authorization mechanism, in which claims are validated to determine whether an identity has permission to perform some action. Such an application can only work if a validator can trust the integrity of claims in a JWT. Signing JWTs with an HMAC is one way to assert their integrity. Verifiers with access to an HMAC key can cryptographically assert that the claims and signature of a JWT were produced by an issuer using the same key.

This section will walk you through an example of how you can use HMAC keys from AWS KMS to sign JWTs. The example uses the AWS SDK for Python (Boto3) and implements simple JWT encoding and decoding operations. This example shows the ease with which you can integrate HMAC keys in AWS KMS into your JWT application, even if your application is in another language or uses a more formal JWT library.

Create an HMAC key in AWS KMS

Begin by creating an HMAC key in AWS KMS. You can use the AWS KMS console or call the CreateKey API action. The following example shows creation of a 256-bit HMAC key:

import boto3

kms = boto3.client('kms')

# Use CreateKey API to create a 256-bit key for HMAC
key_id = kms.create_key(
	KeySpec='HMAC_256',
	KeyUsage='GENERATE_VERIFY_MAC'
)['KeyMetadata']['KeyId']

Use the HMAC key to encode a signed JWT

Next, you use the HMAC key to encode a signed JWT. There are three components to a JWT: the set of claims, the header, and the signature. The claims are the application-specific statements to be authenticated. The header describes how the JWT is signed. Lastly, the MAC (signature) is the output of applying the operation described in the header to the message (the combination of the claims and header). All of these are packed into a URL-safe string according to the JWT standard.

The following example uses the previously created HMAC key in AWS KMS within the construction of a JWT. The example’s claims consist of a single small claim and an issuance timestamp. The header contains the key ID of the HMAC key and the name of the HMAC algorithm used. Note that HS256 is the JWT convention used to represent HMAC with a SHA-256 digest. You can generate the MAC using the new GenerateMac API action in AWS KMS.

import base64
import json
import time

def base64_url_encode(data):
	return base64.b64encode(data, b'-_').rstrip(b'=')

# Payload contains simple claim and an issuance timestamp
payload = json.dumps({
	"does_kms_support_hmac": "yes",
	"iat": int(time.time())
}).encode("utf8")

# Header describes the algorithm and AWS KMS key ID to be used for signing
header = json.dumps({
	"typ": "JWT",
	"alg": "HS256",
	"kid": key_id #This key_id is from the “Create an HMAC key in AWS KMS” #example. The “Verify the signed JWT” example will later #assert that the input header has the same value of the #key_id 
}).encode("utf8")

# Message to sign is of form <header_b64>.<payload_b64>
message = base64_url_encode(header) + b'.' + base64_url_encode(payload)

# Generate MAC using GenerateMac API of AWS KMS
mac = kms.generate_mac(
	KeyId=key_id,  # This key_id is from the "Create an HMAC key in AWS KMS" example
	MacAlgorithm='HMAC_SHA_256',
	Message=message
)['Mac']

# Form JWT token of form <header_b64>.<payload_b64>.<mac_b64>
jwt_token = message + b'.' + base64_url_encode(mac)

Verify the signed JWT

Now that you have a signed JWT, you can verify it using the same KMS HMAC key. The example below uses the new VerifyMac API action to validate the MAC (signature) of the JWT. If the MAC is invalid, AWS KMS returns an error response and the AWS SDK throws an exception. If the MAC is valid, the request succeeds and the application can continue to do further processing on the token and its claims.

def base64_url_decode(data):
	return base64.b64decode(data + b'=' * (-len(data) % 4), b'-_')

# Parse out encoded header, payload, and MAC from the token
message, mac_b64 = jwt_token.rsplit(b'.', 1)
header_b64, payload_b64 = message.rsplit(b'.', 1)

# Decode header and verify its contents match expectations
header_map = json.loads(base64_url_decode(header_b64).decode("utf8"))
assert header_map == {
	"typ": "JWT",
	"alg": "HS256",
	"kid": key_id #This key_id is from the “Create an HMAC key in AWS KMS” 
				 #example
}

# Verify the MAC using VerifyMac API of AWS KMS.
# If the verification fails, this will throw an error.
kms.verify_mac(
	KeyId=key_id,  # This key_id is from the "Create an HMAC key in AWS KMS" example
	MacAlgorithm='HMAC_SHA_256',
	Message=message,
	Mac=base64_url_decode(mac_b64)
)

# Decode payload for application-specific validation/processing
payload_map = json.loads(base64_url_decode(payload_b64).decode("utf8"))

Create separate roles to control who has access to generate HMACs and who has access to validate HMACs

It’s often helpful to have separate JWT creators and validators so that you can distinguish between the roles that are allowed to create tokens and the roles that are allowed to verify tokens. HMAC signatures performed outside of AWS KMS don’t work well for this because you can’t isolate creators and verifiers if they both must have a copy of the same key. However, this is not an issue for HMAC keys in AWS KMS. You can use key policies to separate out who has permission to ask AWS KMS to generate HMACs and who has permission to ask AWS KMS to verify them. Each party uses their own unique access keys to access the HMAC key in AWS KMS. Only HSMs in AWS KMS will ever have access to the actual key material. See the following example key policy statements that separate out GenerateMac and VerifyMac permissions:

{
	"Id": "example-jwt-policy",
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "Allow use of the key for creating JWTs",
			"Effect": "Allow",
			"Principal": {
				"AWS": "arn:aws:iam::111122223333:role/JwtProducer"
			},
			"Action": [
				"kms:GenerateMac"
			],
			"Resource": "*"
		},
		{
			"Sid": "Allow use of the key for validating JWTs",
			"Effect": "Allow",
			"Principal": {
				"AWS": "arn:aws:iam::111122223333:role/JwtConsumer"
			},
			"Action": [
				"kms:VerifyMac"
			],
			"Resource": "*"
		}
	]
}
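To show how this separation plays out in practice, here is a hedged sketch in which each party calls AWS KMS with its own credentials; the profile names are hypothetical stand-ins for however the JwtProducer and JwtConsumer roles from the policy above are assumed:

import boto3

# Each party uses credentials for its own role; the named profiles below are
# placeholders for whatever mechanism provides those role credentials
producer_kms = boto3.Session(profile_name="jwt-producer").client("kms")
consumer_kms = boto3.Session(profile_name="jwt-consumer").client("kms")

key_id = "1234abcd-12ab-34cd-56ef-1234567890ab"  # example HMAC key ID
message = b"header.payload"  # the JWT signing input from the earlier examples

# Only the JwtProducer role is allowed to call GenerateMac
mac = producer_kms.generate_mac(
    KeyId=key_id,
    MacAlgorithm="HMAC_SHA_256",
    Message=message
)["Mac"]

# Only the JwtConsumer role is allowed to call VerifyMac; an invalid MAC or a
# denied request raises an exception
consumer_kms.verify_mac(
    KeyId=key_id,
    MacAlgorithm="HMAC_SHA_256",
    Message=message,
    Mac=mac
)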

Conclusion

In this post, you learned about the new HMAC APIs in AWS KMS (GenerateMac and VerifyMac). These APIs complement existing AWS KMS cryptographic operations: symmetric key encryption, asymmetric key encryption and signing, and data key creation and key enveloping. You can use HMACs for JWTs, tokenization, URL and API signing, as a key derivation function (KDF), as well as in new designs that we haven’t even thought of yet. To learn more about HMAC functionality and design, see HMAC keys in AWS KMS in the AWS KMS Developer Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the KMS re:Post or contact AWS Support.
Want more AWS Security news? Follow us on Twitter.

Jeremy Stieglitz

Jeremy is the Principal Product Manager for AWS Key Management Service (KMS) where he drives global product strategy and roadmap for AWS KMS. Jeremy has more than 20 years of experience defining new products and platforms, launching and scaling cryptography solutions, and driving end-to-end product strategies. Jeremy is the author or co-author of 23 patents in network security, user authentication and network automation and control.

Peter Zieske

Peter is a Senior Software Developer on the AWS Key Management Service team, where he works on developing features on the service-side front-end. Outside of work, he enjoys building with LEGO, gaming, and spending time with family.

Sharing security expertise through CodeQL packs (Part I)

Post Syndicated from Andrew Eisenberg original https://github.blog/2022-04-19-sharing-security-expertise-through-codeql-packs-part-i/

Congratulations! You’ve discovered a security bug in your own code before anyone has exploited it. It’s a big relief. You’ve created a CodeQL query to find other places where this happens, and you’ve deployed it to run on every pull request in your repo to prevent similar mistakes from ever being made again.

What’s the best way to share this knowledge with the community to help protect the open source ecosystem by making sure that the same vulnerability is never introduced into anyone’s codebase, ever?

The short answer: produce a CodeQL pack containing your queries, and publish them to GitHub. CodeQL packaging is a beta feature in the CodeQL ecosystem. With CodeQL packaging, your expertise is documented, concise, executable, and easily shareable.

This is the first post of a two-part series on CodeQL packaging. In this post, we show how to use CodeQL packs to share security expertise. In the next post, we will discuss some of our implementation and design decisions.

Modeling a vulnerability in CodeQL

CodeQL’s customizability makes it great for detecting vulnerabilities in your code. Let’s use the Exec call vulnerable to binary planting query as an example. This query was developed by our team in response to discovering a real vulnerability in one of our open source repositories.

The purpose of this query is to detect executables that are potentially vulnerable to Windows binary planting, an exploit where an attacker could inject a malicious executable into a pull request. This query is meant to be evaluated on JavaScript code that is run inside of a GitHub Action. It matches all arguments to calls to the ToolRunner (a GitHub Action API) where the argument has not been sanitized (that is, ensured to be safe) by having been wrapped in a call to safeWhich. The implementation details of this query are not relevant to this post, but you can explore this query and other domain-specific queries like it in the repository.

This query is currently protecting us on every pull request, but in its current form, it is not easily available for others to use. Even though this vulnerability is relatively difficult to exploit, the surface area is large, and it could affect any GitHub Action running on Windows in public repositories that accept pull requests. You could write a stern blog post on the dangers of invoking unqualified Windows executables in untrusted pull requests (maybe you’re even reading such a post right now!), but your impact will be much higher if you can share the query to help anyone find the bug in their code. This is where CodeQL packaging comes in. Using CodeQL packaging, not only can developers easily learn about the binary planting pattern, but they can also automatically apply the pattern to find the bug in their own code.

Sharing queries through CodeQL packs

If you think that your query is general purpose and applicable to all repositories in all situations, then it is best to contribute it to our open source CodeQL query repository (and collect a bounty in the process!). That way, your query will be run on every pull request on every repository that has GitHub code scanning enabled.

However, many (if not most) queries are domain specific and not applicable to all repositories. For example, this particular binary planting query is only applicable to GitHub Actions implemented in JavaScript. The best way to share such queries is by creating a CodeQL pack and publishing it to the CodeQL package registry to make it available to the world. Once published, CodeQL packs are easily shared with others and executed in their CI/CD pipeline.

There are two kinds of CodeQL packs:

  • Query packs, which contain a set of pre-compiled queries that can be easily evaluated on a CodeQL database.
  • Library packs, which contain CodeQL libraries (*.qll files), but do not contain any runnable queries. Library packs are meant to be used as building blocks to produce other query packs or library packs.

In the rest of this post, we will show you how to create, share, and consume a CodeQL query pack. Library packs will be introduced in a future blog post.

To create a CodeQL pack, you’ll need to make sure that you’ve installed and set up the CodeQL CLI. You can follow the instructions here.

The next step is to create a qlpack.yml file. This file declares the CodeQL pack and information about it. Any *.ql files in the same directory (or sub-directory) as a qlpack.yml file are considered part of the package. In this case, you can place binary-planting.ql next to the qlpack.yml file.

Here is the qlpack.yml from our example:

name: aeisenberg/codeql-actions-queries
version: 1.0.1
dependencies:
 codeql/javascript-all: ~0.0.10

All CodeQL packs must have a name property. If they are going to be published to the CodeQL registry, then they must have a scope as part of the name. The scope is the part of the package name before the slash (in this example: aeisenberg). It should be the username or organization on github.com that will own this package. Anyone publishing a package must have the proper privileges to do so for that scope. The name part of the package name must be unique within the scope. Additionally, a version, following standard semver rules, is required for publishing.

The dependencies block lists all of the dependencies of this package and their compatible version ranges. Each dependency is referenced as the scope/name of a CodeQL library pack, and each library pack may in turn depend on other library packs declared in their qlpack.yml files. Each query pack must (transitively) depend on exactly one of the core language packs (for example, JavaScript, C#, Ruby, etc.), which determines the language your query can analyze.

In this query pack, the standard JavaScript library pack, codeql/javascript-all, is the only dependency, and the semver range ~0.0.10 means that any version >= 0.0.10 and < 0.1.0 suffices.

With the qlpack.yml defined, you can now install all of your declared dependencies. Run the codeql pack install command in the root directory of the CodeQL pack:

$ codeql pack install
Dependencies resolved. Installing packages...
Install location: /Users/andrew.eisenberg/.codeql/packages
Installed fresh codeql/[email protected]

After making any changes to the query, you can then publish the query to the GitHub registry. You do this by running the codeql pack publish command in the root of the CodeQL pack.

Here is the output of the command:

$ codeql pack publish
Running on packs: aeisenberg/codeql-actions-queries.
Bundling and then publishing qlpack located at '/Users/andrew.eisenberg/git-repos/codeql-actions-queries'.
Bundled qlpack created at '/var/folders/41/kxmfbgxj40dd2l_x63x9fw7c0000gn/T/codeql-docker17755193287422157173/.Docker Package Manager/codeql-actions-queries.1.0.1.tgz'.
Packaging> Package 'aeisenberg/codeql-actions-queries' will be published to registry 'https://ghcr.io/v2/' as 'aeisenberg/codeql-actions-queries'.
Packaging> Package 'aeisenberg/[email protected]' will be published locally to /Users/andrew.eisenberg/.codeql/packages/aeisenberg/codeql-actions-queries/1.0.1
Publish successful.

You have successfully published your first CodeQL pack! It is now available in the registry on GitHub.com for anyone else to run using the CodeQL CLI. You can view your newly-published package on github.com:

CodeQL pack on github.com

At the time of this writing, packages are initially uploaded as private packages. If you want to make it public, you must explicitly change the permissions. To do this, go to the package page, click on package settings, then scroll down to the Danger Zone:

Danger Zone!

And click Change visibility.

Running queries from CodeQL packs using the CodeQL CLI

Running the queries in a CodeQL pack is simple using the CodeQL CLI. If you already have a database created, just call the codeql database analyze command with the --download option, passing a reference to the package you want to use in your analysis:

$ codeql database analyze --format=sarif-latest --output=out.sarif --download my-db aeisenberg/codeql-actions-queries@^1.0.1

The --download option asks CodeQL to download any CodeQL packs that aren’t already available. The @^1.0.1 is optional and specifies that you want to run the latest version of the package that is compatible with 1.0.1. If no version range is specified, then the latest version is always used. You can also pass a list of packages to evaluate. The CodeQL CLI will download and cache each specified package and then run all queries in their default query suite.

To run a subset of queries in a pack, add a : and a path after it:

aeisenberg/codeql-actions-queries@^1.0.1:binary-planting.ql

Everything after the : is interpreted as a path relative to the root of the pack, and you can specify a single query, a query directory, or a query suite (.qls file).

Evaluating CodeQL packs from code scanning

Running the queries from your CodeQL pack in GitHub code scanning is easy! In your code scanning workflow, in the github/codeql-action/init step, add a packs entry to list the packs you want to run:

- uses: github/codeql-action/init@v1
  with:
    packs:
      - aeisenberg/[email protected]
    languages: javascript

Note that specifying a path after a colon is not yet supported in the codeql-action, so with this approach you can only run the default query suite of a pack.

Conclusion

We’ve shown how easy it is to share your CodeQL queries with the world using two CLI commands: the first resolves and retrieves your dependencies and the second compiles, bundles, and publishes your package.

To recap:

Publishing a CodeQL query pack consists of:

  1. Create the qlpack.yml file.
  2. Run codeql pack install to download dependencies.
  3. Write and test your queries.
  4. Run codeql pack publish to share your package in GHCR.

Using a CodeQL query pack from GHCR on the command line consists of:

  1. codeql database analyze --download path/to/my-db aeisenberg/[email protected]

Using a CodeQL query pack from GHCR in code-scanning consists of:

  1. Adding a config-file input to the github/codeql-action/init action
  2. Adding a packs block in the config file

The CodeQL Team has already published all of our standard queries as query packs, and all of our core libraries as library packs. Any pack named {*}-queries is a query pack and contains queries that can be used to scan your code. Any pack named {*}-all is a library pack and contains CodeQL libraries (*.qll files) that can be used as the building blocks for your queries. When you are creating your own query packs, you should add the library pack for the language that your query will scan as a dependency.

If you are interested in understanding more about how we’ve implemented packaging and some of our design decisions, please check out our second post in this series. Also, if you are interested in learning more or contributing to CodeQL, get involved with the Security Lab.

Sharing your security expertise has never been easier!

Git security vulnerability announced

Post Syndicated from Taylor Blau original https://github.blog/2022-04-12-git-security-vulnerability-announced/

Today, the Git project released new versions which address a pair of security vulnerabilities.

GitHub is unaffected by these vulnerabilities¹. However, you should be aware of them and upgrade your local installation of Git, especially if you are using Git for Windows, or you use Git on a multi-user machine.

CVE-2022-24765

This vulnerability affects users working on multi-user machines where a malicious actor could create a .git directory in a shared location above a victim’s current working directory. On Windows, for example, an attacker could create C:\.git\config, which would cause all git invocations that occur outside of a repository to read its configured values.

Since some configuration variables (such as core.fsmonitor) cause Git to execute arbitrary commands, this can lead to arbitrary command execution when working on a shared machine.

The most effective way to protect against this vulnerability is to upgrade to Git v2.35.2. This version changes Git’s behavior when looking for a top-level .git directory to stop when its directory traversal changes ownership from the current user. (If you wish to make an exception to this behavior, you can use the new multi-valued safe.directory configuration).

If you can’t upgrade immediately, the most effective ways to reduce your risk are the following:

  • Define the GIT_CEILING_DIRECTORIES environment variable to contain the parent directory of your user profile (i.e., /Users on macOS, /home on Linux, and C:\Users on Windows).
  • Avoid running Git on multi-user machines when your current working directory is not within a trusted repository.

Note that many tools (such as the Git for Windows installation of Git Bash, posh-git, and Visual Studio) run Git commands under the hood. If you are on a multi-user machine, avoid using these tools until you have upgraded to the latest release.

Credit for finding this vulnerability goes to 俞晨东.

[source]

CVE-2022-24767

This vulnerability affects the Git for Windows uninstaller, which runs in the user’s temporary directory. Because the SYSTEM user account inherits the default permissions of C:\Windows\Temp (which is world-writable), any authenticated user can place malicious .dll files that are loaded when the Git for Windows uninstaller is run via the SYSTEM account.

The most effective way to protect against this vulnerability is to upgrade to Git for Windows v2.35.2. If you can’t upgrade immediately, reduce your risk with the following:

  • Avoid running the uninstaller until after upgrading
  • Override the SYSTEM user’s TMP environment variable to a directory which can only be written to by the SYSTEM user
  • Remove unknown .dll files from C:\Windows\Temp before running the uninstaller
  • Run the uninstaller under an administrator account rather than as the SYSTEM user

Credit for finding this vulnerability goes to the Lockheed Martin Red Team.

[source]

Download Git 2.35.2


  1. GitHub does not run git outside of known repositories, so is not susceptible to the attack described by CVE-2022-24765. Likewise, GitHub does not use Git for Windows, and so is unaffected by CVE-2022-24767 entirely. 

Git Credential Manager: authentication for everyone

Post Syndicated from Matthew John Cheetham original https://github.blog/2022-04-07-git-credential-manager-authentication-for-everyone/

Universal Git Authentication

“Authentication is hard. Hard to debug, hard to test, hard to get right.” – Me

These words were true when I wrote them back in July 2020, and they’re still true today. The goal of Git Credential Manager (GCM) is to make the task of authenticating to your remote Git repositories easy and secure, no matter where your code is stored or how you choose to work. In short, GCM wants to be Git’s universal authentication experience.

In my last blog post, I talked about the risk of proliferating “universal standards” and how introducing Git Credential Manager Core (GCM Core) would mean yet another credential helper in the wild. I’m therefore pleased to say that we’ve managed to successfully replace both GCM for Windows and GCM for Mac and Linux with the new GCM! The source code of the older projects has been archived, and they are no longer shipped with distributions like Git for Windows!

In order to celebrate and reflect this successful unification, we decided to drop the “Core” moniker from the project’s name to become simply Git Credential Manager or GCM for short.

Git Credential Manager

If you have followed the development of GCM closely, you might have also noticed we have a new home on GitHub in our own organization, github.com/GitCredentialManager!

We felt being homed under github.com/microsoft or github.com/github didn’t quite represent the ethos of GCM as an open, universal and agnostic project. All existing issues and pull requests were migrated, and we continue to welcome everyone to contribute to the project.

GCM Home

Interacting with HTTP remotes without the help of a credential helper like GCM is becoming more difficult with the removal of username/password authentication at GitHub and Bitbucket. Using GCM makes it easy, and with exciting developments such as using GitHub Mobile for two-factor authentication and OAuth device code flow support, we are making authentication more seamless.

Hello, Linux!

In the quest to become a universal solution for Git authentication, we’ve worked hard on getting GCM to work well on various Linux distributions, with a primary focus on Debian-based distributions.

Today we have Debian packages available to download from our GitHub releases page, as well as tarballs for other distributions (64-bit Intel only). Being built on the .NET platform means there should be a reduced effort to build and run anywhere the .NET runtime runs. Over time, we hope to expand our support matrix of distributions and CPU architectures (by adding ARM64 support, for example).

Due to the broad and varied nature of Linux distributions, it’s important that GCM offers many different credential storage options. In addition to GPG encrypted files, we added support for the Secret Service API via libsecret (also see the GNOME Keyring), which provides a similar experience to what we provide today in GCM on Windows and macOS.

Windows Subsystem for Linux

In addition to Linux distributions, we also have special support for using GCM with Windows Subsystem for Linux (WSL). Using GCM with WSL means that all your WSL installations can share Git credentials with each other and the Windows host, enabling you to easily mix and match your development environments.

Easily mix and match your development environments

You can read more about using GCM inside of your WSL installations here.

Hello, GitLab

Being universal doesn’t just mean we want to run in more places, but also that we can help more users with whatever Git hosting service they choose to use. We are very lucky to have such an engaged community that is constantly working to make GCM better for everyone.

On that note, I am thrilled to share that through a community contribution, GCM now has support for GitLab.  Welcome to the family!

GCM for everyone

Look Ma, no terminals!

We love the terminal and so does GCM. However, we know that not everyone feels comfortable typing in commands and responding to prompts via the keyboard. Also, many popular tools and IDEs that offer Git integration do so by shelling out to the git executable, which means GCM may be called upon to perform authentication from a GUI app where there is no terminal(!)

GCM has always offered full graphical authentication prompts on Windows, but thanks to our adoption of the Avalonia project that provides a cross-platform .NET XAML framework, we can now present graphical prompts on macOS and Linux.

GCM continues to support terminal prompts as a first-class option for all prompts. We detect environments where there is no GUI (such as when connected over SSH without display forwarding) and instead present the equivalent text-based prompts. You can also manually disable the GUI prompts if you wish.

Securing the software supply chain

Keeping your source code secure is a critical step in maintaining trust in software, whether that be keeping commercially sensitive source code away from prying eyes or protecting against malicious actors making changes in both closed and open source projects that underpin much of the modern world.

In 2020, an extensive cyberattack was exposed that impacted parts of the US federal government as well as several major software companies. The US president’s recent executive order in response to this cyberattack brings into focus the importance of mechanisms such as multi-factor authentication, conditional access policies, and generally securing the software supply chain.

Store ALL the credentials

Git Credential Manager creates and stores credentials to access Git repositories on a host of platforms. We hold in the highest regard the need to keep your credentials and access secure. That’s why we always keep your credentials stored using industry standard encryption and storage APIs.

GCM makes use of the Windows Credential Manager on Windows and the login keychain on macOS.

In addition to these existing mechanisms, we also support several alternatives across supported platforms, giving you the choice of how and where you wish to store your generated credentials (such as GPG-encrypted credential files).

Store all your credentials

GCM can now also use Git’s git-credential-cache helper that is commonly built and available in many Git distributions. This is a great option for cloud shells or ephemeral environments when you don’t want to persist credentials permanently to disk but still want to avoid a prompt for every git fetch or git push.

Modern Windows authentication (experimental)

Another way to keep your credentials safe at rest is with hardware-level support through technologies like the Trusted Platform Module (TPM) or Secure Enclave. Additionally, enterprises wishing to make sure your device or credentials have not been compromised may want to enforce conditional access policies.

Integrating with these kinds of security modules or enforcing policies can be tricky and is platform-dependent. It’s often easier for applications to hand over responsibility for credential acquisition, storage, and policy enforcement to an authentication broker.

An authentication broker performs credential negotiation on behalf of an app, simplifying many of these problems, and often comes with the added benefit of deeper integration with operating system features such as biometrics.

Authentication broker diagram

I’m happy to announce that GCM has gained experimental support for brokered authentication (Windows-only at the moment)!

On Windows, the authentication broker is a component that was first introduced in Windows 10 and is known as the Web Account Manager (WAM). WAM enables apps like GCM to support modern authentication experiences such as Windows Hello and will apply conditional access policies set by your work or school.

Please note that support for the Windows broker is currently experimental and limited to authentication of Microsoft work and school accounts against Azure DevOps.

Click here to read more about GCM and WAM, including how to opt-in and current known issues.

Even more improvements

GCM has been a hive of activity in the past 18 months, with too many new features and improvements to talk about in detail! Here’s a quick rundown of additional updates since our July 2020 post:

  • Automatic on-premises/self-hosted instance detection
  • GitHub Enterprise Server and GitHub AE support
  • Shared Microsoft Identity token caches with other developer tools
  • Improved network proxy support
  • Custom TLS/SSL root certificate support
  • Admin-less Windows installer
  • Improved command line handling and output
  • Enterprise default setting support on Windows
  • Multi-user support
  • Better diagnostics

Thank you!

The GCM team would also like to personally thank all the people who have made contributions, both large and small, to the project:

@vtbassmatt, @kyle-rader, @mminns, @ldennington, @hickford, @vdye, @AlexanderLanin, @derrickstolee, @NN, @johnemau, @karlhorky, @garvit-joshi, @jeschu1, @WormJim, @nimatt, @parasychic, @cjsimon, @czipperz, @jamill, @jessehouwing, @shegox, @dscho, @dmodena, @geirivarjerstad, @jrbriggs, @Molkree, @4brunu, @julescubtree, @kzu, @sivaraam, @mastercoms, @nightowlengineer

Future work

While we’ve made a great deal of progress toward our universal experience goal, we’re not slowing down anytime soon; we’re still full steam ahead with GCM!

Our focus for the next period will be on iterating and improving our authentication broker support, providing stronger protection of credentials, and looking to increase performance and compatibility with more environments and uses.

Prevent the introduction of known vulnerabilities into your code

Post Syndicated from Courtney Claessens original https://github.blog/2022-04-06-prevent-introduction-known-vulnerabilities-into-your-code/

Understanding your supply chain is critical to maintaining the security of your software. Dependabot already alerts you when vulnerabilities are found in your existing dependencies, but what if you add a new dependency with a vulnerability? With the dependency review action, you can proactively block pull requests that introduce dependencies with known vulnerabilities.

How it works

The GitHub Action automates finding and blocking vulnerabilities that are currently only displayed in the rich diff of a pull request. When you add the dependency review action to your repository, it will scan your pull requests for dependency changes. Then, it will check the GitHub Advisory Database to see if any of the new dependencies have existing vulnerabilities. If they do, the action will raise an error so that you can see which dependency has a vulnerability and implement the fix with the contextual intelligence provided. The action is supported by a new API endpoint that diffs the dependencies between any two revisions.
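As a rough illustration of the check the action performs, the hedged sketch below calls the dependency graph compare endpoint of the GitHub REST API directly; the repository, refs, and token are placeholders, and the endpoint path and response fields shown should be confirmed against the current API documentation:

import requests

# Placeholder values for illustration only
owner, repo = "octocat", "hello-world"
basehead = "main...my-feature-branch"  # BASE...HEAD revision range
token = "<token with access to the repository>"

url = f"https://api.github.com/repos/{owner}/{repo}/dependency-graph/compare/{basehead}"
resp = requests.get(url, headers={
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {token}",
})
resp.raise_for_status()

# Flag newly added dependencies that carry known vulnerabilities
for change in resp.json():
    if change.get("change_type") == "added" and change.get("vulnerabilities"):
        print(change["name"], change["version"], "introduces known vulnerabilities")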

Demo of dependency review enforcement

The action can be found on GitHub Marketplace and in your repository’s Actions tab under the Security heading. It is available for all public repositories, as well as private repositories that have GitHub Advanced Security licensed.

We’re continuously improving the experience

While we’re currently in public beta, we’ll be adding functionality to give you more control over what causes the action to fail, such as setting criteria on vulnerability severity, license type, or other factors. We’re also improving how failed action runs are surfaced in the UI and increasing flexibility around when the action is executed.

If you have feedback or questions

We’re very keen to hear any and all feedback! Pop into the feedback discussion, and let us know how the new action is working for you, and how you’d like to see it grow.

For more information, visit the action and the documentation.

Best practices: Securing your Amazon Location Service resources

Post Syndicated from Dave Bailey original https://aws.amazon.com/blogs/security/best-practices-securing-your-amazon-location-service-resources/

Location data is subject to heavy scrutiny by security experts. Knowing the current position of a person, vehicle, or asset can provide industries with many benefits, whether that is understanding where a current delivery is, how many people are inside a venue, or how to optimize routing for a fleet of vehicles. This blog post explains how Amazon Web Services (AWS) helps keep location data secure in transit and at rest, and how you can leverage additional security features to help keep information safe and compliant.

The General Data Protection Regulation (GDPR) defines personal data as “any information relating to an identified or identifiable natural person (…) such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.” Also, many companies wish to improve transparency to users, making it explicit when a particular application wants to not only track their position and data, but also to share that information with other apps and websites. Your organization needs to adapt to these changes quickly to maintain a secure stance in a competitive environment.

On June 1, 2021, AWS made Amazon Location Service generally available to customers. With Amazon Location, you can build applications that provide maps and points of interest, convert street addresses into geographic coordinates, calculate routes, track resources, and invoke actions based on location. The service enables you to access location data with developer tools and to move your applications to production faster with monitoring and management capabilities.

In this blog post, we will show you the features that Amazon Location provides out of the box to keep your data safe, along with best practices that you can follow to reach the level of security that your organization strives to accomplish.

Data control and data rights

Amazon Location relies on global trusted providers Esri and HERE Technologies to provide high-quality location data to customers. Features like maps, places, and routes are provided by these AWS Partners so solutions can have data that is not only accurate but constantly updated.

AWS anonymizes and encrypts location data at rest and during its transmission to partner systems. In addition, under our service terms, third parties cannot sell your data or use it for advertising purposes. This helps you shield sensitive information, protect user privacy, and reduce organizational compliance risks. To learn more, see the Amazon Location Data Security and Control documentation.

Integrations

Operationalizing location-based solutions can be daunting. It’s not just necessary to build the solution, but also to integrate it with the rest of your applications that are built in AWS. Amazon Location facilitates this process from a security perspective by integrating with services that expedite the development process, enhancing the security aspects of the solution.

Encryption

Amazon Location uses AWS owned keys by default to automatically encrypt personally identifiable data. AWS owned keys are a collection of AWS Key Management Service (AWS KMS) keys that an AWS service owns and manages for use in multiple AWS accounts. Although AWS owned keys are not in your AWS account, Amazon Location can use the associated AWS owned keys to protect the resources in your account.

If customers choose to use their own keys, they can benefit from AWS KMS to store their own encryption keys and use them to add a second layer of encryption to geofencing and tracking data.

Authentication and authorization

Amazon Location also integrates with AWS Identity and Access Management (IAM), so that you can use its identity-based policies to specify allowed or denied actions and resources, as well as the conditions under which actions are allowed or denied on Amazon Location. Also, for actions that require unauthenticated access, you can use unauthenticated IAM roles.

As an extension to IAM, Amazon Cognito can be an option if you need to integrate your solution with a front-end client that authenticates users with its own process. In this case, you can use Cognito to handle the authentication, authorization, and user management for you. You can use Cognito unauthenticated identity pools with Amazon Location as a way for applications to retrieve temporary, scoped-down AWS credentials. To learn more about setting up Cognito with Amazon Location, see the blog post Add a map to your webpage with Amazon Location Service.
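As a hedged illustration of that flow, the following sketch exchanges an unauthenticated Cognito identity for temporary, scoped-down credentials and then calls Amazon Location; the identity pool ID and map name are placeholders:

import boto3

# Placeholder values for illustration only
identity_pool_id = "us-west-2:11111111-2222-3333-4444-555555555555"
map_name = "MyMap"
region = "us-west-2"

# Obtain an unauthenticated identity and temporary, scoped-down credentials
cognito = boto3.client("cognito-identity", region_name=region)
identity_id = cognito.get_id(IdentityPoolId=identity_pool_id)["IdentityId"]
creds = cognito.get_credentials_for_identity(IdentityId=identity_id)["Credentials"]

# Use the temporary credentials to call Amazon Location
location = boto3.client(
    "location",
    region_name=region,
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"]
)

# Retrieve the style descriptor for a map resource the unauthenticated role can access
style = location.get_map_style_descriptor(MapName=map_name)
print(style["ContentType"])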

Limit the scope of your unauthenticated roles to a domain

When you are building an application that allows users to perform actions such as retrieving map tiles, searching for points of interest, updating device positions, and calculating routes without needing them to be authenticated, you can make use of unauthenticated roles.

When using unauthenticated roles to access Amazon Location resources, you can add an extra condition to limit resource access to an HTTP referer that you specify in the policy. The aws:referer request context value is provided by the caller in an HTTP header, and it is included in a web browser request.

The following is an example of a policy that allows access to a Map resource by using the aws:referer condition, but only if the request comes from the domain example.com.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MapsReadOnly",
      "Effect": "Allow",
      "Action": [
        "geo:GetMapStyleDescriptor",
        "geo:GetMapGlyphs",
        "geo:GetMapSprites",
        "geo:GetMapTile"
      ],
      "Resource": "arn:aws:geo:us-west-2:111122223333:map/MyMap",
      "Condition": {
        "StringLike": {
          "aws:Referer": "https://www.example.com/*"
        }
      }
    }
  ]
}

To learn more about aws:referer and other global conditions, see AWS global condition context keys.

Encrypt tracker and geofence information using customer managed keys with AWS KMS

When you create your tracker and geofence collection resources, you have the option to use a symmetric customer managed key to add a second layer of encryption to geofencing and tracking data. Because you have full control of this key, you can establish and maintain your own IAM policies, manage key rotation, and schedule keys for deletion.

After you create your resources with customer managed keys, the geometry of your geofences and all positions associated to a tracked device will have two layers of encryption. In the next sections, you will see how to create a key and use it to encrypt your own data.

Create an AWS KMS symmetric key

First, you need to create a key policy that will limit the AWS KMS key to allow access to principals authorized to use Amazon Location and to principals authorized to manage the key. For more information about specifying permissions in a policy, see the AWS KMS Developer Guide.

To create the key policy

Create a JSON policy file by using the following policy as a reference. This key policy allows Amazon Location to grant access to your KMS key only when it is called from your AWS account. This works by combining the kms:ViaService and kms:CallerAccount conditions. In the following policy, replace us-west-2 with your AWS Region of choice, and the kms:CallerAccount value with your AWS account ID. Adjust the KMS Key Administrators statement to reflect your actual key administrators’ principals, including yourself. For details on how to use the Principal element, see the AWS JSON policy elements documentation.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Amazon Location",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "kms:DescribeKey",
        "kms:CreateGrant"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:ViaService": "geo.us-west-2.amazonaws.com",
          "kms:CallerAccount": "111122223333"
        }
      }
    },
    {
      "Sid": "Allow access for Key Administrators",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::111122223333:user/KMSKeyAdmin"
      },
      "Action": [
        "kms:Create*",
        "kms:Describe*",
        "kms:Enable*",
        "kms:List*",
        "kms:Put*",
        "kms:Update*",
        "kms:Revoke*",
        "kms:Disable*",
        "kms:Get*",
        "kms:Delete*",
        "kms:TagResource",
        "kms:UntagResource",
        "kms:ScheduleKeyDeletion",
        "kms:CancelKeyDeletion"
      ],
      "Resource": "*"
    }
  ]
}

For the next steps, you will use the AWS Command Line Interface (AWS CLI). Make sure to have the latest version installed by following the AWS CLI documentation.

Tip: The AWS CLI uses the Region you defined as the default during the configuration steps, but you can override this configuration by adding --region <your region> at the end of each of the following commands. Also, make sure that your user has the appropriate permissions to perform those actions.

To create the symmetric key

Now, create a symmetric key on AWS KMS by running the create-key command and passing the policy file that you created in the previous step.

aws kms create-key --policy file://<your JSON policy file>

Alternatively, you can create the symmetric key using the AWS KMS console with the preceding key policy.

After running the command, you should see the following output. Take note of the KeyId value.

{
  "KeyMetadata": {
    "Origin": "AWS_KMS",
    "KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab",
    "Description": "",
    "KeyManager": "CUSTOMER",
    "Enabled": true,
    "CustomerMasterKeySpec": "SYMMETRIC_DEFAULT",
    "KeyUsage": "ENCRYPT_DECRYPT",
    "KeyState": "Enabled",
    "CreationDate": 1502910355.475,
    "Arn": "arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    "AWSAccountId": "111122223333",
    "MultiRegion": false
    "EncryptionAlgorithms": [
      "SYMMETRIC_DEFAULT"
    ],
  }
}

Create Amazon Location tracker and geofence collection resources

To create an Amazon Location tracker resource that uses AWS KMS for a second layer of encryption, run the following command, passing the key ID from the previous step.

aws location \
	create-tracker \
	--tracker-name "MySecureTracker" \
	--kms-key-id "1234abcd-12ab-34cd-56ef-1234567890ab"

Here is the output from this command.

{
    "CreateTime": "2021-07-15T04:54:12.913000+00:00",
    "TrackerArn": "arn:aws:geo:us-west-2:111122223333:tracker/MySecureTracker",
    "TrackerName": "MySecureTracker"
}

Similarly, to create a geofence collection that uses your own KMS symmetric key, run the following command, again passing the key ID.

aws location \
	create-geofence-collection \
	--collection-name "MySecureGeofenceCollection" \
	--kms-key-id "1234abcd-12ab-34cd-56ef-1234567890ab"

Here is the output from this command.

{
    "CreateTime": "2021-07-15T04:54:12.913000+00:00",
    "CollectionArn": "arn:aws:geo:us-west-2:111122223333:geofence-collection/MySecureGeofenceCollection",
    "CollectionName": "MySecureGeofenceCollection"
}

By following these steps, you have added a second layer of encryption to your geofence collection and tracker.

Data retention best practices

Tracker and geofence collection data is stored by the service and never leaves your AWS account without your permission, but the two resource types have different lifecycles on Amazon Location.

Trackers store the positions of tracked devices and assets in longitude/latitude format. The service retains these positions for 30 days before automatically deleting them. If you need the data for historical purposes, you can transfer it to another data storage layer and apply the proper security measures based on the shared responsibility model, as shown in the following example.
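
As a sketch of what such a transfer could look like, the following AWS CLI call retrieves the stored position history for a single device from the tracker created earlier in this post, so that you can archive it in a data store that you control. The device ID shown is a hypothetical placeholder.

# Export the stored position history for one device before the 30-day expiry
aws location get-device-position-history \
    --tracker-name "MySecureTracker" \
    --device-id "thermometer-123" \
    > device-history.json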

Geofence collections store the geometries you provide until you explicitly choose to delete them, so you can use encryption with AWS managed keys or your own keys to keep them for as long as needed.

Asset tracking and location storage best practices

After a tracker is created, you can start sending location updates by using the Amazon Location front-end SDKs or by calling the BatchUpdateDevicePosition API. In both cases, at a minimum, you need to provide the latitude and longitude, the time when the device was at that position, and a unique device identifier that represents the asset being tracked.
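
For illustration, here is one way to send a single update with the AWS CLI; the device ID, coordinates (in [longitude, latitude] order), and timestamp are placeholder values.

aws location batch-update-device-position \
    --tracker-name "MySecureTracker" \
    --updates '[{"DeviceId":"thermometer-123","Position":[-122.3321,47.6062],"SampleTime":"2021-07-15T04:54:12Z"}]'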

Protecting device IDs

This device ID can be any string of your choice, so you should apply measures to prevent identifiers that expose personal information from being used. Some examples of what to avoid include:

  • First and last names
  • Facility names
  • Document numbers, such as driver’s license or Social Security numbers
  • Emails
  • Addresses
  • Telephone numbers

Latitude and longitude precision

Latitude and longitude coordinates are expressed in decimal degrees, and each additional decimal place represents a progressively finer measure of distance (when measured at the equator).

Amazon Location supports up to six decimal places of precision (0.000001), which is equal to approximately 11 cm or 4.4 inches at the equator. You can limit the number of decimal places in the latitude and longitude pair that is sent to the tracker based on the precision required, reducing the location granularity and providing extra privacy to users.
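
For example, the following purely illustrative shell snippet rounds a coordinate pair to four decimal places, which corresponds to roughly 11 meters of precision at the equator, before the pair is sent to the tracker.

# Round latitude/longitude to four decimal places (~11 m at the equator)
printf "%.4f %.4f\n" 47.620506 -122.349277
# Output: 47.6205 -122.3493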

Figure 1 shows a latitude and longitude pair, with the level of detail associated with each decimal place.

Figure 1: Geolocation decimal precision details

Position filtering

Amazon Location offers position filtering as an option on trackers that enables cost reduction and reduces jitter from inaccurate device location updates.

  • DistanceBased filtering ignores location updates in which the device has moved less than 30 meters (98.4 ft).
  • TimeBased filtering evaluates every location update against linked geofence collections, but not every location update is stored. If your update frequency is more often than 30 seconds, then only one update per 30 seconds is stored for each unique device ID.
  • AccuracyBased filtering ignores location updates if the distance moved was less than the measured accuracy provided by the device.

By using filtering options, you can reduce the number of location updates that are sent and stored, thus reducing the level of location detail provided and increasing the level of privacy.
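
As a sketch, the following AWS CLI call enables DistanceBased filtering on the tracker created earlier in this post; the same position-filtering option can also be set when you create the tracker.

aws location update-tracker \
    --tracker-name "MySecureTracker" \
    --position-filtering "DistanceBased"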

Logging and monitoring

Amazon Location integrates with AWS services that provide the observability needed to help you comply with your organization’s security standards.

To record all actions that were taken by users, roles, or AWS services that access Amazon Location, consider using AWS CloudTrail. CloudTrail provides information on who is accessing your resources, detailing the account ID, principal ID, source IP address, timestamp, and more. Moreover, Amazon CloudWatch helps you collect and analyze metrics related to your Amazon Location resources. CloudWatch also allows you to create alarms based on pre-defined thresholds of call counts. These alarms can create notifications through Amazon Simple Notification Service (Amazon SNS) to automatically alert teams responsible for investigating abnormalities.
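
As a quick example, the following AWS CLI call lists recent management events that CloudTrail recorded for Amazon Location, assuming the geo.amazonaws.com event source; you can review these events directly or feed them into your own monitoring tooling.

# List recent management events recorded for Amazon Location
aws cloudtrail lookup-events \
    --lookup-attributes AttributeKey=EventSource,AttributeValue=geo.amazonaws.com \
    --max-results 10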

Conclusion

At AWS, security is our top priority. Security and compliance are a shared responsibility between AWS and the customer: AWS is responsible for protecting the infrastructure that runs all of the services offered in the AWS Cloud, while the customer is responsible for performing the necessary security configuration of the solutions they build on top of that infrastructure.

In this blog post, you’ve learned about the controls and guardrails that Amazon Location provides out of the box to help ensure data privacy and data protection for our customers. You also learned about other mechanisms you can use to enhance your security posture.

Start building your own secure geolocation solutions by following the Amazon Location Developer Guide and learn more about how the service handles security by reading the security topics in the guide.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on Amazon Location Service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Rafael Leandro Junior

Rafael Leandro, Junior

Rafael Leandro, Junior, is a senior global solutions architect who currently focuses on the consumer packaged goods and transportation industries. He helps large global customers on their journeys with AWS.

David Bailey

David Bailey

David Bailey is a senior security consultant who helps AWS customers achieve their cloud security goals. He has a passion for building new technologies and providing mentorship for others.

The end of the road for Cloudflare CAPTCHAs

Post Syndicated from Reid Tatoris original https://blog.cloudflare.com/end-cloudflare-captcha/

There is no point in rehashing the fact that CAPTCHA provides a terrible user experience. It’s been discussed in detail before on this blog, and countless times elsewhere. One of the creators of the CAPTCHA has publicly lamented that he “unwittingly created a system that was frittering away, in ten-second increments, millions of hours of a most precious resource: human brain cycles.” We don’t like them, and you don’t like them.

So we decided we’re going to stop using CAPTCHAs. Using an iterative platform approach, we have already reduced the number of CAPTCHAs we choose to serve by 91% over the past year.

Before we talk about how we did it, and how you can help, let’s first start with a simple question.

Why in the world is CAPTCHA still used anyway?

If everyone agrees CAPTCHA is so bad, if there have been calls to get rid of it for 15 years, if the creator regrets creating it, why is it still widely used?

The frustrating truth is that CAPTCHA remains an effective tool for differentiating real human users from bots despite the existence of CAPTCHA-solving services. Of course, this comes with a huge trade off in terms of usability, but generally the alternatives to CAPTCHA are blocking or allowing traffic, which will inherently increase either false positives or false negatives. With a choice between increased errors and a poor user experience (CAPTCHA), many sites choose CAPTCHA.

CAPTCHAs are also a safe choice because so many other sites use them. They delegate abuse response to a third party, and remove the risk from the website with a simple integration. Using the most common solution will rarely get you into trouble. Plug, play, forget.

Lastly, CAPTCHA is useful because it has a long history of a known and stable baseline. We’ve tracked a metric called CAPTCHA (or Challenge) Solve Rate for many years. CAPTCHA solve rate is the number of CAPTCHAs solved, divided by the number of page loads. For our purposes, both failing and not attempting to solve the CAPTCHA count as a failure, since in either case a user cannot access the content they want. We find this metric is typically stable for any particular website. That is, if the solve rate is 1%, it tends to remain at 1% over time. We also find that any change in solve rate – up or down – is a strong indicator of an attack in progress. Customers can monitor the solve rate and create alerts to notify them when it changes, then investigate what might be happening.

Many alternatives to CAPTCHA have been tried, including our own Cryptographic Attestation. However, to date, none have seen the amount of widespread adoption of CAPTCHAs. We believe attempting to replace CAPTCHA with a single alternative is the main reason why. When you replace CAPTCHA, you lose the stable history of the solve rate, and making decisions becomes more difficult. If you switch from deciphering text to picking images, you will get vastly different results. How do you know if those results are good or bad? So, we took a different approach.

Many solutions, not one

Rather than try to unilaterally deprecate and replace CAPTCHA with a single alternative, we built a platform to test many alternatives and see which had the best potential to replace CAPTCHA. We call this Cloudflare Managed Challenge.

Managed Challenge is a smarter solution than CAPTCHA. It defers the decision about whether to serve a visual puzzle to a later point in the flow, after more information is available from the browser. Previously, a Cloudflare customer could only choose either a CAPTCHA or a JavaScript Challenge as the action of a security or firewall rule. Now, the Managed Challenge option decides whether to show a visual puzzle or another means of proving humanness to visitors, based on the client behavior exhibited during a challenge and on the telemetry we receive from the visitor. A customer simply tells us, “I want you (Cloudflare) to take appropriate actions to challenge this type of traffic as you see necessary.”

With Managed Challenge, we adapt the actual challenge outcome to the individual visitor/browser. As a result, we can fine-tune the difficulty of the challenge itself and avoid showing visual puzzles to more than 90% of human requests, while at the same time presenting harder challenges to visitors that exhibit non-human behaviors.

When a visitor encounters a Managed Challenge, we first run a series of small non-interactive JavaScript challenges gathering more signals about the visitor/browser environment. This means we deploy in-browser detections and challenges at the time the request is made. Challenges are selected based on what characteristics the visitor emits and based on the initial information we have about the visitor. Those challenges include, but are not limited to, proof-of-work, proof-of-space, probing for web APIs, and various challenges for detecting browser-quirks and human behavior.

They also include machine learning models that detect common features of end visitors who were able to pass a CAPTCHA before. The computational hardness of those initial challenges may vary by visitor, but is targeted to run fast. Managed Challenge is also integrated into the Cloudflare Bot Management and Super Bot Fight Mode systems by consuming signals and data from the bot detections.

After our non-interactive challenges have been run, we evaluate the gathered signals. If by the combination of those signals we are confident that the visitor is likely human, no further action is taken, and the visitor is redirected to the destined page without any interaction required. However, in some cases, if the signal is weak, we present a visual puzzle to the visitor to prove their humanness. In the context of Managed Challenge, we’re also experimenting with other privacy-preserving means of attesting humanness, to continue reducing the portion of time that Managed Challenge uses a visual puzzle step.

We started testing Managed Challenge last year, and initially, we chose from a rotating subset of challenges, one of them being CAPTCHA. At the start, CAPTCHA was still used in the vast majority of cases. We compared the solve rate for the new challenge in question, with the existing, stable solve rate for CAPTCHA. We thus used CAPTCHA solve rate as a goal to work towards as we improved our CAPTCHA alternatives, getting better and better over time. The challenge platform allows our engineers to easily create, deploy, and test new types of challenges without impacting customers. When a challenge turns out to not be useful, we simply deprecate it. When it proves to be useful, we increase how often it is used. In order to preserve ground-truth, we also randomly choose a small subset of visitors to always solve a visual puzzle to validate our signals.

Managed Challenge performs better than CAPTCHA

The Challenge Platform now has the same stable solve rate as previously used CAPTCHAs.

Using an iterative platform approach, we have reduced the number of CAPTCHAs we serve by 91%. This is only the start. By the end of the year, we will reduce our use of CAPTCHA as a challenge to less than 1%. By skipping the visual puzzle step for almost all visitors, we are able to reduce the visitor time spent in a challenge from an average of 32 seconds to an average of just one second to run our non-interactive challenges. We also see churn improvements: our telemetry indicates that visitors with human properties are 31% less likely to abandon a Managed Challenge than on the traditional CAPTCHA action.

Today, the Managed Challenge platform rotates between many challenges. A Managed Challenge instance consists of many sub-challenges: some of them are established and effective, whereas others are new challenges we are experimenting with. All of them are much, much faster and easier for humans to complete than CAPTCHA, and almost always require no interaction from the visitor.

Managed Challenge replaces CAPTCHA for Cloudflare

We have now deployed Managed Challenge across the entire Cloudflare network. Any time we show a CAPTCHA to a visitor, it’s via the Managed Challenge platform, and only as a benchmark to confirm our other challenges are performing as well.

All Cloudflare customers can now choose Managed Challenge as a response option to any Firewall rule instead of CAPTCHA. We’ve also updated our dashboard to encourage all Cloudflare customers to make this choice.

You’ll notice that we changed the name of the CAPTCHA option to ‘Legacy CAPTCHA’. This more accurately describes what CAPTCHA is: an outdated tool that we don’t think people should use. As a result, the usage of CAPTCHA across the Cloudflare network has dropped significantly, and usage of managed challenge has increased dramatically.

As noted above, today CAPTCHA represents 9% of Managed Challenge solves (light blue), but that number will decrease to less than 1% by the end of the year. You’ll also see the gray bar above, which shows when our customers have chosen to show a CAPTCHA as a response to a Firewall rule triggering. We want that number to go to zero, but the good news is that 63% of customers now choose Managed Challenge rather than CAPTCHA when they create a Firewall rule with a challenge response action.

We expect this number to increase further over time.

If you’re using the Cloudflare WAF, log into the Dashboard today and look at all of your Firewall rules. If any of your rules are using “Legacy CAPTCHA” as a response, please change it now! Select the “Managed Challenge” response option instead. You’ll give your users a better experience, while maintaining the same level of protection you have today. If you’re not currently a Cloudflare customer, stay tuned for ways you can reduce your own use of CAPTCHA.

WAF mitigations for Spring4Shell

Post Syndicated from Michael Tremante original https://blog.cloudflare.com/waf-mitigations-sping4shell/

A set of high profile vulnerabilities have been identified affecting the popular Java Spring Framework and related software components – generally being referred to as Spring4Shell.

Four CVEs have been released so far and are being actively updated as new information emerges. These vulnerabilities, each covered below, can result, in the worst case, in full remote code execution (RCE) compromise.

Customers using Java Spring and related software components, such as the Spring Cloud Gateway, should immediately review their software and update to the latest versions by following the official Spring project guidance.

The Cloudflare WAF team is actively monitoring these CVEs and has already deployed a number of new managed mitigation rules. Customers should review the rules listed below to ensure they are enabled while also patching the underlying Java Spring components.

CVE-2022-22947

A new rule has been developed and deployed for this CVE with an emergency release on March 29:

Managed Rule Spring – CVE:CVE-2022-22947

  • WAF rule ID: e777f95584ba429796856007fbe6c869
  • Legacy rule ID: 100522

Note that the above rule is disabled by default and may cause some false positives. We advise customers to review rule matches or to deploy the rule with a LOG action before switching to BLOCK.

CVE-2022-22950

Currently, available PoCs are blocked by the following rule:

Managed Rule PHP – Code Injection

  • WAF rule ID: 55b100786189495c93744db0e1efdffb
  • Legacy rule ID: PHP100011

CVE-2022-22963

Currently, available PoCs are blocked by the following rule:

Managed Rule Plone – Dangerous File Extension

  • WAF rule ID: aa3411d5505b4895b547d68950a28587
  • Legacy WAF ID: PLONE0001

We also deployed a new rule via an emergency release on March 31 (today at time of writing) to cover additional variations attempting to exploit this vulnerability:

Managed Rule Spring – Code Injection

  • WAF rule ID: d58ebf5351d843d3a39a4480f2cc4e84
  • Legacy WAF ID: 100524

Note that the newly released rule is disabled by default and may cause some false positives. We advise customers to review rule matches or to deploy the rule with a LOG action before switching to BLOCK.

Additionally, customers can receive protection against this CVE by deploying the Cloudflare OWASP Core Ruleset with default or better settings on our new WAF. Customers using our legacy WAF will have to configure a high OWASP sensitivity level.

CVE-2022-22965

We are currently investigating this recent CVE and will provide an update to our Managed Ruleset as soon as possible if an applicable mitigation strategy or bypass is found. Please review and monitor our public facing change log.

Future-proofing SaltStack

Post Syndicated from Lenka Mareková original https://blog.cloudflare.com/future-proofing-saltstack/

At Cloudflare, we are preparing the Internet and our infrastructure for the arrival of quantum computers. A sufficiently large and stable quantum computer will easily break commonly deployed cryptography such as RSA. Luckily there is a solution: we can swap out the vulnerable algorithms with so-called post-quantum algorithms that are believed to be secure even against quantum computers. For a particular system, this means that we first need to figure out which cryptography is used, for what purpose, and under which (performance) constraints. Most systems use the TLS protocol in a standard way, and there a post-quantum upgrade is routine. However, some systems such as SaltStack, the focus of this blog post, are more interesting. This blog post chronicles our path of making SaltStack quantum-secure, so welcome to this adventure: this secret extra post-quantum blog post!

SaltStack, or simply Salt, is an open-source infrastructure management tool used by many organizations. At Cloudflare, we rely on Salt for provisioning and automation, and it has allowed us to grow our infrastructure quickly.

Salt uses a bespoke cryptographic protocol to secure its communication. Thus, the first step to a post-quantum Salt was to examine what the protocol was actually doing. In the process we discovered a number of security vulnerabilities (CVE-2022-22934, CVE-2022-22935, CVE-2022-22936). This blog post chronicles the investigation, our findings, and how we are helping secure Salt now and in the quantum future.

Cryptography in Salt

Let’s start with a high-level overview of Salt.

The main agents in a Salt system are servers and clients (referred to as masters and minions in the Salt documentation). A server is the central control point for a number of clients, which can be in the tens of thousands: it can issue a command to the entire fleet, provision client machines with different characteristics, collect reports on jobs running on each machine, and much more. Depending on the architecture, there can be multiple servers managing the same fleet of clients. This is what makes Salt great, as it helps the management of complex infrastructure.

By default, the communication between a server and a client happens over ZeroMQ on top of TCP though there is an experimental option to switch to a custom transport directly on TCP. The cryptographic protocol is largely the same for both transports. The experimental TCP transport has an option to enable TLS which does not replace the custom protocol but wraps it in server-authenticated TLS. More about that later on.

The custom protocol relies on a setup phase in which each server and each client has its own long-term RSA-2048 keypair. On the surface similar to TLS, Salt defines a handshake, or key exchange protocol, that generates a shared secret, and a record protocol which uses this secret with symmetric encryption (the symmetric channel).

Key exchange protocol

In its basic form, the key exchange (or handshake) protocol is an RSA key exchange in which the server chooses the secret and encrypts it to the client’s public key. The exact form of the protocol then depends on whether either party already knows the other party’s long-term public key, since certificates (like in TLS) are not used. By default, clients trust the server’s public key on first use, and servers only trust the client’s public key after it has been accepted by an out-of-band mechanism. The shared secret is shared among the entire fleet of clients, so it is not specific to a particular server and client pair. This allows the server to encrypt a broadcast message only once. We will come back to this performance trade-off later on.

Salt key exchange (as of version 3004) under default settings, showing the first connection between the given server and client.

Symmetric channel

The shared secret is used as the key for encryption. Most of the messages between a server and a client are secured in an Encrypt-then-MAC fashion, with AES-192 in CBC mode with SHA-256 for HMAC. For certain more sensitive messages, variations on this protocol are used to add more security. For example, commands are signed using the server’s long-term secret key, and “pillar data” (deemed more sensitive) is encrypted only to a particular client using a freshly generated secret key.

Symmetric channel in Salt (as of version 3004).

Security vulnerabilities

We found that the protocol variation used for pillar messages contains a flaw. As shown in the diagram below, a monster-in-the-middle attacker (MitM) that sits between a server and a client can substitute arbitrary “pillar data” to the client. It needs to know the client’s public key, but that is easy to find since clients broadcast it as part of their key exchange request. The initial key exchange can be observed, or one could be triggered on-demand using a specially crafted message. This MitM was possible because neither the newly generated key nor the actual payload were authenticated as coming from the server. This matters because “pillar data” can include anything from specifying packages to be installed to credentials and cryptographic keys. Thus, it is possible that this flaw could allow an attacker to gain access to the vulnerable client machine.

Illustration of the monster-in-the-middle attack, CVE-2022-22934.

We reported the issue to Salt on November 5, 2021, and it was assigned CVE-2022-22934. Earlier this week, on March 28, 2022, Salt released a patch that adds a server signature on the pillar message to prevent the attack.

There were several other smaller issues we found. Messages could be replayed to the same or a different client. This could allow a file intended for one client, to be served to a different one, perhaps aiding lateral movement. This is CVE-2022-22936 and has been patched by adding the name of the addressed client, a nonce and a signature to messages.

Finally, there were some messages which could be manipulated to cause the client to crash.  This is CVE-2022-22935 and was patched similarly by adding the addressee, nonce and signature.

If you are running Salt, please update as soon as possible to either 3002.8, 3003.4 or 3004.1.

Moving forward

These patches add signatures to almost every single message. The decision to have a single shared secret was a performance trade-off: only a single encryption is required to broadcast a message. As signatures are computationally much more expensive than symmetric encryption, this trade-off didn’t work out that well. It’s better to switch to a separate shared key per client, so that we don’t need to add a signature on every separate message. In effect, we are creating a single long-lived mutually-authenticated channel per client. But then we are getting very close to what mutually authenticated TLS (mTLS) can provide. What would that look like? Hold that thought for a moment: we will return to it below.

We got sidetracked from our original question: what does this all mean for making Salt post-quantum secure? One thing to take away about post-quantum cryptography today is that signatures are much larger: Dilithium2, for instance, weighs in at 2.4 kB compared to 256 bytes for an RSA-2048 signature. So ironically, patching these vulnerabilities has made our job more difficult as there are many more signatures. Thus, also for our post-quantum goal, mTLS seems very attractive. Not the least because there are post-quantum TLS stacks ready to go.

Finally, as the security properties of  mTLS are well understood, it will be much easier to add new messages and functionality to Salt’s protocol. With the current complex protocol, any change is much harder to judge with confidence.

An mTLS-based transport

So what would such an mTLS-based protocol for Salt look like? Clients would pin the server certificate and the server would pin the client certificates. Thus, we wouldn’t have any traditional public key infrastructure (PKI). This matches up nicely with how Salt deals with keys currently. This allows clients to establish long-lived connections with their server and be sure that all data exchanged between them is confidential, mutually authenticated and has forward secrecy. This mitigates replays, swapping of messages, reflection attacks or subtle denial of service pathways for free.

We tested this idea by implementing a third transport using WebSocket over mTLS (WSS). As mentioned before, Salt already offers an option to use TLS with the TCP transport, but it doesn’t authenticate clients and creates a new TCP connection for every client request which leads to a multitude of unnecessary TLS handshakes. Internally, Salt has been architected to work with new connections for each request, so our proof of concept required some laborious retooling.

Our findings show promise that there would be no significant losses and potentially some improvements when it comes to performance at scale. In our preliminary experiments with a single server handling a thousand clients, there was no difference in several metrics compared to the default ZeroMQ transport. Resource-intensive operations such as the fetching of pillar and state data resulted, in our experiment, in lower CPU usage in the mTLS transport. Enabling long-lived connections reduced the amount of data transmitted between the clients and the server, in some cases, significantly so.

We have shared our preliminary results with Salt, and we are working together to add an mTLS-based transport upstream. Stay tuned!

Conclusion

We had a look at how to make Salt post-quantum secure. While doing so, we found and helped fix several issues. We see a clear path forward to a future post-quantum Salt based on mTLS. Salt is but one system: we will continue our work, checking system-by-system, collaborating with vendors to bring post-quantum into the present.

With thanks to Bas and Sofía for their help on the project.

Optimizing Magic Firewall’s IP lists

Post Syndicated from Jordan Griege original https://blog.cloudflare.com/magic-firewall-optimizing-ip-lists/

Magic Firewall is Cloudflare’s replacement for network-level firewall hardware. It evaluates gigabits of traffic every second against user-defined rules that can include millions of IP addresses. Writing a firewall rule for each IP address is cumbersome and verbose, so we have been building out support for various IP lists in Magic Firewall—essentially named groups that make the rules easier to read and write. Some users want to reject packets based on our growing threat intelligence of bad actors, while others know the exact set of IPs they want to match, which Magic Firewall supports via the same API as Cloudflare’s WAF.

With all those IPs, the system was using more of our memory budget than we’d like. To understand why, we need to first peek behind the curtain of our magic.

Life inside a network namespace

Magic Transit and Magic WAN enable Cloudflare to route layer 3 traffic, and they are the front door for Magic Firewall. We have previously written about how Magic Transit uses network namespaces to route packets and isolate customer configuration. Magic Firewall operates inside these namespaces, using nftables as the primary implementation of packet filtering.

When a user makes an API request to configure their firewall, a daemon running on every server detects the change and makes the corresponding changes to nftables. Because nftables configuration is stored within the namespace, each user’s firewall rules are isolated from the next. Every Cloudflare server routes traffic for all of our customers, so it’s important that the firewall only applies a customer’s rules to their own traffic.

Using our existing technologies, we built IP lists using nftables sets. In practice, this means that the wirefilter expression

ip.geoip.country == "US"

is implemented as

table ip mfw {
	set geoip.US {
		typeof ip saddr
		elements = { 192.0.2.0, … }
	}
	chain input {
		ip saddr @geoip.US drop
	}
}

This design for the feature made it very easy to extend our wirefilter syntax so that users could write rules using all the different IP lists we support.

Scaling to many customers

We chose this design since sets fit easily into our existing use of nftables. We shipped quickly with just a few, relatively small lists to a handful of customers. This approach worked well at first, but we eventually started to observe a dramatic increase in our system’s memory use. It’s common practice for our team to enable a new feature for just a few customers at first. Here’s what we observed as those customers started using IP lists:

It turns out you can’t store millions of IPs in the kernel without someone noticing, but the real problem is that we pay that cost for each customer that uses them. Each network namespace gets a complete copy of all the lists that its rules reference. Just to make it perfectly clear, if 10 users have a rule that uses the anonymizer list, then a server has 10 copies of that list stored in the kernel. Yikes!

And it isn’t just the storage of these IPs that caused alarm; the sheer size of the nftables configuration causes nft (the command-line program for configuring nftables) to use quite a bit of compute power.

Can you guess when the nftables updates take place? Now the CPU usage wasn’t particularly problematic, as the service is already quite lean. However, those spikes also create competition with other services that heavily use netlink sockets. At one point, a monitoring alert for another service started firing for high CPU, and a fellow development team went through the incident process only to realize our system was the primary contributor to the problem. They made use of the --terse flag to prevent nft from wasting precious time rendering huge lists of IPs, but degrading another team’s service was a harsh way to learn that lesson.

We put quite a bit of effort into working around this problem. We tried doing incremental updates of the sets, only adding or deleting the elements that had changed since the last update. It was helpful sometimes, but the complexity ended up not being worth the small savings. Through a lot of profiling, we found that splitting an update across multiple statements improved the situation some. This means we changed

add element ip mfw set0 { 192.0.2.0, 192.0.2.2, …, 198.51.100.0, 198.51.100.2, … }

to

add element ip mfw set0 { 192.0.2.0, … }
add element ip mfw set0 { 198.51.100.0, … }

just to squeeze out a second or two of reduced time that nft took to process our configuration. We were spending a fair amount of development cycles looking for optimizations, and we needed to find a way to turn the tides.

One more step with eBPF

As we mentioned on our blog recently, Magic Firewall leverages eBPF for some advanced packet-matching. And it may not be a big surprise that eBPF can match an IP in a list, but it wasn’t a tool we thought to reach for a year ago. What makes eBPF relevant to this problem is that eBPF maps exist regardless of a network namespace. That’s right; eBPF gives us a way to share this data across all the network namespaces we create for our customers.

Since several of our IP lists contain ranges, we can’t use a simple BPF_MAP_TYPE_HASH or BPF_MAP_TYPE_ARRAY to perform a lookup. Fortunately, Linux already has a data structure that we can use to accommodate ranges: the BPF_MAP_TYPE_LPM_TRIE.

Given an IP address, the trie’s implementation of bpf_map_lookup_elem() will return a non-null result if the IP falls within any of the ranges already inserted into the map. And using the map is about as simple as it gets for eBPF programs. Here’s an example:

typedef struct {
	__u32 prefix_len;
	__u8 ip[4];
} trie_key_t;

SEC("maps")
struct bpf_map_def ip_list = {
    .type = BPF_MAP_TYPE_LPM_TRIE,
    .key_size = sizeof(trie_key_t),
    .value_size = 1,
    .max_entries = 1,
    .map_flags = BPF_F_NO_PREALLOC,
};

SEC("socket/saddr_in_list")
int saddr_in_list(struct __sk_buff *skb) {
	struct iphdr iph;
	trie_key_t trie_key;

	bpf_skb_load_bytes(skb, 0, &iph, sizeof(iph));

	trie_key.prefix_len = 32;
	trie_key.ip[0] = iph.saddr & 0xff;
	trie_key.ip[1] = (iph.saddr >> 8) & 0xff;
	trie_key.ip[2] = (iph.saddr >> 16) & 0xff;
	trie_key.ip[3] = (iph.saddr >> 24) & 0xff;

	void *val = bpf_map_lookup_elem(&ip_list, &trie_key);
	return val == NULL ? BPF_OK : BPF_DROP;
}

nftables befriends eBPF

One of the limitations of our prior work incorporating eBPF into the firewall involves how the program is loaded into the nftables ruleset. Because the nft command-line program does not support calling a BPF program from a rule, we formatted the Netlink messages ourselves. While this allowed us to insert a rule that executes an eBPF program, it came at the cost of losing out on atomic rule replacements.

So any native nftables rules could be applied in one transaction, but any eBPF-based rules would have to be inserted afterwards. This introduces some headaches for handling failures—if inserting an eBPF rule fails, should we simply retry, or perhaps roll back the nftables ruleset as well? And what if those follow-up operations fail?

In other words, we went from a relatively simple operation—the new firewall configuration either fully applied or didn’t—to a much more complex one. We didn’t want to keep using more and more eBPF without solving this problem. After hacking on the nftables repo for a few days, we came out with a patch that adds a BPF keyword to the nftables configuration syntax.

I won’t cover all the changes, which we hope to upstream soon, but there were two key parts. The first is implementing struct expr_ops and some methods required, which is how nft’s parser knows what to do with each keyword in its configuration language (like the “bpf” keyword we introduced). The second is just a different approach for what we solved previously. Instead of directly formatting struct xt_bpf_info_v1 into a byte array as iptables does, nftables uses the nftnl library to create the netlink messages.

Once we had our footing working in a foreign codebase, that code turned out fairly simple.

static void netlink_gen_bpf(struct netlink_linearize_ctx *ctx,
                            const struct expr *expr,
                            enum nft_registers dreg)
{
       struct nftnl_expr *nle;

       nle = alloc_nft_expr("match");
       nftnl_expr_set_str(nle, NFTNL_EXPR_MT_NAME, "bpf");
       nftnl_expr_set_u8(nle, NFTNL_EXPR_MT_REV, 1);
       nftnl_expr_set(nle, NFTNL_EXPR_MT_INFO, expr->bpf.info, sizeof(struct xt_bpf_info_v1));
       nft_rule_add_expr(ctx, nle, &expr->location);
}

With the patch finally built, it became possible for us to write configuration like this where we can provide a path to any pinned eBPF program.

# nft -f - <<EOF
table ip mfw {
  chain input {
    bpf pinned "/sys/fs/bpf/filter" drop
  }
}
EOF

And now we can mix native nftables matches with eBPF programs without giving up the atomic nature of nft commands. This unlocks the opportunity for us to build even more advanced packet inspection and protocol detection in eBPF.

Results

This plot shows kernel memory during the time of the deployment – the change was rolled out to each namespace over a few hours.

Though at this point, we’ve merely moved memory out of this slab. Fortunately, the eBPF map memory now appears in the cgroup for our Golang process, so it’s relatively easy to report. Here’s the point at which we first started populating the maps:

This change was pushed a couple of weeks before the maps were actually used in the firewall (i.e. the graph just above). And it has remained constant ever since—unlike our original design, the number of customers is no longer a predominant factor in this feature’s resource usage.  The new model makes us more confident about how the service will behave as our product continues to grow.

Other than being better positioned to use eBPF more in the future, we made a significant improvement on the efficiency of the product today. In any project that grows rapidly for a while, an engineer can often see a few pesky pitfalls looming on the horizon. It is very satisfying to dive deep into these technologies and come out with a creative, novel solution before experiencing too much pain. We love solving hard problems and improving our systems to handle rapid growth in addition to pushing lots of new features.

Does that sound like an environment you want to work in? Consider applying!

Application security: Cloudflare’s view

Post Syndicated from Michael Tremante original https://blog.cloudflare.com/application-security/

Developers, bloggers, business owners, and large corporations all rely on Cloudflare to keep their applications secure, available, and performant.

To meet these goals, over the last twelve years we have built a smart network capable of protecting many millions of Internet properties. As of March 2022, W3Techs reports that:

“Cloudflare is used by 80.6% of all the websites whose reverse proxy service we know. This is 19.7% of all websites”

Netcraft, another provider who crawls the web and monitors adoption puts this figure at more than 20M active sites in their latest Web Server Survey (February 2022):

“Cloudflare continues to make strong gains amongst the million busiest websites, where it saw the only notable increases, with an additional 3,200 sites helping to bring its market share up to 19.4%”

The breadth and diversity of the sites we protect, and the billions of browsers and devices that interact with them, gives us unique insight into the ever-changing application security trends on the Internet. In this post, we share some of those insights we’ve gathered from the 32 million HTTP requests/second that pass through our network.

Definitions

Before we examine the data, it is useful to define the terminology we use. Throughout this post, we will refer to the following terms:

  • Mitigated Traffic: any eyeball HTTP* request that had a “terminating” action applied to it by the Cloudflare platform. These include actions such as BLOCK and CHALLENGE (for example, CAPTCHAs or JavaScript-based challenges). This does not include requests that had the following actions applied: LOG, SKIP, ALLOW.
  • Bot Traffic/Automated Traffic: any HTTP request identified by Cloudflare’s Bot Management system as being generated by a bot. This includes requests scored between 1 and 29.
  • API Traffic: any HTTP request with a response content type of XML, JSON, gRPC, or similar. Where the response content type is not available, such as for mitigated requests, the equivalent Accept content type (specified by the user agent) is used instead. In this latter case API traffic won’t be fully accounted for, but for insight purposes it still provides a good representation.

Unless otherwise stated, the time frame evaluated in this post is the three-month period from December 1, 2021, to March 1, 2022.

Finally, please note that the data is calculated based only on traffic observed across the Cloudflare network and does not necessarily represent overall HTTP traffic patterns across the Internet.

*When referring to HTTP traffic we mean both HTTP and HTTPS.

Global Traffic Insights

The first thing we can look at is traffic mitigated across all HTTP requests proxied by the Cloudflare network. This will give us a good baseline view before drilling into specific traffic types, such as bot and API traffic.

8% of all Cloudflare HTTP traffic is mitigated

Cloudflare proxies ~32 million HTTP requests per second on average, with more than ~44 million HTTP requests per second at peak. Overall, ~2.5 million requests per second are mitigated by our global network and never reach our caches or the origin servers, ensuring our customers’ bandwidth and compute power is only used for clean traffic.

Site owners using Cloudflare gain access to tools to mitigate unwanted or malicious traffic and allow access to their applications only when a request is deemed clean. This can be done both using fully managed features, such as our DDoS mitigation, WAF managed ruleset or schema validation, as well as custom rules that allow users to define their own filters for blocking traffic.

If we look at the top five Cloudflare features (sources) that mitigated traffic, we get a clear picture of how much each Cloudflare feature is contributing towards helping keep customer sites and applications online and secure:

Tabular format for reference:

Source Percentage %
Layer 7 DDoS mitigation 66.0%
Custom WAF Rules 19.0%
Rate Limiting 10.5%
IP Threat Reputation 2.5%
Managed WAF Rules 1.5%

Looking at each mitigation source individually:

  • Layer 7 DDoS mitigation, perhaps unsurprisingly, is the largest contributor to mitigated HTTP requests by total count (66% overall). Cloudflare’s layer 7 DDoS rules are fully managed and don’t require user configuration: they automatically detect a vast array of HTTP DDoS attacks including those generated by the Meris botnet, Mirai botnet, known attack tools, and others. Volumetric DDoS attacks, by definition, create a lot of malicious traffic!
  • Custom WAF Rules contribute to more than 19% of mitigated HTTP traffic. These are user-configured rules defined using Cloudflare’s wirefilter syntax. We explore common rule patterns further down in this post.
  • Our Rate Limiting feature allows customers to define custom thresholds based on application preferences. It is often used as an additional layer of protection for applications against traffic patterns that are too low to be detected as a DDoS attack. Over the time frame analyzed, rate limiting contributed to 10.5% of mitigated HTTP requests.
  • IP Threat Reputation is exposed in the Cloudflare dashboard as Security Level. Based on behavior we observe across the network, Cloudflare automatically assigns a threat score to each IP address. When the threat score is above the specified threshold, we challenge the traffic. This accounts for 2.5% of all mitigated HTTP requests.
  • Our Managed WAF Rules are rules that are handcrafted by our internal security analyst team aimed at matching only against valid malicious payloads. They contribute to about 1.5% of all mitigated requests.

HTTP anomalies are the most common attack vector

If we drill into Managed WAF Rules, we get a clear picture of what type of attack vectors malicious users are attempting against the Internet properties we protect.

The vast majority (over 54%) of HTTP requests blocked by our Managed WAF Rules contain HTTP anomalies, such as malformed method names, null byte characters in headers, non-standard ports or content length of zero with a POST request.

Common attack types in this category are shown below. These have been grouped when relevant:

  • Missing User Agent: These rules will block any request without a User-Agent header. All browsers and legitimate crawlers present this header when connecting to a site. Not having a user agent is a common signal of a malicious request.
  • Not GET, POST or HEAD Method: Most applications only allow standard GET or POST requests (normally used for viewing pages or submitting forms). HEAD requests are also often sent from browsers for security purposes. Customers using our Managed Rules can easily block any other method, which normally results in blocking a large number of vulnerability scanners.
  • Missing Referer: When users navigate applications, browsers use the Referer header to indicate where they are coming from. Some applications expect this header to always be present.
  • Non-standard port: Customers can configure Cloudflare Managed Rules to block HTTP requests trying to access non-standard ports (ports other than 80 and 443). This is activity normally seen by vulnerability scanners.
  • Invalid UTF-8 encoding: It is common for attackers to attempt to break an application server by sending “special” characters that are not valid in UTF-8 encoding.

More commonly known and referenced attack vectors, such as XSS and SQLi, contribute to only about 13% of total mitigated requests. More interestingly, attacks aimed at information disclosure are the third most popular (10%), and software-specific CVE-based attacks account for about 12% of mitigated requests (more than SQLi alone). This highlights both the importance of patching software quickly and the likelihood of CVE proof-of-concepts (PoCs) being used to compromise applications, as happened with the recent Log4j vulnerability. The top 10 attack vectors by percentage of mitigated requests are shown below:

Tabular format for reference:

Source Percentage %
HTTP Anomaly 54.5%
Vendor Specific CVE 11.8%
Information Disclosure 10.4%
SQLi 7.0%
XSS 6.1%
File Inclusion 3.3%
Fake Bots 3.0%
Command Injection 2.7%
Open Redirects 0.1%
Other 1.5%

Businesses still rely on IP address-based access lists to protect their assets

In the prior section, we noted that 19% of mitigated requests come from Custom WAF Rules. These are rules that Cloudflare customers have implemented using the wirefilter syntax. At time of writing, Cloudflare customers had a total of ~6.5 million Custom WAF rules deployed.

It is interesting to look at what rule fields customers are using to identify malicious traffic, as this helps us focus our efforts on what other fully automated mitigations could be implemented to improve the Cloudflare platform.

The most common field, found in approximately 64% of all custom rules, remains the source IP address or fields easily derived from the IP address, such as the client country location. Note that IP addresses are becoming less useful signals for security policies, but they are often the quickest and simplest type of filter to implement during an attack. Customers are also starting to adopt better approaches such as those offered in our Zero Trust portfolio to further reduce reliance on IP address-based fields.

The top 10 fields are shown below:

Tabular format for reference:

Field name Used in % of rules
ip 64.9%
ip_geoip_country 27.3%
http_request_uri 24.1%
http_user_agent 21.8%
http_request_uri_path 17.8%
http_referer 8.6%
cf_client_bot 8.3%
http_host 7.8%
ip_geoip_asnum 5.8%
cf_threat_score 4.4%

Beyond IP addresses, standard HTTP request fields (URI, User-Agent, Path, Referer) tend to be the most popular. Note, also, that across the entire rule corpus, the average rule combines at least three independent fields.

Bot Traffic Insights

Cloudflare has long offered a Bot Management solution to allow customers to gain insights into the automated traffic that might be accessing their application. Using Bot Management classification data, we can perform a deep dive into the world of bots.

38% of HTTP traffic is automated

Over the time period analyzed, bot traffic accounted for about 38% of all HTTP requests. This traffic includes bot traffic from hundreds of Verified Bots tracked by Cloudflare, as well as any request that received a bot score below 30, indicating a high likelihood that it is automated.

Overall, when bot traffic matches a security configuration, customers allow 41% of bot traffic to pass to their origins, blocking only 6.4% of automated requests. Remember that this includes traffic coming from Verified Bots like GoogleBot, which ultimately benefits site owners and end users. It’s a reminder that automation in and of itself is not necessarily detrimental to a site.  This is why we segment Verified Bot traffic, and why we give customers a granular bot score, rather than a binary “bot or not bot” indicator. Website operators want the flexibility to be precise with their response to different types of bot traffic, and we can see that they do in fact use this flexibility. Note that our self-serve customers can also decide how to handle bot traffic using our Super Bot Fight Mode feature.

Tabular data for reference:

Action on all bot traffic Percentage %
allow 40.9%
log 31.9%
bypass 19.0%
block 6.4%
jschallenge 0.5%

More than a third of non-verified bot HTTP traffic is mitigated

31% of all bot traffic observed by Cloudflare is not verified, and comes from thousands of custom-built automated tools like scanners, crawlers, and bots built by hackers. As noted above, automation does not necessarily mean these bots are performing malicious actions. If we look at customer responses to identified bot traffic, we find that 38.5% of HTTP requests from non-verified bots are mitigated. This is obviously a much more defensive configuration compared to overall bot traffic actions shown above:

Tabular data for reference:

Action on non-verified bot traffic Percentage %
block 34.0%
log 28.6%
allow 14.5%
bypass 13.2%
managed_challenge 3.7%

You’ll notice that for almost 30% of this traffic, customers choose to log rather than take immediate action. We find that many enterprise customers choose to not immediately block bot traffic, so they don’t give a feedback signal to attackers. Rather, they prefer to tag and monitor this traffic, and either drop at a later time or redirect to alternate content. As targeted attack vectors have evolved, responses to those attacks have had to evolve and become more sophisticated as well. Additionally, nearly 3% of non-verified bot traffic is automatically mitigated by our DDoS protection (connection_close). These requests tend to be part of botnets used to attack customer applications.

API Traffic Insights

Many applications built on the Internet today are not meant to be consumed by humans. Rather, they are intended for computer-to-computer communication. The common way to expose an application for this purpose is to build an Application Programming Interface (API) that can be accessed using HTTP.

Due to the underlying format of the data in transit, API traffic tends to be a lot more structured than standard web application traffic, which causes all sorts of problems from a security standpoint. First, the structured data often causes Web Application Firewalls (WAFs) to generate a large number of false positives. Second, due to the nature of APIs, they often go unnoticed, and many companies end up unknowingly exposing old and unmaintained APIs, often referred to as “shadow APIs”.

Below, we look at some differences in API trends compared to the global traffic insights shown above.

10% of API traffic is mitigated at the edge

A good portion of bot traffic is accessing API endpoints, and as discussed previously, API traffic is the fastest growing traffic type on the Cloudflare network, currently accounting for 55% of total requests.

Globally, API endpoints receive proportionally more malicious requests than standard web applications (10% vs. 8%), potentially indicating that attackers are focusing more on APIs as an attack surface compared to standard web apps.

Our DDoS mitigation is still the top source of mitigated events for API endpoints, accounting for just over 63% of the total mitigated requests. More interestingly, Custom WAF rules account for 35% compared to 19% when looking at global traffic. Customers have, to date, been heavily using WAF Custom Rules to lock down and validate traffic to API endpoints, although we expect our API Gateway schema validation feature to soon surpass Custom WAF Rules in terms of mitigated traffic.
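As a hedged sketch of what such a custom rule can look like with the Cloudflare Terraform provider (the /api/ path prefix, allowed methods, and zone ID are illustrative assumptions, not values from the report):

# Sketch only: block unexpected HTTP methods on an API prefix.
# The "/api/" path and the allowed methods are illustrative assumptions.
resource "cloudflare_filter" "api_unexpected_methods" {
  zone_id     = var.zone_id
  description = "API requests using unexpected methods"
  expression  = "(starts_with(http.request.uri.path, \"/api/\") and not http.request.method in {\"GET\" \"POST\"})"
}

resource "cloudflare_firewall_rule" "block_api_unexpected_methods" {
  zone_id     = var.zone_id
  description = "Block unexpected methods on /api/"
  filter_id   = cloudflare_filter.api_unexpected_methods.id
  action      = "block"
}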

SQLi is the most common attack vector on API endpoints

If we look at our WAF Managed Rules mitigations on API traffic only, we see notable differences compared to global trends. These include a much more even distribution across the different attack types and, most noticeably, SQL injection (SQLi) attacks in the top spot.

Command Injection attacks are also much more prominent (14.3%), and vectors such as Deserialization make an appearance, contributing to more than 1% of the total mitigated requests.

Tabular data for reference:

Source Percentage %
SQLi 34.5%
HTTP Anomaly 18.2%
Vendor Specific CVE 14.5%
Command Injection 14.3%
XSS 7.3%
Fake Bots 5.8%
File Inclusion 2.3%
Deserialization 1.2%
Information Disclosure 0.6%
Other 1.3%

Looking ahead

In this post we shared some initial insights around Internet application security trends based on traffic to Cloudflare’s network. Of course, we have only just scratched the surface. Moving forward, we plan to publish quarterly reports with dynamic filters directly on Cloudflare Radar and provide much deeper insights and investigations.

Securing Cloudflare Using Cloudflare

Post Syndicated from Molly Cinnamon original https://blog.cloudflare.com/securing-cloudflare-using-cloudflare/

When a new security threat arises — a publicly exploited vulnerability (like log4j) or the shift from corporate-controlled environments to remote work or a potential threat actor — it is the Security team’s job to respond to protect Cloudflare’s network, customers, and employees. And as security threats evolve, so should our defense system. Cloudflare is committed to bolstering our security posture with best-in-class solutions — which is why we often turn to our own products as any other Cloudflare customer would.

We’ve written about using Cloudflare Access to replace our VPN, Purpose Justification to create granular access controls, and Magic + Gateway to prevent lateral movement from in-house. We experience the same security needs, wants, and concerns as security teams at enterprises worldwide, so we rely on the same solutions as the Fortune 500 companies that trust Cloudflare for improved security, performance, and speed. Using our own products is embedded in our team’s culture.

Security Challenges, Cloudflare Solutions

We’ve built the muscle to think Cloudflare-first when we encounter a security threat. In fact, many security problems we encounter have a Cloudflare solution.

  • Problem: Remote work creates a security blind spot of remote devices and networks.
  • Solution: Deploying Cloudflare Gateway and WARP to shield users and devices from threats, no matter their device or network connection.

Our Detection & Response team has regained visibility into security threats by connecting corporate devices to Gateway through our Cloudflare WARP application. These Zero Trust products shield users and devices from security threats like malware, phishing, shadow IT and more, as well as enable our Security team to instantly block threats and prevent sensitive data from leaving our organization — with no performance impact for our employees, no matter their location.

  • Problem: Larger company footprint (in size and location) increases the complexity of internal tooling.
  • Solution: Deploying Cloudflare Access and Purpose Justification to give network administrators granular and contextual application access controls.

Our Identity and Access Management team uses Access to create policies that minimize data access, ensuring that our employees only have access to what they need. Flexibility and instantaneous application of policies allow for ease of scalability as our internal tooling and teams evolve. With Purpose Justification Capture, employees must also justify their use case for visiting domains with particularly sensitive data — which not only solves an internal need for Cloudflare, but helps our customers meet data policy requirements (like GDPR).
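A hedged Terraform sketch of such a policy is below; the application, group, and justification prompt are hypothetical, not our actual configuration:

# Sketch only: require group membership and a justification before granting
# access to a sensitive internal application. All names below are hypothetical.
resource "cloudflare_access_policy" "sensitive_data_app" {
  application_id = cloudflare_access_application.sensitive_app.id
  zone_id        = var.zone_id
  name           = "Allow data team with justification"
  decision       = "allow"
  precedence     = 1

  purpose_justification_required = true
  purpose_justification_prompt   = "Please state why you need access to this data."

  include {
    email_domain = ["cloudflare.com"]
    group        = [cloudflare_access_group.data_team.id]
  }
}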

  • Problem: Engineering and Product teams move at a rapid pace. Conducting a manual review of every pull request is not scalable.
  • Solution: A tool built on top of Workers that enables scanning of PRs for security bugs.

Our Product Security Engineering team uses Cloudflare’s development platform Workers to seamlessly deploy a code review assist framework to flag secrets, vulnerability dependencies, and binary security flags. The flexibility of Workers enables the Security team to evolve the tool depending on security concerns and scale it to the hundreds of PRs the company generates per week.

These are just some of the ways the Security team has used Cloudflare’s products to block malicious domains, streamline access management, provide visibility into threats, and harden our overall security posture. To give a sense of how we think about these challenges technically, we will dive into one example of how we use Cloudflare to secure Cloudflare.

Phish-proof websites using Cloudflare Access

Two-factor authentication is one of the most important security controls that can be implemented. Not all second factors provide the same level of security though. Time-based One-time password (TOTP) apps like Google Authenticator are a strong second factor, but are vulnerable to phishing via man-in-the-middle attacks. A successful phishing attack on an employee with privileged access is a terrifying thought, and it is a risk we wanted to completely eliminate.

FIDO2 was developed to provide a simple user experience and complete protection against phishing attacks. We decided to fully embrace FIDO2-supported security keys in all contexts, but FIDO2 support is not yet ubiquitous and there are many challenging compatibility issues. Cloudflare Access allowed us to enforce that FIDO2 is the only second factor that can be used when reaching systems protected by Access.

To manage our Cloudflare Access policies, we check each one into source control as Terraform code. Group-based access control is enforced for our applications.

resource "cloudflare_access_policy" "prod_cloudflare_users" {
  application_id = cloudflare_access_application.prod_sandbox_access_application.id
  zone_id        = cloudflare_zone.prod_sandbox.id
  name           = "Allow "
  decision       = "allow"

  include {
    email_domain = ["cloudflare.com"]
    okta {
      # Require membership in Sandbox group
      name                 = ["ACL-ACCESS-sandbox"]
      identity_provider_id = cloudflare_access_identity_provider.okta_prod.id
    }
  }

  # Require a security key
  require {
    auth_method = "swk"
  }
}

The require section enforces that Cloudflare employees use their FIDO2-supported security keys to access all of our internal and external applications that are protected by Access. As described in more detail in RFC 8176, auth_method enforces that specific values are returned in the amr field during the login flow from our OIDC provider. The enforced swk value is “Proof of possession of a software-secured key”, which corresponds to logins using a security key.

The ability to enforce the use of security keys when accessing internal sites was a massive improvement to our security posture. Before this change, many of our internal services accepted both soft tokens, like TOTP with Google Authenticator, and WebAuthn, because a small number of systems still didn’t support FIDO2. Our use of soft tokens dropped to near zero after we enforced this change.

A Continued Practice

Not only does the Security team deploy Cloudflare’s products, but we test them first too. We work directly with Product to “dog food” our own products first. It’s our mission to help build a better Internet — and that means testing our products and collecting valuable feedback from our internal teams before every launch. As the number one consumer of Cloudflare’s products, the Security team is not only helping keep the company safer, but also helping build better products for our customers.

To learn more about examples and technical implementation of our use of Cloudflare products at Cloudflare, please check out this recent Cloudflare TV segment on Securing Cloudflare with Cloudflare.

And for more information on the products mentioned in this document, reach out to our Sales team to find out how we can help you secure your business, teams, and users.

Domain Scoped Roles – Early Access

Post Syndicated from Garrett Galow original https://blog.cloudflare.com/domain-scoped-roles-early-access/

Today, Cloudflare is making it easier for enterprise account owners to manage their team’s access to Cloudflare by allowing user access to be scoped to sets of domains. Ensuring users have exactly the access they need and no more is critical, and Domain Scoped Roles provide a significant step forward. Additionally, with the introduction of Domain Groups, account owners can grant users access to domains by group instead of individually. Domains can be added or removed from these groups to automatically update the access of those who have access to the group. This reduces toil in managing user access.

One of the most common uses we have seen for Domain Scoped Roles is to limit access to production domains to a small set of team members, while still allowing development and pre-production domains to be open to the rest of the team. That way, someone can’t make changes to a production domain unless they are given access.

How to use Domain Scoped Roles

If you are an enterprise customer, please talk with your CSM to get you and your team enrolled. Note that you must have Super Administrator privileges to modify account memberships.

Once the beta has been enabled for you, here is how to start using it:

  • Log in to dash.cloudflare.com, select your account, and navigate to the members page.
    • From here, you can either invite a new member with a Domain Scoped Role or modify an existing user’s permissions. In this case, we will invite a new user.

  • When inviting new members, there are three things to provide:
    • Which users to invite.
    • The scope of resources they will have access to:
      • Selecting “All Domains” will allow you to select legacy roles.
    • The role(s), which determine what permissions are granted.

  • Before sending the invite, you will be able to confirm the users, scope, and roles.

Domain Groups

In addition to manually creating inclusion or exclusion lists per user, account owners can also create Domain Groups to allow granting one or more users to a group of domains. Domain Groups can be created from the member invite flow or directly from Account Configurations -> Lists. When creating a domain group, the user selects the domains to include and, from that point on, the group can be used when inviting a user to the account.

Domain Group Creation Screen

Domain Group selection during member invite

Introducing Bach

Domain Scoped Roles are possible because of a new permission system called Bach. Bach provides a policy-based system for defining authorization to Cloudflare’s control plane. Authorization defines what someone can do in a system. Bach has been powering API Tokens, but going forward all authorization will use Bach. This gives customers the ability to define more granular permissions and resource scoping. Resources can be any object a user interacts with, whether that be accounts, zones, Worker environments, or DNS records, to name a few. Whereas Cloudflare’s legacy RBAC system relied on assigning a set of roles in which each role defined broad permissions that applied to an entire account, Bach’s policies allow for deeper permission grants that can be scoped to sets of resources.

Let’s take a look at the legacy system and how it compares to what Bach supports. Typically, a user’s permissions are defined by the ‘roles’ they are assigned. These include ‘Super Administrator’, ‘Administrator’, ‘Cloudflare for Teams’, and ‘Cloudflare Workers Admin’, to name a few. In the legacy system, each of these maps to an explicit set of simple permissions like ‘workers:read’, ‘workers:edit’ or ‘zones:edit’. When requests are made via the Cloudflare API or the Cloudflare dashboard, Cloudflare’s API Gateway checks whether, for the endpoint requested, the actor has the correct permissions to perform the action. In order to change a zone setting, the actor would need the ‘zone:edit’ permission – which could be granted from one of many roles. Below is what a user with only ‘DNS’ permissions is granted in the legacy system:

Legacy DNS User Role Permissions

Permission Edit Read
dns_records
legal
account
subscription
zone
zone_settings

While straightforward, this had many drawbacks. First, there was no way to limit which resources the permissions applied to, whether that be a subset of resources or attributes of those resources. Second, permissions for a given ‘resource’ were simply read or edit, where edit included creating and deleting. These don’t fully capture the needs that may exist – for example, limiting deletions while allowing edits.

With Bach we have an expanded capability to define granular permissions for authorization. An example policy defining the aforementioned ‘DNS’ member’s access would look like this:

Bach DNS User Role Policy

(slightly modified for readability)

#Legacy DNS Role - applies to all zones
  policies:
  - id: 186f95f3bda1443c986aeb78b05eb60b
    permission_groups:
    - id: 49ce85367bae433b9f0717ed4fea5c74
      name: DNS
      meta:
        description: Can edit DNS records.
        editable: 'false'
        label: dns_admin
        scopes: com.cloudflare.api.account
      permissions:
      - key: com.cloudflare.registrar.domain.read
      - key: com.cloudflare.registrar.domain.list
      - key: com.cloudflare.registrar.contact.read
      - key: com.cloudflare.registrar.contact.list
      - key: com.cloudflare.api.account.secondary-dns.update
      - key: com.cloudflare.api.account.secondary-dns.read
      - key: com.cloudflare.api.account.secondary-dns.delete
      - key: com.cloudflare.api.account.secondary-dns.create
      - key: com.cloudflare.api.account.zone.secondary-dns.update
      - key: com.cloudflare.api.account.zone.secondary-dns.read
      - key: com.cloudflare.api.account.zone.secondary-dns.delete
      - key: com.cloudflare.api.account.zone.secondary-dns.create
      - key: com.cloudflare.api.account.notification.*
      - key: com.cloudflare.api.account.custom-ns.update
      - key: com.cloudflare.api.account.custom-ns.list
      - key: com.cloudflare.api.account.custom-ns.*
      - key: com.cloudflare.edge.spectrum.app.list
      - key: com.cloudflare.edge.spectrum.app.read
      - key: com.cloudflare.api.account.zone.custom-page.read
      - key: com.cloudflare.api.account.zone.custom-page.list
      - key: com.cloudflare.api.account.zone.setting.read
      - key: com.cloudflare.api.account.zone.setting.list
      - key: com.cloudflare.api.account.zone.dnssec.update
      - key: com.cloudflare.api.account.zone.dnssec.read
      - key: com.cloudflare.api.account.dns-firewall.cluster.delete
      - key: com.cloudflare.api.account.dns-firewall.cluster.update
      - key: com.cloudflare.api.account.dns-firewall.cluster.read
      - key: com.cloudflare.api.account.dns-firewall.cluster.create
      - key: com.cloudflare.api.account.dns-firewall.cluster.list
      - key: com.cloudflare.api.account.zone.dns-record.delete
      - key: com.cloudflare.api.account.zone.dns-record.update
      - key: com.cloudflare.api.account.zone.dns-record.read
      - key: com.cloudflare.api.account.zone.dns-record.create
      - key: com.cloudflare.api.account.zone.dns-record.list
      - key: com.cloudflare.api.account.zone.page-rule.read
      - key: com.cloudflare.api.account.zone.page-rule.list
      - key: com.cloudflare.api.account.zone.railgun-connection.read
      - key: com.cloudflare.api.account.zone.railgun-connection.list
      - key: com.cloudflare.api.account.zone.subscription.read
      - key: com.cloudflare.api.account.zone.aml.read
      - key: com.cloudflare.api.account.zone.read
      - key: com.cloudflare.api.account.zone.list
      - key: com.cloudflare.api.account.audit-log.read
      - key: com.cloudflare.api.account.custom-page.read
      - key: com.cloudflare.api.account.custom-page.list
      - key: com.cloudflare.api.account.railgun.read
      - key: com.cloudflare.api.account.railgun.list
      - key: com.cloudflare.api.account.dpa.read
      - key: com.cloudflare.api.account.subscription.read
      - key: com.cloudflare.api.account.subscription.list
      - key: com.cloudflare.api.account.read
      - key: com.cloudflare.api.account.list
    resource_groups:
    - id: 2fe938e0a5824128bdc8c42f9339b127
      name: com.cloudflare.api.account.a67e14daa5f8dceeb91fe5449ba496eb
      meta:
        editable: 'false'
      scope:
        key: com.cloudflare.api.account.a67e14daa5f8dceeb91fe5449ba496eb
        objects:
        - key: "*"
    access: allow

You can see a greater granularity in the permissions that are defined. In both cases, the user can perform the same actions, but the granularity means we are more explicit about what can be done. Here for DNS records (under com.cloudflare.api.account.zone.dns-record) there are explicit create, read, update, delete and list permissions. In the resource group section, we can see that the scope section defines an account. Any objects like domains in that account will match to the “*” value under ‘objects’. This means that these permissions apply throughout the entire account. Now, let’s modify this user’s permissions to be scoped to a single domain and see how the policy changes.

Bach DNS User Role with Domain Scoping Policy

(slightly modified for readability)

# Zone Scoped DNS role - scoped to 1 zone
policies:
- id: 80b25dd735b040708155c85d0ed8a508
  permission_groups:
  - id: 132c52e7e6654b999c183cfcbafd37d7
    name: Zone DNS
    meta:
      description: Grants access to edit DNS settings for zones in an account.
      editable: 'false'
      label: zone_dns_admin
      scopes: com.cloudflare.api.account.zone
    permissions:
    - key: com.cloudflare.api.account.zone.secondary-dns.*
    - key: com.cloudflare.api.account.zone.dnssec.*
    - key: com.cloudflare.api.account.zone.dns-record.*
    - key: com.cloudflare.api.account.zone.analytics.dns-report.*
    - key: com.cloudflare.api.account.zone.analytics.dns-bytime.*
    - key: com.cloudflare.api.account.zone.setting.read
    - key: com.cloudflare.api.account.zone.setting.list
    - key: com.cloudflare.api.account.zone.rate-plan.read
    - key: com.cloudflare.api.account.zone.subscription.read
    - key: com.cloudflare.api.account.zone.read
    - key: com.cloudflare.api.account.subscription.read
    - key: com.cloudflare.api.account.subscription.list
    - key: com.cloudflare.api.account.read
  resource_groups:
  - scope:
      key: com.cloudflare.api.account.a67e14daa5f8dceeb91fe5449ba496eb
      objects:
      - key: com.cloudflare.api.account.zone.b1fbb152bbde3bd28919a7f4bdca841f
  access: allow

Once we scope the user’s permissions to only include one explicit zone, we see two main differences. First, there is a large reduction in permissions. This is because we do not grant the user access to read or list many account-level resources that the legacy account-scoped roles grant. The only account-level permissions granted are account.read, subscription.read, and subscription.list. These permissions are necessary for a user to be able to use the dashboard. When viewing the account in the dashboard, the only account-level product that will be shown is domains. Other products like Cloudflare Workers, Zero Trust, etc. will be hidden.

Second, in the resource groups scope section, we see an explicit mention of a zone (the resource key com.cloudflare.api.account.zone.b1fbb152bbde3bd28919a7f4bdca841f). This means that the zone permissions outlined in the policy only apply to that specific zone. Any attempt to access other features of that zone or to modify DNS for any other zones will be rejected by Cloudflare.

What’s next

If you are an enterprise customer and interested in getting started with Domain Scoped Roles, please contact your CSM to get enabled for the Early Access period. This announcement represents a significant milestone in our migration to Bach, an authorization system built for Cloudflare’s scale. This will allow us to expand these same capabilities to more products in the future and to create an authorization system that puts customers more in control of their team’s access across all of Cloudflare’s services. Stay tuned as we are just getting started!

Cloudflare Observability

Post Syndicated from Tanushree Sharma original https://blog.cloudflare.com/vision-for-observability/

Whether you’re a software engineer deploying a new feature, a network engineer updating routes, or a security engineer configuring a new firewall rule, you need visibility to know if your system is behaving as intended — and if it’s not, to know how to fix it.

Cloudflare is committed to helping our customers get visibility into the services they have protected behind Cloudflare. Being a single pane of glass for all network activity has always been one of Cloudflare’s goals. Today, we’re outlining the future vision for Cloudflare observability.

What is observability?

Observability means gaining visibility into the internal state of a system. It’s used to give users the tools to figure out what’s happening, where it’s happening, and why.

At Cloudflare, we believe that observability has three core components: monitoring, analytics, and forensics. Monitoring measures the health of a system – it tells you when something is going wrong. Analytics give you the tools to visualize data to identify patterns and insights. Forensics helps you answer very specific questions about an event.

Observability becomes particularly important in the context of security to validate that any mitigating actions performed by our security products, such as Firewall or Bot Management, are not false positives. Was that request correctly classified as malicious? And if it wasn’t, which detection system classified it as such?

Additionally, Cloudflare has products to improve the performance of applications and corporate networks, and to allow developers to write lightning-fast code that runs on our global network. We want to be able to provide our customers with insights into every request, packet, and fetch that goes through Cloudflare’s network.

Monitoring and Notifying

Analytics are fantastic for summarizing data, but how do you know when to look at them? No one wants to sit on the dashboard clicking refresh over and over again just in case something looks off. That’s where notifications come in.

When we talk about something “looking off” on an analytics page, what we really mean is that there’s a significant change in your traffic or network which is reflected by spikes or drops in our analytics. Availability and performance directly affect end users, and our goal is to monitor and notify our customers as soon as we see things going wrong.

Today, we have many different types of notifications from Origin Error Rates, Security Events, and Advanced Security Events to Usage Based Billing and Health Checks. We’re continuously adding more notification types to have them correspond with our awesome analytics. As our analytics get more customizable, our notifications will as well.
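For teams that manage configuration as code, here is a hedged Terraform sketch of wiring up one such notification; the alert type string and email address are illustrative assumptions, so check the notification policy documentation for the values available on your plan:

# Sketch only: email the on-call alias when origin error rates spike.
# The alert_type value and email address are illustrative assumptions.
resource "cloudflare_notification_policy" "origin_error_rate" {
  account_id = var.account_id
  name       = "Origin error rate spike"
  enabled    = true
  alert_type = "http_alert_origin_error"

  email_integration {
    id = "oncall@example.com"
  }
}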

There are tons of different algorithms that can be used to detect spikes, including burn rates and z-scores. We’re continuing to iterate on the algorithms that we use for detections to offer more variations, make them smarter, and make sure that our notifications are both accurate and not too noisy.

Analytics

So, you’ve received an alert from Cloudflare. What comes next?

Analytics can be used to get a bird’s-eye view of traffic or to focus on specific types of events by adding filters and time ranges. After you receive an alert, we want to show you exactly what’s been triggered through graphs, high-level metrics, and top Ns on the Cloudflare dashboard.

Whether you’re a developer, security analyst, or network engineer, the Cloudflare dashboard should be the spot for you to see everything you need. We want to make the dashboard more customizable to serve the diverse use cases of our customers. Analyze data by specifying a timeframe and filter through dropdowns on the dashboard, or build your own metrics and graphs that work alongside the raw logs to give you a clear picture of what’s happening.

Focusing on security, we believe analytics are the best tool to build confidence before deploying security policies. Moving forward, we plan to layer all of our security related detection signals on top of HTTP analytics so you can use the dashboard to answer questions such as: if I were to block all requests that the WAF identifies as an XSS attack, what would I block?

Customers using our enterprise Bot Management may already be familiar with this experience, and as we improve it and build upon it further, all of our other security products will follow.

Analytics are a powerful tool to see high level patterns and identify anomalies that indicate that something unusual is happening. We’re working on new dashboards, customizations, and features that widen the use cases for our customers. Stay tuned!

Logs

Logs are used when you want to examine specific details about an event. They consist of a timestamp and fields that describe the event and are used to get visibility on a granular level when you need a play-by-play.

In each of our datasets, an event measures something different. For example, in HTTP request logs, an event is when an end user requests content from or sends content to a server. For Firewall logs, an event occurs when the Firewall takes an action on an HTTP request. There can be multiple Firewall events for each HTTP request.

Today, our customers access logs using Logpull, Logpush, or Instant Logs. Logpull and Logpush are great for customers that want to send their logs to third parties (like our Analytics Partners) to store, analyze, and correlate with other data sources. With Instant Logs, our customers can monitor and troubleshoot their traffic in real-time straight from the dashboard or CLI. We’re planning on building out more capabilities to dig into logs on Cloudflare. We’re hard at work on building log storage on R2 – but what’s next?
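As a hedged illustration of getting logs out today, a Terraform sketch of a Logpush job for the HTTP requests dataset might look like the following; the destination string and field list are placeholder assumptions, and some destinations also require an ownership challenge:

# Sketch only: push HTTP request logs to external storage for later analysis.
# The destination_conf value and field list are placeholder assumptions.
# Some destinations also require an ownership_challenge argument to prove
# that you control the bucket.
resource "cloudflare_logpush_job" "http_requests" {
  zone_id          = var.zone_id
  name             = "http-requests-to-storage"
  enabled          = true
  dataset          = "http_requests"
  destination_conf = "s3://my-log-bucket/http?region=us-east-1"
  logpull_options  = "fields=RayID,ClientIP,ClientRequestHost,ClientRequestURI,EdgeResponseStatus,EdgeStartTimestamp&timestamps=rfc3339"
}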

We’ve heard from customers that the activity log on the Firewall analytics dashboard is incredibly useful. We want to continue to bring the power of logs to the dashboard by adding the same functionality across our products. For customers that will store their logs on Cloudflare R2, this means that we can minimize the use of sampled data.

If you’re looking for something very specific, querying logs is also important, which is where forensics comes in. The goal is to let you investigate from high-level analytics all the way down to the individual log lines that make them up. Given a unique identifier, such as the ray ID, you should be able to look up a single request, and then correlate it with all other related activity. Find out the client IP of that ray ID and, from there, the use cases are plentiful: what other requests from this IP are malicious? What paths did the client follow?

Tracing

Logs are really useful, but they don’t capture the context around a request. Traces show the end-to-end life cycle of a request from when a user requests a resource to each of the systems that are involved in its delivery. They’re another way of applying forensics to help you find something very specific.

These are used to differentiate each part of the application to identify where errors or bottlenecks are occurring. Let’s say that you have a Worker that performs a fetch to your origin and to a third-party API. Analytics can show you average execution times and error rates for your Worker, but they don’t give you visibility into each of these operations.

Using wrangler dev and console.log statements is a really helpful way to test and debug your code. They bring some of the visibility that’s needed, but it can be tedious to instrument your code like this.

As a developer, you should have the tools to understand what’s going on in your applications so you can deliver the best experience to your end users. We can help you answer questions like: Where is my Worker execution failing? Which operation is causing a spike in latency in my application?

Putting it all together

Notifications, analytics, logs, and tracing each have their distinct use cases, but together, these are powerful tools to provide analysts and developers visibility. Looking forward, we’re excited to bring more and more of these capabilities on the Cloudflare dashboard.

We would love to hear from you as we build these features out. If you’re interested in sharing use cases and helping shape our roadmap, contact your account team!

Commitment to Customer Security

Post Syndicated from Ling Wu original https://blog.cloudflare.com/our-commitment-to-customer-security/

Cloudflare has been hooked on securing customers globally since its inception. Our services protect customer traffic and data as well as our own, and we are continuously improving and expanding those services to respond to the changing threat landscape of the Internet. Proving that commitment is a multi-faceted venture, the Security Team focuses on people, proof, and transparency to ensure every touchpoint with our products and company feels dependable.

People

The breadth of knowledge of the Security Team is wide and bleeding edge. Working as a security team at a security company means being highly technical, diverse, willing to test any and all products on ourselves, and sharing our knowledge with our local and global communities through industry groups and presenting at conferences worldwide. Connecting with our customers and counterparts through meetups and conferences lets us share problems, learn about upcoming industry trends, and share feedback to make improvements to the customer experience. In addition to running a formally documented, risk-based security program for Cloudflare, team members drive continuous improvement efforts across our Product and Infrastructure teams by reviewing and advising on changes, identifying and treating vulnerabilities, controlling authorization and access to systems and data, encrypting data in transit and at rest, and by detecting and responding to threats and incidents.

Proof

Security claims are all well and good, but how can a customer be sure we are doing what we say we do? We prove it by undergoing several audits a year, showing that our security practices meet industry standards. To date, Cloudflare has regularly assessed and maintained compliance with PCI DSS (as a merchant and a service provider), SOC 2 Type II, ISO 27001 and ISO 27701 standards. No matter where our customers are in the world, they will likely need to rely on at least one of these standards to protect their customers’ information. We honor the responsibility of being the backbone of that trust.

As Cloudflare’s customer base continues to grow into more regulated industries with complex and rigorous requirements, we’ve decided to assess our global network against three additional standards this year:

  • FedRAMP, the US Federal Risk and Authorization Management Program, which evaluates our systems and practices against the standard for protection of US agency data in cloud computing environments. Cloudflare is listed on the FedRAMP Marketplace as “In Process” for an agency authorization at a Moderate impact level. We’re in the final steps of concluding our security assessment report from our auditors and on target to receive an authorization to operate in 2022.
  • ISO 27018, which examines our practices to protect personally identifiable information (PII) as a cloud provider. This extension to the ISO 27001 standard ensures that our information security management system (ISMS) manages the risks associated with processing PII. We’ve completed the third-party assessment, and we’re waiting for our certification in the upcoming month.
  • C5, Cloud Computing Compliance Criteria Catalog, introduced by the Federal Office for Information Security (The BSI) in Germany, is a validation against a defined baseline security level for cloud computing. Cloudflare is currently in the process of being assessed against the catalog by third-party auditors. Learn about our journey here.

Transparency

Our commitment to security for our customers and business means we have to be super transparent. When a security incident is being contained, our response plan calls for not only bringing in our legal, compliance, and communications teams to determine a notification strategy, but also outlining a detailed overview of how we are responding, even if we are still in the process of remediating.

We know firsthand how frustrating it can be when your critical vendors stay silent during a security incident and provide nothing more than a one sentence legal response which fails to reveal how they were impacted by the security vulnerability or incident. Here at Cloudflare, it is in our DNA to be transparent. You can see it with the blogs (Verkada Incident, Log4j) we write and how quickly we show our customers how we’ve responded and what we’re doing to fix the issue.

One of the most frequent questions we get from our customers regarding incidents is whether our third parties were impacted. Supply chain vulnerabilities, like SolarWinds and Log4j, have driven us to create efficiencies, such as sending automated inquiries to all of our critical vendors at once. During the Containment phase of our security incident response process, our third-party risk team is quickly able to identify the impacted vendors and prioritize our production and security vendors. Our tooling allows us to trigger inquiries to third parties immediately, and our team is integrated into the incident response process to ensure effective communication. Any information that we receive from our vendors, we share with our Security Compliance forums so that other companies who are also inquiring with their vendors don’t have to duplicate the work.

Value

These recurring audits and assessments are not simple website badges. Our Security team doesn’t produce evidence only to pass audits; our process includes identifying risks, forming controls and processes to address those risks, continuously operating those processes, evaluating their effectiveness (in the form of internal and external audits and tests), and making improvements to the ISMS based on those evaluations. Some things about our process that set us apart include the following:

  • Many companies do not contact vendors or have this process baked into their incident response procedures. For log4j, our Vendor Security Team was on calls with the response team and providing regular updates on vendor responses as soon as the incident was identified.
  • Many companies do not proactively communicate to customers like we do. We communicate even when we are not legally required to do so because we feel it’s the right thing to do regardless of the requirement.
  • The tools in this space also are not usually flexible enough to send custom questionnaires quickly out to vendors. We have automation in place to get these out in bulk right away and tailor questions to the vulnerabilities at hand.

The final step is communicating the resulting picture of our security posture to our customers. Our security certifications and assessment results are available to our customers via download from their Cloudflare Dashboards, or by request to their account team. For the latest information about our certifications and reports, please visit our Trust Hub.

Cloudflare partners with Microsoft to protect joint customers with a Global Zero Trust Network

Post Syndicated from Abhi Das original https://blog.cloudflare.com/cloudflare-partners-with-microsoft-to-protect-joint-customers-with-global-zero-trust-network/

As a company, we are constantly asking ourselves what we can do to provide more value to our customers, including integrated solutions with our partners. Joint customers benefit from the integrations with Azure Active Directory described below in three ways:

First, centralized identity and access management via Azure Active Directory which provides single sign-on, multifactor authentication, and access via conditional authentication.

Second, policy oriented access to specific applications using Cloudflare Access—a VPN replacement service.

Third, an additional layer of security for internal applications by connecting them to Cloudflare global network and not having to open them up to the whole Internet.

Let’s step back a bit.

Why Zero Trust?

Companies of all sizes are faced with an accelerating digital transformation of their IT stack and an increasingly distributed workforce, changing the definition of the security perimeter. We are moving away from the castle and moat model to the whole Internet, requiring security checks for every user accessing every resource. As a result, all companies, especially those whose use of Azure’s broad cloud portfolio is increasing, are adopting Zero Trust architectures as an essential part of their cloud and SaaS journey.

Cloudflare Access provides secure access to Azure hosted applications and on-premise applications. It also acts as an on-ramp, over the world’s fastest network, to Azure and the rest of the Internet. Users connect from their devices or offices via Cloudflare’s network in over 250 cities around the world. You can use Cloudflare Zero Trust on that global network to ensure that every request to every resource is evaluated for security, including user identity. We are excited to bring this secure global network on-ramp to Azure hosted applications and on-premise applications.

Performance, in addition to security, is one of our key advantages. Cloudflare serves over 32 million HTTP requests per second, on average, for the millions of customers who secure their applications on our network. When those applications do not run on our network, we can rely on our own global private backbone and our connectivity with over 10,000 networks globally to connect the user.

We are excited to bring this global security and performance perimeter network as our Cloudflare Zero Trust product for your Azure hosted applications and on-premises applications.

Cloudflare Access: a modern Zero Trust approach

Cloudflare’s Zero Trust solution Cloudflare Access provides a modern approach to authentication for internally managed applications. When corporate applications on Azure or on-premise are protected with Cloudflare Access, they feel like SaaS applications, and employees can log in to them with a simple and consistent flow. Cloudflare Access acts as a unified reverse proxy to enforce access control by making sure every request is authenticated, authorized, and encrypted.

Identity: Cloudflare Access integrates out of the box with all the major identity providers, including Azure Active Directory, allowing use of the policies and users you already created to provide conditional access to your web applications. For example, you can use Cloudflare Access to ensure that only company employees and no contractors can get to your internal kanban board, or you can lock down the SAP finance application hosted on Azure or on-premise.

Devices: You can use TLS with Client Authentication and limit connections only to devices with a unique client certificate. Cloudflare will ensure the connecting device has a valid client certificate signed by the corporate CA, and authenticate user credentials to grant access to an internal application.

Additional security: Want to use Cloudflare Access in front of an internal application but don’t want to open up that application to the whole Internet? For additional security, you can combine Access with Cloudflare Tunnel. Cloudflare Tunnel will connect from your Azure environment directly to Cloudflare’s network, so there is no publicly accessible IP.
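A hedged Terraform sketch of that pattern is below; the tunnel name, secret, and hostname are placeholders, and a cloudflared instance with a matching ingress configuration still needs to run inside the Azure environment:

# Sketch only: create a tunnel and a proxied hostname that resolves to it,
# so the origin in Azure never needs a publicly reachable IP.
# The name, secret, and hostname below are placeholder assumptions.
resource "cloudflare_argo_tunnel" "azure_app" {
  account_id = var.account_id
  name       = "azure-internal-app"
  secret     = var.tunnel_secret # base64-encoded random value, at least 32 bytes
}

resource "cloudflare_record" "internal_app" {
  zone_id = var.zone_id
  name    = "internal-app"
  type    = "CNAME"
  value   = "${cloudflare_argo_tunnel.azure_app.id}.cfargotunnel.com"
  proxied = true
}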

Secure both your legacy applications and Azure hosted applications via Azure AD and Cloudflare Access jointly

For on-premise legacy applications, we are excited to announce that Cloudflare is an Azure Active Directory secure hybrid access partner. Azure AD secure hybrid access enables customers to centrally manage access to their legacy on-premise applications using SSO authentication, without incremental development. Starting today, joint customers can easily use the Cloudflare Access solution as an additional layer of security, with built-in performance, in front of their legacy applications.

Traditionally, for on-premise applications, customers have had to change their existing code or add additional layers of code to integrate Azure AD or Cloudflare Access–like capabilities. With the help of Azure Active Directory secure hybrid access, customers can integrate these capabilities seamlessly with minimal code changes. Once integrated, customers can take advantage of the below Azure AD features and more:

  1. Multi-factor authentication (MFA)
  2. Single sign-on (SSO)
  3. Passwordless authentication
  4. Unified user access management
  5. Azure AD Conditional Access and device trust

Very similarly, the Azure AD and Cloudflare Access combination can also be used to secure your Azure hosted applications. Cloudflare Access enables a secure on-ramp to Azure hosted applications or on-premise applications via the two integrations below:

1. Cloudflare Access integration with Azure AD:

Cloudflare Access is a Zero Trust Network Access (ZTNA) solution that allows you to configure precise access policies across your applications. You can integrate Microsoft Azure Active Directory with Cloudflare Zero Trust and build rules based on user identity, group membership, and Azure AD Conditional Access policies. Users will authenticate with their Azure AD credentials and connect to Cloudflare Access. Additional policy controls include Device Posture, Network, Location, and more. Setup typically takes less than a few hours!
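A hedged Terraform sketch of this integration is below; the Azure application credentials, tenant ID, and group ID are placeholders from your own Azure AD tenant:

# Sketch only: register Azure AD as an identity provider and allow a specific
# Azure AD group into an application. IDs and secrets are placeholders.
resource "cloudflare_access_identity_provider" "azure_ad" {
  account_id = var.account_id
  name       = "Azure AD"
  type       = "azureAD"

  config {
    client_id      = var.azure_client_id
    client_secret  = var.azure_client_secret
    directory_id   = var.azure_tenant_id
    support_groups = true # needed to build policies on Azure AD group membership
  }
}

resource "cloudflare_access_policy" "allow_azure_group" {
  application_id = cloudflare_access_application.internal_app.id
  zone_id        = var.zone_id
  name           = "Allow engineering group"
  decision       = "allow"
  precedence     = 1

  include {
    azure {
      id                   = [var.azure_group_id]
      identity_provider_id = cloudflare_access_identity_provider.azure_ad.id
    }
  }
}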

2. Cloudflare Tunnel integration with Azure:

Cloudflare Tunnel can expose applications running on the Microsoft Azure platform. See our guide to install and configure Cloudflare Tunnel. A prebuilt Cloudflare Linux image also exists on the Azure Marketplace. To simplify the process of connecting Azure applications to Cloudflare’s network, deploy the prebuilt image to an Azure resource group. Cloudflare Tunnel is now available on Microsoft’s Azure marketplace.

“The hybrid work environment has accelerated the cloud transition and increased the need for CIOs everywhere to provide secure and performant access to applications for their employees. This is especially true for self-hosted applications. Cloudflare’s global network security perimeter via Cloudflare Access provides this additional layer of security together with Azure Active Directory to enable employees to get work done from anywhere securely and performantly.” said David Gregory, Principal Program Manager
– Microsoft Azure Active Directory

Conclusion

Over the last ten years, Cloudflare has built one of the fastest, most reliable, most secure networks in the world. You now have the ability to use that network as a global security and performance perimeter for your Azure hosted applications via the above integrations, and it is easy to do.

What’s next?

In the coming months, we will be further strengthening our integrations with Microsoft Azure allowing customers to better implement our SASE perimeter.

If you’re using Cloudflare Zero Trust products today and are interested in using this integration with Azure, please visit our above developer documentation to learn about how you can enable it. If you want to learn more or have additional questions, please fill out the form or get in touch with your Cloudflare CSM or AE, and we’ll be happy to help you.

Managing Clouds – Cloudflare CASB and our not so secret plan for what’s next

Post Syndicated from Corey Mahan original https://blog.cloudflare.com/managing-clouds-cloudflare-casb/

Last month we introduced Cloudflare’s new API–driven Cloud Access Security Broker (CASB) via the acquisition of Vectrix. As a quick recap, Cloudflare’s CASB helps IT and security teams detect security issues in and across their SaaS applications. We look at both data and users in SaaS apps to alert teams to issues ranging from unauthorized user access and file exposure to misconfigurations and shadow IT.

I’m excited to share two updates since we announced the introduction of CASB functionality to Cloudflare Zero Trust. First, we’ve heard from Cloudflare customers who cannot wait to deploy the CASB and want to use it in more depth. Today, we’re outlining what we’re building next, based on that feedback, to give you a preview of what you can expect. Second, we’re opening the sign-up for our beta, and I’m going to walk through what will be available to new users as they are invited from the waitlist.

What’s next in Cloudflare CASB?

The vision for Cloudflare’s API–driven CASB is to provide IT and security owners an easy-to-use, one-stop shop to protect the security of their data and users across their fleet of SaaS tools. Our goal is to make sure any IT or security admin can go from creating a Zero Trust account for the first time to protecting what matters most in minutes.

Beyond that immediate level of visibility, we know the problems discovered by IT and security administrators still require time to find, understand, and resolve. We’re introducing three new features to the core CASB platform in the coming months to address each of those challenges.

New integrations (with more yet to come)

First, what are integrations? Integrations are what we call the method of granting permissions and connecting SaaS applications (via API) to CASB for security scanning and management. Generally speaking, integrations follow an OAuth 2.0 flow; however, this varies between third-party SaaS apps. In line with our goal, we’ll always make sure that integration setup flows are as simple as possible and can be completed in minutes.

As with most security strategies, protecting your most critical assets first becomes the priority. Integrations with Google Workspace and GitHub will be available in beta (request access here). We’ll soon follow with integrations to Zoom, Slack, and Okta before adding services like Microsoft 365 and Salesforce later this year. Working closely with customers will drive which applications we integrate with next.

SaaS asset management

On top of integrations, managing the various assets, or “digital nouns” like users, data, folders, repos, meetings, calendars, files, settings, recordings, etc. across services is tricky to say the least. Spreadsheets are hard to manage for tracking who has access to what or what files have been shared with whom.

This isn’t efficient and is ripe for human error. CASB SaaS asset management allows IT and security teams to view all of their data settings and user activity around that data from a single dashboard. Answering questions like “did we disable the account for a user across these six services?” becomes a quick task instead of logging into each service and checking individually.

Remediation guides + automated workflows

Detect, prevent, and fix. With detailed SaaS remediation guides, IT administrators can assign and tackle issues with the right team. Arming teams with the context they need makes it seamless to prevent issues from happening again. In situations where action should be taken straight away, automated SaaS workflows provide the ability to solve SaaS security issues in one click. Need to remove sharing permissions from that file in OneDrive? A remediation button allows for action from anywhere, anytime.

Cloudflare Gateway + CASB

Combining products across the Zero Trust platform means solving complex problems through one seamless experience. Starting with the power of Gateway and CASB, customers will be able to take immediate action to wrangle in Shadow IT. In just a few clicks, a detected unauthorized SaaS application from the Gateway shadow IT report can go from being the wild west to a sanctioned and secure one with a CASB integration. This is just one example to highlight the many solutions we’re excited about that can be solved with the Zero Trust platform.

Launching the Cloudflare CASB beta and what you can expect

In the CASB beta you can deploy popular integrations like Google Workspace on day one. You’ll also get direct access to our Product team to help shape what comes next. We’re excited to work closely with a number of early customers to align on which integrations and features matter most to them.

Getting started today with the Cloudflare CASB beta

Right now we’re working on making the out-of-band CASB product a seamless part of the Zero Trust platform. We’ll be sending out the first wave of beta invitations early next month – you can request access here.

We have some big ideas of what the CASB product can and will do. While this post highlights some exciting things to come, you can get started right now with Cloudflare’s Zero Trust platform by signing up here.

Handy Tips #25: Securing Zabbix logins with password complexity settings

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-25-securing-zabbix-logins-with-password-complexity-settings/19883/

Secure your Zabbix logins from brute-force and dictionary attacks by defining password complexity requirements.

An organization-wide password policy can be extremely unreliable if we don’t have a toolset to enforce it. By using native password complexity settings, we can provide an additional layer of security and ensure that our users follow our organization’s password complexity policies.

Define custom Zabbix login password complexity rules:

  • Set the minimum password length in a range of 2 – 70 characters
  • Define password character set rules

  • A built-in password list secures users from dictionary attacks
  • Prevent usage of passwords containing first or last names and easy to guess words

Check out the video to learn how to configure Zabbix password complexity requirements.

How to configure Zabbix password complexity requirements:
 
  1. As a super admin navigate to Administration → Authentication
  2. Define the minimum password length
  3. Select the optional Password must contain requirements
  4. Mark Avoid easy-to-guess passwords option
  5. Navigate to Administration → Users
  6. Select the user for which we will change the password
  7. Press the Change password button
  8. Try using easy-to-guess passwords like zabbix or password
  9. Observe the error messages
  10. Define a password that fits the password requirements
  11. Press the Update button

Tips and best practices:
  • It is possible to restrict access to the ui/data/top_passwords.txt file, which contains the Zabbix password deny list
  • Passwords longer than 72 characters will be truncated
  • Password complexity requirements are only applied to the internal Zabbix authentication
  • Users can change their passwords in the user profile settings

The post Handy Tips #25: Securing Zabbix logins with password complexity settings appeared first on Zabbix Blog.

Cloudflare and CrowdStrike partner to give CISOs secure control across devices, applications, and corporate networks

Post Syndicated from Deeksha Lamba original https://blog.cloudflare.com/cloudflare-crowdstrike-partnership/

Today, we are very excited to announce multiple new integrations with CrowdStrike. These integrations combine the power of Cloudflare’s expansive network and Zero Trust suite, with CrowdStrike’s Endpoint Detection and Response (EDR) and incident remediation offerings.

At Cloudflare, we believe in making our solutions integrate easily with our customers’ existing technology stacks. Through our partnerships and integrations, we make it easier for our customers to use Cloudflare solutions jointly with those of our partners, to further strengthen their security posture and unlock more value. Our partnership with CrowdStrike is an apt example of such efforts.

Together, Cloudflare and CrowdStrike are working to simplify the adoption of Zero Trust for IT and security teams. With this expanded partnership, joint customers can identify, investigate, and remediate threats faster through multiple integrations:

First, by integrating Cloudflare’s Zero Trust services with CrowdStrike Falcon Zero Trust Assessment (ZTA), which provides continuous real-time device posture assessments, our customers can verify users’ device posture before granting them access to internal or external applications.

Second, we joined the CrowdXDR Alliance in December 2021 and are partnering with CrowdStrike to share security telemetry and other insights to make it easier for customers to identify and mitigate threats. Cloudflare’s global network spans more than 250 cities in over 100 countries, blocking an average of 76 billion cyber threats each day. This provides customers with unparalleled insights, helping security teams better protect their organization. By joining the CrowdXDR Alliance, we will be able to use security signals from Cloudflare’s global network with CrowdStrike’s leading endpoint protection to help mutual customers stop cyber attacks anywhere in their network.

Third, CrowdStrike is one of Cloudflare’s incident response partners, providing rapid and effective support. CrowdStrike’s incident response team deals with active under attack situations day in, day out — helping customers mitigate the attack and get their web property and network back online. Our partnership with CrowdStrike enables rapid remediation of under attack scenarios to safeguard organizations from adversaries.

“The speed in which a company is able to identify, investigate and remediate a threat heavily determines how it will fare in the end. Our partnership with Cloudflare provides companies the ability to take action rapidly and contain exposure at the time of an attack, enabling them to get back on their feet and return to business as usual as quickly as possible.”
Thomas Etheridge, Senior Vice President, CrowdStrike Services

CrowdStrike’s endpoint security meets Cloudflare’s Zero Trust Services


Before we get deep into how the integration works, let’s first recap Cloudflare’s Zero Trust Services.

Cloudflare Access and Gateway

Cloudflare Access determines whether a user should be allowed access to an application. It uses our global network to check every request or connection for identity, device posture, location, multifactor method, and many more attributes. Access also logs every request and connection, giving administrators high visibility. The upshot of all of this: it enables customers to deprecate their legacy VPNs.

Cloudflare Gateway protects users as they connect to the rest of the Internet. Instead of backhauling traffic to a centralized location, users connect to a nearby Cloudflare data center, where we apply one or more layers of security, filtering, and logging before accelerating their traffic to its final destination.

Zero Trust Integration with CrowdStrike

Cloudflare customers can now build Access and Gateway policies based on the presence of a CrowdStrike agent on the endpoint. In conjunction with our Zero Trust client, we are able to leverage the enhanced telemetry that CrowdStrike provides about a user’s device.

CrowdStrike’s Zero Trust Assessment (ZTA) delivers continuous real-time security posture assessments across all endpoints in an organization regardless of the location, network or user. The ZTA scores enable enforcement of conditional policies based on device health and compliance checks to mitigate risks. These policies are evaluated each time a connection request is made, making the conditional access adaptive to the evolving condition of the device.

With this integration, organizations can build on top of their existing Cloudflare Access and Gateway policies ensuring that a minimum ZTA score or version has been met before a user is granted access. Because these policies work across our entire Zero Trust platform, organizations can use these to build powerful rules invoking Browser Isolation, tenant control, antivirus or any part of their Cloudflare deployment.
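
To make this concrete, the sketch below shows, in deliberately simplified form, the kind of decision logic such a policy expresses. It is not Cloudflare’s implementation or API; the domain, score threshold, and actions are illustrative assumptions. The point is that the check runs on every connection request, so a device whose ZTA score drops loses direct access on its next attempt.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Action(Enum):
    ALLOW = "allow"
    ISOLATE = "isolate"   # e.g. serve the app through Browser Isolation
    BLOCK = "block"

@dataclass
class ConnectionRequest:
    user_email: str
    idp_verified: bool         # identity confirmed by the identity provider
    zta_score: Optional[int]   # ZTA score reported for the device (0-100), None if no agent

# Hypothetical policy parameters -- tune to your own risk appetite.
ALLOWED_DOMAIN = "@example.com"
MIN_ZTA_SCORE = 60

def evaluate(req: ConnectionRequest) -> Action:
    """Evaluated on every request, so access adapts to the device's current posture."""
    if not req.idp_verified or not req.user_email.endswith(ALLOWED_DOMAIN):
        return Action.BLOCK
    if req.zta_score is None:           # no CrowdStrike agent present
        return Action.BLOCK
    if req.zta_score < MIN_ZTA_SCORE:   # posture too weak for direct access
        return Action.ISOLATE
    return Action.ALLOW

# Example: a verified user on a healthy device is allowed through.
print(evaluate(ConnectionRequest("alice@example.com", True, 85)))  # Action.ALLOW
print(evaluate(ConnectionRequest("alice@example.com", True, 40)))  # Action.ISOLATE
```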


“The CrowdStrike Falcon platform secures customers through verified access controls, helping customers reduce their attack surface and simplify, empower and accelerate their Zero Trust journey. By expanding our partnership with Cloudflare, we are making it easier for joint customers to strengthen their Zero Trust security posture across all endpoints and their entire corporate network.”
Michael Sentonas, Chief Technology Officer, CrowdStrike

How the integration works

Customers using our Zero Trust suite can add CrowdStrike as a device posture provider in the Cloudflare Zero Trust dashboard under Settings → Devices → Device Posture Providers. The details required from the CrowdStrike dashboard include: ClientID, Client Secret, REST API URL, and Customer ID.
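
These values come from a CrowdStrike API client (typically created under API Clients and Keys in the Falcon console, usually with read access to Zero Trust Assessment data) together with your Customer ID. Before entering them in the Cloudflare dashboard, you can confirm that the Client ID, Client Secret, and REST API URL work together by requesting an OAuth2 token from the Falcon API. A minimal sketch with placeholder credentials:

```python
import requests

# Placeholder values -- the same ones you would enter in the Cloudflare dashboard.
REST_API_URL = "https://api.crowdstrike.com"   # varies by CrowdStrike cloud region
CLIENT_ID = "your-falcon-client-id"
CLIENT_SECRET = "your-falcon-client-secret"

def check_credentials() -> bool:
    """Request an OAuth2 bearer token from the CrowdStrike Falcon API.

    A successful response containing an access_token means the
    Client ID / Client Secret pair is valid for the given API base URL.
    """
    resp = requests.post(
        f"{REST_API_URL}/oauth2/token",
        data={"client_id": CLIENT_ID, "client_secret": CLIENT_SECRET},
        timeout=10,
    )
    return resp.ok and "access_token" in resp.json()

if __name__ == "__main__":
    print("Credentials valid:", check_credentials())
```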


After creating the CrowdStrike posture provider, customers can create specific device posture checks that require users’ devices to meet a minimum ZTA score.


These rules can now be used to create conditional Access and Gateway policies to allow or deny access to applications, networks, or sites. Administrators can choose to block or isolate users or user groups with malicious or insecure devices.


What comes next?

In the coming months, we will be further strengthening our integrations with CrowdStrike by allowing customers to correlate their Cloudflare logs with Falcon telemetry, for timely detection and mitigation of sophisticated threats.
If you’re using Cloudflare Zero Trust products today and are interested in using this integration with CrowdStrike, please visit our documentation to learn about how you can enable it. If you want to learn more or have additional questions, please fill out the form or get in touch with your Cloudflare CSM or AE, and we’ll be happy to help you.