Tag Archives: Uncategorized

SVR Attacks on Microsoft 365

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/svr-attacks-on-microsoft-365.html

FireEye is reporting the current known tactics that the SVR used to compromise Microsoft 365 cloud data as part of its SolarWinds operation:

Mandiant has observed UNC2452 and other threat actors moving laterally to the Microsoft 365 cloud using a combination of four primary techniques:

  • Steal the Active Directory Federation Services (AD FS) token-signing certificate and use it to forge tokens for arbitrary users (sometimes described as Golden SAML). This would allow the attacker to authenticate into a federated resource provider (such as Microsoft 365) as any user, without the need for that user’s password or their corresponding multi-factor authentication (MFA) mechanism.
  • Modify or add trusted domains in Azure AD to add a new federated Identity Provider (IdP) that the attacker controls. This would allow the attacker to forge tokens for arbitrary users and has been described as an Azure AD backdoor.
  • Compromise the credentials of on-premises user accounts that are synchronized to Microsoft 365 that have high privileged directory roles, such as Global Administrator or Application Administrator.
  • Backdoor an existing Microsoft 365 application by adding a new application or service principal credential in order to use the legitimate permissions assigned to the application, such as the ability to read email, send email as an arbitrary user, access user calendars, etc.

Lots of details here, including information on remediation and hardening.

The more we learn about this operation, the more sophisticated it becomes.

In related news, MalwareBytes was also targeted.

Using Route 53 Private Hosted Zones for Cross-account Multi-region Architectures

Post Syndicated from Anandprasanna Gaitonde original https://aws.amazon.com/blogs/architecture/using-route-53-private-hosted-zones-for-cross-account-multi-region-architectures/

This post was co-written by Anandprasanna Gaitonde, AWS Solutions Architect and John Bickle, Senior Technical Account Manager, AWS Enterprise Support

Introduction

Many AWS customers have internal business applications spread over multiple AWS accounts and on-premises to support different business units. In such environments, you may find a consistent view of DNS records and domain names between on-premises and the different AWS accounts useful. Route 53 Private Hosted Zones (PHZs) and Resolver endpoints on AWS enable an architecture best practice of centralized DNS in a hybrid cloud environment. This gives your business units the flexibility and autonomy to manage the hosted zones for their applications and to support multi-region application environments for disaster recovery (DR) purposes.

This blog presents an architecture that provides a unified view of the DNS while allowing different AWS accounts to manage subdomains. It utilizes PHZs with overlapping namespaces and cross-account multi-region VPC association for PHZs to create an efficient, scalable, and highly available architecture for DNS.

Architecture Overview

You can set up a multi-account environment using services such as AWS Control Tower to host applications and workloads from different business units in separate AWS accounts. However, these applications have to conform to a naming scheme based on organization policies and on simpler management of the DNS hierarchy. As a best practice, the integration with on-premises DNS is done by configuring Amazon Route 53 Resolver endpoints in a shared networking account. Following is an example of this architecture.

Route 53 PHZs and Resolver Endpoints

Figure 1 – Architecture Diagram

The customer in this example has on-premises applications under the customer.local domain. Applications hosted in AWS use subdomain delegation to aws.customer.local. The example here shows three applications that belong to three different teams, and those environments are located in separate AWS accounts to allow for autonomy and flexibility. This architecture pattern follows the “Multi-Account Decentralized” model described in the whitepaper Hybrid Cloud DNS options for Amazon VPC.

This architecture involves three key components:

1. PHZ configuration: PHZ for the subdomain aws.customer.local is created in the shared Networking account. This is to support centralized management of PHZ for ancillary applications where teams don’t want individual control (Item 1a in Figure). However, for the key business applications, each of the teams or business units creates its own PHZ. For example, app1.aws.customer.local – Application1 in Account A, app2.aws.customer.local – Application2 in Account B, app3.aws.customer.local – Application3 in Account C (Items 1b in Figure). Application1 is a critical business application and has stringent DR requirements. A DR environment of this application is also created in us-west-2.

For a consistent view of DNS and efficient DNS query routing between the AWS accounts and on-premises, the best practice is to associate all the PHZs with the Networking account. PHZs created in Accounts A, B, and C are associated with the VPC in the Networking account by using cross-account association of Private Hosted Zones with VPCs. This creates overlapping domains from multiple PHZs for the VPCs of the Networking account. It also overlaps with the parent subdomain PHZ (aws.customer.local) in the Networking account. In cases where there are two or more PHZs with overlapping namespaces, the Route 53 Resolver routes traffic based on the most specific match, as described in the Developer Guide.
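For reference, cross-account PHZ-to-VPC association is a two-step handshake: the account that owns the PHZ authorizes the association, and the account that owns the VPC completes it. The following is a minimal sketch with the AWS CLI; the hosted zone ID and VPC ID are placeholders:

# In Account A (owner of the app1.aws.customer.local PHZ):
# authorize association with the Networking account's VPC.
aws route53 create-vpc-association-authorization \
    --hosted-zone-id Z0123456789EXAMPLE \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0networkingexample

# In the Networking account: complete the association.
aws route53 associate-vpc-with-hosted-zone \
    --hosted-zone-id Z0123456789EXAMPLE \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0networkingexample

# Optionally, back in Account A: remove the authorization once the association exists.
aws route53 delete-vpc-association-authorization \
    --hosted-zone-id Z0123456789EXAMPLE \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0networkingexample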

2. Route 53 Resolver endpoints for on-premises integration (Item 2 in Figure): The Networking account is used to set up the integration with on-premises DNS using Route 53 Resolver endpoints, as shown in Resolving DNS queries between VPC and your network. Inbound and outbound Route 53 Resolver endpoints are created in the VPC in us-east-1 to serve as the integration point between on-premises DNS and AWS. DNS traffic between on-premises and AWS requires an AWS Site-to-Site VPN connection or an AWS Direct Connect connection to carry DNS and application traffic. For each Resolver endpoint, two or more IP addresses can be specified to map to different Availability Zones (AZs). This helps create a highly available architecture.
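As an illustration, an inbound Resolver endpoint spanning two AZs can be created with a single CLI call. The following is a sketch; the subnet IDs, security group, and request ID are placeholders, and an outbound endpoint is created the same way with --direction OUTBOUND:

aws route53resolver create-resolver-endpoint \
    --name onprem-inbound \
    --creator-request-id 2021-01-onprem-inbound \
    --direction INBOUND \
    --security-group-ids sg-0exampledns \
    --ip-addresses SubnetId=subnet-0az1example SubnetId=subnet-0az2example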

3. Route 53 Resolver rules (Item 3 in Figure): Forwarding rules are created only in the Networking account to route DNS queries for on-premises domains (customer.local) to the on-premises DNS server. AWS Resource Access Manager (RAM) is used to share the rules with Accounts A, B, and C, as mentioned in the section “Sharing forwarding rules with other AWS accounts and using shared rules” in the documentation. Account owners can then associate these shared rules with their VPCs the same way that they associate rules created in their own AWS accounts. When you share a rule with another AWS account, you also indirectly share the outbound endpoint specified in the rule, as described in the section “Considerations when creating inbound and outbound endpoints” in the documentation. This means you can use one outbound endpoint in a Region to forward DNS queries to your on-premises network from multiple VPCs, even if the VPCs were created in different AWS accounts. Resolver forwards DNS queries for the domain name specified in the rule through the outbound endpoint to the on-premises DNS servers. The rules are created in both Regions in this architecture.
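A sketch of how a forwarding rule for customer.local might be shared from the Networking account and associated in Account A; the rule ID, account IDs, and VPC ID are placeholders:

# In the Networking account: share the Resolver rule through AWS RAM.
aws ram create-resource-share \
    --name onprem-dns-rules \
    --resource-arns arn:aws:route53resolver:us-east-1:111111111111:resolver-rule/rslvr-rr-exampleid01 \
    --principals 222222222222

# In Account A (after accepting the share, if required): associate the shared rule with the VPC.
aws route53resolver associate-resolver-rule \
    --resolver-rule-id rslvr-rr-exampleid01 \
    --vpc-id vpc-0accountaexample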

This architecture provides the following benefits:

  1. Resilient and scalable
  2. Uses the VPC+2 endpoint, local caching and Availability Zone (AZ) isolation
  3. Minimal forwarding hops
  4. Lower cost: optimal use of Resolver endpoints and forwarding rules

In order to handle the DR, here are some other considerations:

  • For app1.aws.customer.local, the same PHZ is associated with the VPC in the us-west-2 Region. While VPCs are regional, the PHZ is a global construct, so the same PHZ is accessible from VPCs in different Regions.
  • Failover routing policy is set up in the PHZ and failover records are created. However, Route 53 health checkers (being outside of the VPC) require a public IP for your applications. As these business applications are internal to the organization, a metric-based health check with Amazon CloudWatch can be configured as mentioned in Configuring failover in a private hosted zone.
  • Resolver endpoints are created in VPC in another region (us-west-2) in the networking account. This allows on-premises servers to failover to these secondary Resolver inbound endpoints in case the region goes down.
  • A second set of forwarding rules is created in the networking account, which uses the outbound endpoint in us-west-2. These are shared with Account A and then associated with VPC in us-west-2.
  • In addition, to have DR across multiple on-premises locations, the on-premises servers should have a secondary backup DNS on-premises as well (not shown in the diagram).
    This ensures a simple DNS architecture for the DR setup, and seamless failover for applications in case of a region failure.

Considerations

  • If Application 1 needs to communicate with Application 2, then the PHZ from Account A must be shared with Account B. DNS queries can then be routed efficiently for those VPCs in different accounts.
  • Create additional IP addresses in a single AZ/subnet for the resolver endpoints, to handle large volumes of DNS traffic.
  • Look at Considerations while using Private Hosted Zones before implementing such architectures in your AWS environment.

Summary

Hybrid cloud environments can utilize the features of Route 53 Private Hosted Zones such as overlapping namespaces and the ability to perform cross-account and multi-region VPC association. This creates a unified DNS view for your application environments. The architecture allows for scalability and high availability for business applications.

Sophisticated Watering Hole Attack

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/sophisticated-watering-hole-attack.html

Google’s Project Zero has exposed a sophisticated watering-hole attack targeting both Windows and Android:

Some of the exploits were zero-days, meaning they targeted vulnerabilities that at the time were unknown to Google, Microsoft, and most outside researchers (both companies have since patched the security flaws). The hackers delivered the exploits through watering-hole attacks, which compromise sites frequented by the targets of interest and lace the sites with code that installs malware on visitors’ devices. The boobytrapped sites made use of two exploit servers, one for Windows users and the other for users of Android.

The use of zero-days and complex infrastructure isn’t in itself a sign of sophistication, but it does show above-average skill by a professional team of hackers. Combined with the robustness of the attack code — which chained together multiple exploits in an efficient manner — the campaign demonstrates it was carried out by a “highly sophisticated actor.”

[…]

The modularity of the payloads, the interchangeable exploit chains, and the logging, targeting, and maturity of the operation also set the campaign apart, the researcher said.

No attribution was made, but the list of countries likely to be behind this isn’t very large. If you were to ask me to guess based on available information, I would guess it was the US — specifically, the NSA. It shows a care and precision that it’s known for. But I have no actual evidence for that guess.

All the vulnerabilities were fixed by last April.

Send localized messages using Amazon Pinpoint templates and standard demographic attributes

Post Syndicated from Mohit Palriwal original https://aws.amazon.com/blogs/messaging-and-targeting/send-localized-messages-using-pinpoint-templates-and-standard-demographic-attributes/

As your application user base expands into more countries and languages, it’s important to make sure messages are localized for each recipient to improve engagement. Localizing your messages helps you reach your audience with content specific to their language settings. Creating separate messages for each language and managing each template separately can require a lot of duplicated effort. It is also challenging to manage and group templates based on all possible locales or specific campaigns.

Amazon Pinpoint‘s messaging template provides a way to build a single message with multiple localizations. You prepare the localizations based on the locale of the audience registered with your Amazon Pinpoint project.

This blog post walks you through a solution that uses the locale of your user endpoints to build a localized messaging template. We provide you with a template that can be used with an Amazon Pinpoint campaign or journey to target your audience across multiple locales with localized message content. This solution is applicable to all channels supported by Amazon Pinpoint: SMS, email, push, and voice. This blog explains the solution for an SMS-specific scenario.

Solution overview

The solution below describes the workflow to send localized messages to a group of users across various locales. The first prerequisite is to create an Amazon Pinpoint project in your AWS account and enable the corresponding channels for message sending. Next, you will create an Amazon Pinpoint template using locale-specific message variables and register user endpoints with a demographic locale property. Once the segment and template resources are created, you can create a localized message in your campaign or journey.

Setting up the solution

1. Set up Amazon Pinpoint

First, create a new Amazon Pinpoint project and configure the desired channels from which you want to send localized messages.

2. Create a localized template

  1. Create an Amazon Pinpoint messaging template with the supported message variables of your choice. This lets you build more dynamic and personalized content.
  2. Use Demographic.Locale from the supported endpoint attributes to customize your message content per locale, using the eq comparison helper.

Below is an example of using an endpoint standard locale attribute in a template.

{{#eq Demographic.Locale "fr-FR"}} Bienvenue dans l'expérience utilisateur Pinpoint! 
{{else eq Demographic.Locale "de-DE"}} Willkommen bei Pinpoint User Experience! 
{{else}} Welcome to Pinpoint User Experience ! {{/eq}}  

3. Register your users with locale property

Register your user endpoints with Amazon Pinpoint, including the standard demographic Locale and Timezone attributes.

Below is an example of registering an SMS endpoint with the de-DE locale:

aws pinpoint update-endpoint \
    --application-id $APP_ID \
    --endpoint-id $ENDPOINT_ID \
    --endpoint-request '{"Address":"+19999999999","ChannelType":"SMS","Demographic":{"Locale":"de-DE","Timezone":"Europe/Berlin"}}'

Note: You can also register your user endpoints using the import segment feature. This accepts a .csv file with all endpoints.
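For illustration, a bulk import might use a CSV like the following stored in Amazon S3 and referenced by an import job. This is a sketch; the bucket name, IAM role, and endpoint rows are placeholders:

# endpoints.csv: the header row uses endpoint attribute paths
# ChannelType,Address,Demographic.Locale,Demographic.Timezone
# SMS,+19999999999,de-DE,Europe/Berlin
# SMS,+18888888888,fr-FR,Europe/Paris

aws pinpoint create-import-job \
    --application-id $APP_ID \
    --import-job-request '{"Format":"CSV","S3Url":"s3://example-bucket/endpoints.csv","RoleArn":"arn:aws:iam::123456789012:role/PinpointS3ImportRole","DefineSegment":true,"SegmentName":"AllLocaleUsers"}'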

4. Create a segment with all locale users

Create an Amazon Pinpoint segment to define the audience you want to target with the localized message.

5. Create a journey or campaign

  1. Create an Amazon Pinpoint campaign or journey.
  2. Use the template from earlier in Step 2.
  3. Use the segment with all locale users from Step 4.

Note: You can also use Amazon Pinpoint local time and quiet time features to target your audience in their local time zone or at a specific global time (for example, 10am GMT). This also respects quiet hours (for example, 23:00 to 8:00) specific to their local time zone, based on the EndpointDemographic.Timezone property.

 

6. Execution

A marketing campaign manager wants to send a localized message to every audience member based on their preferred language. To do this, they:

  1. Create a localized template, as described in Step 2.
  2. Create a segment with all locale users, as described in Step 4.
  3. Create a single journey targeting that segment (here, two endpoints, each with a unique locale), using the template from Step 2.

Conclusion

The Amazon Pinpoint messaging template makes it easy to manage a single template for multiple locales.

With a localized messaging template you can simply target your audience across locales and receive targeted analytics. Get started today by visiting Amazon Pinpoint’s webpage.


SAF Products Integration into Zabbix

Post Syndicated from Tatjana Dunce original https://blog.zabbix.com/saf-products-integration-into-zabbix/12978/

Top-of-the-line point-to-point microwave equipment manufacturer SAF Tehnika has partnered with Zabbix to provide NMS capabilities to its end customers. SAF Tehnika appreciates Zabbix’s customizability, scalability, ease of template design, and ease of integration with SAF products.

Content

I. SAF Tehnika (1:14)
II. SAF point-to-point microwave systems (3:20)
III. SAF product lines (5:37)

√ Integra (5:54)
√ PhoeniX-G2 (6:41)

IV. SAF services (7:21)
V. SAF partnership with Zabbix (8:48)

√  Zabbix templates for SAF equipment (10:50)
√ Zabbix Maps view for Phoenix G2 (15:00)

VI. Zabbix services provided by SAF (17:56)
VII. Questions & Answers (20:00)

SAF Tehnika

SAF Tehnika, like Zabbix, comes from a really small country — Latvia.

SAF Tehnika:

✓ has been around for over 20 years,
✓ has been profitable and has a debt-free balance sheet,
✓ is present in 130+ countries,
✓ has manufacturing facilities in the European Union,
✓ is ISO 9001 certified,
✓ is Zabbix Certified Partner since August 2020,
✓ is publicly traded on NASDAQ Riga Stock Exchange,
✓ has flexible R&D, and is able to provide custom solutions based on customer requirements.

SAF Tehnika is primarily manufacturing:

  • point-to-point systems,
  • hand-held MW spectrum analyzers,
  • Aranet wireless sensors and solutions.

SAF Tehnika main product groups

SAF point-to-point microwave systems

Point-to-point microwave systems are an alternative to a fiber line. Instead of a fiber line, we have two radio systems with the antenna installed on two towers. The distance between those towers could be anywhere from a few km up to 50 or even 100 km. The data is transmitted from one point to another wirelessly.

SAF Tehnika point-to-point MW system technology provides:

  • long-distance wireless links;
  • free and excellent technical support;
  • fast & easy deployment;
  • a 5-year standard warranty for SAF products, as SAF Tehnika ensures top quality by using high-quality materials and reliable chipsets, as well as chamber testing of all products;
  • solutions for:

√ WISPs,
√ TV and broadcasting (No.1 in the USA),
√ public safety,
√ utilities & mining,
√ enterprise networks,
√ local government & military,
√ low-latency/HFT (No.1 globally).

SAF product lines

The primary PTP microwave product series manufactured by SAF Tehnika are Integra and Phoenix G2.

SAF Tehnika main radio products

Integra

Integra is a full-outdoor radio, which can be attached directly to the antenna, so there is nothing indoors besides the power supply.

Integra-E — wireless fiber solution specifically tailored for dense urban deployment:

  • operates in E-band range,
  • can achieve throughput of up to 10 Gbps,
  • operates in 2 GHz bandwidth.

Integra-X — a powerful dual-core system for network backbone deployment. It incorporates two radios in a single enclosure and two modem chains, allowing the system to operate with built-in 2+0 XPIC and reach a maximum data transmission capacity of up to 2.2 Gbps.

PhoeniX-G2

The PhoeniX G2 product line can be either a split-mount solution, with the modem installed indoors and the radio outdoors, or a full-indoor solution. For instance, the broadcast market mostly uses full-indoor solutions, as those customers prefer to have all the equipment indoors with a long elliptical line running up to the antenna. The Phoenix G2 product line features native ASI transmission in parallel with IP traffic — a crucial requirement of our broadcast customers.

SAF services

SAF has also been offering different sets of services:

  • product training,
  • link planning,
  • technical support,
  • staging and configuration — enables the customers to have all the equipment labeled and configured before it gets to the customer,
  • FCC coordination — recently added to the SAF portfolio and offered only to customers in the USA. This provides an opportunity to save time on link planning, FCC coordination, pre-configuration, and hardware purchase from a one-stop shop – SAF Tehnika,
  • Zabbix deployment and support.

SAF partnership with Zabbix

Before partnering with Zabbix, SAF Tehnika had developed its own network management system and used it for many years. Unfortunately, this software was limited to SAF products. Adding other vendors’ products was a difficult and complicated process.

More and more SAF customers were inquiring about the possibility of adding other vendors’ products to the network management system. That is where Zabbix came in handy, as besides monitoring SAF products, Zabbix can also monitor other vendors’ products just by adding appropriate templates.

Zabbix is an open-source, advanced and robust platform with high customizability and scalability – there are virtually no limits to the number of devices Zabbix can monitor. I am confident our customers will appreciate all of these benefits and enjoy the ability to add SAF and other vendor products to the list of monitored devices.

Finally, SAF Tehnika and Zabbix are located in the same small town, so the partnership was easy and natural.

Zabbix templates for SAF equipment

Following training at Zabbix, SAF engineers obtained the certified specialist status and developed Zabbix SNMP-based templates for main product lines:

  • Integra-X, Integra-E, Integra-G, Integra-W, and Phoenix G2.

SAF main product line templates are available free of charge to all SAF customers on the SAF webpage: https://www.saftehnika.com/en/downloads
(registration required).

Users proficient in Linux and familiar with Zabbix can definitely install and deploy these templates themselves. Otherwise, SAF specialists are ready to assist in the deployment and integration of Zabbix templates and tuning of the required parameters.

Zabbix dashboard for Integra X

We have an Integra-X link Zabbix dashboard shown as an example below. As Integra-X is a dual-core radio, we provide the monitoring parameters for two radios in a single enclosure.

Zabbix dashboard for Integra-X link

On the top, we display the main health parameters of the link, current received signal level, MSE or so-called noise level, the transmit power, and the IP address – a small summary of the link.

On the left, we display the parameters of the radio and the graphs for the last couple of minutes — the live graphs of the received signal level and MSE, the noise level of the RF link.

On the right, we have the same parameters for the remote side. In the middle, we have added a few parameters, which should be monitored, such as CPU load percentage, the current traffic over the link, and diagnostic parameters, such as the temperature of each of the modems.

At the bottom, we have added the alarm widget. In this example, the alarm of too low received signal level is shown. These alarms are also colored by their severity: red alarms are for disaster-level issues, blue alarms — for information.

From this dashboard, the customers are able to estimate the current status of the link and any issues that have appeared in the past. Note that Zabbix graphs can be easily customized to display the widgets or graphs of customer choice.

Zabbix Maps view for Phoenix G2

Zabbix Maps view for Phoenix G2 1+1 system

In the map, our full-indoor Phoenix G2 system is displayed in duplicate, as this is a 1+1 protected link. Each of the IDU,  ASI module, and radio module is protected by the second respective module.

Zabbix allows for naming each of these modules and for monitoring every module’s performance individually. In this example, the ASI module is colored in red as one of the ASI ports has lost the connection, while the radio unit’s red color shows that the received signal is lower than expected.

Zabbix dashboard for Phoenix G2 1+1 system

Besides the maps view, the dashboard for the Phoenix G2 1+1 system shows historical data such as the alarm log and graphs. The data in red indicates that an issue hasn’t been cleared yet. The data in green indicates that an issue was resolved, for instance, a low signal level that was restored after going down for a short period of time.

In the middle we see a summary graph of all four radios’ performance — two on the local side and two on the remote side. Here, we are monitoring the most important parameters — the received signal level and MSE i.e., the noise level.

The graph at the bottom is important for broadcast customers as the majority of them transmit ASI traffic besides Ethernet and IP traffic. Here they’re able to monitor how much traffic was going through this link in the past.

Zabbix services provided by SAF

Since SAF Tehnika has experienced Zabbix-certified specialists, who have developed these multiple templates, we are ready to provide Zabbix-related services to our end customers, such as:

  • Zabbix deployment on the customer’s machine, integration and configuration of all the parameters, and fine-tuning according to our customers’ requirements.
  • consulting services and technical support on an annual contract basis.

NOTE. SAF Zabbix support services are limited to SAF products.

SAF Tehnika is ready and eager to provide Zabbix-related services to our customers. If you already have a SAF network and would like to integrate it into Zabbix, or plan to deploy a new SAF network integrated with Zabbix, you can contact our offices.

SAF contact details

Questions & Answers

Question. You told us that you provide Zabbix for your customers and create templates to pass to them, and so on. But do you use Zabbix in your own environment, for instance, in your offices, to monitor your own infrastructure?

Answer. SAF has been using Zabbix for almost 10 years and we use it to monitor our internal infrastructure. Currently, SAF has three separate Zabbix networks: one for SAF IT system monitoring, the other for Kubernetes system monitoring (Aranet Cloud services), and a separate Zabbix server for testing purposes, where we are able to test SAF equipment as well as experiment with Zabbix server deployments, templates, etc.

Question. You have passed the specialist courses. Do you have any plans on becoming certified professionals?

Answer. Our specialists are definitely interested in Zabbix certified professionals’ courses. We will make the decision about that based on the revenue Zabbix brings us and the interest of our customers.

Question. You have already provided a couple of templates for Zabbix and for your customers. Do you have any interesting templates you are working on? Do you have plans to create or upgrade some existing templates?

Answer. So far, we have released the templates for the main product lines — Integra and Phoenix G2. We have a few product lines that are more specialized, such as low-latency products, and some older products, such as CFIP Lumina. In case any of our customers are interested in integrating these older products or low-latency products, we might create more templates.

Question. Do you plan to refine the templates to make them a part of the Zabbix out-of-the-box solution?

Answer. If Zabbix is going to approve our templates to make them a part of the out-of-the-box solution, it will benefit our customers in using and monitoring our products. We’ll be delighted to provide the templates for this purpose.

 

Injecting a Backdoor into SolarWinds Orion

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/injecting-a-backdoor-into-solarwinds-orion.html

Crowdstrike is reporting on a sophisticated piece of malware that was able to inject malware into the SolarWinds build process:

Key Points

  • SUNSPOT is StellarParticle’s malware used to insert the SUNBURST backdoor into software builds of the SolarWinds Orion IT management product.
  • SUNSPOT monitors running processes for those involved in compilation of the Orion product and replaces one of the source files to include the SUNBURST backdoor code.
  • Several safeguards were added to SUNSPOT to avoid the Orion builds from failing, potentially alerting developers to the adversary’s presence.

Analysis of a SolarWinds software build server provided insights into how the process was hijacked by StellarParticle in order to insert SUNBURST into the update packages. The design of SUNSPOT suggests StellarParticle developers invested a lot of effort to ensure the code was properly inserted and remained undetected, and prioritized operational security to avoid revealing their presence in the build environment to SolarWinds developers.

This, of course, reminds many of us of Ken Thompson’s thought experiment from his 1984 Turing Award lecture, “Reflections on Trusting Trust.” In that talk, he suggested that a malicious C compiler might add a backdoor into programs it compiles.

The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. In demonstrating the possibility of this kind of attack, I picked on the C compiler. I could have picked on any program-handling program such as an assembler, a loader, or even hardware microcode. As the level of program gets lower, these bugs will be harder and harder to detect. A well-installed microcode bug will be almost impossible to detect.

That’s all still true today.

Friday Squid Blogging: China Launches Six New Squid Jigging Vessels

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/friday-squid-blogging-china-launches-six-new-squid-jigging-vessels.html

From Pingtan Marine Enterprise:

The 6 large-scale squid jigging vessels are normally operating vessels that returned to China earlier this year from the waters of Southwest Atlantic Ocean for maintenance and repair. These vessels left the port of Mawei on December 17, 2020 and are sailing to the fishing grounds in the international waters of the Southeast Pacific Ocean for operation.

I wonder if the company will include this blog post in its PR roundup.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Developing enterprise application patterns with the AWS CDK

Post Syndicated from Krishnakumar Rengarajan original https://aws.amazon.com/blogs/devops/developing-application-patterns-cdk/

Enterprises often need to standardize their infrastructure as code (IaC) for governance, compliance, and quality control reasons. You also need to manage and centrally publish updates to your IaC libraries. In this post, we demonstrate how to use the AWS Cloud Development Kit (AWS CDK) to define patterns for IaC and publish them for consumption in controlled releases using AWS CodeArtifact.

AWS CDK is an open-source software development framework to model and provision cloud application resources in programming languages such as TypeScript, JavaScript, Python, Java, and C#/.Net. The basic building blocks of AWS CDK are called constructs, which map to one or more AWS resources, and can be composed of other constructs. Constructs allow high-level abstractions to be defined as patterns. You can synthesize constructs into AWS CloudFormation templates and deploy them into an AWS account.

AWS CodeArtifact is a fully managed service for managing the lifecycle of software artifacts. You can use CodeArtifact to securely store, publish, and share software artifacts. Software artifacts are stored in repositories, which are aggregated into a domain. A CodeArtifact domain allows organizational policies to be applied across multiple repositories. You can use CodeArtifact with common build tools and package managers such as NuGet, Maven, Gradle, npm, yarn, pip, and twine.

Solution overview

In this solution, we complete the following steps:

  1. Create two AWS CDK pattern constructs in Typescript: one for traditional three-tier web applications and a second for serverless web applications.
  2. Publish the pattern constructs to CodeArtifact as npm packages. npm is the package manager for Node.js.
  3. Consume the pattern construct npm packages from CodeArtifact and use them to provision the AWS infrastructure.

We provide more information about the pattern constructs in the following sections. The source code mentioned in this blog is available in GitHub.

Note: The code provided in this blog post is for demonstration purposes only. You must ensure that it meets your security and production readiness requirements.

Traditional three-tier web application construct

The first pattern construct is for a traditional three-tier web application running on Amazon Elastic Compute Cloud (Amazon EC2), with AWS resources consisting of an Application Load Balancer, an Auto Scaling group and EC2 launch configuration, an Amazon Relational Database Service (Amazon RDS) or Amazon Aurora database, and AWS Secrets Manager. The following diagram illustrates this architecture.

 

Traditional stack architecture

Serverless web application construct

The second pattern construct is for a serverless application with AWS resources in AWS Lambda, Amazon API Gateway, and Amazon DynamoDB.

Serverless application architecture

Publishing and consuming pattern constructs

Both constructs are written in Typescript and published to CodeArtifact as npm packages. A semantic versioning scheme is used to version the construct packages. After a package gets published to CodeArtifact, teams can consume them for deploying AWS resources. The following diagram illustrates this architecture.

Pattern constructs

Prerequisites

Before getting started, complete the following steps:

  1. Clone the code from the GitHub repository for the traditional and serverless web application constructs:
    git clone https://github.com/aws-samples/aws-cdk-developing-application-patterns-blog.git
    cd aws-cdk-developing-application-patterns-blog
  2. Configure AWS Identity and Access Management (IAM) permissions by attaching IAM policies to the user, group, or role implementing this solution. The following policy files are in the iam folder in the root of the cloned repo:
    • BlogPublishArtifacts.json – The IAM policy to configure CodeArtifact and publish packages to it.
    • BlogConsumeTraditional.json – The IAM policy to consume the traditional three-tier web application construct from CodeArtifact and deploy it to an AWS account.
    • BlogConsumeServerless.json – The IAM policy to consume the serverless construct from CodeArtifact and deploy it to an AWS account.

Configuring CodeArtifact

In this step, we configure CodeArtifact for publishing the pattern constructs as npm packages. The following AWS resources are created:

  • A CodeArtifact domain named blog-domain
  • Two CodeArtifact repositories:
    • blog-npm-store – For configuring the upstream NPM repository.
    • blog-repository – For publishing custom packages.

Deploy the CodeArtifact resources with the following code:

cd prerequisites/
rm -rf package-lock.json node_modules
npm install
cdk deploy --require-approval never
cd ..

Log in to the blog-repository. This step is needed for publishing and consuming the npm packages. See the following code:

aws codeartifact login \
     --tool npm \
     --domain blog-domain \
     --domain-owner $(aws sts get-caller-identity --output text --query 'Account') \
     --repository blog-repository

Publishing the pattern constructs

  1. Change the directory to the serverless construct:
    cd serverless
  2. Install the required npm packages:
    rm package-lock.json && rm -rf node_modules
    npm install
    
  3. Build the npm project:
    npm run build
  4. Publish the construct npm package to the CodeArtifact repository:
    npm publish

    Follow the previously mentioned steps for building and publishing a traditional (classic Load Balancer plus Amazon EC2) web app by running these commands in the traditional directory.

    If the publishing is successful, you see messages like the following screenshots. The following screenshot shows the traditional infrastructure.

    Successful publishing of Traditional construct package to CodeArtifact

    The following screenshot shows the message for the serverless infrastructure.

    Successful publishing of Serverless construct package to CodeArtifact

    We just published version 1.0.1 of both the traditional and serverless web app constructs. To release a new version, we can simply update the version attribute in the package.json file in the traditional or serverless folder and repeat the last two steps.

    The following code snippet is for the traditional construct:

    {
        "name": "traditional-infrastructure",
        "main": "lib/index.js",
        "files": [
            "lib/*.js",
            "src"
        ],
        "types": "lib/index.d.ts",
        "version": "1.0.1",
    ...
    }

    The following code snippet is for the serverless construct:

    {
        "name": "serverless-infrastructure",
        "main": "lib/index.js",
        "files": [
            "lib/*.js",
            "src"
        ],
        "types": "lib/index.d.ts",
        "version": "1.0.1",
    ...
    }
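Before consuming the constructs, you can verify which versions are available in the repository. The following is a sketch using the domain, repository, and package names from this post:

aws codeartifact list-package-versions \
    --domain blog-domain \
    --domain-owner $(aws sts get-caller-identity --output text --query 'Account') \
    --repository blog-repository \
    --format npm \
    --package serverless-infrastructure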

Consuming the pattern constructs from CodeArtifact

In this step, we demonstrate how the pattern constructs published in the previous steps can be consumed and used to provision AWS infrastructure.

  1. From the root of the GitHub package, change the directory to the examples directory containing code for consuming the traditional or serverless constructs. To consume the traditional construct, use the following code:
    cd examples/traditional

    To consume the serverless construct, use the following code:

    cd examples/serverless
  2. Open the package.json file in either directory and note that the packages we consume are listed in the dependencies section, along with their versions.
    The following code shows the traditional web app construct dependencies:

    "dependencies": {
        "@aws-cdk/core": "1.30.0",
        "traditional-infrastructure": "1.0.1",
        "aws-cdk": "1.47.0"
    }

    The following code shows the serverless web app construct dependencies:

    "dependencies": {
        "@aws-cdk/core": "1.30.0",
        "serverless-infrastructure": "1.0.1",
        "aws-cdk": "1.47.0"
    }
  3. Install the pattern artifact npm package along with the dependencies:
    rm package-lock.json && rm -rf node_modules
    npm install
    
  4. As an optional step, if you need to override the default Lambda function code, build the npm project. The following commands build the Lambda function source code:
    cd ../override-serverless
    npm run build
    cd -
  5. Bootstrap the project with the following code:
    cdk bootstrap

    This step is applicable for serverless applications only. It creates the Amazon Simple Storage Service (Amazon S3) staging bucket where the Lambda function code and artifacts are stored.

  6. Deploy the construct:
    cdk deploy --require-approval never

    If the deployment is successful, you see messages similar to the following screenshots. The following screenshot shows the traditional stack output, with the URL of the Load Balancer endpoint.

    Traditional CloudFormation stack outputs

    The following screenshot shows the serverless stack output, with the URL of the API Gateway endpoint.

    Serverless CloudFormation stack outputs

    You can test the endpoint for both constructs using a web browser or the following curl command:

    curl <endpoint output>

    The traditional web app endpoint returns a response similar to the following:

    [{"app": "traditional", "id": 1605186496, "purpose": "blog"}]

    The serverless stack returns two outputs. Use the output named ServerlessStack-v1.Api. See the following code:

    [{"purpose":"blog","app":"serverless","itemId":"1605190688947"}]

  7. Optionally, upgrade to a new version of pattern construct.
    Let’s assume that a new version of the serverless construct, version 1.0.2, has been published, and we want to upgrade our AWS infrastructure to this version. To do this, edit the package.json file and change the traditional-infrastructure or serverless-infrastructure package version in the dependencies section to 1.0.2. See the following code example:

    "dependencies": {
        "@aws-cdk/core": "1.30.0",
        "serverless-infrastructure": "1.0.2",
        "aws-cdk": "1.47.0"
    }

    To update the serverless-infrastructure package to 1.0.2, run the following command:

    npm update

    Then redeploy the CloudFormation stack:

    cdk deploy --require-approval never

Cleaning up

To avoid incurring future charges, clean up the resources you created.

  1. Delete all AWS resources that were created using the pattern constructs. We can use the AWS CDK toolkit to clean up all the resources:
    cdk destroy --force

    For more information about the AWS CDK toolkit, see Toolkit reference. Alternatively, delete the stack on the AWS CloudFormation console.

  2. Delete the CodeArtifact resources by deleting the CloudFormation stack that was deployed via AWS CDK:
    cd prerequisites
    cdk destroy --force
    

Conclusion

In this post, we demonstrated how to publish AWS CDK pattern constructs to CodeArtifact as npm packages. We also showed how teams can consume the published pattern constructs and use them to provision their AWS infrastructure.

This mechanism allows your AWS infrastructure to be provisioned from configuration that has been vetted through quality control, security, and governance checks. It also provides control over when new versions of the pattern constructs are released, and when the teams consuming the constructs can upgrade to the newly released versions.

About the Authors

Usman Umar

 

Usman Umar is a Sr. Applications Architect at AWS Professional Services. He is passionate about developing innovative ways to solve hard technical problems for the customers. In his free time, he likes going on biking trails, doing car modifications, and spending time with his family.

 

 

Krishnakumar Rengarajan

 

Krishnakumar Rengarajan is a DevOps Consultant with AWS Professional Services. He enjoys working with customers and focuses on building and delivering automated solutions that enable customers on their AWS cloud journeys.

Click Here to Kill Everybody Sale

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/click-here-to-kill-everybody-sale.html

For a limited time, I am selling signed copies of Click Here to Kill Everybody in hardcover for just $6, plus shipping.

Note that I have had occasional problems with international shipping. The book just disappears somewhere in the process. At this price, international orders are at the buyer’s risk. Also, the USPS keeps reminding us that shipping — both US and international — may be delayed during the pandemic.

I have 500 copies of the book available. When they’re gone, the sale is over and the price will revert to normal.

Order here.

EDITED TO ADD: I was able to get another 500 from the publisher, since the first 500 sold out so quickly.

Please be patient on delivery. There are already 550 orders, and that’s a lot of work to sign and mail. I’m going to be doing them a few at a time over the next several weeks. So all of you people reading this paragraph before ordering, understand that there are a lot of people ahead of you in line.

EDITED TO ADD (1/16): I am sold out. If I can get more copies, I’ll hold another sale after I sign and mail the 1,000 copies that you all purchased.

Cell Phone Location Privacy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/cell-phone-location-privacy.html

We all know that our cell phones constantly give our location away to our mobile network operators; that’s how they work. A group of researchers has figured out a way to fix that. “Pretty Good Phone Privacy” (PGPP) protects both user identity and user location using the existing cellular networks. It protects users from fake cell phone towers (IMSI-catchers) and surveillance by cell providers.

It’s a clever system. The players are the user, a traditional mobile network operator (MNO) like AT&T or Verizon, and a new mobile virtual network operator (MVNO). MVNOs aren’t new. They’re intermediaries like Cricket and Boost.

Here’s how it works:

  1. One-time setup: The user’s phone gets a new SIM from the MVNO. All MVNO SIMs are identical.
  2. Monthly: The user pays their bill to the MVNO (credit card or otherwise) and the phone gets anonymous authentication tokens (using Chaum blind signatures) for each time slice (e.g., hour) in the coming month.
  3. Ongoing: When the phone talks to a tower (run by the MNO), it sends a token for the current time slice. This is relayed to an MVNO backend server, which checks the Chaum blind signature of the token. If it’s valid, the MVNO tells the MNO that the user is authenticated, and the user receives a temporary random ID and an IP address. (Again, this is how MVNOs like Boost already work.)
  4. On demand: The user uses the phone normally.

The MNO doesn’t have to modify its system in any way. The PGPP MVNO implementation is in software. The user’s traffic is sent to the MVNO gateway and then out onto the Internet, potentially even using a VPN.

All connectivity is data connectivity in cell networks today. The user can choose to be data-only (e.g., use Signal for voice), or use the MVNO or a third party for VoIP service that will look just like normal telephony.

The group prototyped and tested everything with real phones in the lab. Their approach adds essentially zero latency, and doesn’t introduce any new bottlenecks, so it doesn’t have performance/scalability problems like most anonymity networks. The service could handle tens of millions of users on a single server, because it only has to do infrequent authentication, though for resilience you’d probably run more.

The paper is here.

Introducing advanced segmentation in Amazon Pinpoint

Post Syndicated from Srini Sekaran original https://aws.amazon.com/blogs/messaging-and-targeting/introducing-advanced-segmentation-in-amazon-pinpoint/

Today, Amazon Pinpoint announced the launch of several new segmentation capabilities for Amazon Pinpoint. Amazon Pinpoint now provides customers additional filters to perform more granular segmentation. You can increase the level of campaign and message personalization by being able to reach more specific audiences.

Today’s end users require consistent and personalized experiences across channels such as email, SMS, and push. On average, 71% of consumers feel frustrated when their user experience is impersonal1. The ability to target a specific audience is a fundamental step to delivering personalized experiences. However, marketers struggle to target the right audience due to technical barriers such as the need for query language to segment groups. This is particularly resonant for organizations with a large pool of customer information. For these teams, understanding and targeting an audience based on preferences and behavior often extends to manual workarounds such as using spreadsheets.

With more data at their disposal, marketers want the ability to filter by user attributes in terms of metrics and time so they can send the right message to the right audience, at the right time.

Amazon Pinpoint now provides comparative and time-based filters, unlocking more use cases for targeting and retargeting. These filters allow you, for example, to define a segment of mobile users between the ages of 18 and 24 who joined after October 24, 2020, with a lifetime value of more than $500. For marketers, being able to create defined segments such as this helps them increase user engagement by allowing them to tailor the right messaging and campaigns to specific sub-groups based on their characteristics.

When creating a segment, you will now have access to more segmentation filters including greater than, less than, between, before, and after. You can combine filters and create specific segments directly on the Pinpoint console or the CLI to build targeted and relevant campaigns that increase user engagement and marketing efficiency.
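For example, a segment like the one described above might be created from the CLI as follows. This is a sketch: Age, JoinDate, and LifetimeValue are hypothetical custom endpoint attributes, and the attribute names depend on what your application records on each endpoint.

aws pinpoint create-segment \
    --application-id $APP_ID \
    --write-segment-request '{
        "Name": "HighValueYoungUsers",
        "Dimensions": {
            "Attributes": {
                "Age": {"AttributeType": "BETWEEN", "Values": ["18", "24"]},
                "JoinDate": {"AttributeType": "AFTER", "Values": ["2020-10-24T00:00:00Z"]},
                "LifetimeValue": {"AttributeType": "GREATER_THAN", "Values": ["500"]}
            }
        }
    }'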

Amazon Pinpoint now provides the following filters to help you define and target the specific audience you would like to reach in your marketing campaigns:

  • Comparative filters: greater than, less than, equals, greater than or equals, less than or equals — a certain value

  • Matching filters: is, is not, contains — a certain string or text

  • Date filters: before, after, between, on — a certain date

To learn more about how you can create targeted campaigns with new segmentation capabilities, visit https://aws.amazon.com/pinpoint.

  1. https://www.forbes.com/sites/blakemorgan/2020/02/18/50-stats-showing-the-power-of-personalization/?sh=637812d52a94

Upcoming Speaking Engagements

Post Syndicated from Schneier.com Webmaster original https://www.schneier.com/blog/archives/2021/01/upcoming-speaking-engagements-5.html

This is a current list of where and when I am scheduled to speak:

  • I’m speaking (online) as part of Western Washington University’s Internet Studies Lecture Series on January 20, 2021.
  • I’m speaking (online) at ITU Denmark on February 2, 2021. Details to come.
  • I’m being interviewed by Keith Cronin as part of The Center for Innovation, Security, and New Technology’s CSINT Conversations series, February 10, 2021 from 11:00 AM – 11:30 AM CST.
  • I’ll be speaking at an Informa event on February 28, 2021. Details to come.

The list is maintained on this page.

Finding the Location of Telegram Users

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/finding-the-location-of-telegram-users.html

Security researcher Ahmed Hassan has shown that spoofing the Android’s “People Nearby” feature allows him to pinpoint the physical location of Telegram users:

Using readily available software and a rooted Android device, he’s able to spoof the location his device reports to Telegram servers. By using just three different locations and measuring the corresponding distance reported by People Nearby, he is able to pinpoint a user’s precise location.

[…]

A proof-of-concept video the researcher sent to Telegram showed how he could discern the address of a People Nearby user when he used a free GPS spoofing app to make his phone report just three different locations. He then drew a circle around each of the three locations with a radius of the distance reported by Telegram. The user’s precise location was where all three intersected.

[…]

Fixing the problem — or at least making it much harder to exploit it — wouldn’t be hard from a technical perspective. Rounding locations to the nearest mile and adding some random bits generally suffices. When the Tinder app had a similar disclosure vulnerability, developers used this kind of technique to fix it.

On US Capitol Security — By Someone Who Manages Arena-Rock-Concert Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/on-us-capitol-security-by-someone-who-manages-arena-rock-concert-security.html

Smart commentary:

…I was floored on Wednesday when, glued to my television, I saw police in some areas of the U.S. Capitol using little more than those same mobile gates I had – the ones that look like bike racks that can hook together – to try to keep the crowds away from sensitive areas and, later, push back people intent on accessing the grounds. (A new fence that appears to be made of sturdier material was being erected on Thursday.) That’s the same equipment and approximately the same amount of force I was able to use when a group of fans got a little feisty and tried to get backstage at a Vanilla Ice show.

[…]

There’s not ever going to be enough police or security at any event to stop people if they all act in unison; if enough people want to get to Vanilla Ice at the same time, they’re going to get to Vanilla Ice. Social constructs and basic decency, not lightweight security gates, are what hold everyone except the outliers back in a typical crowd.

[…]

When there are enough outliers in a crowd, it throws the normal dynamics of crowd control off; everyone in my business knows this. Citizens tend to hold each other to certain standards – which is why my 40,000-person town does not have 40,000 police officers, and why the 8.3 million people of New York City aren’t policed by 8.3 million police officers.

Social norms are the fabric that make an event run smoothly — and, really, hold society together. There aren’t enough police in your town to handle it if everyone starts acting up at the same time.

I like that she uses the term “outliers,” and I make much the same points in Liars and Outliers.

Cloning Google Titan 2FA keys

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/cloning-google-titan-2fa-keys.html

This is a clever side-channel attack:

The cloning works by using a hot air gun and a scalpel to remove the plastic key casing and expose the NXP A700X chip, which acts as a secure element that stores the cryptographic secrets. Next, an attacker connects the chip to hardware and software that take measurements as the key is being used to authenticate on an existing account. Once the measurement-taking is finished, the attacker seals the chip in a new casing and returns it to the victim.

Extracting and later resealing the chip takes about four hours. It takes another six hours to take measurements for each account the attacker wants to hack. In other words, the process would take 10 hours to clone the key for a single account, 16 hours to clone a key for two accounts, and 22 hours for three accounts.

By observing the local electromagnetic radiations as the chip generates the digital signatures, the researchers exploit a side channel vulnerability in the NXP chip. The exploit allows an attacker to obtain the long-term elliptic curve digital signature algorithm private key designated for a given account. With the crypto key in hand, the attacker can then create her own key, which will work for each account she targeted.

The attack isn’t free, but it’s not expensive either:

A hacker would first have to steal a target’s account password and also gain covert possession of the physical key for as many as 10 hours. The cloning also requires up to $12,000 worth of equipment and custom software, plus an advanced background in electrical engineering and cryptography. That means the key cloning — were it ever to happen in the wild — would likely be done only by a nation-state pursuing its highest-value targets.

That last line about “nation-state pursuing its highest-value targets” is just not true. There are many other situations where this attack is feasible.

Note that the attack isn’t against the Google system specifically. It exploits a side-channel attack in the NXP chip. Which means that other systems are probably vulnerable:

While the researchers performed their attack on the Google Titan, they believe that other hardware that uses the A700X, or chips based on the A700X, may also be vulnerable. If true, that would include Yubico’s YubiKey NEO and several 2FA keys made by Feitian.

Masking field values with Amazon Elasticsearch Service

Post Syndicated from Prashant Agrawal original https://aws.amazon.com/blogs/security/masking-field-values-with-amazon-elasticsearch-service/

Amazon Elasticsearch Service (Amazon ES) is a fully managed service that you can use to deploy, secure, and run Elasticsearch cost-effectively at scale. The service provides support for open-source Elasticsearch APIs, managed Kibana, and integration with Logstash and other AWS services. Amazon ES provides a deep security model that spans many layers of interaction and supports fine-grained access control at the cluster, index, document, and field level, on a per-user basis. The service’s security plugin integrates with federated identity providers for Kibana login.

A common use case for Amazon ES is log analytics. Customers configure their applications to store log data to the Elasticsearch cluster, where the data can be queried for insights into the functionality and use of the applications over time. In many cases, users reviewing those insights should not have access to all the details from the log data. The log data for a web application, for example, might include the source IP addresses of incoming requests. Privacy rules in many countries require that those details be masked, wholly or in part. This post explains how to set up field masking within your Amazon ES domain.

Field masking is an alternative to field-level security that lets you anonymize the data in a field rather than remove it altogether. When creating a role, you add a list of fields to mask. Field masking affects whether you can see the contents of a field when you search. You can use field masking to apply either a random hash or a pattern-based substitution to sensitive values, so that users who shouldn’t have access to that information can’t read it.

When you use field masking, Amazon ES creates a hash of the actual field values before returning the search results. You can apply field masking on a per-role basis, supporting different levels of visibility depending on the identity of the user making the query. Currently, field masking is only available for string-based fields. A search result with a masked field (clientIP) looks like this:

{
  "_index": "web_logs",
  "_type": "_doc",
  "_id": "1",
  "_score": 1,
  "_source": {
    "agent": "Mozilla/5.0 (X11; Linux x86_64; rv:6.0a1) Gecko/20110421 Firefox/6.0a1",
    "bytes": 0,
    "clientIP": "7e4df8d4df7086ee9c05efe1e21cce8ff017a711ee9addf1155608ca45d38219",
    "host": "www.example.com",
    "extension": "txt",
    "geo": {
      "src": "EG",
      "dest": "CN",
      "coordinates": {
        "lat": 35.98531194,
        "lon": -85.80931806
      }
    },
    "machine": {
      "ram": 17179869184,
      "os": "win 7"
    }
  }
}

To follow along in this post, make sure you have an Amazon ES domain with Elasticsearch version 6.7 or higher, sample data loaded (this example uses the web logs data supplied by Kibana), and access to Kibana through a role with administrator privileges for the domain.

Configure field masking

Field masking is managed through the security controls you define in Kibana. You’ll need to create a new Kibana role, define the fine-grained access-control privileges for that role, specify which fields to mask, and apply that role to specific users.

You can use either the Kibana console or direct-to-API calls to set up field masking. In our first example, we’ll use the Kibana console.

To configure field masking in the Kibana console

  1. Log in to Kibana, choose the Security pane, and then choose Roles, as shown in Figure 1.

    Figure 1: Choose security roles

  2. Choose the plus sign (+) to create a new role, as shown in Figure 2.

    Figure 2: Create role

  3. Choose the Index Permissions tab, and then choose Add index permissions, as shown in Figure 3.

    Figure 3: Set index permissions

  4. Add index patterns and appropriate permissions for data access. See the Amazon ES documentation for details on configuring fine-grained access control.
  5. Once you’ve set Index Patterns, Permissions: Action Groups, Document Level Security Query, and Include or exclude fields, you can use the Anonymize fields entry to mask the clientIP, as shown in Figure 4.

    Figure 4: Anonymize field

  6. Choose Save Role Definition.
  7. Next, you need to create one or more users and apply the role to the new users. Go back to the Security page and choose Internal User Database, as shown in Figure 5.

    Figure 5: Select Internal User Database

  8. Choose the plus sign (+) to create a new user, as shown in Figure 6.

    Figure 6: Create user

  9. Add a username and password, and under Open Distro Security Roles, select the role es-mask-role, as shown in Figure 7.

    Figure 7: Select the username, password, and roles

  10. Choose Submit.

If you prefer, you can perform the same task by calling the Amazon ES REST API from the Kibana dev tools console.

Use the following API call to create the role, as shown in the snippet below and in Figure 8.

PUT _opendistro/_security/api/roles/es-mask-role
{
  "cluster_permissions": [],
  "index_permissions": [
    {
      "index_patterns": [
        "web_logs"
      ],
      "dls": "",
      "fls": [],
      "masked_fields": [
        "clientIP"
      ],
      "allowed_actions": [
        "data_access"
      ]
    }
  ]
}

Sample response:

{
  "status": "CREATED",
  "message": "'es-mask-role' created."
}

Figure 8: API to create Role

Use the following API call to create a user with that role, as shown in the snippet below and in Figure 9.

PUT _opendistro/_security/api/internalusers/es-mask-user
{
  "password": "xxxxxxxxxxx",
  "opendistro_security_roles": [
    "es-mask-role"
  ]
}

Sample response:

{
  "status": "CREATED",
  "message": "'es-mask-user' created."
}

Figure 9: API to create User

Verify field masking

You can verify field masking by running a simple search query using Kibana dev tools (GET web_logs/_search) and retrieving the data first by using the kibana_user (with no field masking), and then by using the es-mask-user (with field masking) you just created.
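
For example, a minimal dev tools query (a sketch, assuming the Kibana sample web logs index loaded earlier) that returns just the field of interest looks like this:

GET web_logs/_search
{
  "size": 1,
  "_source": ["clientIP", "host"]
}

Run it once while signed in as kibana_user and once as es-mask-user; only the second response should show clientIP as a hash.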

Query responses run by the kibana_user (all access) have the original values in all fields, as shown in Figure 10.

Figure 10: Retrieval of the full clientIP data with kibana_user

Figure 11, following, shows an example of what you would see if you logged in as the es-mask-user. In this case, the clientIP field is returned as a hash rather than the original value, because of the es-mask-role you created.

Figure 11: Retrieval of the masked clientIP data with es-mask-user

Use pattern-based field masking

Rather than creating a hash, you can use one or more regular expressions and replacement strings to mask a field. The syntax is <field>::/<regular-expression>/::<replacement-string>.

You can use either the Kibana console or direct-to-API calls to set up pattern-based field masking. In the following example, clientIP is masked so that the last three octets of the IP address are replaced with xxx, using the pattern clientIP::/[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$/::xxx.xxx.xxx. You see only the first part of the IP address, as shown in Figure 12.

Figure 12: Anonymize the field with a pattern
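
If you prefer the API route, you can set the same pattern directly in the role definition. Here is a sketch of what the masked_fields entry might look like (assuming the es-mask-role and web_logs index used earlier; note that the backslashes are doubled because the pattern lives inside a JSON string):

PUT _opendistro/_security/api/roles/es-mask-role
{
  "cluster_permissions": [],
  "index_permissions": [
    {
      "index_patterns": [
        "web_logs"
      ],
      "dls": "",
      "fls": [],
      "masked_fields": [
        "clientIP::/[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}$/::xxx.xxx.xxx"
      ],
      "allowed_actions": [
        "data_access"
      ]
    }
  ]
}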

Run the search query to verify that the last three parts of clientIP are masked by custom characters and only the first part is shown to the requester, as shown in Figure 13.

Figure 13: Retrieval of the masked clientIP (according to the defined pattern) with es-mask-user

Conclusion

Field-level security should be the primary approach for securing data access; however, if specific business requirements can’t be met that way, field masking may offer a viable alternative. By using field masking, you can selectively allow or prevent your users from seeing private information such as personally identifiable information (PII) or personal health information (PHI). For more information about fine-grained access control, see the Amazon Elasticsearch Service Developer Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the Amazon Elasticsearch Service forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Prashant Agrawal

Prashant is a Search Specialist Solutions Architect with Amazon Elasticsearch Service. He works closely with team members to help customers migrate their workloads to the cloud. Before joining AWS, he helped various customers use Elasticsearch for their search and analytics use cases.

Changes in WhatsApp’s Privacy Policy

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/changes-in-whatsapps-privacy-policy.html

If you’re a WhatsApp user, pay attention to the changes in the privacy policy that you’re being forced to agree with.

In 2016, WhatsApp gave users a one-time ability to opt out of having account data turned over to Facebook. Now, an updated privacy policy is changing that. Come next month, users will no longer have that choice. Some of the data that WhatsApp collects includes:

  • User phone numbers
  • Other people’s phone numbers stored in address books
  • Profile names
  • Profile pictures
  • Status messages, including when a user was last online
  • Diagnostic data collected from app logs

Under the new terms, Facebook reserves the right to share collected data with its family of companies.

EDITED TO ADD (1/13): WhatsApp tries to explain.

Friday Squid Blogging: Searching for Giant Squid by Collecting Environmental DNA

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/friday-squid-blogging-searching-for-giant-squid-by-collecting-environmental-dna.html

The idea is to collect and analyze random DNA floating around the ocean, and use that to figure out where the giant squid are. No one is sure if this will actually work.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

APT Horoscope

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/apt-horoscope.html

This delightful essay matches APT hacker groups up with astrological signs. This is me:

Capricorn is renowned for its discipline, skilled navigation, and steadfastness. Just like Capricorn, Helix Kitten (also known as APT 34 or OilRig) is a skilled navigator of vast online networks, maneuvering deftly across an array of organizations, including those in aerospace, energy, finance, government, hospitality, and telecommunications. Steadfast in its work and objectives, Helix Kitten has a consistent track record of developing meticulous spear-phishing attacks.

Russia’s SolarWinds Attack and Software Security

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/01/russias-solarwinds-attack-and-software-security.html

The information that is emerging about Russia’s extensive cyberintelligence operation against the United States and other countries should be increasingly alarming to the public. The magnitude of the hacking, now believed to have affected more than 250 federal agencies and businesses — primarily through a malicious update of the SolarWinds network management software — may have slipped under most people’s radar during the holiday season, but its implications are stunning.

According to a Washington Post report, this is a massive intelligence coup by Russia’s foreign intelligence service (SVR). And a massive security failure on the part of the United States is also to blame. Our insecure Internet infrastructure has become a critical national security risk — one that we need to take seriously and spend money to reduce.

President-elect Joe Biden’s initial response spoke of retaliation, but there really isn’t much the United States can do beyond what it already does. Cyberespionage is business as usual among countries and governments, and the United States is aggressively offensive in this regard. We benefit from the lack of norms in this area and are unlikely to push back too hard because we don’t want to limit our own offensive actions.

Biden took a more realistic tone last week when he spoke of the need to improve US defenses. The initial focus will likely be on how to clean the hackers out of our networks, why the National Security Agency and US Cyber Command failed to detect this intrusion and whether the 2-year-old Cybersecurity and Infrastructure Security Agency has the resources necessary to defend the United States against attacks of this caliber. These are important discussions to have, but we also need to address the economic incentives that led to SolarWinds being breached and how that insecure software ended up in so many critical US government networks.

Software has become incredibly complicated. Most of us don’t know all of the software running on our laptops and what it’s doing. We don’t know where it’s connecting to on the Internet — not even which countries it’s connecting to — and what data it’s sending. We typically don’t know what third-party libraries are in the software we install. We don’t know what software any of our cloud services are running. And we’re rarely alone in our ignorance. Finding all of this out is incredibly difficult.

This is even more true for software that runs our large government networks, or even the Internet backbone. Government software comes from large companies, small suppliers, open source projects and everything in between. Obscure software packages can have hidden vulnerabilities that affect the security of these networks, and sometimes the entire Internet. Russia’s SVR leveraged one of those vulnerabilities when it gained access to SolarWinds’ update server, tricking thousands of customers into downloading a malicious software update that gave the Russians access to those networks.

The fundamental problem is one of economic incentives. The market rewards quick development of products. It rewards new features. It rewards spying on customers and users: collecting and selling individual data. The market does not reward security, safety or transparency. It doesn’t reward reliability past a bare minimum, and it doesn’t reward resilience at all.

This is what happened at SolarWinds. A New York Times report noted the company ignored basic security practices. Because it was cheaper, the company moved software development to Eastern Europe, where Russia has more influence and could potentially subvert programmers.

Short-term profit was seemingly prioritized over product security.

Companies have the right to make decisions like this. The real question is why the US government bought such shoddy software for its critical networks. This is a problem that Biden can fix, and he needs to do so immediately.

The United States needs to improve government software procurement. Software is now critical to national security. Any system for acquiring software needs to evaluate the security of the software and the security practices of the company, in detail, to ensure they are sufficient to meet the security needs of the network they’re being installed in. Procurement contracts need to include security controls for the software development process. They need security attestations on the part of the vendors, with substantial penalties for misrepresentation or failure to comply. The government needs to publish detailed best practices for its own agencies and for other companies to follow.

Some of the groundwork for an approach like this has already been laid by the federal government, which has sponsored the development of a “Software Bill of Materials” that would set out a process for software makers to identify the components used to assemble their software.
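
As an illustration only (not part of the government effort described here), a minimal component entry in the CycloneDX SBOM format might look like the sketch below; the package name and version are made up for the example:

{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-logging-lib",
      "version": "2.17.1",
      "purl": "pkg:maven/com.example/example-logging-lib@2.17.1"
    }
  ]
}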

This scrutiny can’t end with purchase. These security requirements need to be monitored throughout the software’s life cycle, along with what software is being used in government networks.

None of this is cheap, and we should be prepared to pay substantially more for secure software. But there’s a benefit to these practices. If the government evaluations are public, along with the list of companies that meet them, all network buyers can benefit from them. The US government acting purely in the realm of procurement can improve the security of nongovernmental networks worldwide.

This is important, but it isn’t enough. We need to set minimum safety and security standards for all software: from the code in that Internet of Things appliance you just bought to the code running our critical national infrastructure. It’s all one network, and a vulnerability in your refrigerator’s software can be used to attack the national power grid.

The IoT Cybersecurity Improvement Act, signed into law last month, is a start in this direction.

The Biden administration should prioritize minimum security standards for all software sold in the United States, not just to the government but to everyone. Long gone are the days when we can let the software industry decide how much emphasis to place on security. Software security is now a matter of personal safety: whether it’s ensuring your car isn’t hacked over the Internet or that the national power grid isn’t hacked by the Russians.

This regulation is the only way to force companies to provide safety and security features for customers — just as legislation was necessary to mandate food safety measures and require auto manufacturers to install life-saving features such as seat belts and air bags. Smart regulations that incentivize innovation create a market for security features. And they improve security for everyone.

It’s true that creating software in this sort of regulatory environment is more expensive. But if we truly value our personal and national security, we need to be prepared to pay for it.

The truth is that we’re already paying for it. Today, software companies increase their profits by secretly pushing risk onto their customers. We pay the cost of insecure personal computers, just as the government is now paying the cost to clean up after the SolarWinds hack. Fixing this requires both transparency and regulation. And while the industry will resist both, they are essential for national security in our increasingly computer-dependent world.

This essay previously appeared on CNN.com.