Mass Exploitation of Exchange Server Zero-Day CVEs: What You Need to Know

Post Syndicated from Caitlin Condon original https://blog.rapid7.com/2021/03/03/mass-exploitation-of-exchange-server-zero-day-cves-what-you-need-to-know/

On March 2, 2021, the Microsoft Threat Intelligence Center (MSTIC) released details on an active state-sponsored threat campaign exploiting four zero-day vulnerabilities in on-premises instances of Microsoft Exchange Server. MSTIC attributes this campaign to HAFNIUM, a group “assessed to be state-sponsored and operating out of China.”

Rapid7 detection and response teams have also observed increased threat activity against Microsoft Exchange Server since Feb. 27, 2021, and can confirm ongoing mass exploitation of vulnerable Exchange instances. Microsoft Exchange customers should apply the latest updates on an emergency basis and take immediate steps to harden their Exchange instances. We strongly recommend that organizations monitor closely for suspicious activity and indicators of compromise (IOCs) stemming from this campaign. Rapid7 has a comprehensive list of IOCs available here.

The actively exploited zero-day vulnerabilities disclosed in the MSTIC announcement as part of the HAFNIUM-attributed threat campaign are:

  • CVE-2021-26855 is a server-side request forgery (SSRF) vulnerability in Exchange that allows an attacker to send arbitrary HTTP requests and authenticate as the Exchange server.
  • CVE-2021-26857 is an insecure deserialization vulnerability in the Unified Messaging service. Insecure deserialization is where untrusted user-controllable data is deserialized by a program. Exploiting this vulnerability gives an attacker the ability to run code as SYSTEM on the Exchange server. This requires administrator permission or another vulnerability to exploit.
  • CVE-2021-26858 is a post-authentication arbitrary file write vulnerability in Exchange. If an attacker could authenticate with the Exchange server, they could use this vulnerability to write a file to any path on the server. They could authenticate by exploiting the CVE-2021-26855 SSRF vulnerability or by compromising a legitimate admin’s credentials.
  • CVE-2021-27065 is a post-authentication arbitrary file write vulnerability in Exchange. An attacker who can authenticate with the Exchange server can use this vulnerability to write a file to any path on the server. They could authenticate by exploiting the CVE-2021-26855 SSRF vulnerability or by compromising a legitimate admin’s credentials.

Also included in the out-of-band update were three additional remote code execution vulnerabilities in Microsoft Exchange. These additional vulnerabilities are not known to be part of the HAFNIUM-attributed threat campaign but should be remediated with the same urgency nonetheless:

  • CVE-2021-26412
  • CVE-2021-26854
  • CVE-2021-27078

Microsoft has released out-of-band patches for all seven vulnerabilities as of March 2, 2021. Security updates are available for the following specific versions of Exchange:

  • Exchange Server 2010 (for Service Pack 3—this is a Defense in Depth update)
  • Exchange Server 2013 (CU 23)
  • Exchange Server 2016 (CU 19, CU 18)
  • Exchange Server 2019 (CU 8, CU 7)

Exchange Online is not affected.

For Rapid7 customers

InsightVM and Nexpose customers can assess their exposure to these vulnerabilities with authenticated vulnerability checks. Customers will need to perform a console restart after consuming the content update in order to scan for these vulnerabilities.

InsightIDR will generate an alert if suspicious activity is detected in your environment. The Insight Agent must be installed on Exchange Servers to detect the attacker behaviors observed as part of this attack. If you have not already done so, install the Insight Agent on your Exchange Servers.

For individual vulnerability analysis, see AttackerKB.

Indiscriminate Exploitation of Microsoft Exchange Servers (CVE-2021-24085)

Post Syndicated from Andrew Christian original https://blog.rapid7.com/2021/03/02/indiscriminate-exploitation-of-microsoft-exchange-servers-cve-2021-24085/

The following blog post was co-authored by Andrew Christian and Brendan Watters.

Since Feb. 27, 2021, Rapid7’s Managed Detection and Response (MDR) team has observed a notable increase in the automated exploitation of vulnerable Microsoft Exchange servers to upload a webshell granting attackers remote access. The suspected vulnerability being exploited is a cross-site request forgery (CSRF) vulnerability: The likeliest culprit is CVE-2021-24085, an Exchange Server spoofing vulnerability released as part of Microsoft’s February 2021 Patch Tuesday advisory, though other CVEs may also be at play (e.g., CVE-2021-26855, CVE-2021-26857, CVE-2021-26858).

The following China Chopper command was observed multiple times beginning Feb. 27 using the same DigitalOcean source IP (165.232.154.116):

cmd /c cd /d C:\inetpub\wwwroot\aspnet_client\system_web&net group "Exchange Organization administrators" administrator /del /domain&echo [S]&cd&echo [E]

Exchange or other systems administrators who see this command—or any other China Chopper command in the near future—should look for the following in IIS logs (a sample search follows the list):

  • 165.232.154.116 (the source IP of the requests)
  • /ecp/y.js
  • /ecp/DDI/DDIService.svc/GetList
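
A quick way to sweep IIS logs for these indicators is a recursive string search. Below is a minimal sketch that assumes the default IIS log location; adjust the path and search strings for your environment:

findstr /s /i /c:"165.232.154.116" /c:"/ecp/y.js" /c:"DDIService.svc/GetList" C:\inetpub\logs\LogFiles\*.log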

Indicators of compromise (IOCs) from the attacks we have observed are consistent with IOCs for publicly available exploit code targeting CVE-2021-24085 released by security researcher Steven Seeley last week, shortly before indiscriminate exploitation began. After initial exploitation, attackers drop an ASP eval webshell before (usually) executing procdump against lsass.exe in order to grab all the credentials from the box. It would also be possible to then clean some indicators of compromise from the affected machine(s). We have included a section on CVE-2021-24085 exploitation at the end of this document.

Exchange servers are frequent, high-value attack targets whose patch rates often lag behind attacker capabilities. Rapid7 Labs has identified nearly 170,000 Exchange servers vulnerable to CVE-2021-24085 on the public internet:

[Chart: internet-facing Exchange servers vulnerable to CVE-2021-24085, per Rapid7 Labs]

Rapid7 recommends that Exchange customers apply Microsoft’s February 2021 updates immediately. InsightVM and Nexpose customers can assess their exposure to CVE-2021-24085 and other February Patch Tuesday CVEs with vulnerability checks. InsightIDR provides existing coverage for this vulnerability via our out-of-the-box China Chopper Webshell Executing Commands detection, and will alert you about any suspicious activity. View this detection in the Attacker Tool section of the InsightIDR Detection Library.

CVE-2021-24085 exploit chain

As part of the PoC for CVE-2021-24085, the attacker will search for a specific token using a request to /ecp/DDI/DDIService.svc/GetList. If that request is successful, the PoC moves on to writing the desired token to the server’s filesystem with a request to /ecp/DDI/DDIService.svc/SetObject. At that point, the token is available for download directly. The PoC uses a download request to /ecp/poc.png (though the name could be anything), and the download may be recorded in the IIS logs, associated with the IP of the initial attack.

Indicators of compromise would include requests to both /ecp/DDI/DDIService.svc/GetList and /ecp/DDI/DDIService.svc/SetObject, especially if those requests are associated with an odd user agent string such as python. Because the PoC utilizes SetObject to write the token to the server’s filesystem in a world-readable location, it would be beneficial for incident responders to examine any files that were created around the time of the requests, as one of those files could be the access token and should be removed or placed in a secure location. It is also possible that responders could discover the file name in question by checking to see whether the original attacker’s IP downloaded any files.
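
For example, responders could list files created under the ECP web directory around the time of the suspicious requests. Here is a minimal PowerShell sketch; the path and cutoff date are assumptions to adapt to your environment:

Get-ChildItem -Recurse 'C:\Program Files\Microsoft\Exchange Server\V15\FrontEnd\HttpProxy\ecp' |
    Where-Object { $_.CreationTime -ge (Get-Date '2021-02-27') } |
    Select-Object FullName, CreationTime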

Multiple Unauthenticated Remote Code Control and Execution Vulnerabilities in Multiple Cisco Products

Post Syndicated from boB Rudis original https://blog.rapid7.com/2021/02/25/multiple-unauthenticated-remote-code-control-and-execution-vulnerabilities-in-multiple-cisco-products/

What’s up?

On Feb. 24, 2021, Cisco released patches for multiple products, three of which require immediate attention from organizations running affected systems and operating system/software configurations. They are detailed below:

Cisco ACI Multi-Site Orchestrator Application Services Engine Deployment Authentication Bypass Vulnerability (CVSSv3 Base 10; CVE-2021-1388)

Cisco Security Advisory

Cisco Multi-Site Orchestrator (MSO) is the product responsible for provisioning, health monitoring, and managing the full lifecycle of Cisco Application Centric Infrastructure (ACI) networking policies and tenant policies across all Cisco ACI sites organizations have deployed. It essentially has full control over every aspect of networking and network security. Furthermore, Cisco ACI can be integrated with, and administratively control, VMware vCenter Server, Microsoft System Center VMM (SCVMM), and OpenStack controller virtualization platform managers.

A weakness in an API endpoint of Cisco ACI MSO installed on the Application Services Engine could allow an unauthenticated, remote attacker to bypass authentication on an affected device. One or more API endpoints improperly validate API tokens, and a successful exploit gives an unauthenticated, remote attacker full control over this powerful platform.

This vulnerability affects Cisco ACI Multi-Site Orchestrator (MSO) running a 3.0 release of software, and only when deployed on a Cisco Application Services Engine. Only version 3.0(3m) is vulnerable.

Thankfully, this vulnerability was discovered internally, reducing the immediate likelihood of proof-of-concept exploits being available.

Organizations are encouraged to restrict API access to trusted, segmented networks and ensure this patch is applied within critical patch change windows.

Cisco Application Services Engine Unauthorized Access Vulnerabilities (CVSSv3 Base 9.8; CVE-2021-1393, CVE-2021-1396)

Cisco Security Advisory

CVE-2021-1393 allows unauthenticated, remote attackers access to a privileged service on affected devices. One service running on the ASE Data Network has insufficient access controls which can be exploited by attackers via specially crafted TCP requests. Successful exploits result in privileged device access enabling the running of containers and execution of any host-level commands.

CVE-2021-1396 allows unauthenticated, remote attackers access to a privileged service on affected devices. This, too, affects a service API with lax access controls on the Data Network. Successful exploitation results in significant information disclosure, creation of support-level artifacts on an isolated volume, and the ability to manipulate an undocumented subset of configuration settings.

The ASE Data Network provides the following services:

  • Cisco Application Services Engine Clustering
  • App to app communication
  • Access to the management network of the Cisco ACI fabric
  • All app-to-ACI fabric communications

The Data Network is not the same as the Management Network; thus, segmentation is not an option for temporary mitigation.

These vulnerabilities affect Cisco ASE software releases 1.1(3d) and earlier.

Again, thankfully, these vulnerabilities were discovered internally, reducing the immediate likelihood of proof-of-concept exploits being available.

Organizations are encouraged to ensure this patch is applied within critical patch change windows.

Cisco NX-OS Software Unauthenticated Arbitrary File Actions Vulnerability (CVSSv3 Base 9.8; CVE-2021-1361)

Cisco Security Advisory

CVE-2021-1361 enables remote, unauthenticated attackers to create, modify, or delete arbitrary files with the privileges of the root user on Cisco Nexus 3000 and 9000 series switches in standalone NX-OS mode.

Cisco has provided more technical information on this critical vulnerability than they have for the previous two, disclosing that a service running on TCP port 9075 improperly listens and responds to external communication requests. Specially crafted TCP requests can yield sufficient permissions to perform a range of actions, including creating a local user account without administrators (or log collectors) knowing.

Organizations can use the following command on Nexus 3000/9000 series switches running in standalone NX-OS mode to determine whether this service is listening externally:

nexus# show sockets connection | include 9075
tcp LISTEN 0 32 * : 9075

Only Nexus 3000/9000 series switches in standalone NX-OS mode are affected.

Organizations are encouraged to restrict Cisco management and control plane access to trusted, segmented networks and use on-device access control lists (ACLs) to block external requests to TCP port 9075. Once mitigations are performed, organizations should ensure this patch is applied within critical patch change windows. However, please note that this vulnerability was discovered by an external, anonymous reporter, which likely means there is at least one individual/group outside of Cisco that knows how to exploit this weakness. Such information may affect patch prioritization decisions in some organizations.
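
As an illustrative sketch only (the ACL name is a placeholder, and exact syntax and the interfaces to protect vary by platform and NX-OS release), an on-device ACL blocking external requests to TCP port 9075 might look like this:

nexus# configure terminal
nexus(config)# ip access-list BLOCK-TCP-9075
nexus(config-acl)# deny tcp any any eq 9075
nexus(config-acl)# permit ip any any
nexus(config-acl)# exit
nexus(config)# interface mgmt0
nexus(config-if)# ip access-group BLOCK-TCP-9075 in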

Rapid7 will update this post as more information is provided or proof-of-concept exploits are discovered.

VMware vCenter Server CVE-2021-21972 Remote Code Execution Vulnerability: What You Need to Know

Post Syndicated from boB Rudis original https://blog.rapid7.com/2021/02/24/vmware-vcenter-server-cve-2021-21972-remote-code-execution-vulnerability-what-you-need-to-know/

This blog post was co-authored by Bob Rudis and Caitlin Condon.

What’s up?

On Feb. 23, 2021, VMware published an advisory (VMSA-2021-0002) describing three weaknesses affecting VMware ESXi, VMware vCenter Server, and VMware Cloud Foundation.

Before digging into the individual vulnerabilities, it is vital that all organizations that use the HTML5 VMware vSphere Client, i.e., VMware vCenter Server (7.x before 7.0 U1c, 6.7 before 6.7 U3l, and 6.5 before 6.5 U3n) and VMware Cloud Foundation (4.x before 4.2 and 3.x before 3.10.1.2), immediately restrict network access to those clients (especially if they are not segmented off on a management network), implement the mitigation noted below, and consider performing accelerated/immediate patching on those systems.

Vulnerability details and recommendations

CVE-2021-21972 is a critical (CVSSv3 base 9.8) unauthenticated remote code execution vulnerability in the HTML5 vSphere client. Any malicious actor with access to port 443 can exploit this weakness and execute commands with unrestricted privileges.
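
One lightweight check that has circulated in public analyses is to probe whether the vulnerable plugin endpoint responds without authentication. A hedged sketch (the endpoint path is taken from public write-ups, the hostname is a placeholder, and results should be confirmed against vendor guidance):

$ curl -sk -o /dev/null -w '%{http_code}\n' https://vcenter.example.com/ui/vropspluginui/rest/services/getstatus

A 200 response without authentication suggests the vulnerable plugin is still reachable; anything else suggests the mitigation or patch may be in place.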

PT Swarm has provided a detailed walkthrough of this weakness and how to exploit it.

Rapid7 researchers have independently analyzed, tested, and confirmed the exploitability of this weakness and have provided a full technical analysis.

Proof-of-concept working exploits are beginning to appear on public code-sharing sites.

Organizations should restrict access to this plugin and patch affected systems immediately (i.e., not wait for standard patch change windows).

VMware has provided steps for a temporary mitigation, which involves disabling the plugin.

CVE-2021-21974 is an important (CVSSv3 base 8.8) heap-overflow-based remote code execution vulnerability in VMware ESXi OpenSLP. Attackers with same-segment network access to port 427 on affected systems may be able to use the heap-overflow weakness to perform remote code execution.

VMware has provided steps for a temporary mitigation, which involves disabling the SLP service on affected systems.

Rapid7 recommends applying the vendor-provided patches as soon as possible after performing the vendor-recommended mitigation.

CVE-2021-21973 is a moderate (CVSSv3 base 5.3) server-side request forgery vulnerability affecting the HTML5 vSphere Client. Attackers with access to port 443 of affected systems can use this weakness to gain access to underlying system information.

VMware has provided steps for a temporary mitigation, which involves disabling the plugin.

Since attackers will already be focusing on VMware systems due to the other high-severity weaknesses, Rapid7 recommends applying the vendor-provided patches as soon as possible after performing the vendor-recommended mitigation.

Attacker activity

Rapid7 Labs has not detected broad scanning for internet-facing VMware vCenter servers, but Bad Packets has reported that they’ve detected opportunistic scanning. We will continue to monitor Project Heisenberg for attacker activity and update this blog post as we have more information.

New – Amazon Elastic Block Store Local Snapshots on AWS Outposts

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-elastic-block-store-local-snapshots-on-aws-outposts/

Today I am happy to announce that AWS Outposts customers can now make local snapshots of their Amazon Elastic Block Store (EBS) volumes, making it easy to meet data residency and local backup requirements. AWS Outposts is a fully managed service that extends AWS infrastructure, services, APIs, and tools to virtually any datacenter, co-location space, or on-premises facility for a truly consistent hybrid experience. Until now, Amazon EBS snapshots on Outposts were stored by default on Amazon Simple Storage Service (S3) in the AWS Region. If your Outpost is provisioned with Amazon S3 on Outposts, now you have the option to store your snapshots locally on your Outpost.

Customers use AWS Outposts to support applications that need to run on-premises due to low latency, local data processing, or data residency requirements. Customers looking to use AWS services in countries where no AWS Region exists today can opt to run their applications on Outposts. Sometimes data needs to remain in a particular country, state, or municipality for regulatory, contractual, or information security reasons. These customers need the data for snapshots and Amazon Machine Image (AMI) to be stored locally on Outposts to operate their applications. In addition, some of our customers could also see value for workloads that need low latency access to local backups.

EBS Local Snapshots on Outposts is a new capability that enables snapshots and AMI data to be stored locally on Amazon S3 on Outposts. Now you can create and manage EBS Local Snapshots on Outposts through the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. You can also continue to take snapshots of EBS volumes on Outposts, which are stored in S3 in the associated parent Region.

How to Get Started With EBS Local Snapshots on Outposts
To get started, visit the AWS Outposts Management Console to order an Outposts configuration that includes your selected EBS and Amazon S3 storage capacity (EBS snapshots use Amazon S3 on Outposts to store snapshots), or you can add S3 storage to your existing Outposts. EBS Local Snapshots are enabled on Outposts provisioned with Amazon S3 on Outposts.

To create a local EBS snapshot on Outposts, go to the EBS volume console and select the volume you want to create a snapshot from. Click the Actions button, then select Create Snapshot in the dropdown menu.

You can create a snapshot either in the AWS Region or your Outposts when you choose the Snapshot destination. The AWS Region snapshot uses Amazon S3 in the region and the AWS Outposts snapshot uses S3 storage on Outposts for storing the snapshots. Amazon S3 on Outposts is a new storage class, which is designed to durably and redundantly store data on Outposts. Note that due to its scale, Amazon S3 in a region offers higher durability than S3 on Outposts.

You can call CreateSnapshot with the outpost-arn parameter set to the Outposts ARN that uniquely identifies your installation. If data residency is not a concern, you can also use the CreateSnapshot API to create the snapshot in the parent AWS Region by specifying the AWS Region as the destination.

$ aws ec2 create-snapshot \
     --volume-id vol-1234567890abcdef0 \
     --outpost-arn arn:aws:outposts:us-east-1:123456789012:outpost/op-1a2b3c \
     --description "local snapshots in outpost"

You can also use the AWS Command Line Interface (CLI) and AWS SDK commands (e.g., CreateSnapshots, DescribeSnapshots, CopySnapshot, and DeleteSnapshot) to manage snapshots on Outposts, and use Amazon Data Lifecycle Manager to automate snapshot management on Outposts. All local snapshots on Outposts are encrypted by default.

You can set IAM policies for data residency of your snapshots. The policy example below will enforce data residency on the Outposts by denying CreateSnapshot(s) calls to create snapshots in the region from outpost volumes.

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Deny",
         "Action":[
            "ec2:CreateSnapshot",
            "ec2:CreateSnapshots"
         ],
         "Resource":"arn:aws:ec2:us-west-2::snapshot/*",
         "Condition":{
            "StringEquals":{
               "ec2:SourceOutpostArn":"arn:aws:outposts:us-west-2:1234567890:outpost/op-1a2b3c"
            },
            "Null":{
               "ec2:OutpostArn":"true"
            }
         }
      },
      {
         "Effect":"Allow",
         "Action":[
            "ec2:CreateSnapshot",
            "ec2:CreateSnapshots"
         ],
         "Resource":"*"
      }
   ]
}

You can audit your own data residency compliance by calling the DescribeSnapshots API, which returns each snapshot’s storage location. All creation, update, and copy operations are logged in AWS CloudTrail audit logs.
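
For example, here is a minimal CLI sketch of such an audit; snapshots stored locally carry an Outpost ARN, while Region-stored snapshots leave the field unset:

$ aws ec2 describe-snapshots \
     --owner-ids self \
     --query "Snapshots[*].{Id:SnapshotId,OutpostArn:OutpostArn}" \
     --output table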

You can copy AMI snapshots from the AWS Region to your Outposts and register them as AMIs to launch your EC2 instances on Outposts.

Also, you can do this via simple AWS Command Line Interface (CLI) commands as follows:

$ aws ec2 copy-snapshot \
     --region us-west-2 \
     --source-region us-west-2 \
     --source-snapshot-id snap-1 \
     --destination-outpost-arn arn:aws:outposts:us-west-2:123456789012:outpost/op-1a2b3c \
     --description "This is my copied snapshot."

Now you can register the snapshot as a local AMI (register-image requires an AMI name of your choosing) for launching your EC2 instances on your Outposts.

$ aws ec2 register-image \
     --name "local-ami-from-snapshot" \
     --root-device-name /dev/sda1 \
     --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":100,"SnapshotId":"snap-1-copy"}}]'

You can also copy your regional AMIs to Outposts using the copy-image command. Specify the ID of the AMI to copy, the source Region, and the ARN of the destination Outpost.

$ aws ec2 copy-image \
     --source-region us-west-2 \
     --source-image-id ami-1234567890abcdef0 \
     --name "Local AMI copy" \
     --destination-outpost-arn arn:aws:outposts:us-west-2:123456789012:outpost/op-1a2b3c

Copying of local snapshots on Outposts to the parent AWS Region is not supported. In scenarios where data residency is required, you can only create local snapshots or copy snapshots from the parent Region. To ensure your data residency requirements are met on AWS Outposts, I recommend you refer to whitepapers such as AWS Policy Perspectives: Data Residency and Addressing Data Residency Requirements with AWS Outposts, and confirm and work closely with your compliance and security teams.

CloudEndure Migration and Disaster Recovery services, offered by AWS, allow customers to migrate or replicate workloads for recovery purposes into AWS from physical, virtual, or cloud-based sources. Up until now, if customers selected an Outposts device as a migration or recovery target, the snapshot data had to be copied to a public region before being copied back into the Outposts device. This led to increased cutover and recovery times, as well as other data transfer impacts.

With the newly launched availability of EBS Local Snapshots on Outposts, you can migrate, replicate and recover workloads from any sources directly into Outposts, or between Outposts devices, without requiring the EBS snapshot data to go through a public region, leading to lower latencies, greater performance, and reduced costs. Supported use cases related to Outposts for migration and disaster recovery include: from on-premises to Outposts, from public AWS Regions into Outposts, from Outposts into public AWS Regions, and between two Outposts devices. Learn more about CloudEndure Migration and CloudEndure Disaster Recovery.

Available Now
Amazon EBS Local Snapshots on AWS Outposts is available for all Outposts provisioned with S3 on Outposts. To learn more, take a look at the documentation. Please send feedback to the AWS Outposts team, your usual AWS support contacts, or Outposts partners.

Learn all the details about AWS Outposts and get started today.

Channy

Cisco Patches Recently Disclosed “sudo” Vulnerability (CVE-2021-3156) in Multiple Products

Post Syndicated from boB Rudis original https://blog.rapid7.com/2021/02/04/cisco-patches-recently-disclosed-sudo-vulnerability-cve-2021-3156-in-multiple-products/

While Punxsutawney Phil may have said we only have six more weeks of winter, the need to patch software and hardware weaknesses will, unfortunately, never end.

Cisco has released security updates to address vulnerabilities in most of its product portfolio, some of which may be exploited to gain full system/device control on certain devices; one update fixes the recently disclosed sudo input validation vulnerability. We discuss this vulnerability below, but there are many more lower-severity or “valid administrator credentials required” bugs on the Cisco Security Advisories page that all organizations who use Cisco products should review.

Getting back to RBAC

The “sudo” advisory is officially presented as “Sudo Privilege Escalation Vulnerability Affecting Cisco Products: January 2021” and affects pretty much every Cisco product that has a command line interface. It is a fix for the ubiquitous CVE-2021-3156 general sudo weakness.

According to the advisory, the vulnerability is due to “improper parsing of command line parameters that may result in a heap-based buffer overflow. An attacker could exploit this vulnerability by accessing a Unix shell on an affected device and then invoking the sudoedit command with crafted parameters or by executing a binary exploit.”

All commands invoked after exploiting this vulnerability will have root privileges.
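
As a quick local triage step, the Qualys advisory that disclosed CVE-2021-3156 describes a simple test: run sudoedit -s / as a non-root user and inspect the response. A sketch (exact message text varies by build):

$ sudoedit -s /
sudoedit: /: not a regular file    <-- likely vulnerable
usage: sudoedit ...                <-- likely patched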

This weakness will also enable lower-privileged users with access to Cisco devices to elevate their privileges, meaning you are technically out of compliance with any role-based access control requirement (which is in virtually every modern cybersecurity compliance framework).

Rapid7 strongly advises organizations to patch this weakness as soon as possible to stop attackers and curious users from taking control of your network, as well as ensuring you are able to continue checking ✅ this particular compliance box. Even though we mentioned it at the top of the post, don’t forget to check out the rest of the Cisco security advisories to see whether you need to address weaknesses in any of your other Cisco devices.

SonicWall SNWLID-2021-0001 Zero-Day and SolarWinds’ 2021 CVE Trifecta: What You Need to Know

Post Syndicated from boB Rudis original https://blog.rapid7.com/2021/02/03/sonicwall-snwlid-2021-0001-zero-day-and-solarwinds-2021-cve-trifecta-what-you-need-to-know/

Not content with the beating it laid down in January, 2021 continues to deliver with an unpatched zero-day exposure in some SonicWall appliances and three moderate-to-critical CVEs in SolarWinds software. We dig into the details below.

Urgent mitigations required for SonicWall SMA 100 Series appliances

On Jan. 22, 2021, SonicWall published an advisory and in-product notification that they had identified a coordinated attack on their internal systems by highly sophisticated threat actors exploiting probable zero-day vulnerabilities on certain SonicWall secure remote access products.

Specifically, they identified Secure Mobile Access (SMA) version 10.x running on the following physical SMA 100 series appliances, as well as the SMA 500v virtual appliance:

  • SMA 200
  • SMA 210
  • SMA 400
  • SMA 410

On Jan. 31, 2021, NCC Group Research & Technology confirmed and demonstrated exploitability of a possible candidate for the vulnerability and detected indicators that attackers were exploiting this weakness.

On Feb. 3, 2021, SonicWall released a patch to firmware version SMA 10.2.0.5-29sv, which all impacted organizations should apply immediately.

SonicWall had recommended removing all SMA 100 Series appliances and SMA 500v virtual appliances from the internet until a patch was available. If immediate patching is not possible, organizations are strongly encouraged to perform the following steps:

  • Enable multi-factor authentication. SonicWall has indicated this is a “critical” step until the patch is available.
  • Reset user passwords for all SMA 100 appliances.
  • Configure the web application firewall on the SMA 100 series, which has been updated with rules to detect exploitation attempts (SonicWall indicates that this is normally subscription-based software, but they have automatically provided 60-day complimentary licenses to organizations affected by this vulnerability).

If it’s not possible to perform these steps, SonicWall recommends that organizations downgrade their SMA 100 Series appliances to firmware version 9.x. They do note that this will remove all settings and that the devices will need to be reconfigured from scratch.

Urgent patching required for SolarWinds Orion and Serv-U FTP products

On Feb. 3, 2021, Trustwave published a blog post providing details on two vulnerabilities in the SolarWinds Orion platform and a single vulnerability in the SolarWinds Serv-U FTP server for Windows.

The identified Orion platform weaknesses include:

  • CVE-2021-25274: Trustwave discovered that improper/malicious use of Microsoft Message Queue (MSMQ) could allow any remote, unprivileged attacker to execute arbitrary code with the highest privileges.
  • CVE-2021-25275: Trustwave discovered that credentials are stored insecurely, allowing any local user to take complete control over the SOLARWINDS_ORION database. This could lead to further information theft, and also enables attackers to add new admin-level users to all SolarWinds Orion platform products.

The identified SolarWinds Serv-U FTP server for Windows weakness enables any local user to create a file that can define a new Serv-U FTP admin account with full access to the C:\ drive, which then allows them to access or replace any directory or file on the server.

Trustwave indicated they have private, proof-of-concept code that will be published on Feb. 9, 2021.

SolarWinds Orion Platform users can upgrade to version 2020.2.4. SolarWinds Serv-U FTP users can upgrade to version 15.2.2 Hotfix 1.

Rapid7 vulnerability researchers have identified that after the Orion Platform patch is applied, a digital signature validation step is performed on arriving messages, so messages that are unsigned or not signed with a per-installation certificate are not processed further. However, the MSMQ itself is still unauthenticated and allows anyone to send messages to it.

Rapid7 response

Rapid7 Labs is keeping a watchful eye on Project Heisenberg for indications of widespread inventory scans (attackers looking for potentially vulnerable systems) and will provide updates, as warranted, on any new developments.

Our InsightVM coverage team is currently evaluating options for detecting the presence of these vulnerabilities.

User roles for the enterprise

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/user-roles-for-the-enterprise/12887/

In this post, we’ll talk about the granular user roles introduced in Zabbix 5.2 and some scenarios where user roles should be used and provide great benefit.

Contents

I. Permissions granularity (0:40)
II. User Roles in 5.2 (5:16)
III. Example use cases (16:16)
IV. Questions & Answers

Permissions granularity

Let’s consider two roles: the NOC Team role and the Network Administrator role. These are quite different roles requiring different permission levels. Let’s also not forget that the people working in these roles usually have different skill sets, so the user experience is quite important for both: the NOC team probably wants to see only the most important, most vital data, while the Network Administrators usually require permissions to view data in more detail and need access to more granular overviews of what’s going on in the environment.

For our example, let’s first define the requirements for these roles.

NOC Team role:

  • They will definitely require access to dashboards and maps.
  • We will want to restrict unnecessary UI elements for them just to improve the UX. In this case – less is more. Removing the unused UI elements will make the day-to-day workflow easier for the NOC team members who aren’t as proficient with Zabbix as our Monitoring team members.
  • For security reasons we need to restrict API access because NOC team members will either use API very rarely or not at all. With roles we can restrict the API access either partially or completely.
  • The ability to modify the existing configuration will be restricted, as the NOC team will not be responsible for changing the Zabbix configuration.
  • The ability to close problems manually will be restricted, since the network admin team will be responsible for that.

Network Administrator role:

  • Similar to the NOC team, the Network Administrators also require access to dashboards and maps to see what’s going on in the environment and to gauge its overall health.
  • They need to have access to configuration, since members of this team are responsible for making configuration changes.
  • Most likely, instead of disabling the API access for our network administrator role, we would want to restrict API access in some way. They might still need access to get or create methods, while access to everything else should be restricted.
  • For each of our roles we will be implementing a UI cleanup by restricting UI elements – we will hide the functionality that we have opted out of using.

Roles and multi-tenancy

Granular permissions are one of the key factors in multi-tenant environments. We could use permissions to segregate our environment per tenant, but in 5.2 that’s not the end of it:

  • Imagine multiple tenants where each has different monitoring requirements. Some want to use the services function for SLA calculation, others want to use inventory, or need the maps and the dashboards.
  • Restricting access to elements and actions per tenant is important. For example, some tenants wish to be able to close problems manually, while others need restrictions on map or dashboard creation for a specific user group.
  • Permissions are still used to enable isolation between tenants at the host group level.

User Roles in 5.2

With Zabbix 5.2 these use cases, which require additional permission granularity, are now fully supported.

So, let’s take a look at how the User Role feature looks in a real environment.

User role

User roles in Zabbix 5.2 are something completely new. Each user will have a role assigned to them on top of their User Type:

User permissions

We end up with User types linked to User roles, and User roles linked to Users. This means that User types are linked to Users indirectly, through User roles.

User types

The User, Admin, and Super admin types are still in use. The role will be linked to one of these 3 user types.

User roles

Note that User type restrictions still apply.

  • Super admin has access to every section: Administration, Configuration, Reports, Inventory, and Monitoring.
  • Admin has access to Configuration, Reports, Inventory, and Monitoring.
  • User has access to Reports, Inventory, and Monitoring.

Frontend sections restricted by User type

Default User roles

Once we upgrade to 5.2 or install a fresh 5.2 instance, we will have a set of default user roles. The 4 pre-configured user roles are available under Administration > User roles:

  • Super admin,
  • Admin,
  • User, and
  • Guest.

Super admin role

  • The default Super admin role is static. It is set up by default once you upgrade or install a fresh instance. Users cannot modify this role.

All of the other default roles can be modified. In the Zabbix environment, we must have at least a single user with this Super admin role that has access to all of Zabbix functionality. This is similar to the root user in the Linux OS.

Newly created roles of the Super admin, Admin, or User types can be modified. For example, we can create another Super admin role and change its permissions. For instance, we can have a Super admin that doesn’t have access to Administration > General but has access to everything else.

User role section

Once we open the User roles section, we will see a list of features and functions that we can restrict per user role.

When we create a new role or open a pre-created role, it will have the maximum allowed permissions for the User type used by the role.

Each of the default roles contains the maximum allowed permissions per user type

UI element restriction

We can restrict access to UI elements for each role. If we wish to create a NOC role, we can restrict it to have access only to Dashboards and maps. When we open the user and go to Permissions, we will see the available sections highlighted in green.

NOC user role that has access only to Dashboards and maps

Once we open up the Dashboards or the Monitoring section, we will see only the UI sections in our navigation menu that have been permitted for this specific user.

Global view: NOC user role that has access only to Dashboards and maps

Host group permissions

Note that User Group access to Host Groups still has to be properly assigned. For instance, when a user opens the Dashboard, Zabbix still checks whether the user belongs to a user group that has access to a specific host group, and will either display or hide the corresponding data.

User Group access to Host Group

Access to API

API access can also be restricted for each role. Depending on the Access to API “Enabled” checkbox, the corresponding user of this specific role will be permitted or denied access to the API.

Used when creating API-specific user roles

In addition to that, we can allow or restrict the execution of specific API methods. For this we can use an Allow or Deny list. For instance, we could create a user that has access only to get methods: they can read the data, but they cannot modify the data.

Restricting API method

Let’s use the host.create method as an example. If I don’t have permission to call it, I will see an error message ‘no permissions to call’ followed by the name of the call — host.create in this case.
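
For instance, a denied call might look like the following (a sketch; the URL and token are placeholders, and the exact error text can vary by version):

$ curl -s -X POST https://zabbix.example.com/api_jsonrpc.php \
     -H 'Content-Type: application/json-rpc' \
     -d '{"jsonrpc":"2.0","method":"host.create","params":{"host":"test-host"},"auth":"<session-token>","id":1}'

The response will contain an error along the lines of: no permissions to call “host.create”.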

Access to actions

Each role can have a specific list of actions that it can perform, within the limits of the role’s User type.

In this context, ‘Actions’ means what this user can do within the UI: Do we wish for the user to be able to close problems, acknowledge them, or create and edit maps?

Defining access to actions

NOTE. For a role of type ‘User’, the ‘Create and edit maintenance’ option will be grayed out because the User type by default doesn’t have access to the Maintenance section. You cannot enable it for a role of User type, but you can enable or disable it for an Admin type role.

Restricting Actions example

Let’s restrict the role from acknowledging and closing problems. Once we define the restriction, acknowledging and closing problems will be grayed out in the frontend.

If we enable it (the checkboxes are editable), we can acknowledge and close problems.

Restricted role

Unrestricted role

Default access

We can also modify the Default access section. We can define whether a role has default access to new actions, modules, and UI elements. For instance, if we are importing a new frontend module, or upgrading from version 5.2 to version 6.0 in the future and new UI elements, modules, or action types appear, do we want this specific role to have access to them by default once they are created, or should this role have restricted access to all of these new elements?

This allows us to give access to any new UI elements to our Super admin users while disabling them for any other User roles.

Default access for new elements of different types can be enabled or disabled for user roles

If Default access is enabled, whenever a new element is added, the user belonging to this role will automatically have access to it.

Role assignment post-upgrade

How are these roles going to be assigned after migration to 5.2? I have my users of a specific User type, but what’s going to happen with roles? Will I have to assign them manually?

When you upgrade to 5.2 from, for example, 5.0, the users will have the pre-created default roles for Admin, User, and Super admin assigned to them based on their types.

Pre-created roles after migration

This allows us to keep things as they were before 5.2 or go ahead with creating new User roles.

Example use cases

The following example use cases will give you an idea of how you can implement this in your environment.

Read-only role

A NOC Team User role, with no ability to create or modify any elements:

  • read-only access to dashboards,
  • no access to problems,
  • no access to API, and
  • no permissions to execute frontend scripts.

When we are defining this new role, we will mark the corresponding checkboxes in the Monitoring section. The User type for this role is going to be ‘User’ because they don’t need to have access to Administration or Configuration.

User type and sections the role has access to

We will also restrict access to actions and the API, and decide on the new UI element and module permission logic. Default access to new actions and modules will be restricted. Read up on Zabbix release notes to see if any new UI elements have been added in future releases!

Read-only role

When we log in with this user and go to Dashboards, we will see that this user has no option to create or edit a dashboard because we have restricted such an action. The access is still granted based on the Dashboard permissions — depending on whether it is a public or a private dashboard. When they open it up, the data that they will see will depend on the User group to Host group relationship.

When this user opens up the frontend, they will see that access to the unnecessary UI elements is restricted (the restricted UI elements are hidden). Even though they have access to the Problem widget on the dashboard, they are unable to acknowledge or close the problem, as we have restricted those actions.

Restricted UI elements hidden and ‘Acknowledge’ button unclickable for this Role

Restrict access to Administration section

Another very interesting use case — restricting access to Administration sections. Administration sections are available only for our Super admins, but, in this case, we want to have a separate role of type Super admin that has some restrictions.

Our Super admin type role that has no access to User configuration and General Zabbix settings will need to be able to:

  • create and manage proxies,
  • define media types and frontend scripts, and
  • access the queue to check the health of our Zabbix instance.

But they won’t be able to create new User groups, Users, and so on.

So, we open our Administration > User roles section, create a new role of type Super admin, restrict all of the user-related sections, and also restrict access to Administration > General.

User type – Super admin. General and User sections are restricted for this role

When we log in, we can see that there is no access to the Administration > General section because we have restricted the ability to change housekeeper settings, trigger severities, and other settings that are available in Administration > General.

But the Monitoring Super admin user still has the ability to create new Proxies, Media types, and Scripts, and has access to the Queue section. This is a nice way to create different types of Super admins, which was not possible before 5.2.

Access to Administration section elements

Roles for multi-tenant environment

Zabbix Dashboards and maps are used by multiple tenants to provide monitoring data.

In our example, we will imagine a customer portal that different tenants can access. They log in to Zabbix and, based on their roles and permissions, can access different elements. One of our tenants requires a NOC role:

  • read-only access to dashboards,
  • read-only access to maps,
  • no access to API,
  • no access to configuration,
  • isolation per tenant, so one tenant cannot see the host status of other tenants.

We will create a new role in Administration > User roles — a new role of type User. We will grant access only to the UI elements that need to be visible to the users belonging to this role.

User type role with very limited access to UI

Since we need isolation, we will also use tag-based permissions to isolate our hosts per tenant. We’ll go to the Permissions section and add read-only or read-write permissions for a User group on a specific Host group. Then we will also define the tag-based permissions so that these users have access only to problems that are tagged with a specific tag.

Tag-based permissions to isolate our Hosts per tenant

Don’t forget to actually tag those problems and define these tags either on the trigger level or on the host level.

Tagging on the host level

Once we have implemented this, we can open the UI and go to Monitoring > Dashboards. We can see that:

  • The UI is restricted only to the required monitoring sections.
  • Tag-based permissions ensure that we are seeing problems related to our specific tenant.

Isolation and role restriction have been implemented, and we can successfully have our multi-tenant environment.

Roles for multi-tenant environments

What’s next?

How would you proceed with upgrading to Zabbix 5.2 and implementing this? At the design stage, you need to understand that User roles can help you with a couple of things and you need to estimate and assign value to these things if you want to implement them in your environment.

  1. User roles can improve auditing. Since each user has a restricted role, it’s easier to audit who did what in your environment.
  2. Restricting API access. We can not only enable or disable API access, but we can also restrict our users to specific methods. From the security and auditing perspective, this adds a lot of flexibility.
  3. Restricting configuration. We can restrict users to specific actions or limit their access to specific Configuration sections, as in the example with the custom Super admin role. This allows us to have multiple tiers of admins in our environment.
  4. Removing unwanted UI elements. By restricting access to only the necessary UI elements we can give Zabbix a much cleaner look and improve the UX of your users.

Thank you! I hope I gave you some insight into how roles can be used and how they will be implemented in Zabbix 5.2. I hope you aren’t too afraid to play around with this new set of features and implement them in your environment.

Questions & Answers

Question. Can we have a limited read-only user that will have access to all the hosts that are already in Zabbix and will be added in the future?

Answer. Yes, we can have access to all of the existing Host groups. But when you add a new Host Group, you will have to go to your Permissions section and assign User Group to Host Group permissions for the newly added group.

Question. So that means that now we can have a fully customizable multi-tenant environment?

Answer. Definitely. Fully customizable based both on our User group to Host group permissions and roles to make the actions and different UI sections available as per the requirements of our tenants.

Question. I want to create a user with only API access. Is that possible in 5.0 or 5.2?

Answer. It’s been possible for a while now. You can just disable the frontend access and leave the user with the respective permissions on specific Host groups. But with 5.2 you can make the API limitations more granular. So, you can say that this API-only user has access only to specific API methods.

Question. Can we make a user who can see but cannot edit the configuration?

Answer. Partially. For read-only users, read-only access still works for the Monitoring section. But if we want to see anything in the Configuration section, we need write access. You can use the Monitoring > Hosts section, where you can see partial configuration. Unfortunately, the Configuration section is still not available for read-only access.

Rapid7 Acquires Leading Kubernetes Security Provider, Alcide

Post Syndicated from Brian Johnson original https://blog.rapid7.com/2021/02/01/rapid7-acquires-leading-kubernetes-security-provider-alcide/

Organizations around the globe continue to embrace the flexibility, speed, and agility of the cloud. Those that have adopted it are able to accelerate innovation and deliver real value to their customers faster than ever before. However, while the cloud can bring a tremendous amount of benefits to a company, it is not without its risks. Organizations need comprehensive visibility into their cloud and container environments to help mitigate risk, potential threats, and misconfigurations.

At Rapid7, we strive to help our customers establish and implement strategies that enable them to rapidly adopt and secure cloud environments. Looking only at cloud infrastructure or containers in a silo provides limited ability to understand the impact of a possible vulnerability or breach.

To help our customers gain a more comprehensive view of their cloud environments, I am happy to announce that we have acquired Alcide, a leader in Kubernetes security based in Tel Aviv, Israel. Alcide provides seamless Kubernetes security fully integrated into the DevOps lifecycle and processes so that business applications can be rapidly deployed while also protecting cloud environments from malicious attacks.

Alcide’s industry-leading cloud workload protection platform (CWPP) provides broad, real-time visibility and governance, container runtime and network monitoring, as well as the ability to detect, audit, and investigate known and unknown security threats. By bringing together Alcide’s CWPP capabilities with our existing posture management (CSPM) and infrastructure entitlements (CIEM) capabilities, we will be able to provide our customers with a cloud-native security platform that enables them to manage risk and compliance across their entire cloud environment.

This is an exciting time in cloud security, as we’re witnessing a shift in perception. Cloud security teams are no longer viewed as a cost center or operational roadblock and have earned their seat at the table as a critical investment essential to driving business forward. With Alcide, we’re excited to further increase that competitive advantage for our customers.

We look forward to joining forces with Alcide’s talented team as we work together to provide our customers comprehensive, unified visibility across their entire cloud infrastructure and cloud-native applications.

Welcome to the herd, Alcide!

State-Sponsored Threat Actors Target Security Researchers

Post Syndicated from boB Rudis original https://blog.rapid7.com/2021/01/26/state-sponsored-threat-actors-target-security-researchers/

This blog was co-authored by Caitlin Condon, VRM Security Research Manager, and Bob Rudis, Senior Director and Chief Security Data Scientist.

On Monday, Jan. 25, 2021, Google’s Threat Analysis Group (TAG) published a blog on a widespread social engineering campaign that targeted security researchers working on vulnerability research and development. The campaign, which Google attributed to North Korean (DPRK) state-sponsored actors, has been active for several months and sought to compromise researchers using several methods.

Rapid7 is aware that many security researchers were targeted in this campaign, and information is still developing. While we currently have no evidence that we were compromised, we are continuing to investigate logs and examine our systems for any of the IOCs listed in Google’s analysis. We will update this post with further information as it becomes available.

Organizations should take note that this was a highly sophisticated attack that was important enough to those who orchestrated it for them to burn an as-yet unknown exploit path on. This event is the latest in a chain of attacks—e.g., those targeting SonicWall, VMware, Mimecast, Malwarebytes, Microsoft, Crowdstrike, and SolarWinds—that demonstrates a significant increase in threat activity targeting cybersecurity firms with legitimately sophisticated campaigns. Scenarios like these should become standard components of tabletop exercises and active defense plans.

North Korean-attributed social engineering campaign

Google discovered that the DPRK threat actors had built credibility by establishing a vulnerability research blog and several Twitter profiles to interact with potential targets. They published videos of their alleged exploits, including a YouTube video of a fake proof-of-concept (PoC) exploit for CVE-2021-1647—a high-profile Windows Defender zero-day vulnerability that garnered attention from both security researchers and the media. The DPRK actors also published “guest” research (likely plagiarized from other researchers) on their blog to further build their reputation.

The malicious actors then used two methods to social engineer targets into accepting malware or visiting a malicious website. According to Google:

  • After establishing initial communications, the actors would ask the targeted researcher if they wanted to collaborate on vulnerability research together, and then provide the researcher with a Visual Studio Project. Within the Visual Studio Project would be source code for exploiting the vulnerability, as well as an additional pre-compiled library (DLL) that would be executed through Visual Studio Build Events. The DLL is custom malware that would immediately begin communicating with actor-controlled command and control (C2) domains.

Visual Studio Build Events command executed when building the provided VS Project files. Image provided by Google.

  • In addition to targeting users via social engineering, Google also observed several cases where researchers have been compromised after visiting the actors’ blog. In each of these cases, the researchers followed a link on Twitter to a write-up hosted on blog[.]br0vvnn[.]io, and shortly thereafter, a malicious service was installed on the researcher’s system and an in-memory backdoor would begin beaconing to an actor-owned command and control server. At the time of these visits, the victim systems were running fully patched and up-to-date Windows 10 and Chrome browser versions. As of Jan. 26, 2021, Google was unable to confirm the mechanism of compromise.

The blog the DPRK threat actors used to execute this zero-day drive-by attack was posted on Reddit as early as three months ago. The actors also used a range of social media and communications platforms to interact with targets—including Telegram, Keybase, Twitter, LinkedIn, and Discord. As of Jan. 26, 2021, many of these profiles have been suspended or deactivated.

Rapid7 customers

Google’s threat intelligence includes information on IOCs, command-and-control domains, actor-controlled social media accounts, and compromised domains used as part of the campaign. Rapid7’s MDR team is deploying IOCs and behavior-based detections. These detections will also be available to InsightIDR customers later today. We will update this blog post with further information as it becomes available.

Defender guidance

TAG noted in their blog post that they have so far only seen actors targeting Windows systems. As of the evening of Jan. 25, 2021, researchers across many companies confirmed on Twitter that they had interacted with the DPRK actors and/or visited the malicious blog. Organizations that believe their researchers or other employees may have been targeted should conduct internal investigations to determine whether indicators of compromise are present on their networks.

At a minimum, responders should:

  • Ensure members of all security teams are aware of this campaign and encourage individuals to report if they believe they were targeted by these actors.
  • Search web traffic, firewall, and DNS logs for evidence of contacts to the domains and URLs provided by Google in their post (an example search appears after this list).
  • According to Rapid7 Labs’ forward DNS archive, the br0vvnn[.]io apex domain has had two discovered fully qualified domain names (FQDNs)—api[.]br0vvnn[.]io and blog[.]br0vvnn[.]io—over the past four months with IP addresses 192[.]169[.]6[.]31 and 192[.]52[.]167[.]169, respectively. Contacts to those IPs should also be investigated in historical access records.
  • Check for evidence of the provided hashes on all systems, starting with those operated and accessed by members of security teams.
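
As an illustrative starting point for the log search, something like the following could be run against exported logs. The log locations below are hypothetical; adapt the paths and tooling to your own logging pipeline:

# Search exported DNS/proxy logs for the campaign apex domain and known IPs
grep -riE 'br0vvnn\.io' /var/log/dns/ /var/log/proxy/
grep -rE '192\.169\.6\.31|192\.52\.167\.169' /var/log/dns/ /var/log/proxy/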

Moving forward, organizations and individuals should heed Google’s advice that “if you are concerned that you are being targeted, we recommend that you compartmentalize your research activities using separate physical or virtual machines for general web browsing, interacting with others in the research community, accepting files from third parties and your own security research.”


Update on SolarWinds Supply-Chain Attack: SUNSPOT and New Malware Family Associations

Post Syndicated from boB Rudis original https://blog.rapid7.com/2021/01/12/update-on-solarwinds-supply-chain-attack-sunspot-and-new-malware-family-associations/

Update on SolarWinds Supply-Chain Attack: SUNSPOT and New Malware Family Associations

This update is a continuation of our previous coverage of the SolarWinds supply-chain attack that was discovered by FireEye in December 2020. As of Jan. 11, 2021, new research has been published that expands the security community’s understanding of the breadth and depth of the SolarWinds attack.

Two recent developments warrant your attention:

The SUNSPOT build implant

On Monday, Jan. 11, 2021, CrowdStrike’s intelligence team published technical analysis on SUNSPOT, a newly identified type of malware that appears to have been used as part of the SolarWinds supply chain attack. CrowdStrike describes SUNSPOT as “a malicious tool that was deployed into the build environment to inject [the SUNBURST] backdoor into the SolarWinds Orion platform.”

While SUNSPOT infection is part of the attack chain that allows for SUNBURST backdoor compromise, SUNSPOT has distinct host indicators of attack (including executables and related files), artifacts, and TTPs (tactics, techniques, and procedures).

CrowdStrike provides a thorough breakdown of how SUNSPOT operates, including numerous indicators of compromise. Here are the critical highlights:

SUNSPOT’s on-disk executable is named taskhostsvc.exe and has a likely initial build date of Feb. 20, 2020. It maintains persistence through a scheduled task that executes on boot and holds the SeDebugPrivilege grant, which is what enables it to read the memory of other processes.

It uses this privilege to watch for MsBuild.exe (a Visual Studio development component) execution and modifies the target source code before the compiler has a chance to read it. SUNSPOT then looks for a specific Orion software source code component and replaces it with one that injects SUNBURST during the build process. SUNSPOT also runs validation checks to ensure no build errors are triggered, which helps it evade developer scrutiny and other detection.
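
As a quick, illustrative triage step on a Windows host (no substitute for CrowdStrike's full IOC list and YARA rules), you could check for the on-disk name and its scheduled-task persistence. Hits or their absence are not conclusive on their own:

REM Look for a scheduled task referencing the SUNSPOT executable name
schtasks /query /fo LIST /v | findstr /i "taskhostsvc"
REM Search the system drive for the on-disk executable name
dir C:\ /s /b 2>nul | findstr /i "taskhostsvc.exe"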

The last half of the CrowdStrike analysis has details on tactics, techniques, and procedures, along with host indicators of attack, ATT&CK framework mappings, and YARA rules specific to SUNSPOT. Relevant indicators have been incorporated into Rapid7’s SIEM, InsightIDR, and Managed Detection and Response instances and workflows.

SolarWinds has updated their blog with a reference to this new information on SUNSPOT. Because SUNSPOT, SUNBURST, and related tooling have not been definitively mapped to a known adversary, CrowdStrike has christened the actors responsible for these intrusions “StellarParticle.”

SUNBURST’s Kazuar lineage

Separately, Kaspersky Labs also published technical analysis on Monday, Jan. 11, 2021 that builds a case for a connection between the SUNBURST backdoor and another backdoor called Kazuar. Kazuar, which Palo Alto Networks’ Unit42 team first described in May 2017 as a “multiplatform espionage backdoor with API access,” is a .NET backdoor that Kaspersky says appears to share several “unusual features” with SUNBURST. (Palo Alto linked Kazuar to the Turla APT group back in 2017, and Kaspersky says their own observations support that attribution, too.)

Shared features Kaspersky has identified so far include the use of FNV-1a hashing throughout Kazuar and SUNBURST code, a similar algorithm used to generate unique victim identifiers, and customized (though not exactly the same) implementations of a sleeping algorithm that delays between connections to a C2 server and makes network activity less obvious. Kaspersky has a full, extremely detailed list of similar and different features across both backdoors in their post.

Kaspersky does not definitively state that the two backdoors are the work of the same actor. Instead, they offer five possible explanations for the similarities they’ve identified between Kazuar and SUNBURST. The potential explanations below have been taken directly from their post:

  1. Sunburst was developed by the same group as Kazuar.
  2. The Sunburst developers adopted some ideas or code from Kazuar, without having a direct connection (they used Kazuar as an inspiration point).
  3. Both groups, DarkHalo/UNC2452 and the group using Kazuar, obtained their malware from the same source.
  4. Some of the Kazuar developers moved to another team, taking knowledge and tools with them.
  5. The Sunburst developers introduced these subtle links as a form of false flag, in order to shift blame to another group.

As Kaspersky notes, knowledge of a potential lineage connection to Kazuar changes little for defenders, but it is worth keeping an eye on: a confirmed connection may help those in more highly targeted sectors use previous Kazuar detection and prevention methods to enhance their response to the SolarWinds compromise.


Thank you 2020: Results, Achievements, and Plans

Post Syndicated from Jekaterina Petruhina original https://blog.zabbix.com/thank-you-2020-results-achievements-and-plans/13184/

2020 was not an easy year, but it challenged and taught us a lot. Let’s sum up the past year’s results together and make some plans for the next one.

Going online

Despite the worldwide lockdown and the entire team’s inability to work from the office, Zabbix continued to evolve and get better. The Zabbix 5.0 release was the first release to be done remotely – without the usual cake and “family” celebration on release day in our cozy office kitchen. Version 5.2 also followed remotely, with the already familiar online celebration in Zoom.

Unexpectedly we had to change the vector of planned activities. In January 2020, we expected to hold ten conferences in different countries of the world. But in reality, due to the pandemic and the restrictions it caused, we were only able to have a few: Zabbix Conference Benelux 2020, Zabbix Conference China 2020, and Zabbix Conference Japan 2020.

In 2020, we were going to have the biggest Zabbix Summit ever – the 10th-anniversary event. But plans changed, and instead we held our first online Zabbix Summit. Although it departed from the format of a traditional Zabbix Summit, it turned out very successful and proved that the online version can also be sustainable. We are grateful to all the speakers and guests who supported us and joined the event.

We can say that 2020 was all about online events. We started a good tradition of holding Zabbix meetups online and received positive feedback from the community. Thanks to our partners’ support, online meetups have been held not only in English and Russian but also in Czech, Italian, French, Portuguese, Spanish, German, and Polish. We will certainly continue this tradition, giving our users worldwide the opportunity to learn Zabbix, share their experience, and meet like-minded peers. Remember that we always record the meetups so you can access the presentations whenever you need them. Recordings are available in the events section of our website, under each event.

Conquering the world

One of the most important events of 2020 for Zabbix was the opening of the office in Brazil. The software and professional services are now even more accessible to Latin American users thanks to language localization and the branch office’s proximity. We are happy to be part of one of Zabbix’s most active communities worldwide and to be able to cooperate on a much greater level. Over the past year, we have expanded our Latin American branch with new Zabbix professionals, partners, and initiatives.

We are also proud of making Zabbix more reachable for users of different languages: with our Latin American office, we launched Spanish and Portuguese versions of the Zabbix web page, and thanks to our active partners, a German site as well. Now we are working on more languages to make our open source solution even more open.

As for the overall results, we can share some statistics with you.

Our integration team has been very active over the past year and has released many useful templates. In total, their efforts resulted in 42 new templates and 20 new Webhook integrations.

The Zabbix partnership network has also grown. The Zabbix Partnership program creates a worldwide network of trusted and highly skilled companies ready to provide immediate technical assistance and act as an accessible intermediary between you and the Zabbix team. This network grew by more than 40 partners worldwide over the past year, bringing the total to around 200 partners.

In the past year, we rethought the professional training program and expanded it with extra training – one-day courses for in-depth study of a specific topic. These courses differ from the rest of the program by having no prerequisites, so you can gain extra knowledge about monitoring with Zabbix without holding a Zabbix certification. Four extra training courses are available now, and we are working on further topics.

Note that we have already published many training sessions for this year, so now is an excellent time to choose and book the courses and timing that suit you best. Don’t forget to use the filter – you can quickly sort courses by language, level, and location (online or on-site).

Speaking of statistics for last year, thanks to the Zabbix team and partners’ efforts, there were 203 training sessions in total. Altogether, 1,214 official Zabbix certificates were awarded to the training attendees who successfully mastered the material and passed the exam.

But education doesn’t end with training. Webinars are an excellent opportunity to learn about different aspects of Zabbix for free. Note that we have also added the ability to watch recorded sessions at your convenience, and remember to use the filter to navigate between sessions – we cover many topics in many different languages, and we want to make sure you don’t miss out. Last year we hosted 245 webinar sessions (including 55 held by Zabbix partners worldwide), and we hope they were beneficial for our community.

A sneak peek to 2021

So what can we expect from Zabbix in 2021? Lots of things! We are not going to stop, and we will continue to work hard. The Zabbix software development roadmap is available on our website – feel free to explore the features we are working on right now! We will continue to hold meetups covering complex technical issues. As for Zabbix Summit 2021, we are planning to hold it online again in October 2021 – the first experience was a success, so why not build on that result? For other news, keep an eye on our updates here on the blog, on social media, and in the newsletter. We have an exciting year ahead of us, so let’s make the most of it!


New – AWS Transfer Family support for Amazon Elastic File System

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-aws-transfer-family-support-for-amazon-elastic-file-system/

AWS Transfer Family provides fully managed Secure File Transfer Protocol (SFTP), File Transfer Protocol over TLS (FTPS), and FTP support for Amazon Simple Storage Service (S3), enabling you to seamlessly migrate your file transfer workflows to AWS.

Today I am happy to announce AWS Transfer Family now also supports file transfers to Amazon Elastic File System (EFS) file systems as well as Amazon S3. This feature enables you to easily and securely provide your business partners access to files stored in Amazon EFS file systems. With this launch, you now have the option to store the transferred files in a fully managed file system and reduce your operational burden, while preserving your existing workflows that use SFTP, FTPS, or FTP protocols.

Amazon EFS file systems are accessible within your Amazon Virtual Private Cloud (VPC) and VPC connected environments. With this launch, you can securely enable third parties such as your vendors, partners, or customers to access your files over the supported protocols at scale globally, without needing to manage any infrastructure. When you select Amazon EFS as the data store for your AWS Transfer Family server, the transferred files are readily available to your business-critical applications running on Amazon Elastic Compute Cloud (EC2), as well as to containerized and serverless applications run using AWS services such as Amazon Elastic Container Service (ECS), Amazon Elastic Kubernetes Service (EKS), AWS Fargate, and AWS Lambda.

Using Amazon EFS – Getting Started
To get started with your existing Amazon EFS file system, make sure the POSIX identities you assign to your SFTP/FTPS/FTP users are owners of the files and directories you want to provide access to. You will provide access to that Amazon EFS file system through a resource-based policy and an IAM role. The role also needs to establish a trust relationship, which allows AWS Transfer Family to assume the AWS Identity and Access Management (IAM) role to access your file system so that it can service your users’ file transfer requests.
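
For reference, the trust relationship on that role is the standard one that allows the Transfer Family service principal to assume it; a minimal sketch:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "transfer.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}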

You will also need to make sure you have created a mount target for your file system. In the example below, the home directory is owned by userid 1234 and groupid 5678.

$ mkdir home/myname            # create the user's home directory
$ chown 1234:5678 home/myname  # set the POSIX owner (uid) and group (gid)

When you create a server in the AWS Transfer Family console, select Amazon EFS as your storage service in the Step 4 section Choose a domain.

When the server is enabled and in an online state, you can add users to your server. On the Servers page, select the check box of the server that you want to add a user to and choose Add user.

In the User configuration section, you can specify the username, uid (e.g., 1234), gid (e.g., 5678), IAM role, and Amazon EFS file system as the user’s home directory. You can optionally specify a directory within the file system to serve as the user’s landing directory. This example uses the service-managed identity type with SSH keys; if you want to use passwords instead, you can use a custom identity provider with AWS Secrets Manager.

Amazon EFS uses POSIX IDs which consist of an operating system user id, group id, and secondary group id to control access to a file system. When setting up your user, you can specify the username, user’s POSIX configuration, and an IAM role to access the EFS file system. To learn more about configuring ownership of sub-directories in EFS, visit the documentation.
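
If you prefer the CLI to the console, the same user can be created in a single call. The following is a sketch under assumed values: the server ID, role ARN, and public key below are placeholders.

$ aws transfer create-user \
    --server-id s-1234567890abcdef0 \
    --user-name myname \
    --role arn:aws:iam::123456789012:role/TransferEfsAccessRole \
    --home-directory /fs-23456789/home/myname \
    --posix-profile Uid=1234,Gid=5678 \
    --ssh-public-key-body "ssh-rsa AAAA...example"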

Once the users have been configured, you can transfer files using the AWS Transfer Family service by specifying the transfer operation in a client. When your user authenticates successfully using their file transfer client, they will be placed directly in the specified home directory, or at the root of the specified EFS file system.

$ sftp myname@<your-transfer-server-endpoint>

sftp> cd /fs-23456789/home/myname
sftp> ls -l
-rw-r--r--    1 1234  5678      3486 Jan 04 14:59 my-file.txt
sftp> put my-newfile.txt
sftp> ls -l
-rw-r--r--    1 1234  5678      3486 Jan 04 14:59 my-file.txt
-rw-r--r--    1 1234  5678      1002 Jan 04 15:22 my-newfile.txt

Most SFTP/FTPS/FTP commands are supported with the new EFS integration. A list of available commands for FTP and FTPS clients is in the documentation; the table below summarizes support across both storage options.

| Command | Amazon S3 | Amazon EFS |
| --- | --- | --- |
| cd | Supported | Supported |
| ls/dir | Supported | Supported |
| pwd | Supported | Supported |
| put | Supported | Supported |
| get | Supported | Supported (including resolving symlinks) |
| rename | Supported (file only) | Supported (file or folder) |
| chown | Not supported | Supported (root only) |
| chmod | Not supported | Supported (root only) |
| chgrp | Not supported | Supported (root or owner only) |
| ln -s | Not supported | Not supported |
| mkdir | Supported | Supported |
| rm | Supported | Supported |
| rmdir | Supported (non-empty folders only) | Supported |
| chmtime | Not supported | Supported |

You can use Amazon CloudWatch to track your users’ activity for file creation, update, delete, read operations, and metrics for data uploaded and downloaded using your server. To learn more on how to enable CloudWatch logging, visit the documentation.
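
For example, logging is enabled by associating a CloudWatch logging role with the server; the sketch below uses a placeholder server ID and role ARN.

$ aws transfer update-server \
    --server-id s-1234567890abcdef0 \
    --logging-role arn:aws:iam::123456789012:role/TransferCloudWatchLoggingRole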

Available Now
AWS Transfer Family support for Amazon EFS file systems is available in all AWS Regions where AWS Transfer Family is available. There are no additional AWS Transfer Family charges for using Amazon EFS as the storage backend. With Amazon EFS storage, you pay only for what you use. There is no need to provision storage in advance and there are no minimum commitments or up-front fees.

To learn more, take a look at the FAQs and the documentation. Please send feedback to the AWS forum for AWS Transfer Family or through your usual AWS support contacts.

Learn all the details about AWS Transfer Family to access Amazon EFS file systems and get started today.

Channy;

Amazon Location – Add Maps and Location Awareness to Your Applications

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-location-add-maps-and-location-awareness-to-your-applications/

We want to make it easier and more cost-effective for you to add maps, location awareness, and other location-based features to your web and mobile applications. Until now, doing this has been somewhat complex and expensive, and also tied you to the business and programming models of a single provider.

Introducing Amazon Location Service
Today we are making Amazon Location available in preview form and you can start using it today. Priced at a fraction of common alternatives, Amazon Location Service gives you access to maps and location-based services from multiple providers on an economical, pay-as-you-go basis.

You can use Amazon Location Service to build applications that know where they are and respond accordingly. You can display maps, validate addresses, perform geocoding (turn an address into a location), track the movement of packages and devices, and much more. You can easily set up geofences and receive notifications when tracked items enter or leave a geofenced area. You can even overlay your own data on the map while retaining full control.

You can access Amazon Location Service from the AWS Management Console, AWS Command Line Interface (CLI), or via a set of APIs. You can also use existing map libraries such as Mapbox GL and Tangram.

All About Amazon Location
Let’s take a look at the types of resources that Amazon Location Service makes available to you, and then talk about how you can use them in your applications.

Maps – Amazon Location Service lets you create maps that make use of data from our partners. You can choose between maps and map styles provided by Esri and by HERE Technologies, with the potential for more maps and styles from these and other partners in the future. After you create a map, you can retrieve a tile (at one of up to 16 zoom levels) using the GetMapTile function. You won’t do this directly, but will use Mapbox GL, Tangram, or another library instead.

Place Indexes – You can choose between indexes provided by Esri and HERE. The indexes support the SearchPlaceIndexForPosition function which returns places, such as residential addresses or points of interest (often known as POI) that are closest to the position that you supply, while also performing reverse geocoding to turn the position (a pair of coordinates) into a legible address. Indexes also support the SearchPlaceIndexForText function, which searches for addresses, businesses, and points of interest using free-form text such as an address, a name, a city, or a region.

Trackers – Trackers receive location updates from one or more devices via the BatchUpdateDevicePosition function, and can be queried for the current position (GetDevicePosition) or location history (GetDevicePositionHistory) of a device. Trackers can also be linked to Geofence Collections to implement monitoring of devices as they move in and out of geofences.

Geofence Collections – Each collection contains a list of geofences that define geographic boundaries. Here’s a geofence (created with geojson.io) that outlines a park near me:

Amazon Location in Action
I can use the AWS Management Console to get started with Amazon Location and then move on to the AWS Command Line Interface (CLI) or the APIs if necessary. I open the Amazon Location Service Console, and I can either click Try it! to create a set of starter resources, or I can open up the navigation on the left and create them one-by-one. I’ll go for one-by-one, and click Maps:

Then I click Create map to proceed:

I enter a Name and a Description:

Then I choose the desired map and click Create map:

The map is created and ready to be added to my application right away:

Now I am ready to embed the map in my application, and I have several options including the Amplify JavaScript SDK, the Amplify Android SDK, the Amplify iOS SDK, Tangram, and Mapbox GL (read the Developer Guide to learn more about each option).

Next, I want to track the position of devices so that I can be notified when they enter or exit a given region. I use a GeoJSON editing tool such as geojson.io to create a geofence that is built from polygons, and save (download) the resulting file:

I click Create geofence collection in the left-side navigation, and in Step 1, I add my GeoJSON file, enter a Name and Description, and click Next:

Now I enter a Name and a Description for my tracker, and click Next. It will be linked to the geofence collection that I just created:

The next step is to arrange for the tracker to send events to Amazon EventBridge so that I can monitor them in CloudWatch Logs. I leave the settings as-is, and click Next to proceed:

I review all of my choices, and click Finalize to move ahead:

The resources are created, set up, and ready to go:

I can then write code or use the CLI to update the positions of my devices:

$ aws location batch-update-device-position \
   --tracker-name MyTracker1 \
   --updates "DeviceId=Jeff1,Position=-122.33805,47.62748,SampleTime=2020-11-05T02:59:07+0000"

After I do this a time or two, I can retrieve the position history for the device:

$ aws location get-device-position-history \
  --tracker-name MyTracker1 --device-id Jeff1
------------------------------------------------
|           GetDevicePositionHistory           |
+----------------------------------------------+
||               DevicePositions              ||
|+---------------+----------------------------+|
||  DeviceId     |  Jeff1                     ||
||  ReceivedTime |  2020-11-05T02:59:17.246Z  ||
||  SampleTime   |  2020-11-05T02:59:07Z      ||
|+---------------+----------------------------+|
|||                 Position                 |||
||+------------------------------------------+||
|||  -122.33805                              |||
|||  47.62748                                |||
||+------------------------------------------+||
||               DevicePositions              ||
|+---------------+----------------------------+|
||  DeviceId     |  Jeff1                     ||
||  ReceivedTime |  2020-11-05T03:02:08.002Z  ||
||  SampleTime   |  2020-11-05T03:01:29Z      ||
|+---------------+----------------------------+|
|||                 Position                 |||
||+------------------------------------------+||
|||  -122.43805                              |||
|||  47.52748                                |||
||+------------------------------------------+||

I can write Amazon EventBridge rules that watch for the events, and use them to perform any desired processing. Events are published when a device enters or leaves a geofenced area, and look like this:

{
  "version": "0",
  "id": "7cb6afa8-cbf0-e1d9-e585-fd5169025ee0",
  "detail-type": "Location Geofence Event",
  "source": "aws.geo",
  "account": "123456789012",
  "time": "2020-11-05T02:59:17.246Z",
  "region": "us-east-1",
  "resources": [
    "arn:aws:geo:us-east-1:123456789012:geofence-collection/MyGeoFences1",
    "arn:aws:geo:us-east-1:123456789012:tracker/MyTracker1"
  ],
  "detail": {
        "EventType": "ENTER",
        "GeofenceId": "LakeUnionPark",
        "DeviceId": "Jeff1",
        "SampleTime": "2020-11-05T02:59:07Z",
        "Position": [-122.33805, 47.52748]
  }
}
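
An EventBridge rule that matches these events only needs to filter on the source and detail-type fields shown above. A minimal sketch (the rule name is hypothetical):

$ aws events put-rule --name MyGeofenceEvents \
    --event-pattern '{"source":["aws.geo"],"detail-type":["Location Geofence Event"]}'

From there, aws events put-targets attaches whatever target should process the events, such as a Lambda function or a CloudWatch Logs group.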

Finally, I can create and use place indexes so that I can work with geographical objects. I’ll use the CLI for a change of pace. I create the index:

$ aws location create-place-index \
  --index-name MyIndex1 --data-source Here

Then I query it to find the addresses and points of interest near the location:

$ aws location search-place-index-for-position --index-name MyIndex1 \
  --position "[-122.33805,47.62748]" --output json \
  |  jq .Results[].Place.Label
"Terry Ave N, Seattle, WA 98109, United States"
"900 Westlake Ave N, Seattle, WA 98109-3523, United States"
"851 Terry Ave N, Seattle, WA 98109-4348, United States"
"860 Terry Ave N, Seattle, WA 98109-4330, United States"
"Seattle Fireboat Duwamish, 860 Terry Ave N, Seattle, WA 98109-4330, United States"
"824 Terry Ave N, Seattle, WA 98109-4330, United States"
"9th Ave N, Seattle, WA 98109, United States"
...

I can also do a text-based search:

$ aws location search-place-index-for-text --index-name MyIndex1 \
  --text Coffee --bias-position "[-122.33805,47.62748]" \
  --output json | jq .Results[].Place.Label
"Mohai Cafe, 860 Terry Ave N, Seattle, WA 98109, United States"
"Starbucks, 1200 Westlake Ave N, Seattle, WA 98109, United States"
"Metropolitan Deli and Cafe, 903 Dexter Ave N, Seattle, WA 98109, United States"
"Top Pot Doughnuts, 590 Terry Ave N, Seattle, WA 98109, United States"
"Caffe Umbria, 1201 Westlake Ave N, Seattle, WA 98109, United States"
"Starbucks, 515 Westlake Ave N, Seattle, WA 98109, United States"
"Cafe 815 Mercer, 815 9th Ave N, Seattle, WA 98109, United States"
"Victrola Coffee Roasters, 500 Boren Ave N, Seattle, WA 98109, United States"
"Specialty's, 520 Terry Ave N, Seattle, WA 98109, United States"
...

Both of the searches have other options; read the Geocoding, Reverse Geocoding, and Search documentation to learn more.

Things to Know
Amazon Location is launching today as a preview, and you can get started with it right away. During the preview we plan to add an API for routing, and will also do our best to respond to customer feedback and feature requests as they arrive.

Pricing is based on usage, with an initial evaluation period that lasts for three months and lets you make numerous calls to the Amazon Location APIs at no charge. After the evaluation period you pay the prices listed on the Amazon Location Pricing page.

Amazon Location is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), and Asia Pacific (Tokyo) Regions.

Jeff;


New –  FreeRTOS Long Term Support to Provide Years of Feature Stability

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-freertos-long-term-support-to-provide-years-of-feature-stability/

Today, I’m particularly happy to announce FreeRTOS Long Term Support (LTS). FreeRTOS is an open source, real-time operating system for microcontrollers that makes small, low-power edge devices easy to program, deploy, secure, connect, and manage. LTS releases offer a more stable foundation than standard releases as manufacturers deploy and later update devices in the field. As planned, LTS now covers the FreeRTOS kernel and a set of FreeRTOS libraries needed for embedded and IoT applications, and for securely connecting microcontroller-based (MCU) devices to the cloud.

Embedded developers at original equipment manufacturers (OEMs) and MCU vendors using FreeRTOS to build long-lived applications on IoT devices now get the predictability and feature stability of an LTS release without compromising access to critical security updates. The FreeRTOS 202012.00 LTS release applies to the FreeRTOS kernel, connectivity libraries (FreeRTOS+TCP, coreMQTT, coreHTTP), security library (PKCS #11 implementation), and AWS library (AWS IoT Device Shadow).

We will provide security updates and critical bug fixes for all these libraries until December 31, 2022.

Benefits of FreeRTOS LTS
Embedded developers at OEMs who use FreeRTOS libraries in long-lived applications want to benefit from the security updates and bug fixes that arrive in the latest FreeRTOS mainline releases. However, mainline releases can introduce new features alongside critical fixes, which can increase the time and effort required for users who want to pick up only the fixes.

An LTS release provides years of feature stability for the included libraries. With an LTS release, no update will change public APIs, file structure, or build processes in ways that could require changes to your application. Security updates and critical bug fixes will be backported at least until Dec. 31, 2022. LTS releases contain updates that address only critical issues, including security vulnerabilities, so integrating them is less disruptive to customers’ development and integration efforts as they approach and move into production. For MCU vendors, this means reduced effort in integrating a stable code base and faster time to market with the vendors’ latest libraries.

Available Now
The FreeRTOS 202012.00 LTS release is available to download now. To learn more, visit FreeRTOS LTS and the documentation. Please send us feedback on the GitHub repository and the FreeRTOS forums.

Channy

Announcing AWS IoT Greengrass 2.0 – With an Open Source Edge Runtime and New Developer Capabilities

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/announcing-aws-iot-greengrass-2-0-with-an-open-source-edge-runtime-and-new-developer-capabilities/

I am happy to announce AWS IoT Greengrass 2.0, a new version of AWS IoT Greengrass that makes it easy for device builders to build, deploy, and manage intelligent device software. AWS IoT Greengrass 2.0 provides an open source edge runtime, a rich set of pre-built software components, tools for local software development, and new features for managing software on large fleets of devices.


The AWS IoT Greengrass 2.0 edge runtime is now open source under an Apache 2.0 license and available on GitHub. Access to the source code allows you to more easily integrate your applications, troubleshoot problems, and build more reliable and performant applications that use AWS IoT Greengrass.

You can add or remove pre-built software components based on your IoT use case and your device’s CPU and memory resources. For example, you can choose to include pre-built AWS IoT Greengrass components such as stream manager only when you need to process data streams with your application, or machine learning components only when you want to perform machine learning inference locally on your devices.

AWS IoT Greengrass 2.0 includes a new command-line interface (CLI) that allows you to locally develop and debug applications on your device. In addition, there is a new local debug console that helps you visually debug applications on your device. With these new capabilities, you can rapidly develop and debug code on a test device before using the cloud to deploy to your production devices.

AWS IoT Greengrass 2.0 is also integrated with AWS IoT thing groups, enabling you to easily organize your devices in groups and manage application deployments across your devices with features to control rollout rates, timeouts, and rollbacks.

AWS IoT Greengrass 2.0 – Getting Started
Device builders can use AWS IoT Greengrass 2.0 by going to the AWS IoT Greengrass console where you can find a download and install command that you run on your device. Once the installer is downloaded to the device, you can use it to install Greengrass software with all essential features, register the device as an AWS IoT Thing, and create a simple “hello world” software component in less than 10 minutes.

To get started in the AWS IoT Greengrass console, you first register a test device by clicking Set up core device. You assign the name and group of your core device. To deploy to only the core device, select No group. In the next step, install the AWS IoT Greengrass Core software in your device.

When the installer completes, you can find your device in the list of AWS IoT Greengrass Core devices on the Core devices page.

AWS IoT Greengrass components enable you to develop and deploy software to your AWS IoT Greengrass Core devices. You can write your application functionality and bundle it as a private component for deployment. AWS IoT Greengrass also provides public components, which provide pre-built software for common use cases that you can deploy to your devices as you develop your device software. When you finish developing the software for your component, you can register it with AWS IoT Greengrass. Then, you can deploy and run the component on your AWS IoT Greengrass Core devices.


To create a component, click the Create component button on the Components page. You can use a recipe or import an AWS Lambda function. The component recipe is a YAML or JSON file that defines the component’s details, dependencies, compatibility, and lifecycle. To learn about the specifications, visit the recipe reference guide.

Here is an example of a YAML recipe.
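
(The original post showed the recipe as an image. Below is an illustrative sketch of the recipe format; the component name, version, and script are hypothetical.)

RecipeFormatVersion: "2020-01-25"
ComponentName: com.example.HelloWorld
ComponentVersion: "1.0.0"
ComponentDescription: A sample component that prints a greeting.
ComponentPublisher: Example Corp
Manifests:
  - Platform:
      os: linux
    Lifecycle:
      # {artifacts:path} resolves to the component's artifact directory
      Run: python3 -u {artifacts:path}/hello_world.py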

When you finish developing your component, you can add it to a deployment configuration to deploy to one or more core devices. To create a new deployment or configure the components to deploy to core devices, click the Create button on the Deployments page. You can deploy to a core device or a thing group as a target, and select the components to deploy. The deployment includes the dependencies for each component that you select.


You can edit the version and parameters of selected components, as well as advanced settings such as the rollout configuration, which defines the rate at which the configuration deploys to the target devices; the timeout configuration, which defines how long each device has to apply the deployment; and the cancel configuration, which defines when to automatically stop the deployment.

Moving to AWS IoT Greengrass 2.0
Existing devices running AWS IoT Greengrass 1.x will continue to run without any changes. If you want to take advantage of new AWS IoT Greengrass 2.0 features, you will need to move your existing AWS IoT Greengrass 1.x devices and workloads to AWS IoT Greengrass 2.0. To learn how to do this, visit the migration guide.

After you move your 1.x applications over, you can start adding components to your applications using new version 2 features, while leaving your version 1 code as-is until you decide to update them.

AWS IoT Greengrass 2.0 Partners
At launch, industry-leading partners NVIDIA and NXP have qualified a number of their devices for AWS IoT Greengrass 2.0.

See all partner device listings in the AWS Partner Device Catalog. To learn about getting your device qualified, visit the AWS Device Qualification Program.

Available Now
AWS IoT Greengrass 2.0 is available today. Please see the AWS Region table for all the regions where AWS IoT Greengrass is available. For more information, see the developer guide.

Starting today, to help you evaluate, test, and develop with this new release of AWS IoT Greengrass, the first 1,000 devices in your account will not incur any AWS IoT Greengrass charges until December 31, 2021. For pricing information, check out the AWS IoT Greengrass pricing page.

Give it a try, and please send us feedback through your usual AWS Support contacts or the AWS forum for AWS IoT Greengrass.

Learn all the details about AWS IoT Greengrass 2.0 and get started with the new version today.

Channy

New – AWS IoT Core for LoRaWAN to Connect, Manage, and Secure LoRaWAN Devices at Scale

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-aws-iot-core-for-lorawan-to-connect-manage-and-secure-lorawan-devices-at-scale/

Today, I am happy to announce AWS IoT Core for LoRaWAN, a new fully-managed feature that allows AWS IoT Core customers to connect and manage wireless devices that use low-power long-range wide area network (LoRaWAN) connectivity with the AWS Cloud.

Using AWS IoT Core for LoRaWAN, customers can now set up a private LoRaWAN network by connecting their own LoRaWAN devices and gateways to the AWS Cloud – without developing or operating a LoRaWAN Network Server (LNS) by themselves. The LNS is required to manage LoRaWAN devices and gateways’ connection to the cloud; gateways serve as a bridge and carry device data to and from the LNS, usually over Wi-Fi or Ethernet.

This allows customers to eliminate the undifferentiated work and operational burden of managing an LNS, and enables them to easily and quickly connect and secure LoRaWAN device fleets at scale.

Combined with the long range and deep in-building coverage provided by LoRa technology, AWS IoT Core now enables customers to accelerate IoT application development using AWS services and to act easily on the data generated by connected LoRaWAN devices.

Customers – mostly enterprises – need to develop IoT applications using devices that transmit data over long range (1-3 miles of urban coverage or up to 10 miles for line-of-sight) or through the walls and floors of buildings, for example for real-time asset tracking at airports, remote temperature monitoring in buildings, or predictive maintenance of industrial equipment. Such applications also require devices to be optimized for low-power consumption, so that batteries can last several years without replacement, thus making the implementation cost-effective. Given the extended coverage of LoRaWAN connectivity, it is attractive to enterprises for these use cases, but setting up LoRaWAN connectivity in a privately managed site requires customers to operate an LNS.

With AWS IoT Core for LoRaWAN, you can connect LoRaWAN devices and gateways to the cloud in a few simple steps in the AWS IoT Management Console, speeding up network setup, and you can connect off-the-shelf LoRaWAN devices without modifying their embedded software, for a plug-and-play experience.

AWS IoT Core for LoRaWAN – Getting Started
Getting started with a LoRaWAN network setup is easy. You can find AWS IoT Core for LoRaWAN qualified gateways and developer kits from the AWS Partner Device Catalog. AWS qualified gateways and developer kits are pre-tested and come with a step by step guide from the manufacturer on how to connect it with AWS IoT Core for LoRaWAN.

With AWS IoT Core console, you can register the gateways by providing a gateway’s unique identifier (provided by the gateway vendor) and selecting LoRa frequency band. For registering devices, you can input device credentials (identifiers and security keys provided by the device vendor) on the console.

Each device has a Device Profile that specifies the device capabilities and boot parameters the LNS requires to set up LoRaWAN radio access service. Using the console, you can select a pre-populated Device Profile or create a new one.

A destination automatically routes messages from LoRaWAN devices to AWS IoT Rules Engine. Once a destination is created, you can use it to map multiple LoRaWAN devices to the same IoT rule. You can write rules using simple SQL queries, to transform and act on the device data, like converting data from proprietary binary to JSON format, raising alerts, or routing it to other AWS services like Amazon Simple Storage Service (S3). From the console, you can also query metrics for connected devices and gateways to troubleshoot connectivity issues.
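
As an illustrative sketch of this flow, the CLI call below creates a destination that forwards device messages to an IoT rule. The destination name, rule name, and role ARN are hypothetical:

$ aws iotwireless create-destination \
    --name ProcessLoRaUplinks \
    --expression-type RuleName \
    --expression MyLoRaWANRule \
    --role-arn arn:aws:iam::123456789012:role/IoTWirelessDestinationRole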

Available Now
AWS IoT Core for LoRaWAN is available today in US East (N. Virginia) and Europe (Ireland) Regions. With pay-as-you-go pricing and no monthly commitments, you can connect and scale LoRaWAN device fleets reliably, and build applications with AWS services quickly and efficiently. For more information, see the pricing page.

To get started, buy an AWS qualified LoRaWAN developer kit and launch the Getting Started experience in the AWS Management Console. To learn more, visit the developer guide. Give this a try, and please send us feedback either through your usual AWS Support contacts or the AWS forum for AWS IoT.

Learn all the details about AWS IoT Core for LoRaWAN and get started with the new feature today.

Channy

Join the Preview – Amazon Managed Service for Prometheus (AMP)

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/join-the-preview-amazon-managed-service-for-prometheus-amp/

Observability is an essential aspect of running cloud infrastructure at scale. You need to know that your resources are healthy and performing as expected, and that your system is delivering the desired level of performance to your customers.

A lot of challenges arise when monitoring container-based applications. First, because container resources are transient and there are lots of metrics to watch, the monitoring data has strikingly high cardinality. In plain language this means that there are lots of unique values, which can make it harder to define a space-efficient storage model and to create queries that return meaningful results. Second, because a well-architected container-based system is composed using a large number of moving parts, ingesting, processing, and storing the monitoring data can become an infrastructure challenge of its own.

Prometheus is a leading open-source monitoring solution with an active developer and user community. It has a multi-dimensional data model that is a great fit for time series data collected from containers.

Introducing Amazon Managed Service for Prometheus (AMP)
Today we are launching a preview of Amazon Managed Service for Prometheus (AMP). This fully-managed service is 100% compatible with Prometheus. It supports the same metrics, the same PromQL queries, and can also make use of the 150+ Prometheus exporters. AMP runs across multiple Availability Zones for high availability, and is powered by CNCF Cortex for horizontal scalability. AMP will easily scale to ingest, store, and query millions of time series metrics.

The preview includes support for Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS). It can also be used to monitor your self-managed Kubernetes clusters that are running in the cloud or on-premises.

Getting Started with Amazon Managed Service for Prometheus (AMP)
After joining the preview, I open the AMP Console, enter a name for my AMP workspace, and click Create to get started (API and CLI support is also available):

My workspace is active within a minute or so. The console provides me with the endpoints that I can use to write data to my workspace, and to issue queries:

It also provides guidance on how to configure an existing Prometheus server to send metrics to the AMP workspace:
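
As a sketch, the relevant prometheus.yml fragment points remote_write at the workspace endpoint (the workspace ID and Region below are placeholders). Requests to AMP must be SigV4-signed; recent Prometheus builds can do this natively via the sigv4 block shown, while older builds need a signing proxy in front of the endpoint:

remote_write:
  - url: https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-EXAMPLE1234/api/v1/remote_write
    sigv4:
      region: us-east-1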

I can also use AWS Distro for OpenTelemetry to scrape Prometheus metrics and send them to my AMP workspace.

Once I have stored some metrics in my workspace, I can run PromQL queries and I can use Grafana to create dashboards and other visualizations. Here’s a sample Grafana dashboard:

Join the Preview
As noted earlier, we’re launching Amazon Managed Service for Prometheus (AMP) in preview form and you are welcome to try it out today.

We’ll have more info (and a more detailed blog post) at launch time.

Jeff;

New – AWS Systems Manager Consolidates Application Management

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/new-aws-systems-manager-consolidates-application-management/

A desire for consolidated and simplified operational oversight isn’t limited to cloud infrastructure. Increasingly, customers ask us for a “single pane of glass” approach to monitoring and managing their application portfolios as well.

These customers tell us that detecting and investigating application issues takes additional time and effort, because their DevOps engineers typically have to consult multiple consoles, tools, and sources of information (such as resource usage metrics and logs) to obtain context about the issue under investigation. Here, an “application” means not just the application code but also the logical group of resources that act as a unit to host the application, along with ownership boundaries for operators and environments such as development, staging, and production.

Today, I’m pleased to announce a new feature of AWS Systems Manager, called Application Manager. Application Manager aggregates operational information from multiple AWS services and Systems Manager capabilities into a single console, making it easier to view operational data for your applications.

To make it even more convenient, the service can automatically discover your applications. Today, auto-discovery is available for applications running in AWS CloudFormation stacks and Amazon Elastic Kubernetes Service (EKS) clusters, or launched using AWS Launch Wizard. Applications can also be discovered from Resource Groups.

A particular benefit of automated discovery is that application components and resources are automatically kept up-to-date on an ongoing basis, but you can also always revise applications as needed by adding or deleting components manually.

With applications discovered and consolidated into a single console, you can more easily diagnose operational issues and resolve them with minimal time and effort. Automated runbooks targeting an application component or resource can be run to help remediate operational issues. For any given application, you can select a resource and explore relevant details without needing to leave the console.

For example, the application can surface Amazon CloudWatch logs, operational metrics, AWS CloudTrail logs, and configuration changes, removing the need to engage with multiple tools or consoles. This means your on-call engineers can understand issues more quickly and reduce the time needed to resolve them.

Exploring an Application with Application Manager
I can access Application Manager from the Systems Manager home page. Once open, I get an overview of my discovered applications and can see immediately that there are some alarms, without needing to switch context to the Amazon CloudWatch console, and some operations items (“OpsItems”) that I might need to pay attention to. I can also switch to the Applications tab to view the collections of applications, or I can click the buttons in the Applications panel for the collection I’m interested in.

Screenshot of the Application Manager overview page

In the screenshot below, I’ve navigated to a sample application, and again there are indicators showing that alarms have been raised. The various tabs enable me to drill into more detail to view resources used by the application, config resource and rules compliance, monitoring alarms, logs, and automation runbooks associated with the application.

Screenshot of application components and overview

Clicking on the Alarm indicator takes me into the Monitoring tab, and it shows that the ConsumedWriteCapacityUnits alarm has been raised. I can change the timescale to zero in on when the event occurred, or I can use the View recent alarms dashboard link to jump into the Amazon CloudWatch Alarms console to view more detail.

Screenshot of alarms on the Application Manager Monitoring tab

The Logs tab shows me a consolidated list of log groups for the application, and clicking a log group name takes me directly to the CloudWatch Logs where I can inspect the log streams, and take advantage of Log Insights to dive deeper by querying the log data.

OpsItems shows me operational issues associated with the resources of my application, and enables me to indicate the current status of the issue (open, in progress, resolved). Below, I am marking investigation of a stopped EC2 instance as in progress.

Screenshot of Application Manager OpsItems tab

Finally, Runbooks shows me automation documents associated with the application and their execution status. Below, it’s showing that I ran the AWS-RestartEC2Instance automation document to restart the EC2 instance that was stopped, and I would now resolve the issue logged in the OpsItems tab.

Screenshot of Application Manager's Runbooks tab

Consolidating this information into a single console gives engineers a single starting location to monitor and investigate issues arising with their applications, and automatic discovery of applications and resources makes getting started simple. AWS Systems Manager Application Manager is available today, at no extra charge, in all public AWS Regions where Systems Manager is available.

Learn more about Application Manager and get started at AWS Systems Manager.

— Steve

New – AWS Systems Manager Fleet Manager

Post Syndicated from Steve Roberts original https://aws.amazon.com/blogs/aws/new-aws-systems-manager-fleet-manager/

Organizations, and their systems administrators, routinely face challenges in managing increasingly diverse portfolios of IT infrastructure across cloud and on-premises environments. Different tools, consoles, services, operating systems, procedures, and vendors all contribute to complicate relatively common, and related, management tasks. As workloads are modernized to adopt Linux and open-source software, those same systems administrators, who may be more familiar with GUI-based management tools from a Windows background, have to continually adapt and quickly learn new tools, approaches, and skill sets.

AWS Systems Manager is an operational hub enabling you to manage resources on AWS and on-premises. Available today, Fleet Manager is a new console-based experience in Systems Manager that enables systems administrators to view and administer their fleets of managed instances from a single location, in an operating-system-agnostic manner, without needing to resort to remote connections with SSH or RDP. As described in the documentation, managed instances include those running Windows, Linux, and macOS operating systems, in both the AWS Cloud and on-premises. Fleet Manager gives you an aggregated view of your compute instances regardless of where they exist.

All that’s needed, whether for cloud or on-premises servers, is the Systems Manager agent installed on each server to be managed, some AWS Identity and Access Management (IAM) permissions, and AWS Key Management Service (KMS) enabled for Systems Manager‘s Session Manager. This makes it an easy and cost-effective approach for remote management of servers running in multiple environments without needing to pay the licensing cost of expensive management tools you may be using today. As noted earlier, it also works with instances running macOS. With the agent software and permissions set up, Fleet Manager enables you to explore and manage your servers from a single console environment. For example, you can navigate file systems, work with the registry on Windows servers, manage users, and troubleshoot logs (including viewing Windows event logs) and monitor common performance counters without needing the Amazon CloudWatch agent to be installed.

Exploring an Instance With Fleet Manager
To get started exploring my instances using Fleet Manager, I first head to the Systems Manager console. There, I select the new Fleet Manager entry on the navigation toolbar. I can also select the Managed Instances option – Fleet Manager replaces Managed Instances going forward, but the original navigation toolbar entry will be kept for backwards compatibility for a short while. But, before we go on to explore my instances, I need to take you on a brief detour.

When you select Fleet Manager, as with some other views in Systems Manager, a check is performed to verify that a role, named AmazonSSMRoleForInstancesQuickSetup, exists in your account. If you’ve used other components of Systems Manager in the past, it’s quite possible that it does. The role is used to permit Systems Manager to access your instances on your behalf and if the role exists, then you’re directed to the requested view. If however the role doesn’t exist, you’ll first be taken to the Quick Setup view. This in itself will trigger creation of the role, but you might want to explore the capabilities of Quick Setup, which you can also access any time from the navigation toolbar.

Quick Setup is a feature of Systems Manager that you can use to set up specific configuration items, such as the Systems Manager and CloudWatch agents on your instances (and keep them up to date), as well as IAM roles permitting Systems Manager components access to your resources. For this post, all the instances I’m going to use already have the required agent and role permissions set up, so I’m not going to discuss this view further, but I encourage you to check it out. I also want to remind you that to take full advantage of Fleet Manager’s capabilities, you first need KMS encryption enabled for your instances, and second, the role attached to your Amazon Elastic Compute Cloud (EC2) instances must include the kms:Decrypt permission, referencing the key you selected when you enabled KMS encryption. You can enable encryption and select the KMS key in the Preferences section of the Session Manager console, and you can set up the role permission in the IAM console.
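
A minimal sketch of that permission statement (the account ID, Region, and key ID below are placeholders) looks like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    }
  ]
}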

That’s it for the diversion; if you have the role already, as I do, you’ll now be at the Managed instances list view. If you’re at Quick Setup instead, simply click the Fleet Manager navigation button once more.

The Managed instances view shows me all of my instances, in the cloud or on-premises, that I can access. Selecting an instance, in this case an EC2 Windows instance launched using AWS Elastic Beanstalk, and clicking Instance actions presents me with a menu of options. The options (less those specific to Windows) are available for my Amazon Linux instance too, and for instances running macOS I can use the View file system option.

Screenshot of Fleet Manager's Managed instances view

The File system view displays a read-only view onto the file system of the selected instance. This can be particularly useful for viewing text-based log files, for example, where I can preview up to 10,000 lines of a log file and even tail it to view changes as the log updates. I used this to open and tail an IIS web server log on my Windows Server instance. Having selected the instance, I next select View file system from the Instance actions dropdown (or I can click the Instance ID to open a view onto that instance and select File system from the menu displayed on the instance view).

Having opened the file system view for my instance, I navigate to the folder on the instance containing the IIS web server logs.

Screenshot of Fleet Manager's File system view

Selecting a log file, I then click Actions and select Tail file. This opens a view onto the log file contents, which updates automatically as new content is written.

Screenshot of tailing a log file in Fleet Manager

As I mentioned, the File system view is also accessible for macOS-based instances. For example, here is a screenshot of viewing the Applications folder on an EC2 macOS instance.

Screenshot of macOS file system view in Fleet Manager

Next, let’s examine the Performance counters view, which is available for both Windows and Linux instances. This view displays CPU, memory, network traffic, and disk I/O, and will be familiar to Windows users from Task Manager. The metrics shown reflect the guest OS, whereas the EC2 instance metrics you may be used to are reported by the hypervisor. On this particular instance I’ve deployed an ASP.NET Core 5 application, which generates a varying-length collection of Fibonacci numbers on page refresh. Below is a snapshot of the counters after I’ve put the instance under a small amount of load. The view updates automatically every 5 seconds.

Screenshot of Fleet Manager's Performance Counters view

There are more views available than I have space for in this post. Using the Windows Registry view, I can view and edit the registry on the selected Windows instance. Windows event logs gives me access to the Application and Service logs, and common Windows logs such as System, Setup, Security, etc. With Users and groups I can manage users or groups, including assignment of users to groups (again for both Windows and Linux instances). For all views, Fleet Manager enables me to use a single and convenient console.

Getting Started
AWS Systems Manager Fleet Manager is available today for use with managed instances running Windows, Linux, and macOS. Information on pricing, for this and other Systems Manager features, can be found at this page.

Learn more, and get started with Fleet Manager today, at AWS Systems Manager.

— Steve