Identifying publicly accessible resources with Amazon VPC Network Access Analyzer

Post Syndicated from Patrick Duffy original https://aws.amazon.com/blogs/security/identifying-publicly-accessible-resources-with-amazon-vpc-network-access-analyzer/

Network and security teams often need to evaluate the internet accessibility of all their resources on AWS and block any non-essential internet access. Validating who has access to what can be complicated—there are several different controls that can prevent or authorize access to resources in your Amazon Virtual Private Cloud (Amazon VPC). The recently launched Amazon VPC Network Access Analyzer helps you understand potential network paths to and from your resources without having to build automation or manually review security groups, network access control lists (network ACLs), route tables, and Elastic Load Balancing (ELB) configurations. You can use this information to add security layers, such as moving instances to a private subnet behind a NAT gateway or moving APIs behind AWS PrivateLink, rather than using public internet connectivity. In this blog post, we show you how to use Network Access Analyzer to identify publicly accessible resources.

What is Network Access Analyzer?

Network Access Analyzer allows you to evaluate your network against your design requirements and network security policy. You can specify your network security policy for resources on AWS through a Network Access Scope. Network Access Analyzer evaluates the configuration of your Amazon VPC resources and controls, such as security groups, elastic network interfaces, Amazon Elastic Compute Cloud (Amazon EC2) instances, load balancers, VPC endpoint services, transit gateways, NAT gateways, internet gateways, VPN gateways, VPC peering connections, and network firewalls.

Network Access Analyzer uses automated reasoning to produce findings of potential network paths that don’t meet your network security policy. Network Access Analyzer reasons about all of your Amazon VPC configurations together rather than in isolation. For example, it produces findings for paths from an EC2 instance to an internet gateway only when the following conditions are met: the security group allows outbound traffic, the network ACL allows outbound traffic, and the instance’s route table has a route to an internet gateway (possibly through a NAT gateway, network firewall, transit gateway, or peering connection). Network Access Analyzer produces actionable findings with more context such as the entire network path from the source to the destination, as compared to the isolated rule-based checks of individual controls, such as security groups or route tables.

Sample environment

Let’s walk through a real-world example of using Network Access Analyzer to detect publicly accessible resources in your environment. Figure 1 shows an environment for this evaluation, which includes the following resources:

  • An EC2 instance in a public subnet allowing inbound public connections on port 80/443 (HTTP/HTTPS).
  • An EC2 instance in a private subnet allowing connections from an Application Load Balancer on port 80/443.
  • An Application Load Balancer in a public subnet with a Target Group connected to the private web server, allowing public connections on port 80/443.
  • An Amazon Aurora database in a public subnet allowing public connections on port 3306 (MySQL).
  • An Aurora database in a private subnet.
  • An EC2 instance in a public subnet allowing public connections on port 9200 (OpenSearch/Elasticsearch).
  • An Amazon EMR cluster allowing public connections on port 8080.
  • A Windows EC2 instance in a public subnet allowing public connections on port 3389 (Remote Desktop Protocol).
Figure 1: Example environment of web servers hosted on EC2 instances, remote desktop servers hosted on EC2, Relational Database Service (RDS) databases, Amazon EMR cluster, and OpenSearch cluster on EC2

Let us assume that your organization’s security policy requires that your databases and analytics clusters not be directly accessible from the internet, whereas certain workloads, such as instances for web services, can have internet access only through an Application Load Balancer over ports 80 and 443. Network Access Analyzer allows you to evaluate network access to resources in your VPCs, including database resources such as Amazon RDS and Amazon Aurora clusters, and analytics resources such as Amazon OpenSearch Service clusters and Amazon EMR clusters. This allows you to govern network access to your resources on AWS by identifying network access that does not meet your security policies and creating exclusions for paths that do have the appropriate network controls in place.

Configure Network Access Analyzer

In this section, you will learn how to create network scopes, analyze the environment, and review the findings produced. You can create network access scopes by using the AWS Command Line Interface (AWS CLI) or AWS Management Console. When creating network access scopes using the AWS CLI, you can supply the scope by using a JSON document. This blog post provides several network access scopes as JSON documents that you can deploy to your AWS accounts.
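
To give a sense of the structure before you create anything, here is a minimal sketch of what one of these scope documents can look like: it matches paths from an internet gateway to anything listening on TCP port 3306. The field names follow the MatchPaths/ResourceStatement/PacketHeaderStatement schema accepted by create-network-insights-access-scope; the scope documents provided with this post may be more elaborate.

{
    "MatchPaths": [
        {
            "Source": {
                "ResourceStatement": {
                    "ResourceTypes": ["AWS::EC2::InternetGateway"]
                }
            },
            "Destination": {
                "PacketHeaderStatement": {
                    "DestinationPorts": ["3306"],
                    "Protocols": ["tcp"]
                }
            }
        }
    ]
}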

To create a network scope (AWS CLI)

  1. Verify that you have the AWS CLI installed and configured.
  2. Download the network-scopes.zip file, which contains JSON documents that detect the following publicly accessible resources:
    • OpenSearch/Elasticsearch clusters
    • Databases (MySQL, PostgreSQL, MSSQL)
    • EMR clusters
    • Windows Remote Desktop
    • Web servers that can be accessed without going through a load balancer

    Make note of the folder where you save the JSON scopes because you will need it for the next step.

  3. Open a system shell, such as Bash, Zsh, or cmd.
  4. Navigate to the folder where you saved the preceding JSON scopes.
  5. Run the following commands in the shell window:
    aws ec2 create-network-insights-access-scope \
      --cli-input-json file://detect-public-databases.json \
      --tag-specifications 'ResourceType="network-insights-access-scope",Tags=[{Key="Name",Value="detect-public-databases"},{Key="Description",Value="Detects publicly accessible databases."}]' \
      --region us-east-1

    aws ec2 create-network-insights-access-scope \
      --cli-input-json file://detect-public-elastic.json \
      --tag-specifications 'ResourceType="network-insights-access-scope",Tags=[{Key="Name",Value="detect-public-opensearch"},{Key="Description",Value="Detects publicly accessible OpenSearch/Elasticsearch endpoints."}]' \
      --region us-east-1

    aws ec2 create-network-insights-access-scope \
      --cli-input-json file://detect-public-emr.json \
      --tag-specifications 'ResourceType="network-insights-access-scope",Tags=[{Key="Name",Value="detect-public-emr"},{Key="Description",Value="Detects publicly accessible Amazon EMR endpoints."}]' \
      --region us-east-1

    aws ec2 create-network-insights-access-scope \
      --cli-input-json file://detect-public-remotedesktop.json \
      --tag-specifications 'ResourceType="network-insights-access-scope",Tags=[{Key="Name",Value="detect-public-remotedesktop"},{Key="Description",Value="Detects publicly accessible Microsoft Remote Desktop servers."}]' \
      --region us-east-1

    aws ec2 create-network-insights-access-scope \
      --cli-input-json file://detect-public-webserver-noloadbalancer.json \
      --tag-specifications 'ResourceType="network-insights-access-scope",Tags=[{Key="Name",Value="detect-public-webservers"},{Key="Description",Value="Detects publicly accessible web servers that can be accessed without using a load balancer."}]' \
      --region us-east-1


Now that you’ve created the scopes, you can analyze them to find resources that match the conditions you defined.
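
If you prefer to stay in the CLI, you can also run the analysis there. The following sketch uses the start-network-insights-access-scope-analysis and get-network-insights-access-scope-analysis-findings commands; the scope and analysis IDs are placeholders for the values returned by the previous commands.

# Start an analysis for one of the scopes created above (placeholder scope ID)
aws ec2 start-network-insights-access-scope-analysis \
  --network-insights-access-scope-id nis-0123456789abcdef0 \
  --region us-east-1

# After the analysis succeeds, retrieve its findings (placeholder analysis ID)
aws ec2 get-network-insights-access-scope-analysis-findings \
  --network-insights-access-scope-analysis-id nisa-0123456789abcdef0 \
  --region us-east-1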

To analyze your scopes (console)

  1. Open the Amazon VPC console.
  2. In the navigation pane, under Network Analysis, choose Network Access Analyzer.
  3. Under Network Access Scopes, select the checkboxes next to the scopes that you want to analyze, and then choose Analyze, as shown in Figure 2.
    Figure 2: Custom network scopes created for Network Access Analyzer

If Network Access Analyzer detects findings, the console indicates the status Findings detected for each scope, as shown in Figure 3.

Figure 3: Network Access Analyzer scope status

To review findings for a scope (console)

  1. On the Network Access Scopes page, under Network Access Scope ID, select the link for the scope that has the findings that you want to review. This opens the latest analysis, with the option to review past analyses, as shown in Figure 4.
    Figure 4: Finding summary identifying Amazon Aurora instance with public access to port 3306

  2. To review the path for a specific finding, under Findings, select the radio button to the left of the finding, as shown in Figure 4. Figure 5 shows an example of a path for a finding.
    Figure 5: Finding details showing access to the Amazon Aurora instance from the internet gateway to the elastic network interface, allowed by a network ACL and security group.

  3. Choose any resource in the path for detailed information, as shown in Figure 6.
    Figure 6: Resource detail within a finding outlining a specific security group allowing access on port 3306

How to remediate findings

After deploying network scopes and reviewing findings for publicly accessible resources, your next step is to limit access to those resources and remove public access. Use cases vary, but the scopes outlined in this post identify resources that you should either share in a more secure manner or remove public access from entirely. The following techniques will help you align to the Protecting Networks portion of the AWS Well-Architected Framework Security Pillar.

If you need to share a database with external entities, consider using AWS PrivateLink, VPC peering, or AWS Site-to-Site VPN to share access. You can remove public access by modifying the security group attached to the RDS instance, or the EC2 instance serving the database, but you should migrate the database to a private subnet as well.
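
For example, if the database’s security group still allows MySQL from anywhere, a minimal sketch of removing that rule with the AWS CLI might look like the following (the group ID is a placeholder):

# Remove the world-open MySQL ingress rule (placeholder group ID)
aws ec2 revoke-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 3306 \
  --cidr 0.0.0.0/0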

When creating web servers in EC2, you should not place web servers directly in a public subnet with security groups allowing HTTP and HTTPS ports from all internet addresses. Instead, place your EC2 instances in private subnets and use Application Load Balancers in a public subnet. From there, attach a security group that allows HTTP/HTTPS access from public internet addresses to your Application Load Balancer, and attach a security group that allows HTTP/HTTPS from your load balancer’s security group to your web server EC2 instances. You can also associate AWS WAF web ACLs with the load balancer to protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources.
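
As a sketch of that security group chaining (both group IDs are placeholders), note that the web server rule references the load balancer’s security group instead of a CIDR range:

# Allow HTTPS from the internet to the load balancer security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaaaaaaaaaaaaa \
  --protocol tcp --port 443 --cidr 0.0.0.0/0

# Allow HTTPS to the web servers only from the load balancer security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0bbbbbbbbbbbbbbbb \
  --protocol tcp --port 443 \
  --source-group sg-0aaaaaaaaaaaaaaaa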

Similarly, if you have OpenSearch/Elasticsearch running on EC2 or Amazon OpenSearch Service, or are using Amazon EMR, you can share these resources using PrivateLink. Use the Amazon EMR block public access configuration to verify that your EMR clusters are not shared publicly.
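
A sketch of enabling that Region-wide setting from the CLI, assuming you want SSH (port 22) to remain the only permitted public rule:

# Block public security group rules for EMR clusters in this Region, except SSH
aws emr put-block-public-access-configuration \
  --block-public-access-configuration '{
    "BlockPublicSecurityGroupRules": true,
    "PermittedPublicSecurityGroupRuleRanges": [{"MinRange": 22, "MaxRange": 22}]
  }'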

To connect to Remote Desktop on EC2 instances, you should use AWS Systems Manager to connect using Fleet Manager. Connecting with Fleet Manager only requires that your Windows EC2 instances be managed nodes. When connecting using Fleet Manager, the security group requires no inbound ports, and the instance can be in a private subnet. For more information, see the Systems Manager prerequisites.
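
If you prefer the command line to the Fleet Manager console, Session Manager port forwarding offers a similar no-inbound-ports path to RDP; a sketch with a placeholder instance ID and local port:

# Tunnel RDP over Session Manager; no inbound security group rules needed
aws ssm start-session \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["3389"],"localPortNumber":["56789"]}'

You can then point your RDP client at localhost:56789.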

Conclusion

This blog post demonstrates how you can identify and remediate publicly accessible resources. Amazon VPC Network Access Analyzer helps you identify available network paths by using automated reasoning technology and user-defined access scopes. By using these scopes, you can define non-permitted network paths, identify resources that have those paths, and then take action to increase your security posture. To learn more about building continuous verification of network compliance at scale, see the blog post Continuous verification of network compliance using Amazon VPC Network Access Analyzer and AWS Security Hub. Take action today by deploying the Network Access Analyzer scopes in this post to evaluate your environment and add layers of security to best fit your needs.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Authors

Patrick Duffy

Patrick is a Solutions Architect in the Small Medium Business (SMB) segment at AWS. He is passionate about raising awareness and increasing security of AWS workloads. Outside work, he loves to travel and try new cuisines and enjoys a match in Magic Arena or Overwatch.

Peter Ticali

Peter is a Solutions Architect focused on helping Media & Entertainment customers transform and innovate. With over three decades of professional experience, he’s had the opportunity to contribute to architectures that stream live video to millions, including two Super Bowls, PPVs, and even a Royal Wedding. Previously, he held Director and CTO roles in the EdTech, advertising, and public relations space. Additionally, he is a published photojournalist.

John Backes

John is a Senior Applied Scientist in AWS Networking. He is passionate about applying Automated Reasoning to network verification and synthesis problems.

Vaibhav Katkade

Vaibhav is a Senior Product Manager in the Amazon VPC team. He is interested in areas of network security and cloud networking operations. Outside of work, he enjoys cooking and the outdoors.

Happy 10th Anniversary, Amazon S3 Glacier – A Decade of Cold Storage in the Cloud

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/happy-10th-anniversary-amazon-s3-glacier-a-decade-of-cold-storage-in-the-cloud/

Ten years ago, on August 20, 2012, AWS announced the general availability of Amazon Glacier: secure, reliable, and extremely low-cost storage designed for data archiving and backup. At the time, I was working as an AWS customer, and it felt like an April Fools’ joke: long-term, secure, and durable cloud storage that allowed me to archive large amounts of data at a very low cost.

In Jeff’s original blog post for this launch, he noted that:

Glacier provides, at a cost as low as $0.01 (one US penny, one one-hundredth of a dollar) per Gigabyte per month, extremely low-cost archive storage. You can store a little bit, or you can store a lot (terabytes, petabytes, and beyond). There’s no upfront fee, and you pay only for the storage that you use. You don’t have to worry about capacity planning, and you will never run out of storage space.

Ten years later, Amazon S3 Glacier has evolved to be the best place in the world for you to store your archive data. The Amazon S3 Glacier storage classes are purpose-built for data archiving, providing you with the highest performance, most retrieval flexibility, and the lowest cost archive storage in the cloud.

You can now choose from three archive storage classes optimized for different access patterns and storage duration – Amazon S3 Glacier Instant Retrieval, Amazon S3 Glacier Flexible Retrieval (formerly Amazon S3 Glacier), and Amazon S3 Glacier Deep Archive. We’ll dive into each of these storage classes in a bit.

A Decade of Innovation in Amazon S3 Glacier
To understand how we got here, we’ll walk through the last decade and revisit some of the most significant Amazon S3 Glacier launches that fundamentally changed archive storage forever:

August 2012 – Amazon Glacier: Archival Storage for One Penny per GB per Month
We launched Amazon Glacier to store any amount of data with high durability at a cost that allows you to get rid of your tape libraries and all the operational complexity and overhead that have been part of data archiving for decades. Amazon Glacier was modeled on S3’s durability and dependability but designed and built from the ground up to offer archival storage at an extremely low cost. At that time, Glacier introduced the concept of a “vault” for storing archival data. You could retrieve your archival data by initiating a request, and the data was then made available for download in 3–5 hours.

November 2012 – Archiving Amazon S3 Data to Glacier
While Glacier was purpose-built from the ground up for archival data, many customers had object data that originated in warmer S3 storage that they would eventually want to move to colder storage. To make that easy for customers, Amazon S3’s Lifecycle Management (aka Lifecycle Rules) integrated S3 and Glacier and made the details visible via the storage class of each object. Lifecycle Management allows you to define time-based rules that trigger Transition (changing the S3 storage class to Glacier) and Expiration (deletion of objects). In 2014, we combined the flexibility of S3 versioned objects with Glacier, helping you to further reduce your overall storage costs.
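
As a sketch, a lifecycle rule that transitions objects under a prefix to Glacier after 90 days might look like this with today’s CLI (bucket name and prefix are placeholders):

aws s3api put-bucket-lifecycle-configuration \
  --bucket my-archive-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-old-logs",
      "Status": "Enabled",
      "Filter": {"Prefix": "logs/"},
      "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}]
    }]
  }'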

November 2016 – Glacier Price Reductions and Additional Retrieval Options for Glacier
As part of AWS’s long-term focus on reducing costs and passing along those savings to customers, we reduced the price of Glacier storage to $0.004 per GB per month (less than half a cent) in the US East (N. Virginia) Region, down from $0.007 in 2015 and $0.010 in 2012. To keep storage costs very low while giving you flexibility in how quickly you can retrieve your data, we introduced two more retrieval options, priced based on the amount of data you stored in Glacier and the rate at which you retrieved it. You could select expedited retrieval (typically taking 1–5 minutes), bulk retrieval (5–12 hours), or the existing standard retrieval method (3–5 hours).

November 2018 – Amazon S3 Glacier Storage Class to Integrate S3 Experiences
Glacier customers appreciated the way they could easily move data from S3 to Glacier via S3 Lifecycle Management, and wanted us to expand on that capability so they could use the most common S3 APIs to operate directly on S3 Glacier objects. So we added S3 Glacier support to the standard S3 PUT API, which enables you to select any storage class, including S3 Glacier, when storing data. Data can be stored directly in S3 Glacier, eliminating the need to upload to S3 Standard and immediately transition it to S3 Glacier with a zero-day lifecycle policy. In short, you could PUT to S3 Glacier like any other S3 storage class.
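
A sketch of such a direct upload, with placeholder bucket and key:

# PUT an object directly into the S3 Glacier storage class
aws s3api put-object \
  --bucket my-archive-bucket \
  --key backups/2018-11.tar.gz \
  --body ./2018-11.tar.gz \
  --storage-class GLACIER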

March 2019 – Amazon S3 Glacier Deep Archive – the Lowest Cost Storage in the Cloud
While the original Glacier service offered an extremely low price for archival storage, we challenged ourselves to see if we could find a way to invent an even lower priced storage offering for very cold data. The Amazon S3 Glacier Deep Archive storage class delivers the lowest cost storage, up to 75 percent lower cost (than S3 Glacier Flexible Retrieval), for long-lived archive data that is accessed less than once per year and is retrieved asynchronously. At just $0.00099 per GB-month (or $1 per TB-month), S3 Glacier Deep Archive offers the lowest cost storage in the cloud at prices significantly lower than storing and maintaining data in on-premises tape or archiving data off-site.

November 2020 – Amazon S3 Intelligent-Tiering adds Archive Access and Deep Archive Access tiers
In November 2018, we launched Amazon S3 Intelligent-Tiering, the only cloud storage class that delivers automatic storage cost savings, of up to 95 percent, when data access patterns change, without performance impact or operational overhead. To offer customers the simplicity and flexibility of S3 Intelligent-Tiering together with the low storage cost of archival data, we added the Archive Access tier, which provides the same performance and pricing as the S3 Glacier storage class, and the Deep Archive Access tier, which offers the same performance and pricing as the S3 Glacier Deep Archive storage class.
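
These archive tiers are opt-in per bucket; a sketch of enabling them (bucket and configuration names are placeholders):

# Enable the opt-in Archive Access and Deep Archive Access tiers for a bucket
aws s3api put-bucket-intelligent-tiering-configuration \
  --bucket my-data-bucket \
  --id archive-tiers \
  --intelligent-tiering-configuration '{
    "Id": "archive-tiers",
    "Status": "Enabled",
    "Tierings": [
      {"Days": 90,  "AccessTier": "ARCHIVE_ACCESS"},
      {"Days": 180, "AccessTier": "DEEP_ARCHIVE_ACCESS"}
    ]
  }'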

November 2021 – Amazon S3 Glacier Flexible Retrieval and S3 Glacier Instant Retrieval
The Amazon S3 Glacier storage class was renamed to Amazon S3 Glacier Flexible Retrieval and now includes free bulk retrievals along with an additional 10 percent price reduction across all Regions, making it optimized for use cases such as backup and disaster recovery.

Additionally, customers asked us for a storage solution that had the low costs of Glacier but allowed for fast access when data was needed very quickly. So, we introduced Amazon S3 Glacier Instant Retrieval, a new archive storage class that delivers the lowest cost storage for long-lived data that is rarely accessed and requires milliseconds retrieval. You can save up to 68 percent on storage costs compared to using the S3 Standard-Infrequent Access (S3 Standard-IA) storage class when your data is accessed once per quarter.

The Amazon S3 Intelligent-Tiering storage class also recently added a new Archive Instant Access tier, which provides the same performance and pricing as the S3 Glacier Instant Retrieval storage class and delivers automatic cost savings of up to 68 percent for customers using S3 Intelligent-Tiering with long-lived data.

Then and Now
Customers across all industries and verticals use the S3 Glacier storage classes for every imaginable archival workload. Accessing and using the S3 Glacier storage classes through the S3 APIs and S3 console provides enhanced functionality for data management and cost optimization.

As we discussed above, you can now choose from three archive storage classes optimized for different access patterns and storage duration:

  • S3 Glacier Instant Retrieval – For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier Instant Retrieval storage class, an archive storage class that delivers the lowest cost storage with milliseconds retrieval.
  • S3 Glacier Flexible Retrieval – For archive data that does not require immediate access but needs to have the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, choose the S3 Glacier Flexible Retrieval storage class, with retrieval in minutes or free bulk retrievals in 12 hours.
  • S3 Glacier Deep Archive – For retaining data for 7–10 years or longer to meet customer needs and regulatory compliance requirements in industries such as financial services, healthcare, media and entertainment, and the public sector, choose the S3 Glacier Deep Archive storage class, the lowest cost storage in the cloud with data retrieval within 12–48 hours.

Watch a brief introduction video for an overview of the S3 Glacier storage classes.

All S3 Glacier storage classes are designed for 99.999999999% (11 9s) of durability for objects. Data is redundantly stored across three or more Availability Zones that are physically separated within an AWS Region. Here are some comparisons across the S3 Glacier storage classes at a glance:

                                     S3 Glacier           S3 Glacier                S3 Glacier
                                     Instant Retrieval    Flexible Retrieval        Deep Archive
Availability                         99.9%                99.99%                    99.99%
Availability SLA                     99%                  99.9%                     99.9%
Minimum capacity charge per object   128 KB               40 KB                     40 KB
Minimum storage duration charge      90 days              90 days                   180 days
Retrieval charge                     per GB               per GB                    per GB
Retrieval time                       milliseconds         Expedited (1–5 minutes),  Standard (within 12 hours),
                                                          Standard (3–5 hours),     Bulk (within 48 hours)
                                                          Bulk (5–12 hours) free
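
The asynchronous retrieval options in this table map to the S3 restore-object API. As a sketch, a free bulk restore from S3 Glacier Flexible Retrieval might look like this (bucket and key are placeholders):

# Request a temporary copy via the Bulk tier, kept available for 7 days
aws s3api restore-object \
  --bucket my-archive-bucket \
  --key backups/2018-11.tar.gz \
  --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Bulk"}}'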

For data with changing access patterns that you want to automatically archive based on the last access of that data, choose the S3 Intelligent-Tiering storage class. Doing so will optimize storage costs by automatically moving data to the most cost-effective access tier when access patterns change. Its Archive Instant Access, Archive Access, and Deep Archive Access tiers have the same performance as S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, and S3 Glacier Deep Archive respectively. To learn more, see the blog post Automatically archive and restore data with Amazon S3 Intelligent-Tiering.

To get started with S3 Glacier, see the blog post Best practices for archiving large datasets with AWS for key considerations and actions when planning your cold data storage patterns. You can also use a hands-on lab tutorial that will help you get started with the S3 Glacier storage classes in just 20 minutes, and start archiving your data in the S3 Glacier storage classes in the S3 console.

Happy Birthday, Amazon S3 Glacier!
During AWS Storage Day 2022, Kevin Miller, VP & GM of Amazon S3, discussed the 10th anniversary of S3 Glacier and its pace of innovation across many customer use cases in his interview with theCUBE.

In this expanding world of data growth, you have to have an archiving strategy. Everyone has archival data — every company, every vertical, and every industry. There is an archiving need not only for companies that have been around for a while but also for digital native businesses.

Lots of AWS customers such as Nasdaq, Electronic Arts, and NASCAR have used S3 Glacier storage classes for their backup and archiving workloads. The following are some additional recent customer-authored blogs focusing on AWS archiving best practices from customers in the financial, media, gaming, and software industries.

A big thank you to all of our S3 Glacier customers from around the world! Over 90 percent of S3’s roadmap has come directly from feedback from customers like you. We will never stop listening to you, as your feedback and ideas are essential to how we improve the service. Thank you for trusting us and for constantly raising the bar and pushing us to improve to lower costs, simplify your storage, increase your agility, and allow you to innovate faster.

In accordance with Customer Obsession, one of the Amazon Leadership Principles, your feedback is always welcome! If you want to see new S3 Glacier features and capabilities, please send any feedback to AWS re:Post for S3 Glacier or through your usual AWS Support contacts.

– Channy

[$] LRU-list manipulation with DAMON

Post Syndicated from original https://lwn.net/Articles/905370/

The DAMON subsystem, which entered the kernel during the 5.15 release cycle, uses various heuristics to determine which pages of memory are in active use. Since the beginning, the intent has been to use this information to influence memory management. The 6.0 kernel contains another step in this direction, giving DAMON the ability to actively reorder pages on the kernel’s least-recently-used (LRU) lists.

Network Access for Sale: Protect Your Organization Against This Growing Threat

Post Syndicated from Jeremy Makowski original https://blog.rapid7.com/2022/08/22/network-access-for-sale-protect-your-organization-against-this-growing-threat/

Vulnerable network access points are a potential gold mine for threat actors who, once inside, can exploit them persistently. Many cybercriminals are not only interested in obtaining personal information but also seek corporate information that could be sold to the highest bidder.

Infiltrating corporate networks

To infiltrate corporate networks, threat actors typically use several techniques, including:

Social engineering and phishing attacks

Threat actors collect email addresses, phone numbers, and information shared on social media platforms to target key people within an organization with phishing campaigns designed to collect credentials. Moreover, many threat actors manage to find the details of potential victims via leaked databases posted on dark web forums.

Malware infection and remote access

Another technique used by threat actors to gain access to corporate networks is malware infection. This technique consists of spreading malware, such as trojans, through a network of botnets to infect thousands of computers around the world.

Once infected, a computer can be remotely controlled to gain full access to the company network that it is connected to. It is not rare to find threat actors with botnets on hacking forums looking for partnerships to target companies.

Network and system vulnerabilities

Some threat actors will prefer to take advantage of vulnerabilities within networks or systems rather than developing offensive cyber tools or using social engineering techniques. The vulnerabilities exploited are usually related to:

  • Outdated or unpatched software that exposes systems and networks
  • Misconfigured operating systems or firewalls that leave default policies enabled
  • Ports that are open by default on servers
  • Poor network segmentation with unsecured interconnections

Selling network access on underground forums and markets

Since gaining access to corporate networks can take a lot of effort, some cybercriminals prefer to simply buy access to networks that have already been compromised, or information that was extracted from them. As a result, it has become common for cybercriminals to sell access to corporate networks on cybercrime forums.

Usually, the types of access sold on underground hacking forums are SSH, cPanel, RDP, RCE, SH, Citrix, SMTP, and FTP. The price of network access is usually based on a few criteria, such as the size and revenue of the company, as well as the number of devices connected to the network, and typically ranges from a few hundred dollars to a couple of thousand dollars. Companies in all industries and sectors have been impacted.

For these reasons, it is increasingly important for organizations to have visibility into external threats. Threat intelligence solutions can deliver 360-degree visibility of what is happening on forums, markets, encrypted messaging applications, and other deep and darknet platforms where many cybercriminals operate tirelessly.

To protect your internal assets, ensure that the following measures exist within the company and are implemented correctly.

  • Keep all systems and networks updated.
  • Implement a network and systems access control solution.
  • Implement a two-factor authentication solution.
  • Use an encrypted VPN.
  • Perform network segmentation with security interfaces between networks.
  • Perform periodic internal security audits.
  • Use a threat intelligence solution to stay updated on external threats.


Security updates for Monday

Post Syndicated from original https://lwn.net/Articles/905590/

Security updates have been issued by Debian (jetty9 and kicad), Fedora (community-mysql and trafficserver), Gentoo (chromium, gettext, tomcat, and vim), Mageia (apache-mod_wsgi, libtirpc, libxml2, teeworlds, wavpack, and webkit2), Red Hat (podman), Slackware (vim), SUSE (java-1_8_0-openjdk, nodejs10, open-iscsi, rsync, and trivy), and Ubuntu (exim4).

Hyundai Uses Example Keys for Encryption System

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/08/hyundai-uses-example-keys-for-encryption-system.html

This is a dumb crypto mistake I had not previously encountered:

A developer says it was possible to run their own software on the car infotainment hardware after discovering the vehicle’s manufacturer had secured its system using keys that were not only publicly known but had been lifted from programming examples.

[…]

“Turns out the [AES] encryption key in that script is the first AES 128-bit CBC example key listed in the NIST document SP800-38A [PDF]”.

[…]

Luck held out, in a way. “Greenluigi1” found within the firmware image the RSA public key used by the updater, and searched online for a portion of that key. The search results pointed to a common public key that shows up in online tutorials like “RSA Encryption & Decryption Example with OpenSSL in C”.
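
For context, the first AES-128 CBC example key in SP 800-38A is the widely copied 2b7e151628aed2a6abf7158809cf4f3c, so decrypting an update secured with it takes nothing more exotic than OpenSSL. A sketch, with hypothetical file names and the document’s example IV:

# Key and IV are the published NIST SP 800-38A examples; file names are hypothetical
openssl enc -d -aes-128-cbc \
  -K 2b7e151628aed2a6abf7158809cf4f3c \
  -iv 000102030405060708090a0b0c0d0e0f \
  -in update.enc -out update.zip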

EDITED TO ADD (8/23): Slashdot post.

Friday Squid Blogging: The Language of the Jumbo Flying Squid

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/08/friday-squid-blogging-the-language-of-the-jumbo-flying-squid.html

The jumbo flying squid (Dosidicus gigas) uses its color-changing ability as a language:

In 2020, however, marine biologists discovered that jumbo flying squid are surprisingly coordinated. Despite their large numbers, the squid rarely bumped into each other or competed for the same prey. The scientists hypothesized that the flickering pigments allowed the squid to quickly communicate complex messages, such as when it was preparing to attack and what it was targeting.

The researchers observed that the squid displayed 12 distinct pigmentation patterns in a variety of sequences, similar to how humans arrange words in a sentence. For example, squid darkened while pursuing prey and then shifted to a half light/half dark pattern immediately before striking. The researchers hypothesized that these whole-body pigment changes signaled a precise action, such as “I’m about to attack.”

More interestingly (or worrisome), the researchers also believe the squid use subtle pigment changes to provide more context to the action. For example, they sometimes flashed pale stripes along their torso before darkening, possibly denoting the type or location of prey. This suggested that the squid may arrange the patterns to modify the meaning of other patterns, creating what humans call “syntax.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Metasploit Wrap-Up

Post Syndicated from Alan David Foster original https://blog.rapid7.com/2022/08/19/metasploit-wrap-up-172/

Advantech iView NetworkServlet Command Injection

This week, Shelby Pace developed a new exploit module for CVE-2022-2143. This module uses an unauthenticated command injection vulnerability to gain remote code execution against vulnerable versions of Advantech iView software below 5.7.04.6469. The software runs as NT AUTHORITY\SYSTEM, granting the module user unauthenticated privileged access with relatively low effort. Version 5.7.04.6469 has been patched to require authentication, but remote code execution can still be achieved, gaining a shell as the LOCAL SERVICE user.

Cisco ASA ASDM Brute-force Login

Our very own Jake Baines has contributed a new module which scans for the Cisco ASA ASDM landing page and performs login brute-force to identify valid credentials:

msf6 > use auxiliary/scanner/http/cisco_asa_asdm_bruteforce
msf6 auxiliary(scanner/http/cisco_asa_asdm_bruteforce) > set RHOST 10.9.49.201
RHOST => 10.9.49.201
msf6 auxiliary(scanner/http/cisco_asa_asdm_bruteforce) > set VERBOSE false
VERBOSE => false
msf6 auxiliary(scanner/http/cisco_asa_asdm_bruteforce) > run
[*] The remote target appears to host Cisco ASA ASDM. The module will continue.
[*] Starting login brute force...
[+] SUCCESSFUL LOGIN - "cisco":"cisco123"
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed
msf6 auxiliary(scanner/http/cisco_asa_asdm_bruteforce) > 

New module content (2)

  • Cisco ASA ASDM Brute-force Login by jbaines-r7 – This adds a scanner module to brute force the Cisco ASA’s ASDM interface in its default configuration.
  • Advantech iView NetworkServlet Command Injection by Shelby Pace, rgod, and y4er, which exploits CVE-2022-2143 – This adds an exploit module that leverages a command injection vulnerability in Advantech iView (CVE-2022-2143) to get remote command execution as the SYSTEM user. Versions below 5.7.04.6469 are vulnerable and do not require authentication. Version 5.7.04.6469 is still vulnerable but requires valid credentials to be exploited. Also, this version only gets you RCE as the LOCAL SERVICE user.

Enhancements and features (7)

  • #16883 from gwillcox-r7 – This PR deprecates the srt_webdrive_priv script, as the same functionality is included in the service_permissions post module.
  • #16884 from bcoles – This PR deprecates the credcollect script, as it has effectively been replaced by post/windows/gather/credentials/credential_collector.
  • #16902 from bcoles – The scripts/meterpreter/killav.rb script has been removed, since scripts have been deprecated for over 5 years. It has been replaced with post/windows/manage/killav.
  • #16905 from bcoles – The scripts/meterpreter/panda_2007_pavsrv51.rb script has been removed and replaced by exploit/windows/local/service_permissions. Note that scripts have been deprecated for over 5 years and are no longer supported.
  • #16908 from bcoles – The scripts/meterpreter/dumplinks.rb script has been removed and replaced with post/windows/gather/dumplink, which provides the same functionality as a proper module rather than a deprecated script, since scripts are no longer supported.
  • #16909 from bcoles – scripts/meterpreter/get_pidgin_creds.rb has been removed, since scripts have been deprecated for some time and are no longer supported. It has been replaced by post/multi/gather/pidgin_cred.
  • #16910 from bcoles – The scripts/meterpreter/arp_scanner.rb script has been replaced with post/windows/gather/arp_scanner, which implements the same logic with an improved OUI database to help fingerprint the MAC vendor.

Bugs fixed (1)

  • #16881 from bcoles – This fixes a crash in the post/windows/manage/forward_pageant module caused by the removal of Dir::Tmpname.make_tmpname() in Ruby 2.5.0. This also makes some improvements to the code.

Get it

As always, you can update to the latest Metasploit Framework with msfupdate, and you can get more details on the changes since the last blog post from GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest. To install fresh without using git, you can use the open-source-only Nightly Installers or the binary installers (which also include the commercial edition).

Sequence Diagrams enrich your understanding of distributed architectures

Post Syndicated from Kevin Hakanson original https://aws.amazon.com/blogs/architecture/sequence-diagrams-enrich-your-understanding-of-distributed-architectures/

Architecture diagrams visually communicate and document the high-level design of a solution. As the level of detail increases, so does the diagram’s size, density, and layout complexity. Using Sequence Diagrams, you can explore additional usage scenarios and enrich your understanding of the distributed architecture while continuing to communicate visually.

This post takes a sample architecture and iteratively builds out a set of Sequence Diagrams. Each diagram adds to the vocabulary and graphical notation of Sequence Diagrams, then shows how the diagram deepened understanding of the architecture. All diagrams in this post were rendered from a text-based domain specific language using a diagrams-as-code tool instead of being drawn with graphical diagramming software.

Sample architecture

The architecture is based on Implementing header-based API Gateway versioning with Amazon CloudFront from the AWS Compute Blog, which uses the AWS Lambda@Edge feature to dynamically route the request to the targeted API version.

Amazon API Gateway is a fully managed service that makes it easier for developers to create, publish, maintain, monitor, and secure APIs at any scale. Amazon CloudFront is a global content delivery network (CDN) service built for high-speed, low-latency performance, security, and developer ease-of-use. Lambda@Edge lets you run functions that customize the content that CloudFront delivers.

The numbered labels in Figure 1 correspond to the following text descriptions:

  1. User sends an HTTP request to CloudFront, including a version header.
  2. CloudFront invokes the Lambda@Edge function for the Origin Request event.
  3. The function matches the header value to data fetched from an Amazon DynamoDB table, then modifies the Host header and path of the request and returns it to CloudFront.
  4. CloudFront routes the HTTP request to the matching API Gateway.

The Figure 1 architecture diagram is a free-form mixture of a structure diagram and a behavior diagram. It includes structural aspects from a high-level Deployment Diagram, which depicts network connections between AWS services. It also demonstrates behavioral aspects from a Communication Diagram, which uses messages represented by arrows labeled with chronological numbers.

Figure 1. High-level architecture diagram

Sequence Diagrams

Sequence Diagrams are part of a subset of behavior diagrams known as interaction diagrams, which emphasize control and data flow. Sequence Diagrams model the ordered logic of usage scenarios in a consistent visual manner and capture detailed behaviors. I use this diagram type for analysis and design purposes and to validate my assumptions about data flows in distributed architectures. Let’s investigate the system use case where the API is called without a header indicating the requested version, using a Sequence Diagram.

Examining the system use case

In Figure 2, User, Web Distribution, and Origin Request are each actors or system participants. The parallel vertical lines underneath these participants are lifelines. The horizontal arrows between participants are messages, with the arrowhead indicating message direction. Messages are arranged in time sequence from top to bottom. The dashed lines represent reply messages. The text inside guillemets («like this») indicate a stereotype, which refines the meaning of a model element. The rectangle with the bent upper-right corner is a note containing additional useful information.

Figure 2. Missing accept-version header

The message from User to Web Distribution lacks any HTTP header that indicates the version, which prompts the choice of Accept-Version as the header name. The return message requires a decision about the HTTP status code for this error case (400). The interaction with the Origin Request prompts a selection of the Lambda runtime (nodejs14.x) and an understanding of the programming model for generating an HTTP response for this request.
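
Because these diagrams are rendered from text, the Figure 2 interaction can be captured in a few lines of PlantUML. The following is a minimal sketch, not the post’s exact source; the participant names and response body are illustrative:

@startuml
actor User
participant "Web Distribution" as CF
participant "Origin Request" as Fn <<nodejs14.x>>

User -> CF : GET /resource (no Accept-Version header)
CF -> Fn : origin request event
note right of Fn : Accept-Version header missing
Fn --> CF : generated response (400)
CF --> User : 400 Bad Request
@enduml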

Designing the interaction

Next, let’s design the interaction when the Accept-Version header is present, but the corresponding value is not found in the Version Mappings table.

Figure 3 adds new notation to the diagram. The rectangle with “opt” in the upper-left corner and bolded text inside square brackets is an interaction fragment. The “opt” indicates this operation is an option based on the constraint (or guard) that “version mappings not cached” is true.

Figure 3. API version not found

A DynamoDB scan operation on every request consumes table read capacity. Caching Version Mappings data inside the Lambda@Edge function’s memory optimizes for on-demand capacity mode. The «on-demand» stereotype on the DynamoDB participant succinctly communicates this decision. The “API V3 not found” note on Figure 3 provides clarity to the reader. The HTTP status code for this error case is decided as 404 with a custom description of “API Version Not Found.”

Now, let’s design the interaction where the API version is found and the caller receives a successful response.

Figure 4 is similar to Figure 3 up until the note, which now indicates “API V1 found.” Consulting the documentation for Writing functions for Lambda@Edge, the request event is updated with the HTTP Host header and path for the “API V1” Amazon API Gateway.

Figure 4. API version found

Instead of three separate diagrams for these individual scenarios, a single, combined diagram can represent the entire set of use cases. Figure 5 includes two new “alt” interaction fragments that represent choices of alternative behaviors.

The first “alt” has a guard of “missing Accept-Version header,” mapping to our Figure 2 use case. The “else” guard encompasses the remaining use cases and contains a second “alt” that splits where Figure 3 and Figure 4 diverge. Its “version not found” guard is the Figure 3 use case returning the 404, while its “else” guard is the Figure 4 success condition. The added notes improve visual clarity.

Figure 5. Header-based API Gateway versioning with CloudFront

Diagrams as code

After diagrams are created, the next question is where to save them and how to keep them updated. Because diagrams-as-code use text-based files, they can be stored and versioned in the same source control system as application code. Also consider an architectural decision record (ADR) process to document and communicate architecturally significant decisions. Then as application code is updated, team members can revise both the ADR narrative and the text-based diagram source. Up-to-date documentation is important for operationally supporting production deployments, and these diagrams quickly provide a visual understanding of system component interactions.

Conclusion

This post started with a high-level architecture diagram and ended with a combined Sequence Diagram that captures multiple usage scenarios. This improved understanding of the system design across success and error use cases. Focusing on system interactions prior to coding facilitates interface definition and the discovery of emergent properties, before thinking in terms of programming-language-specific constructs and SDKs.

Experiment to see if Sequence Diagrams improve the analysis and design phase of your next project. View additional examples of diagrams-as-code from the AWS Icons for PlantUML GitHub repository. The Workload Discovery on AWS solution can even build detailed architecture diagrams of your workloads based on live data from AWS.

For vetted architecture solutions and reference architecture diagrams, visit the AWS Architecture Center. For more serverless learning resources, visit Serverless Land.

Related information

  • The Unified Modeling Language specification provides the full definition of Sequence Diagrams. This includes notations for additional interaction frame operators, using open arrow heads to represent asynchronous messages, and more.
  • Diagrams were created for this blog post using PlantUML and the AWS Icons for PlantUML. PlantUML integrates with IDEs, wikis, and other external tools. PlantUML is distributed under multiple open-source licenses, allowing local server rendering for diagrams containing sensitive information. AWS Icons for PlantUML include the official AWS Architecture Icons.
