Malcolm: The state of static analysis in the GCC 12 compiler

Post Syndicated from original https://lwn.net/Articles/891062/

David Malcolm has posted an update on the state of static analysis in GCC 12.

Some other languages, such as Perl, can track input and flag any
variable that should not be trusted because it was read from an
outside source such as a web form. Flagging variables in this
manner is called tainting. After a program runs the variable
through a check, the variable can be untainted, a process called
sanitization.

Our GCC analyzer’s taint mode is activated by
-fanalyzer-checker=taint (which should be specified in
addition to -fanalyzer). Taint mode attempts to track
attacker-controlled values entering the program and to warn if they
are used without sanitization.

DDoS Attack Trends for 2022 Q1

Post Syndicated from Omer Yoachimik original https://blog.cloudflare.com/ddos-attack-trends-for-2022-q1/

Welcome to our first DDoS report of 2022, and the ninth in total so far. This report includes new data points and insights both in the application-layer and network-layer sections — as observed across the global Cloudflare network between January and March 2022.

The first quarter of 2022 saw a massive spike in application-layer DDoS attacks, but a decrease in the total number of network-layer DDoS attacks. Despite the decrease, we’ve seen volumetric DDoS attacks surge by up to 645% QoQ, and we mitigated a new zero-day reflection attack with an amplification factor of 220 billion percent.

In the Russian and Ukrainian cyberspace, the most targeted industries were Online Media and Broadcast Media. In our Azerbaijan and Palestinian Cloudflare data centers, we’ve seen enormous spikes in DDoS activity — indicating the presence of botnets operating from within.

The Highlights

The Russian and Ukrainian cyberspace

  • Russian Online Media companies were the most targeted within Russia in Q1. The next most targeted was the Internet industry, then Cryptocurrency, and then Retail. While many attacks that targeted Russian Cryptocurrency companies originated in Ukraine or the US, another major source of attacks was from within Russia itself.
  • The majority of HTTP DDoS attacks that targeted Russian companies originated from Germany, the US, Singapore, Finland, India, the Netherlands, and Ukraine. It’s important to note that being able to identify where cyber attack traffic originates is not the same as being able to attribute where the attacker is located.
  • Attacks on Ukraine targeted Broadcast Media and Publishing websites and seem to have been more distributed, originating from more countries — which may indicate the use of global botnets. Still, most of the attack traffic originated from the US, Russia, Germany, China, the UK, and Thailand.

Read more about what Cloudflare is doing to keep the Open Internet flowing into Russia and keep attacks from getting out.

Ransom DDoS attacks

  • In January 2022, over 17% of under-attack respondents reported being targeted by ransom DDoS attacks or receiving a threat in advance.
  • That figure drastically dropped to 6% in February, and then to 3% in March.
  • When compared to previous quarters, we can see that in total, in Q1, only 10% of respondents reported a ransom DDoS attack; a 28% decrease YoY and 52% decrease QoQ.

Application-layer DDoS attacks

  • 2022 Q1 was the busiest quarter in the past 12 months for application-layer attacks. HTTP-layer DDoS attacks increased by 164% YoY and 135% QoQ.
  • Diving deeper into the quarter, in March 2022 there were more HTTP DDoS attacks than in all of Q4 combined (and Q3, and Q1).
  • After four consecutive quarters with China as the top source of HTTP DDoS attacks, the US stepped into the lead this quarter. HTTP DDoS attacks originating from the US increased by a staggering 6,777% QoQ and 2,225% YoY.

Network-layer DDoS attacks

  • Network-layer attacks in Q1 increased by 71% YoY but decreased 58% QoQ.
  • The Telecommunications industry was the most targeted by network-layer DDoS attacks, followed by Gaming and Gambling companies, and the Information Technology and Services industry.
  • Volumetric attacks increased in Q1. Attacks above 10 Mpps (million packets per second) grew by over 300% QoQ, and attacks over 100 Gbps grew by 645% QoQ.

This report is based on DDoS attacks that were automatically detected and mitigated by Cloudflare’s DDoS Protection systems. To learn more about how it works, check out this deep-dive blog post.

A note on how we measure DDoS attacks observed over our network
To analyze attack trends, we calculate the “DDoS activity” rate, which is the percentage of attack traffic out of the total traffic (attack + clean) observed over our global network, in a specific location, or in a specific category (e.g., industry or billing country). Measuring percentages allows us to normalize data points and avoid the biases that absolute numbers carry towards, for example, a Cloudflare data center that receives more total traffic and, likely, also more attacks.
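
In other words, the rate is simply attack traffic divided by total traffic. A minimal sketch of that calculation (the traffic figures below are hypothetical, not Cloudflare data):

def ddos_activity_rate(attack_traffic: float, clean_traffic: float) -> float:
    """Percentage of attack traffic out of the total (attack + clean) traffic."""
    total = attack_traffic + clean_traffic
    return 100.0 * attack_traffic / total if total else 0.0

# Hypothetical example: a location that ingested 42 TB of attack traffic and
# 58 TB of clean traffic over the quarter has a DDoS activity rate of 42%.
print(ddos_activity_rate(42, 58))  # 42.0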

To view an interactive version of this report, visit Cloudflare Radar.

Ransom Attacks

Our systems constantly analyze traffic and automatically apply mitigation when DDoS attacks are detected. Each DDoS’d customer is prompted with an automated survey to help us better understand the nature of the attack and the success of the mitigation.

For over two years now, Cloudflare has been surveying attacked customers; one question on the survey asks whether they received a threat or a ransom note demanding payment in exchange for stopping the DDoS attack. In the last quarter, 2021 Q4, we observed a record-breaking level of reported ransom DDoS attacks (one out of every five customers). This quarter, we’ve witnessed a drop in ransom DDoS attacks, with only one out of 10 respondents reporting a ransom DDoS attack; a 28% decrease YoY and a 52% decrease QoQ.

When we break it down by month, we can see that January 2022 saw the largest number of respondents reporting receiving a ransom letter in Q1: almost one out of every five customers (17%).

Application-layer DDoS attacks

Application-layer DDoS attacks, specifically HTTP DDoS attacks, are attacks that usually aim to disrupt a web server by making it unable to process legitimate user requests. If a server is bombarded with more requests than it can process, the server will drop legitimate requests and — in some cases — crash, resulting in degraded performance or an outage for legitimate users.

Application-layer DDoS attacks by month

In Q1, application-layer DDoS attacks soared by 164% YoY and 135% QoQ – the busiest quarter within the past year.

Application-layer DDoS attacks increased to new heights in the first quarter of 2022. In March alone, there were more HTTP DDoS attacks than in all of 2021 Q4 combined (and Q3, and Q1).

Application-layer DDoS attacks by industry

Consumer Electronics was the most targeted industry in Q1.

Globally, the Consumer Electronics industry was the most attacked, with an increase of 5,086% QoQ. Second was the Online Media industry with a 2,131% increase in attacks QoQ. Third were Computer Software companies, with an increase of 76% QoQ and 1,472% YoY.

However, if we focus only on Ukraine and Russia, we can see that Broadcast Media, Online Media companies, and Internet companies were the most targeted. Read more about what Cloudflare is doing to keep the Open Internet flowing into Russia and keep attacks from getting out.

Application-layer DDoS attacks by source country

To understand the origin of the HTTP attacks, we look at the geolocation of the source IP address belonging to the client that generated the attack HTTP requests. Unlike in network-layer attacks, source IP addresses cannot be spoofed in HTTP attacks. A high percentage of DDoS activity in a given country usually indicates the presence of botnets operating from within the country’s borders.

After four consecutive quarters with China as the top source of HTTP DDoS attacks, the US stepped into the lead this quarter. HTTP DDoS attacks originating from the US increased by a staggering 6,777% QoQ and 2,225% YoY. Following China in second place are India, Germany, Brazil, and Ukraine.

Application-layer DDoS attacks by target country

In order to identify which countries are targeted by the most HTTP DDoS attacks, we bucket the DDoS attacks by our customers’ billing countries and represent each as a percentage of all DDoS attacks.

The US drops to second place, after being first for three consecutive quarters. Organizations in China were targeted the most by HTTP DDoS attacks, followed by the US, Russia, and Cyprus.

Network-layer DDoS attacks

While application-layer attacks target the application (Layer 7 of the OSI model) running the service that end users are trying to access (HTTP/S in our case), network-layer attacks aim to overwhelm network infrastructure (such as in-line routers and servers) and the Internet link itself.

Network-layer DDoS attacks by month

While HTTP DDoS attacks soared in Q1, network-layer DDoS attacks actually decreased by 58% QoQ, but still increased by 71% YoY.

Diving deeper into Q1, we can see that the number of network-layer DDoS attacks remained mostly consistent throughout the quarter, with about a third of attacks occurring every month.

Cloudflare mitigates zero-day amplification DDoS attack

Amongst these network-layer DDoS attacks are also zero-day DDoS attacks that Cloudflare automatically detected and mitigated.

In the beginning of March, Cloudflare researchers helped investigate and expose a zero-day vulnerability in Mitel business phone systems that, among other possible exploits, enables attackers to launch an amplification DDoS attack. This type of attack reflects traffic off vulnerable Mitel servers to victims, amplifying the amount of traffic sent in the process by an amplification factor of 220 billion percent in this specific case. You can read more about it in our recent blog post.

We observed several of these attacks across our network. One of them targeted a North American cloud provider using the Cloudflare Magic Transit service. The attack originated from 100 source IPs, mainly in the US, the UK, Canada, the Netherlands, Australia, and approximately 20 other countries. It peaked above 50 Mpps (~22 Gbps) and was automatically detected and mitigated by Cloudflare systems.

Network-layer DDoS attacks by industry

Many network-layer DDoS attacks target Cloudflare’s IP ranges directly. These IP ranges serve our WAF/CDN customers, Cloudflare authoritative DNS, Cloudflare public DNS resolver 1.1.1.1, Cloudflare Zero Trust products, and our corporate offices, to name a few. Additionally, we also allocate dedicated IP addresses to customers via our Spectrum product and advertise the IP prefixes of other companies via our Magic Transit, Magic WAN, and Magic Firewall products for L3/4 DDoS protection.

In this report, for the first time, we’ve begun classifying network-layer DDoS attacks according to the industries of our customers using the Spectrum and Magic products. This classification allows us to understand which industries are targeted the most by network-layer DDoS attacks.

When we look at Q1 statistics, we can see that in terms of attack packets and attack bytes launched towards Cloudflare customers, the Telecommunications industry was targeted the most.  More than 8% of all attack bytes and 10% of all attack packets that Cloudflare mitigated targeted Telecommunications companies.

Following not too far behind, in second and third place were the Gaming / Gambling and Information Technology and Services industries.

Network-layer DDoS attacks by target country

Similarly to the classification by our customers’ industry, we can also bucket attacks by our customers’ billing country as we do for application-layer DDoS attacks, to identify the top attacked countries.

Looking at Q1 numbers, we can see that the US was targeted by the highest percentage of DDoS attack traffic: over 10% of all attack packets and almost 8% of all attack bytes. Following the US are China, Canada, and Singapore.

Network-layer DDoS attacks by ingress country

When trying to understand where network-layer DDoS attacks originate, we cannot use the same method as we use for the application-layer attack analysis. To launch an application-layer DDoS attack, successful handshakes must occur between the client and the server in order to establish an HTTP/S connection. For a successful handshake to occur, the attacker cannot spoof their source IP address. While the attacker may use botnets, proxies, and other methods to obfuscate their identity, the attacking client’s source IP location does sufficiently represent the attack source of application-layer DDoS attacks.

On the other hand, to launch network-layer DDoS attacks, in most cases, no handshake is needed. Attackers can spoof the source IP address in order to obfuscate the attack source and introduce randomness into the attack properties, which can make it harder for simple DDoS protection systems to block the attack. So if we were to derive the source country based on a spoofed source IP, we would get a ‘spoofed country’.

For this reason, when analyzing network-layer DDoS attack sources, we bucket the traffic by the Cloudflare edge data center locations where the traffic was ingested, and not by the (potentially) spoofed source IP, to get an understanding of where the attacks originate from. We are able to achieve geographical accuracy in our report because we have data centers in over 270 cities around the world. However, even this method is not 100% accurate, as traffic may be backhauled and routed via various Internet Service Providers and countries for reasons that vary from cost reduction to congestion and failure management.

In Q1, the percentage of attacks detected in Cloudflare’s data centers in Azerbaijan increased by 16,624% QoQ and 96,900% YoY, making it the country with the highest percentage of network-layer DDoS activity (48.5%).

Following our Azerbaijani data center is our Palestinian data center, where a staggering 41.9% of all traffic was DDoS traffic. This represents a 10,120% increase QoQ and a 46,456% increase YoY.

To view all regions and countries, check out the interactive map.

Attack vectors

SYN floods remain the most popular DDoS attack vector, while the use of generic UDP floods drops significantly in Q1.

An attack vector is a term used to describe the method that the attacker uses to launch their DDoS attack, i.e., the IP protocol, packet attributes such as TCP flags, flooding method, and other criteria.

In Q1, SYN floods accounted for 57% of all network-layer DDoS attacks, representing a 69% increase QoQ and a 13% increase YoY. In second place, attacks over SSDP surged by over 1,100% QoQ. Following were RST floods and attacks over UDP. Last quarter, generic UDP floods took second place, but this time generic UDP DDoS attacks plummeted by 87% QoQ, from 32% to a mere 3.9%.

Emerging threats

Identifying the top attack vectors helps organizations understand the threat landscape. In turn, this may help them improve their security posture to protect against those threats. Similarly, learning about new emerging threats that may not yet account for a significant portion of attacks can help mitigate them before they become a significant force.

When we look at new emerging attack vectors in Q1, we can see increases in DDoS attacks reflecting off of Lantronix services (+971% QoQ) and SSDP reflection attacks (+724% QoQ). Additionally, SYN-ACK attacks increased by 437% and attacks by Mirai botnets by 321% QoQ.

Attacker reflecting traffic off of Lantronix Discovery Service

Lantronix is a US-based software and hardware company that, among its broad offering, provides solutions for Internet of Things (IoT) management. One of the tools it provides to manage its IoT components is the Lantronix Discovery Protocol, a command-line tool that helps search for and find Lantronix devices. The discovery tool is UDP-based, meaning that no handshake is required and the source IP can be spoofed. An attacker can use the tool to search for publicly exposed Lantronix devices with a 4-byte request, which each device then answers with a 30-byte response from port 30718. By spoofing the victim’s source IP, the attacker causes all responding Lantronix devices to direct their replies at the victim, resulting in a reflection/amplification attack.
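
To make the mechanics concrete, the sketch below sends a single discovery probe to a device you manage and reports how large the reply is. The probe bytes are left as a placeholder (substitute the documented Lantronix discovery query); the port number comes from the description above.

import socket

# Placeholder 4-byte probe; replace with the documented Lantronix discovery
# query before using this check against devices you manage.
PROBE = b"\x00\x00\x00\x00"
PORT = 30718  # Lantronix Discovery Protocol port, per the text above

def check_discovery_exposure(host: str, timeout: float = 2.0) -> None:
    """Probe one device you manage and report whether, and how much, it replies."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(PROBE, (host, PORT))
        try:
            reply, _ = sock.recvfrom(512)
        except socket.timeout:
            print(f"{host}: no response on UDP/{PORT} (not exposed or filtered)")
            return
        # A reply several times larger than the 4-byte query is what makes the
        # service attractive for reflection/amplification abuse.
        print(f"{host}: {len(reply)}-byte reply, ~{len(reply) / len(PROBE):.1f}x amplification")

# check_discovery_exposure("192.0.2.10")  # a device on a network you manage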

Simple Service Discovery Protocol used for reflection DDoS attacks

The Simple Service Discovery Protocol (SSDP) works similarly to the Lantronix Discovery Protocol, but for Universal Plug and Play (UPnP) devices such as network-connected printers. By abusing SSDP, attackers can generate a reflection-based DDoS attack, overwhelming the target’s infrastructure and taking their Internet properties offline. You can read more about SSDP-based DDoS attacks here.

Network-layer DDoS attacks by attack rate

In Q1, we observed a massive uptick in volumetric DDoS attacks — both from the packet rate and bitrate perspective. Attacks over 10 Mpps grew by over 300% QoQ, and attacks over 100 Gbps grew by 645% QoQ.

There are different ways of measuring the size of an L3/4 DDoS attack. One is the volume of traffic it delivers, measured as the bit rate (specifically, terabits per second or gigabits per second). Another is the number of packets it delivers, measured as the packet rate (specifically, millions of packets per second).

Attacks with high bit rates attempt to cause a denial-of-service event by clogging the Internet link, while attacks with high packet rates attempt to overwhelm the servers, routers, or other in-line hardware appliances. These devices dedicate a certain amount of memory and computation power to process each packet. Therefore, by bombarding an appliance with many packets, an attacker can leave it with no further processing resources. In such a case, packets are “dropped,” i.e., the appliance is unable to process them. For users, this results in service disruptions and denial of service.
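
The two measures are linked by the average packet size. A quick back-of-the-envelope check, using the roughly 50 Mpps and ~22 Gbps figures from the Mitel reflection attack described earlier (which together imply packets of roughly 55 bytes):

def bitrate_gbps(packets_per_second: float, avg_packet_bytes: float) -> float:
    """Approximate bit rate implied by a packet rate and an average packet size."""
    return packets_per_second * avg_packet_bytes * 8 / 1e9

# ~50 million packets per second of ~55-byte packets works out to ~22 Gbps,
# consistent with the attack peak quoted above.
print(round(bitrate_gbps(50e6, 55)))  # 22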

Distribution by packet rate

The majority of network-layer DDoS attacks remain below 50,000 packets per second. While 50 kpps is on the lower side of the spectrum at Cloudflare scale, it can still easily take down unprotected Internet properties and congest even a standard Gigabit Ethernet connection.

When we look at the changes in the attack sizes, we can see that attacks of over 10 Mpps grew by over 300% QoQ. Similarly, attacks of 1-10 Mpps grew by almost 40% QoQ.

Distribution by bitrate

In Q1, most network-layer DDoS attacks remain below 500 Mbps. This too is a drop in the bucket at Cloudflare scale, but it can very quickly shut down unprotected Internet properties with less capacity, or at the very least congest even a standard Gigabit Ethernet connection.

Graph of the distribution of network-layer DDoS attacks by bit rate in 2022 Q1

Similarly to the trends observed in the packets-per-second realm, here we can also see large increases. The number of DDoS attacks that peaked over 100 Gbps increased by 645% QoQ; attacks peaking between 10 Gbps and 100 Gbps increased by 407%; attacks peaking between 1 Gbps and 10 Gbps increased by 88%; and even attacks peaking between 500 Mbps and 1 Gbps increased by almost 20% QoQ.

Network-layer DDoS attacks by duration

Most attacks remain under one hour in duration, reiterating the need for automated always-on DDoS mitigation solutions.

We measure the duration of an attack by recording the difference between when it is first detected by our systems as an attack and the last packet we see with that attack signature towards that specific target.

In previous reports, we provided a breakdown of ‘attacks under an hour’, and larger time ranges. However, in most cases over 90 percent of attacks last less than an hour. So starting from this report, we broke down the short attacks and grouped them by shorter time ranges to provide better granularity.

One important thing to keep in mind is that even if an attack lasts only a few minutes, if it is successful, the repercussions could last well beyond the initial attack duration. IT personnel responding to a successful attack may spend hours and even days restoring their services.

In the first quarter of 2022, more than half of the attacks lasted 10-20 minutes, approximately 40% ended within 10 minutes, another ~5% lasted 20-40 minutes, and the remaining lasted longer than 40 minutes.

Short attacks can easily go undetected, especially burst attacks that, within seconds, bombard a target with a significant number of packets, bytes, or requests. In this case, DDoS protection services that rely on manual mitigation by security analysts have no chance of mitigating the attack in time. They can only learn from it in their post-attack analysis, then deploy a new rule that filters the attack fingerprint and hope to catch it next time. Similarly, using an “on-demand” service, where the security team redirects traffic to a DDoS provider during the attack, is also inefficient because the attack will already be over before the traffic routes to the on-demand DDoS provider.

It’s recommended that companies use automated, always-on DDoS protection services that analyze traffic and apply real-time fingerprinting fast enough to block short-lived attacks.

Summary

Cloudflare’s mission is to help build a better Internet. A better Internet is one that is more secure, faster, and reliable for everyone — even in the face of DDoS attacks. As part of our mission, since 2017, we’ve been providing unmetered and unlimited DDoS protection for free to all of our customers. Over the years, it has become increasingly easier for attackers to launch DDoS attacks. But as easy as it has become, we want to make sure that it is even easier — and free — for organizations of all sizes to protect themselves against DDoS attacks of all types.

Not using Cloudflare yet? Start now with our Free and Pro plans to protect your websites, or contact us for comprehensive DDoS protection for your entire network using Magic Transit.

Zabbix 6 IT Infrastructure Monitoring Cookbook: Interview with the Co-Author

Post Syndicated from Jekaterina Sizova original https://blog.zabbix.com/zabbix-6-it-infrastructure-monitoring-cookbook-interview-with-the-co-author/20122/

We say Zabbix is a universal monitoring system, which is true. In many cases, the potential of Zabbix is limited only by knowledge and the ability to use all its functions properly. That is why various training materials, including books, are so important. Nathan Liefting and Brian van Baekel recently presented their new Monitoring Cookbook covering Zabbix 6.0 features and made a huge contribution to the existing knowledge pool. One of the book’s authors kindly agreed to tell us more about the new edition.

Hi, Nathan! A year ago, we already had an interview with you to mark the Zabbix 5 IT Infrastructure Monitoring Cookbook launch. And now, just a month after the release of Zabbix 6.0 LTS, you published a new version of the book devoted to the latest Zabbix version. How did you manage to do it so quickly?

The last interview was a lot of fun to work on, so I am glad to do one again. Thanks for having me.

Let me jump right in by saying it took a lot of Alpha releases to write all of the new content. Normally we are already on top of new features, compiling them from source even, to inform the community about the Zabbix development process and make sure our knowledge is up to par to ensure the highest quality for our training. But with this book, we took it to the next level. Using the early releases, we worked hard to get the book ready, giving ourselves a hard deadline of getting it to market just one month after the official Zabbix 6 release, while the publisher needed three weeks to perform the copy editing and such. It was challenging, as we didn’t know exactly which features were to be included and how they were built. We wrote parts of the book during the development phase without knowing the features in depth; as a result, we had to rewrite whole chapters to reflect the actual integration. In the end, we made it, and the entire Zabbix community can now use our book to get their Zabbix 6 environment up and running right from the start, with the added benefit for us that we can serve our customers perfectly thanks to all the in-depth knowledge we’ve gained in the process.

Comparing these two issues, how much do their contents overlap? Should users who already have the 5.0 book consider buying this edition (Zabbix 6 IT Infrastructure Monitoring Cookbook) as well?

It’s important to know that there is definitely an overlap in content. We specifically wanted to include that in the description for the book as well, as we don’t want anyone to think that all content has been remade.

That out of the way, there are a lot of new things as well. For example, all the new major Zabbix 6 features have been added like:

  • High Availability for Zabbix server
  • Business Service Monitoring (SLAs)
  • The new widgets (Geomaps, single item and top hosts)
  • User roles
  • New trigger syntax

Not only that, but we also made sure to include all the new best practices: redoing all the tags using the new tag policy and making sure all UI changes are included as well. Every recipe has had a Zabbix 6.0 touch-up, and thus everything is current.

Even though it is a second edition you will find a lot of new things, and nothing has been left behind from the older Zabbix version. It can thus be a good purchase, even if you already own the Zabbix 5 version since there was a lot changed, added and improved in the product itself as well.

What background knowledge about Zabbix should the reader of this book have? Are you targeting more beginners or experienced software users?

We target both, actually! A lot of technical books take you through the process of setting things up start to finish, and we are no different. What is different with our Cookbook is that you don’t have to execute every single chapter to end up with the right result. This is a Packt Cookbook, and everything in the cookbook format (as defined by the publisher) has to be able to be configured separately. You can follow the book from Chapter 1, Installing Zabbix and Getting Started Using the Frontend, up until the end, or you can pick any subject and dive right in.

It’s set up in such a way that it works for everyone, the very beginners and the more advanced users alike. We want to be there for the community as a whole, and not just the super experienced engineers.

Can you please tell us briefly about the book’s content, in particular, what challenging monitoring aspects have you managed to explain, making them easy to understand and implement?

We tried to include most of the important Zabbix subjects. You will find the basics in the book on Installing Zabbix, using the frontend and setting up users, groups and roles. But besides that, most types of monitoring are also included. We’ll discuss monitoring using the Zabbix agent, SNMP, ODBC, HTTP, JMX, dependent items, calculated/aggregated checks and even external checks.

Of course, Low-Level Discovery is also explained to make sure that we can automate the entity (items, triggers, etc.) creation. Besides that, we think that when working with monitoring data it is important to keep things structured. Thus, we also spend time on using the best practices to set up templates, and use methods like tags to easily filter out data and keep things structured.

For the real Zabbix gurus, we’ll even dive into using the Zabbix API and Python scripts to extend upon the Zabbix functionality. There really is something for everyone. Check out the full list of included subjects here.

Fun fact: Brian van Baekel has a cat named Zabbix

You probably got a lot of feedback and comments about the Zabbix 5 IT Infrastructure Monitoring Cookbook. Can you please share with us what people are saying?

Well, with a lot of pride I can say that the Zabbix 5 book was met with great reviews! We’re happy to be able to help out the Zabbix community because our goal with the book was to give people access to a bundle of Zabbix information. It can be a lot easier to get information in a book format, instead of having to search for separate resources on the internet.

The community has been great with their reviews and we’ve had compliments like

“I found this book a great resource for my network monitoring needs. It has everything I needed. The authors do a fantastic job at explaining Zabbix.”

“Excellent book. Highly recommended for IT infrastructure monitoring. I was in the need of server and network devices monitoring so I tried Zabbix.”

We’re super happy with how well the books have been received and really appreciate all of the (Amazon) reviews people have left! If you have the book and would like to leave a review, it is much appreciated, and we personally read all of them.

Where is the book available to purchase?

You can get your copy at our publisher’s website, Amazon, or a local retailer. Packt has a great network of suppliers throughout the world that might just have a copy for you. But you can always check out the links below to get your copy:

https://www.packtpub.com/product/zabbix-6-it-infrastructure-monitoring-cookbook-second-edition/9781803246918

https://www.amazon.com/Zabbix-Infrastructure-Monitoring-Cookbook-maintaining-ebook/product-reviews/B08NX17XT8/ref=cm_cr_dp_d_show_all_btm?ie=UTF8&reviewerType=all_reviews

We’re actually still looking to work with a publisher in Japan, China and/or other locations where a local language might help out the Zabbix community. If you have any ideas for that, feel free to contact me on LinkedIn.

Time flies and the Zabbix 7.0 LTS version is also not too far off. Should the community expect a new book when the new release comes out?

We had a lot of fun working on this Zabbix 6 edition, but at the same time it has taken up a considerable part of our personal time to get this book ready for everyone again. As a full-time Zabbix engineer and Zabbix trainer for Opensource ICT Solutions, there isn’t much time left in the weeks to write. That means that writing is mostly done on the weekends.

But if we find the time to do it again, we’ll definitely get a Zabbix 7 edition out as well! Already trying to make a plan to see if it is in the cards for us again.

The post Zabbix 6 IT Infrastructure Monitoring Cookbook: Interview with the Co-Author appeared first on Zabbix Blog.

Security updates for Tuesday

Post Syndicated from original https://lwn.net/Articles/891048/

Security updates have been issued by Debian (thunderbird and usbguard), Fedora (containerd, firefox, golang-github-containerd-imgcrypt, nss, and vim), Oracle (firefox, kernel, kernel-container, and thunderbird), Red Hat (thunderbird), Scientific Linux (thunderbird), SUSE (libexif, mozilla-nss, mysql-connector-java, and qemu), and Ubuntu (libarchive and python-django).

How to integrate AWS STS SourceIdentity with your identity provider

Post Syndicated from Keith Joelner original https://aws.amazon.com/blogs/security/how-to-integrate-aws-sts-sourceidentity-with-your-identity-provider/

You can use third-party identity providers (IdPs) such as Okta, Ping, or OneLogin to federate with the AWS Identity and Access Management (IAM) service using SAML 2.0, allowing your workforce to configure services by providing authorization access to the AWS Management Console or Command Line Interface (CLI). When you federate to AWS, you assume a role through the AWS Security Token Service (AWS STS), which through the AssumeRole API returns a set of temporary security credentials you then use to access AWS resources. The use of temporary credentials can make it challenging for administrators to trace which identity was responsible for actions performed.

To address this, with AWS STS you set a unique attribute called SourceIdentity, which allows you to easily see which identity is responsible for a given action.

This post will show you how to set up the AWS STS SourceIdentity attribute when using Okta, Ping, or OneLogin as your IdP. Your IdP administrator can configure a corporate directory attribute, such as an email address, to be passed as the SourceIdentity value within the SAML assertion. This value is stored as the SourceIdentity element in AWS CloudTrail, along with the activity performed by the assumed role. This post will also show you how to set up a sample policy for setting the SourceIdentity when switching roles. Finally, as an administrator reviewing CloudTrail activity, you can use the source identity information to determine who performed which actions. We will walk you through CloudTrail logs from two accounts to demonstrate the continuance of the source identity attribute, showing you how the SourceIdentity will appear in both accounts’ logs.

For more information about the SAML authentication flow in AWS services, see AWS Identity and Access Management Using SAML. For more information about using SourceIdentity, see How to relate IAM role activity to corporate identity.
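
Before diving into the IdP-specific setup, note that SourceIdentity is simply an attribute of the role session: outside the SAML flow this post focuses on, it can also be set directly when assuming a role with AWS STS. A minimal boto3 sketch with placeholder ARN and identity values:

import boto3

sts = boto3.client("sts")

# Placeholder role ARN and identity. In the SAML flow described in this post,
# SourceIdentity is supplied by the IdP's SAML assertion instead of this parameter.
response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/Role1",
    RoleSessionName="example-session",
    SourceIdentity="[email protected]",  # recorded as sourceIdentity in CloudTrail
)
print(response["AssumedRoleUser"]["Arn"])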

Configure the SourceIdentity attribute with Okta integration

You will do this portion of the configuration within the Okta administrative console. This procedure assumes that you have a previously configured AWS and Okta integration. If not, you can configure your integration by following the instructions in the Okta AWS Multi-Account Configuration Guide. You will use the Okta to SAML integration and configure an optional attribute to map as the SourceIdentity.

To set up Okta with SourceIdentity

  1. Log in to the Okta admin console.
  2. Navigate to Applications–AWS.
  3. In the top navigation bar, select the Sign On tab, as shown in Figure 1.

    Figure 1 – Navigate to attributes in SAML settings on the Okta applications page

  4. Under Sign on methods, select SAML 2.0, and choose the arrow next to Attributes (Optional) to expand, as shown in Figure 2.

    Figure 2 – Add new attribute SourceIdentity and map it to Okta provided attribute of your choice

  5. Add the optional attribute definition for SourceIdentity using the following parameters:
    • For Name, enter:
      https://aws.amazon.com/SAML/Attributes/SourceIdentity
    • For Name format, choose URI Reference.
    • For Value, enter user.login.

    Note: The Name format options are the following:
    Unspecified – can be any format defined by the Okta profile and must be interpreted by your application.
    URI Reference – the name is provided as a Uniform Resource Identifier string.
    Basic – a simple string; the default if no other format is specified.

The examples shown in Figure 1 and Figure 2 show how to map an email address to the SourceIdentity attribute by using an on-premises Active Directory sync. The SourceIdentity can be mapped to other attributes from your Active Directory.

Configure the SourceIdentity attribute with PingOne integration

You do this portion of the configuration in the Ping Identity administrative console. This procedure assumes that you have a previously configured AWS and Ping integration. If not, you can set up the PingFederate AWS Connector by following the Ping Identity instructions Configuring an SSO connection to Amazon Web Services.

You’re using the Ping to SAML integration and configuring an optional attribute to map as the source identity.

Configuring PingOne as an IdP involves setting up an identity repository (in this case, the PingOne Directory), creating a user group, and adding users to the individual groups.

To configure PingOne as an IdP for AWS

  1. Navigate to https://admin.pingone.com/ and log in using your administrator credentials.
  2. Choose the My Applications tab, as shown in Figure 3.

    Figure 3. PingOne My Applications tab

  3. On the Amazon Web Services line, choose the arrow on the right side to show the application details, edit them, and add a new attribute for the source identity.
  4. Choose Continue to Next Step to open the Attribute Mapping section, as shown in Figure 4.

    Figure 4. Attribute mappings

  5. In the Attribute Mapping section line 1, for SAML_SUBJECT, choose Advanced.
  6. On the Advanced Attribute Options page, for Name ID Format to send to SP select urn:oasis:names:tc:SAML:2.0:nameid-format:persistent. For IDP Attribute Name or Literal Value, select SAML_SUBJECT, as shown in Figure 4.

    Figure 5. Advanced Attribute Options for SAML_SUBJECT

  7. In the Attribute Mapping section line 2 as shown in Figure 4, for the application attribute https://aws.amazon.com/SAML/Attributes/Role, select Advanced.
  8. On the Advanced Attribute Options page, for Name Format, select urn:oasis:names:tc:SAML:2.0:attrname-format:uri, as shown in Figure 6.

    Figure 6. Advanced Attribute Options for https://aws.amazon.com/SAML/Attributes/Role

  9. In the Attribute Mapping section line 2 as shown in Figure 4, select As Literal.
  10. For IDP Attribute Name or Literal Value, format the role and provider ARNs (which are not yet created on the AWS side) in the following format. Be sure to replace the placeholders with your own values. Make a note of the role name and SAML provider name, as you will be using these exact names to create an IAM role and an IAM provider on the AWS side.

    arn:aws:iam::<AWS_ACCOUNT_ID>:role/<IAM_ROLE_NAME>,arn:aws:iam::<AWS_ACCOUNT_ID>:saml-provider/<SAML_PROVIDER_NAME>

  11. In the Attribute Mapping section line 3 as shown in Figure 4, for the application attribute https://aws.amazon.com/SAML/Attributes/RoleSessionName, enter Email (Work).
  12. In the Attribute Mapping section as shown in Figure 4, to create line 5, choose Add a new attribute in the lower left.
  13. In the newly added Attribute Mapping section line 5 as shown in Figure 4, add the SourceIdentity.
    • For Application Attribute, enter:
      https://aws.amazon.com/SAML/Attributes/SourceIdentity
    • For Identity Bridge Attribute or Literal Value, enter:
      SAML_SUBJECT
  14. Choose Continue to Next Step in the lower right.
  15. For Group Access, add your existing PingOne Directory Group to this application.
  16. Review your setup configuration, as shown in Figure 7, and choose Finish.

    Figure 7. Review mappings

Configure the SourceIdentity attribute with OneLogin integration

For the OneLogin SAML integration with AWS, you use the Amazon Web Services Multi Account application and configure an optional attribute to map as the SourceIdentity. You do this portion of the configuration in the OneLogin administrative console.

This procedure assumes that you already have a previously configured AWS and OneLogin integration. For information about how to configure the OneLogin application for AWS authentication and authorization, see the OneLogin KB article Configure SAML for Amazon Web Services (AWS) with Multiple Accounts and Roles.

After the OneLogin Multi Account application and AWS are correctly configured for SAML login, you can further customize the application to pass the SourceIdentity parameter upon login.

To change OneLogin configuration to add SourceIdentity attribute

  1. In the OneLogin administrative console, in the Amazon Web Services Multi Account application, on the app administration page, navigate to Parameters, as shown in Figure 8.

    Figure 8. OneLogin AWS Multi Account Application Configuration Parameters

  2. To add a parameter, choose the + (plus) icon to the right of Value.
  3. As shown in Figure 9, for Field Name enter https://aws.amazon.com/SAML/Attributes/SourceIdentity, select Include in SAML assertion, then choose Save.
    Figure 9. OneLogin AWS Multi Account Application add new field

  4. In the Edit Field page, select the default value you want to use for SourceIdentity. For the example in this blog post, for Value, select Email, then choose Save, as shown in Figure 10.
    Figure 10. OneLogin AWS Multi Account Application map new field to email

After you’ve completed this procedure, review the final mapping details, as shown in Figure 11, to confirm that you see the additional parameter that will be passed into AWS through the SAML assertion.

Figure 11. OneLogin AWS Multi Account Application final mapping details

Configuring AWS IAM role trust policy

Now that the IdP configuration is complete, you can enable your AWS accounts to use SourceIdentity by modifying the IAM role trust policy.

For the workforce identity or application to be able to define their source identity when they assume IAM roles, you must first grant them permission for the sts:SetSourceIdentity action, as illustrated in the sample policy document below. This will permit the workforce identity or application to set the SourceIdentity themselves without any need for manual intervention.

To modify an AWS IAM role trust policy

  1. Log in to the AWS Management Console for your account as a user with privileges to configure an IdP, typically an administrator.
  2. Navigate to the AWS IAM service.
  3. For trusted identity, choose SAML 2.0 federation.
  4. From the SAML Provider drop down menu, select the IAM provider you created previously.
  5. Modify the role trust policy and add the SetSourceIdentity action.

Sample policy document

This is a sample policy document attached to a role you assume when you log in to Account1 from the Okta dashboard. Edit your Account1/Role1 trust policy document and add sts:AssumeRoleWithSAML and sts:SetSourceIdentity to the Action section.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<AccountId>:saml-provider/<IdP>"
      },
      "Action": [
        "sts:AssumeRoleWithSAML",
        "sts:SetSourceIdentity"
      ],
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        }
      }
    }
  ]
}

Notes: The SetSourceIdentity action has to be allowed in the trust policy for the role assumption to work when the IdP is set up to pass SourceIdentity in the assertion. Future versions of the sign-in URL may contain a Region code. When this occurs, you will need to modify the URL appropriately.

Policy statement

The following are examples of how the line "Federated": "arn:aws:iam::<AccountId>:saml-provider/<IdP>" should look, based on the different IdPs specified in this post:

  • "Federated": "arn:aws:iam::12345678990:saml-provider/Okta"
  • "Federated": "arn:aws:iam::12345678990:saml-provider/PingOne"
  • "Federated": "arn:aws:iam::12345678990:saml-provider/OneLogin"

Modify Account2/Role2 policy statement

The following is a sample access control policy document in Account2 for Role2 that allows you to switch roles from Account1. Edit the role’s trust policy and add sts:AssumeRole and sts:SetSourceIdentity in the Action section.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AccountID>:root"
      },
      "Action": [
        "sts:AssumeRole",
        "sts:SetSourceIdentity"
      ] 
    }
  ]
}
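
Because the Account1/Role1 session already carries a source identity from the SAML assertion, switching into Role2 propagates it automatically; the sts:SetSourceIdentity permission in the trust policy above is what allows that propagation. A sketch of the role switch using boto3, with placeholder credentials and ARN:

import boto3

# Placeholder temporary credentials from the federated Account1/Role1 session,
# which already has a source identity set from the SAML assertion.
role1_session = boto3.Session(
    aws_access_key_id="ASIAEXAMPLE",
    aws_secret_access_key="EXAMPLESECRETKEY",
    aws_session_token="EXAMPLESESSIONTOKEN",
)

# Switch to Role2 in Account2. The existing source identity carries over into
# the new session and is recorded by CloudTrail in Account2 as well.
role2 = role1_session.client("sts").assume_role(
    RoleArn="arn:aws:iam::444455556666:role/Role2",
    RoleSessionName="switch-to-account2",
)
print(role2["AssumedRoleUser"]["Arn"])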

Trace the SourceIdentity attribute in AWS CloudTrail

Use the following procedure for each IdP to illustrate passing a corporate directory attribute mapped as the SourceIdentity.

To trace the SourceIdentity attribute in AWS CloudTrail

  1. Use an IdP to log in to an account Account1 (111122223333) using a role named Role1.
  2. Create a new Amazon Simple Storage Service (Amazon S3) bucket in Account1.
  3. Validate that the CloudTrail log entries for Account1 contain the Active Directory mapped SourceIdentity.
  4. Use the Switch Role feature to switch to a second account Account2 (444455556666), using a role named Role2.
  5. Create a new Amazon S3 bucket in Account2.

To summarize what you’ve done so far, you have:

  • Configured your corporate directory to pass a unique attribute to AWS as the source identity.
  • Configured a role that will persist the SourceIdentity attribute in AWS STS, which an employee will use to federate into your account.
  • Configured an Amazon S3 bucket that the user will access.

Now you’ll observe in CloudTrail the SourceIdentity attribute that will be associated with every IAM action.
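
If you prefer a programmatic spot check, the sketch below uses boto3 to look up recent CreateBucket events and print the sourceIdentity recorded for each (it assumes credentials for the account whose trail you want to inspect):

import json
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent CreateBucket events and print who actually performed them.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "CreateBucket"}],
    MaxResults=10,
)
for event in events["Events"]:
    detail = json.loads(event["CloudTrailEvent"])
    identity = detail.get("userIdentity", {})
    source_identity = identity.get("sessionContext", {}).get("sourceIdentity", "<not set>")
    print(event["EventName"], identity.get("arn", "<unknown>"), source_identity)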

To see the SourceIdentity attribute in CloudTrail

  1. From your preferred IdP dashboard, select the AWS tile to log in to the AWS console. The example in Figure 12 shows the Okta dashboard.
    Figure 12. Login to AWS from IdP dashboard

  2. Choose the AWS icon, which will take you to the AWS Management Console. Notice how the user has assumed the role you created earlier.
  3. To test the SourceIdentity action, you will create a new Amazon S3 bucket.

    Amazon S3 bucket names are globally unique, and the namespace is shared by all AWS accounts, so you will need to create a unique bucket name in your account. For this example, we used a bucket named DOC-EXAMPLE-BUCKET1 to validate CloudTrail log entries containing the SourceIdentity attribute.

  4. Log into an account Account1 (111122223333) using a role named Role1.
  5. Next, create a new Amazon S3 bucket in Account1, and validate that the Account1 CloudTrail logs entries contain the SourceIdentity attribute.
  6. Create an Amazon S3 bucket called DOC-EXAMPLE-BUCKET1, as shown in Figure 13.
    Figure 13. Create S3 bucket

  7. In the AWS Management Console, go to CloudTrail and check the log entry for the bucket creation event, as shown in Figure 14.
    Figure 14 – Bucket creating entry in CloudTrail

Sample CloudTrail entry showing SourceIdentity entry

The following example shows the new sourceIdentity entry added to the JSON message for the CreateBucket event above.

{"eventVersion":"1.08",
"userIdentity":{
    "type":"AssumedRole",
    "principalId":"AROA42BPHP3V5TTJH32PZ:sourceidentitytest",
    "arn":"arn:aws:sts::111122223333:assumed-role/idsol-org-admin/sourceidentitytest",
    "accountId":"111122223333",
    "accessKeyId":"ASIA42BPHP3V2QJBW7WJ",
    "sessionContext":{
        "sessionIssuer":{
            "type":"Role",
            "principalId":"AROA42BPHP3V5TTJH32PZ",
            "arn":"arn:aws:iam::111122223333:role/idsol-org-admin",
            "accountId":"111122223333","userName":"idsol-org-admin"
        },
        "webIdFederationData":{},
        "attributes":{
            "mfaAuthenticated":"false",
            "creationDate":"2021-05-05T16:29:19Z"
        },
        "sourceIdentity":"<[email protected]>"
    }
},
"eventTime":"2021-05-05T16:33:25Z",
"eventSource":"s3.amazonaws.com",
"eventName":"CreateBucket",
"awsRegion":"us-east-1",
"sourceIPAddress":"203.0.113.0"
  1. Switch to Account2 (444455556666) using the Switch Role feature, assuming the role assumeRoleSourceIdentity.
  2. Create a new Amazon S3 bucket in Account2 and validate that the Account2 CloudTrail log entries contain the SourceIdentity attribute, as shown in Figure 15.
    Figure 15 – Switch role to assumeRoleSourceIdentity

  3. Create a new Amazon S3 bucket in account2 called DOC-EXAMPLE-BUCKET2, as shown in Figure 16.
    Figure 16 – Create DOC-EXAMPLE-BUCKET2 bucket while logged into account2 using assumeRoleSourceIdentity

  4. Check the CloudTrail logs for account2 (444455556666) to see if the original SourceIdentity is logged, as shown in Figure 17.
    Figure 17 – CloudTrail log entry for the above action

CloudTrail entry showing original SourceIdentity after assuming a role

{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AROAVC5CY2KJCIXJLPMQE:sourceidentitytest",
        "arn": "arn:aws:sts::444455556666:assumed-role/s3assumeRoleSourceIdentity/sourceidentitytest",
        "accountId": "444455556666",
        "accessKeyId": "ASIAVC5CY2KJIAO7CGA6",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AROAVC5CY2KJCIXJLPMQE",
                "arn": "arn:aws:iam::444455556666:role/s3assumeRoleSourceIdentity",
                "accountId": "444455556666",
                "userName": "s3assumeRoleSourceIdentity"
            },
            "webIdFederationData": {},
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "2021-05-05T16:47:41Z"
            },
            "sourceIdentity": "<[email protected]>"
        }
    },
    "eventTime": "2021-05-05T16:48:53Z",
    "eventSource": "s3.amazonaws.com",
    "eventName": "CreateBucket",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "203.0.113.0",

You logged into Account1/Role1 and switched to Account2/Role2. All the user activities performed in AWS using the Assume Role were also logged with the original user’s sourceIdentity attribute. This makes it simple to trace user activity in CloudTrail.

Conclusion

Now that you have configured your SourceIdentity, you have made it easier for the security team of your organization to use CloudTrail logs to investigate and identify the originating identity of a user. In this post, you learned how to configure the AWS STS SourceIdentity attribute for three different popular IdPs, as well as how to configure each IdP using SAML and their optional attributes. We also provided sample control policy documents outlining how to configure the SourceIdentity for each provider. Additionally, we provide a sample policy for setting the SourceIdentity when switching roles. Lastly, the post walks through how the source identity will show in CloudTrail logs, and provides logs from two accounts to demonstrate the continuance of the source identity attribute. You can now test this capability yourself in your own environment, validate activity in your CloudTrail logs, and determine which user performed a specific action while using the assumeRole functionality.

 
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Keith Joelner

Keith is a Solution Architect at Amazon Web Services working in the ISV segment. He is based in the San Francisco Bay area. Since joining AWS in 2019, he’s been supporting Snowflake and Okta. In his spare time Keith likes woodworking and home improvement projects.

Nitin Kulkarni

Nitin is a Solutions Architect on the AWS Identity Solutions team. He helps customers build secure and scalable solutions on the AWS platform. He also enjoys hiking, baseball and linguistics.

Ramesh Kumar Venkatraman

Ramesh Kumar Venkatraman is a Solutions Architect at AWS who is passionate about containers and databases. He works with AWS customers to design, deploy and manage their AWS workloads and architectures. In his spare time, he loves to play with his two kids and follows cricket.

Eddie Esquivel

Eddie is a Sr. Solutions Architect in the ISV segment. He spent time at several startups focusing on Big Data and Kubernetes before joining AWS. Currently, he’s focused on management and governance and helping customers make the best use of AWS technology. In his spare time he enjoys spending time outdoors with his wife and pet dog.

Building an event-driven application with Amazon EventBridge

Post Syndicated from Talia Nassi original https://aws.amazon.com/blogs/compute/building-an-event-driven-application-with-amazon-eventbridge/

In event-driven architecture, services interact with each other through events. An event is something that happened in your application (for example, an item was put into a cart, a new order was placed). Events are JSON objects that tell you information about something that happened in your application. In event-driven architecture, each component of the application raises an event whenever anything changes. Other components listen and decide what to do with it and how they would like to react.

When you build applications with event-driven architecture, you decouple your event sources and event targets. This can enable teams to act more independently, because your services are loosely coupled. When you add new features to your applications, you raise new events and then decide on the event source and event target. The event source is what emits the event, and the event target is what subscribes to or receives the event. Decoupling event sources and event targets can greatly speed up development time, and it can simplify making changes to your application.

Decoupling your application can allow for more seamless cross-team collaboration. For example, let’s say you are a developer at an ecommerce company and you are building a serverless ecommerce application. Your team is in charge of the account creation and authentication process. You build the login workflow, and raise an event when a new user creates an account.

When the event is raised, other teams can be alerted. The marketing team can listen for the Account Created event and act on it (for example, send promotional emails). In this decoupled architecture, event producers and consumers don’t have to know about each other. They only have to listen to events and act accordingly when they are interested in an event. This can speed up development by reducing the complexity caused by building new features.

In AWS, events are choreographed through Amazon EventBridge rules. A rule matches incoming events from an event source and sends them to event targets for processing.

EventBridge accepts events from many different event sources, including over 200 AWS services, custom events from your Lambda functions or applications, and third-party SaaS applications. You specify an action to take when EventBridge receives an event that matches the event pattern in the rule. When an event matches, Amazon EventBridge sends the event to the specified target and triggers the action defined in the rule.

To route events from these sources to the correct target, the events must be placed on a corresponding event bus. There are three types of event buses. The first type is the default bus, which is always available in every account, and it’s where AWS events are routed. The second type is a custom event bus; you can create custom event buses for your own applications to meet your business needs. Lastly, SaaS event buses are created when you configure SaaS applications as an event source.

There are many potential event targets. Event targets are what the event bus routes events to when a matching event occurs. Targets include AWS Lambda, Amazon Kinesis, AWS Step Functions, Amazon API Gateway, and even event buses in other accounts. This flexible design allows you to create a wide variety of integration patterns based on your specific needs.
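
For example, raising a custom event such as the account-creation event described above takes a single PutEvents call. A minimal boto3 sketch, with hypothetical source and detail-type names:

import json
import boto3

events = boto3.client("events")

# Hypothetical custom event: the account service announces a new sign-up.
# Any rule whose pattern matches this source/detail-type routes it to its targets.
events.put_events(
    Entries=[
        {
            "EventBusName": "default",          # or a custom event bus
            "Source": "com.example.accounts",   # hypothetical event source name
            "DetailType": "AccountCreated",     # hypothetical detail type
            "Detail": json.dumps({"userId": "12345", "plan": "free"}),
        }
    ]
)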

Configuring events with Amazon EventBridge

This tutorial sends an event from Amazon S3 (the event source) to AWS Lambda (the event target) using an event rule.

In this tutorial, you learn how to configure events with Amazon EventBridge by deploying an AWS Serverless Application Model template. The AWS Serverless Application Model (AWS SAM) is an open-source framework for building serverless applications. It provides shorthand syntax to express functions, APIs, databases, and event source mappings. AWS SAM is an extension of AWS CloudFormation, which is the AWS infrastructure as code tool. You define resources using CloudFormation in your AWS SAM template and use the full suite of resources, intrinsic functions, and other template features that are available in AWS CloudFormation.

First, you upload images to an S3 bucket. This raises an event, which invokes a Lambda function, which resizes the image and places it in a different S3 bucket.
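
Before diving into the deployment steps, here is a minimal sketch of what the event-handling side of such an application might look like in Python. It is not the function deployed by the pattern (which resizes images); the handler below simply reads the bucket and object key from the EventBridge-delivered S3 “Object Created” event and copies the object to a destination bucket named by an environment variable. The environment variable and bucket names are assumptions.

import os
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # EventBridge wraps the S3 "Object Created" notification in the "detail" field.
    detail = event["detail"]
    source_bucket = detail["bucket"]["name"]
    key = detail["object"]["key"]

    # A real implementation would resize the image here; this sketch just
    # copies it to a destination bucket named in an environment variable.
    destination_bucket = os.environ["DESTINATION_BUCKET"]
    s3.copy_object(
        Bucket=destination_bucket,
        Key=key,
        CopySource={"Bucket": source_bucket, "Key": key},
    )
    return {"copied": key}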

Prerequisites:

  1. AWS SAM CLI (If you use AWS Cloud9, this is installed for you)

To configure events with Amazon EventBridge:

  1. Navigate to the Serverlessland Patterns Collection and choose the pattern Amazon S3 to Amazon EventBridge to AWS Lambda. This AWS SAM template deploys an S3 bucket, a Lambda function, an EventBridge rule, and the IAM resources required to run the application.
  2. Copy and paste the cloning instructions in your terminal.
  3. Run sam deploy --guided to deploy the pattern.
  4. You see the success message:
  5. Navigate to the EventBridge console and choose Rules from the left panel. Then choose the rule that was created by AWS SAM (starting with sam-app)
    The event source is S3 and the rule is invoked when an image is put into the source bucket in S3. Next, notice that the event target is the Lambda function that you created from the AWS SAM template.
  6. Navigate to the S3 console and choose Buckets on the left panel. Then choose the bucket that was created for you (starting with sam-app). Choose the Properties tab, and note that the integration with EventBridge is on.
  7. From the Objects tab, choose Upload, and upload an image.

  8. Navigate to the Lambda console and choose your Lambda function (starting with sam-app). Select the Monitor tab, and choose View Logs in CloudWatch.
  9. You can see the event that triggered the Lambda function in the logs:

Adding more event rules to your application

In the previous example, you added an EventBridge rule that routes events from S3 (the event source) to Lambda (the event target) using an event bus. Now, add another rule:

  1. From the EventBridge console, choose Rules, and then choose Create rule.
  2. Enter a name and description for the rule.
  3. Define the event pattern that is used to invoke the event targets. For Service provider, choose AWS, and for Service name choose S3. For Event type, choose Amazon S3 event notification, and in the event dropdown choose Object created. You are configuring the event source to be an object created in your S3 bucket.
  4. Select either the default AWS event bus or a custom event bus.
  5. Select the event target. In this example, configure an Amazon CloudWatch log group. Enter any name for the log group, which is created automatically.
  6. Choose Create.
  7. Upload an image to the S3 bucket, as shown in step 7 above.
  8. Navigate to the Amazon CloudWatch console and choose Log groups from the left panel. Choose the log group, and then choose a log stream.
  9. The event is logged to CloudWatch Logs.

Adding a second event rule did not change the event source’s behavior or affect other event targets.
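
The same rule can also be expressed programmatically. Below is a hedged boto3 equivalent of the console steps above: an event pattern matching S3 “Object Created” notifications for a specific bucket, with a CloudWatch Logs log group as the target. The bucket and log group names are placeholders, and a resource policy allowing EventBridge to write to the log group may also be required.

import json
import boto3

events = boto3.client("events")

# Match S3 "Object Created" notifications for a specific bucket (placeholder name).
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {"bucket": {"name": ["sam-app-sourcebucket-example"]}},
}

# The rule is created on the default event bus, where AWS events arrive.
events.put_rule(
    Name="s3-object-created-to-logs",
    EventPattern=json.dumps(pattern),
)

# Send matched events to a CloudWatch Logs log group (placeholder ARN).
events.put_targets(
    Rule="s3-object-created-to-logs",
    Targets=[
        {
            "Id": "log-group-target",
            "Arn": "arn:aws:logs:us-east-1:123456789012:log-group:/aws/events/s3-object-created",
        }
    ],
)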

Conclusion

This post is a brief introduction of event-driven architecture, and walks through a tutorial where you create an event-driven application with the Serverlessland Patterns Collection. You also add two different event rules to your event bus.

For more serverless learning resources, visit Serverless Land.

AWS Week in Review – April 11, 2022

Post Syndicated from Channy Yun original https://aws.amazon.com/blogs/aws/aws-week-in-review-april-11-2022/

This post is part of our Week in Review series. Check back each week for a quick round up of interesting news and announcements from AWS!

As spring arrives in the Northern Hemisphere, tulips, sunshine, and cherry blossoms finally appear to be in bloom—surely signs of warmer days to come in North America, Asia, and Europe. I hope you enjoy the spring and, in the Southern Hemisphere, fall season with your family.

Let’s look at the second edition of the AWS Week in Review for the month of April!

Last Week’s Launches
Here are some launches that caught my attention last week:

New Amazon EC2 Single Page Instance Launching Console – As Jeff introduced, the Amazon EC2 console now offers a new and improved launch experience—a quicker and easier way to launch an instance. The new design provides a single page layout, allowing you to view all your settings in one location. You no longer need to navigate back and forth between steps to ensure your configuration is correct. The new design also introduces a summary panel that provides an overview and helps navigate the page. Quickly get started by following the simple steps and see the EC2 documentation to learn more.

Unified Settings in the AWS Management Console – New Unified Settings will persist across devices, browsers, and services. It supports settings such as default language, Region, visual theme (light or dark mode), and the favorites bar display (either the service icon with its full name, or only the service icon). You can access Unified Settings by signing in to the AWS Management Console, navigating to the account menu, and selecting Settings in all AWS Regions.

AWS Lambda Function URLs – This is really big news! AWS Lambda Function URLs is a new feature that makes it easier to invoke functions through an HTTPS endpoint as a built-in capability of the AWS Lambda service. You can add Function URLs to new and existing functions easily from the Lambda console. Function URLs are ideal for getting started with building web services on Lambda or for common tasks like building webhooks. To get started quickly and learn more, see Alex’s blog post.
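
As a quick illustration (not from the launch post), a function URL can also be created with the AWS SDK. The sketch below uses boto3 with a placeholder function name and AuthType NONE, which skips IAM auth and is only appropriate for experimentation; for public invocation you may also need a resource-based permission on the function.

import boto3

lambda_client = boto3.client("lambda")

# Create a public (unauthenticated) function URL for an existing function.
# The function name is a placeholder; AuthType="NONE" skips IAM auth entirely,
# so use it only for experimentation.
response = lambda_client.create_function_url_config(
    FunctionName="my-webhook-function",
    AuthType="NONE",
)

print(response["FunctionUrl"])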

Amazon CloudWatch Metrics Insights is Now Generally Available – As a fast, flexible, SQL-based query engine, Amazon CloudWatch Metrics Insights enables you to identify trends and patterns across millions of operational metrics in real time and helps you use these insights to reduce time to resolution. With Metrics Insights, you can gain better visibility into your infrastructure and large-scale application performance with flexible querying and on-the-fly metric aggregations. To get started, select the All metrics link under Metrics on the left navigation panel of the CloudWatch console and browse to the Query tab. To learn more, see the Metrics Insights documentation.
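
For a flavor of what a Metrics Insights query looks like outside the console, here is a hedged boto3 sketch that runs a query through the GetMetricData API; the query text, namespace, and time range are illustrative assumptions.

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

# Run a Metrics Insights SQL query over the last hour of EC2 CPU metrics.
response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "q1",
            # Average CPU per instance (illustrative query).
            "Expression": 'SELECT AVG(CPUUtilization) FROM SCHEMA("AWS/EC2", InstanceId) GROUP BY InstanceId',
            "Period": 300,
        }
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
)

for result in response["MetricDataResults"]:
    print(result["Label"], result["Values"])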

AWS Amplify Studio’s New File Storage and File Management – This new feature makes it easy to store and serve user-generated content (such as photos and videos) from web or mobile apps. With Amplify Studio, you can easily create an Amazon Simple Storage Service (Amazon S3) bucket, configure file access levels, integrate storage client libraries into your web or mobile app, and manage files in Studio’s drag-and-drop file explorer. Get started by reading Nikhil’s blog post on how to provision Storage directly from your Amplify Studio.

You can either select Upload files or drag and drop files onto your browser

For a full list of AWS announcements, be sure to keep an eye on the What’s New at AWS page.

Other AWS News
Here are some featured news items about open-source and community support at AWS in the last week:

Amazon Athena ACID Transactions Powered by Apache Iceberg – We announced the general availability of Amazon Athena ACID transactions, a new capability that adds insert, update, delete, and time travel operations to Athena’s SQL data manipulation language (DML). Built on the Apache Iceberg table format, Athena ACID transactions are optimized for Amazon S3 storage, support seamless schema evolution, and ensure atomic operations across other services and engines that support the Iceberg table format. To learn more, see Using Amazon Athena Transactions and Using Iceberg Tables in the Athena User Guide.

Amazon OpenSearch Service Now Supports OpenSearch 1.2 – We launched support for OpenSearch 1.0 on Amazon OpenSearch Service in September 2021 and for OpenSearch 1.1 in January 2022. Support for OpenSearch 1.2 adds features such as transforms, data streams, notebooks, cross-cluster replication, and improvements to anomaly detection and alerting.

Amazon EKS Now Supports Kubernetes 1.22 – Customers can start taking advantage of the numerous enhancements and new generally available APIs in Kubernetes 1.22. In line with the Kubernetes community support for Kubernetes versions, Amazon EKS is committed to supporting at least four production-ready versions of Kubernetes at any given time. You can learn about how to upgrade your EKS version in our blog posts Amazon EKS now supports Kubernetes 1.22 and Planning Kubernetes Upgrades with Amazon EKS.
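
As a small, hedged illustration of the upgrade step (not from the announcement), the control plane version can be bumped with boto3; the cluster name is a placeholder, and managed node groups and add-ons still need to be upgraded separately.

import boto3

eks = boto3.client("eks")

# Upgrade the control plane of an existing cluster to Kubernetes 1.22.
# The cluster name is a placeholder; upgrade managed node groups and
# add-ons separately after the control plane update completes.
response = eks.update_cluster_version(
    name="my-cluster",
    version="1.22",
)

print(response["update"]["status"])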

The New AWS Community Builders Directory – You can find over 800 AWS Community Builders in the global directory. Community Builders are technical enthusiasts and emerging thought leaders who are passionate about sharing knowledge and connecting with the technical community. You can contact all Community Builders in the directory to engage the AWS Community in your Region. To see the content they have created and shared, check them out on dev.to.

Upcoming AWS Events
Check your calendars and sign up for these AWS events:

AWS Summits in the Asia-Pacific Are Back – I am happy to announce newly scheduled AWS Summits Online in the Asia-Pacific Regions such as Korea (on May 10–11), ASEAN (on May 18), and Australia & New Zealand (on May 18–19). More in-person summits in May are coming in Madrid (on May 4), Stockholm (on May 11), Berlin (on May 11–12), Tel Aviv (on May 18), and Atlanta (on May 18–19). Find an AWS Summit near you!

AWS Online Tech Talks for April – These talks cover a range of topics and expertise levels and feature technical deep dives, demonstrations, customer examples, and live Q&A with AWS experts. Over 20 virtual or on-demand seminars have been scheduled from April 18–29. You can also find archived on-demand videos from previous AWS Online Tech Talks.

AWS Solutions-Focused Immersion Days – This is a series of events that are designed to educate you about AWS products and services and help you develop the skills needed to build, deploy, and operate your infrastructure and applications in the cloud. Hands-on labs provide you with an immersive experience in the AWS console. Join us to learn how to build on AWS.

To find out more about AWS events and webinars, explore the AWS Events page.

That’s all for this week. Check back next Monday for another Week in Review!

Channy

The 2022 French Presidential election leaves its mark on the Internet

Post Syndicated from João Tomé original https://blog.cloudflare.com/elections-france-2022/

The first round of the 2022 French presidential election was held this past Sunday, April 10, 2022, and a run-off will be held on April 24 between the top two candidates, Emmanuel Macron and Marine Le Pen. Looking at Internet trends in France for Sunday, it appears that Internet traffic went down while people were voting and, no surprise, went back up when results came in — that includes major spikes to news and election-related websites.

Cloudflare Radar data shows that Sundays are usually high-traffic days in France. But this Sunday looked a little different.

The seven-day Radar chart shows that there was a decrease in traffic compared to the previous Sunday between 08:00 and 16:00 UTC (10:00 to 18:00 local time) — bear in mind that polling stations in France were open between 08:00 and 19:00 (or 20:00 in big cities) local time. So, the decrease in traffic fell within the period when French citizens were allowed to vote.

The 2022 French Presidential election leaves its mark on the Internet

That’s similar to the trend we have seen in other elections, such as the Portuguese election back in January 2022.

The time of the French election day with the largest difference compared to the previous Sunday was 14:00 UTC (16:00 in local time), when traffic decreased as much as 16% (as the previous 7-day chart shows). That’s clear in this chart:

The 2022 French Presidential election leaves its mark on the Internet

That doesn’t show us precisely how people use the Internet differently on an election day — note that we already saw in the past how the weather, times of the year or even events affect human behaviour and subsequently Internet trends.

Let’s look deeper into those trends. We know that weekdays, weekends and even Sundays have, in many countries, specific patterns so, when we compare the previous four Sundays in France since March 20, we can see some trends highlighted in the next chart:

  • April 10, Election Day, was the Sunday with the most traffic of the previous month at 06:30 UTC (08:30 local time) and in several periods between 16:30 and 20:45 UTC (18:30 and 22:45 local time).
  • April 10, Election Day, was the Sunday with the least traffic of the previous month in several periods between 09:45 and 11:15 (11:45 and 13:15 local time), and it ranked third out of the four Sundays for lowest traffic between 12:15 and 16:15 (14:15 and 18:15 local time).
The 2022 French Presidential election leaves its mark on the Internet

This seems to show a pattern: before going to vote, more people than usual were online on Sunday, Election Day (08:30 local time), but traffic went down considerably in the late morning (11:30–13:15 local time) and again after lunch (14:15–18:15 local time), shortly before the polling stations closed.

The first exit polls started to be published around 18:40 local time (seen in the second and biggest green circle in the previous chart), but the main exit poll came at 20:00 local time, when all the polling stations were already closed. At that time, Internet traffic in France was at its highest compared with Sundays during the past 30 days (seen in the third green circle in the previous chart, 18:00 UTC).

How about mobile devices’ usage trends? People in France were definitely using their mobile devices more on Election Day, and that is also evident when compared to the previous Sunday, April 3.

On Election Day, April 10, 2022, at around 09:00 local time mobile usage represented 60% of Internet traffic and had another spike at 21:00 local time with 58% (the seven-day average for mobile usage in France is 48%).

The 2022 French Presidential election leaves its mark on the Internet

When results arrive, people go online

Official websites usually aren’t the most popular sites in a given country; their popularity is mostly tied to moments when citizens have to fill in their tax forms online or want to see something like election results — although news media outlets are also important there. Here we’re looking at DNS request trends to get a sense of traffic to Internet properties.


Official French election-related websites like elections.interieur.gouv.fr (where the results are published) had an increase in traffic throughout the week mainly after Monday, April 4, but on election day there were two major spikes.

The 2022 French Presidential election leaves its mark on the Internet

The first spike in traffic was around 20:00 local time (370% more than the previous Sunday at the same time), when all the polling stations were already closed and the first major polls were revealed. But the main spike was later, at midnight (local time), when 84% of the votes were already counted and published — Macron was leading (27%) followed closely by Le Pen (25%). That spike represented 925% more requests than in the previous Sunday.

The news Internet traffic spike ‘knocks’ at 20:00

When there are elections in a country, people tend to follow the analysis and results using media outlets from radio to TV, but also the Internet — media websites and social media. Let’s focus on French media outlets. The biggest spike of the week in our aggregate DNS chart, which shows trends from 12 news websites, was definitely on Election Day, around 20:00 local time, when those domains had 116% more traffic than at the same time on the previous Sunday.

The 2022 French Presidential election leaves its mark on the Internet

Nonetheless, after 16:00 local time, traffic to those news outlets started to increase, and by 18:00 local time it was already in the middle of its largest spike of the week, with sustained growth until 20:00. At 23:00 local time there was another increase in traffic, and after that it started to decrease. But this Monday morning, traffic at 08:00 was already higher than during the previous week (Election Day excluded). So, no surprise, Sunday night was when people were looking most at the news.

The same trend is seen on the major French TV station websites, with an even more isolated spike at 20:00 local time and a 472% increase in traffic compared to the previous Sunday, when the main exit polls were announced.

The 2022 French Presidential election leaves its mark on the Internet

This was also similar to the broadcast radio website trends. Besides the 20:00 local time spike (272% increase compared to the previous Sunday), there was also a big one at 23:00 local time (300%) and a Monday morning spike with higher than before traffic (82% increase):

The 2022 French Presidential election leaves its mark on the Internet

How about social media?

Looking at the aggregate DNS traffic of several social media sites in France, there’s no clear election-related trend; there were actually slightly fewer requests than on the previous Sunday. So social media doesn’t appear to have been as affected by the elections as news websites.

The 2022 French Presidential election leaves its mark on the Internet

Conclusion

Although there weren’t big changes in Internet traffic, like those seen in countries that shut down the Internet during election periods, Election Day seems to influence human and Internet patterns. In this case, when results started to pour in on election night, people went to news or official election websites.

You can keep an eye on these trends using Cloudflare Radar.

Performance at GitHub: deferring stats with rack.after_reply

Post Syndicated from blakewilliams original https://github.blog/2022-04-11-performance-at-github-deferring-stats-with-rack-after_reply/

Performance is an essential aspect of any production application, and GitHub is no exception. In the last year, we’ve been making significant investments to improve the performance of some of our most critical workflows. There’s a lot to share, but today we’re going to focus on a little-known feature of some Rack-powered web servers called rack.after_reply that saved us 30ms p50 and 50ms+ p99 on every request across the site.

The problem

First, let me talk about how we found ourselves looking at this Rack web server functionality. When diving into performance, we often look at flamegraphs that reveal where time is being spent for a given request. Eventually, we started to notice a pattern. It turned out we were spending a significant amount of time sending statsd metrics throughout a request. In fact, it could be up to 65ms per request! While telemetry enables us to improve GitHub, it doesn’t help users directly, especially when it impacts the performance of our page loads.

There’s no need to send telemetry immediately, especially if it’s expensive, so we started discussing ways to move telemetry network calls out of the critical path. This resulted in the question, “Can we batch all of our telemetry calls and send them at once?” There’s far too much data to use a background job, so we had to look at other potential solutions. We came across Rack::Events and spiked out a proof of concept. Unfortunately, this approach caused any work performed in Rack::Events callbacks to block the response from closing. This resulted in two issues. First, users would see the browser loading indicator until the deferred code was complete. Second, this impacted our metrics since response times still included the time it took to run our deferred code.

A potential path forward

Recognizing the potential, we kept digging. Eventually, we came across a feature that seemed to do exactly what we wanted, rack.after_reply. The Puma source code describes rack.after_reply as: “A rack extension. If the app writes #call’ables to this array, we will invoke them when the request is done.” In other words, this would allow us to collect telemetry metrics during a request and flush them in a single batch after the browser received the full response, avoiding the issue of network calls slowing down response times. There was a problem though. We don’t use Puma at GitHub, we use Unicorn. 😱

We had a clear path forward. All we needed to do was implement rack.after_reply in Unicorn. This functionality has a lot of use cases outside of sending telemetry metrics, so we contributed rack.after_reply back to Unicorn. This allowed us to write a small wrapper class around rack.after_reply, making it easier to use in the application:


class AfterResponse
  def initialize(env)
    env["rack.after_reply"] ||= []
    env["rack.after_reply"] << -> do
      self.call
    end
    @to_perform = []
  end

  # Calls each callable defined via #perform
  def call
    @to_perform.each do |block|
      begin
        block.call(self)
      rescue Object => e
        Rails.logger.error(e)
      end
    end
  end

  # Adds given block to the array of callables that will
  # be called after the user has received the response.
  def perform(name, &block)
    @to_perform << block
  end
end

With the hard part completed, we wrote a small wrapper around our telemetry class to buffer all of our stats data. Combining that with a middleware that performs roughly the following, users now receive responses 30ms faster P50 across all pages. P99 response times improved too, with a 50ms+ reduction across the site.


GitHub.after_response.perform do
  GitHub.statsd.flush!
end

In conclusion

There are a few drawbacks to this approach that are worth mentioning. The primary disadvantage is that a Unicorn worker executes the rack.after_reply code, making it unable to serve another HTTP request until your callables finish running. If not kept in check, this can potentially increase HTTP queueing, the time spent waiting for a web server worker to serve a request. We mitigated this by adding timeout behavior to prevent any code from running too long.

It’s a little early to talk about it, but there is progress toward making this functionality an official part of the Rack spec. A recently released gem called maybe_later also implements additional functionality around this feature and is worth checking out.

Overall, we’re pleased with the results. Unicorn-powered applications now have a new tool at their disposal, and our customers reap the benefits of this performance enhancement!

Building SAML federation for Amazon OpenSearch Dashboards with Auth0

Post Syndicated from Raghavarao Sodabathina original https://aws.amazon.com/blogs/architecture/building-saml-federation-for-amazon-opensearch-dashboards-with-auth0/

Amazon OpenSearch is a fully managed, distributed, open search and analytics service that is powered by the Apache Lucene search library. OpenSearch is derived from Elasticsearch 7.10.2, and is used for real-time application monitoring, log analytics, and website search. It’s ideal for use cases that require fast access and response for large volumes of data. OpenSearch Dashboards is derived from Kibana 7.10.2, and is used for visual data exploration. With Security Assertion Markup Language (SAML)-based federation for OpenSearch Dashboards, you can use your existing identity provider (IdP), such as Auth0. You can use Auth0 to provide single sign-on (SSO) for OpenSearch Dashboards on Amazon OpenSearch domains. It also gives you fine-grained access control, and the ability to search your data and build visualizations. Amazon OpenSearch supports providers that use the SAML 2.0 standard, such as Auth0, Okta, Keycloak, Active Directory Federation Services (AD FS), and Ping Identity (PingID).

In this post, we provide step-by-step guidance to show you how to set up a trial Auth0 account. We’ll demonstrate how to build users and groups within your organization’s directory, and enable SP-initiated single sign-on (SSO) into OpenSearch Dashboards.

To use this feature, you must enable fine-grained access control. Rather than authenticating through Amazon Cognito or an internal user database, SAML authentication for OpenSearch Dashboards lets you use third-party identity providers to log in to the OpenSearch Dashboards. SAML authentication for OpenSearch Dashboards is only for accessing the OpenSearch Dashboards through a web browser. Your SAML credentials do not let you make direct HTTP requests to OpenSearch or OpenSearch Dashboards APIs.

Auth0 is an AWS Competency Partner and popular Identity-as-a-Service (IDaaS) solution. It supports both service provider (SP)-initiated and identity provider (IdP)-initiated SSO. For SP-initiated SSO, when you sign into the OpenSearch Dashboards login page it sends an authorization request to Auth0. Once it authenticates your identity, you are redirected to OpenSearch Dashboards. In IdP-initiated SSO, you log in to the Auth0 SSO page, and choose OpenSearch Dashboards to open the application.

Overview of the Auth0 SAML authenticated solution

Figure 1 depicts a sample architecture of a generic, integrated solution between Auth0 and OpenSearch Dashboards over SAML authentication.

High level flow of SAML transactions between Amazon OpenSearch and Auth0

Figure 1. A high-level view of a SAML transaction between Amazon OpenSearch and Auth0

The sign-in flow is as follows:

  1. User opens browser window and navigates to Amazon OpenSearch Dashboards
  2. Amazon OpenSearch generates SAML authentication request
  3. Amazon OpenSearch redirects request back to browser
  4. Browser redirects to Auth0 URL
  5. Auth0 parses SAML request, authenticates user, and generates SAML response
  6. Auth0 returns encoded SAML response to browser
  7. Browser sends SAML response back to Amazon OpenSearch Assertion Consumer Service (ACS) URL
  8. ACS verifies SAML response
  9. User logs into Amazon OpenSearch domain

Prerequisites

For this walkthrough, you should have the following prerequisites:

  1. An AWS account
  2. A virtual private cloud (VPC) based Amazon OpenSearch domain with fine-grained access control enabled
  3. An Auth0 account with user and a group
  4. A browser with network connectivity to Auth0, Amazon OpenSearch domain, and Amazon OpenSearch Dashboards.

The steps in this post are structured into the following sections:

  1. Identity provider (Auth0) setup
  2. Prepare Amazon OpenSearch for SAML configuration
  3. Identity provider (Auth0) SAML configuration
  4. Finish Amazon OpenSearch for SAML configuration
  5. Validation
  6. Cleanup

Identity provider (Auth0) setup

Step 1: Sign up for an Auth0 account

  • Sign up for an Auth0 account, then click on the Sign up button to complete your account setup.
  • If you already have an account with Auth0, log in to your Auth0 account.

Step 2: Create Groups in Auth0

  • Choose User Management in the left menu and click Users, then click on the +Create User button.
  • Provide an email, password, and connection to your users. Click on the Create button to create your user.
  • Add more users to your Auth0 account.

Step 3: Install Auth0 Extension to create a group and assign users to the group

  • Click on Extensions in the left menu and search for “Auth0 Authorization”. Click on Auth0 Authorization to install the extension, shown in Figure 2.
The diagram depicts the Installing of Auth0 Authorization extension

Figure 2. Installing Auth0 Authorization extension

  • Use all default options and click on the Install button to install the extension.
  • Click on the Auth0 Authorization extension and choose the Accept button to provide access to your Auth0 account.
  • The Auth0 Authorization extension must be configured. Click on Go to Configuration (Figure 3).
The diagram depicts the configuration of Auth0 Authorization extension

Figure 3. Configuring the Auth0 Authorization extension

  • Rotate your API keys and check Groups, Roles, and Permissions to provide authorization to the extension and then click on PUBLISH RULE to complete the configuration, see Figure 4.
The diagram depicts the providing permissions to Auth0 Authorization extension

Figure 4. Providing the permissions to Auth0 Authorization extension

Step 4: Create a group in Auth0

  • Choose Groups from the left menu and click on the Create your first Group button. For this example, we will create a group called opensearch for OpenSearch Dashboards access.
  • Add your users to opensearch by clicking on ADD MEMBERS BUTTON, then click on the CONFIRM button to complete your group assignment (Figure 5).
The diagram depicts the adding users to Auth0 Group

Figure 5. Adding users to Auth0 Group

Step 5: Create an Auth0 Application

  • Choose Applications from the left menu. Click on the +Create Application button.
  • For this example, we are creating an application called “opensearch”.
  • Select Single Page Web Applications, then click on the CREATE button to proceed.
  4. Click on the Addons tab on the application opensearch (Figure 6).
The diagram depicts the creation of Auth0 SAML application

Figure 6. Creating an Auth0 SAML application

  • Click on the SAML2 WEB APP, then select settings to provide SAML URLs from Amazon OpenSearch. We will configure these details after preparing the Amazon OpenSearch cluster for SAML.

Prepare Amazon OpenSearch for SAML configuration

Once the Amazon OpenSearch domain is up and running, we can proceed with configuration.

  • Under Actions, choose Edit security configuration (Figure 7).
The diagram depicts the enablement of OpenSearch security configuration for SAML

Figure 7. Enabling Amazon OpenSearch security configuration for SAML

  • Under SAML authentication for OpenSearch Dashboards/Kibana, select the Enable SAML authentication check box (Figure 8). When we enable SAML, it will create different URLs required for configuring SAML with your identity provider.
The diagram depicts the Amazon OpenSearch URLs for SAML configuration

Figure 8. Amazon OpenSearch URLs for SAML configuration

We will be using the Service Provider entity ID and SP-initiated SSO URL (highlighted in Figure 8) for Auth0 SAML configuration. We will complete the rest of the Amazon OpenSearch SAML configuration after the Auth0 SAML configuration.

Auth0 SAML configuration

Go back to Auth0.com, and navigate to Applications from the left menu. Then select the opensearch application that you created as a part of the Auth0 setup.

  • Click on the Addons tab on the application opensearch.
  • Click on the SAML2 WEB APP, then select Settings to provide SAML URLs from Amazon OpenSearch, as shown in Figure 9:
    • Application Callback URL = https://vpc-XXXXX-XXXXX.us-east-1.es.amazonaws.com/_dashboards/_opendistro/_security/saml/acs (SP-initiated SSO URL)
    • audience = https://vpc-XXXXX-XXXXX.us-east-1.es.amazonaws.com (Service provider entity ID)
    • destination = https://vpc-XXXXX-XXXXX.us-east-1.es.amazonaws.com/_plugin/kibana/_opendistro/_security/saml/acs (SP-initiated SSO URL)
    • Mappings and other configurations shown in Figure 9

{
  "audience": "https://vpc-XXXXX-XXXXX.us-east-1.es.amazonaws.com",
  "destination": "https://vpc-XXXXX-XXXXX.us-east-1.es.amazonaws.com/_plugin/kibana/_opendistro/_security/saml/acs",
  "mappings": {
    "email": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
    "name": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
    "groups": "http://schemas.xmlsoap.org/claims/Group"
  },
  "createUpnClaim": false,
  "passthroughClaimsWithNoMapping": false,
  "mapUnknownClaimsAsIs": false,
  "mapIdentities": false,
  "nameIdentifierFormat": "urn:oasis:names:tc:SAML:1.1:nameid-format:emailAddress",
  "nameIdentifierProbes": [
    "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"
  ]
}

The diagram depicts the configuration of Auth0 SAML parameters

Figure 9. Configuring Auth0 SAML parameters

  • Click on Enable to save the SAML configurations.
  • Go to the Usage tab, and click on the Download button to download Identity Provider Metadata, see Figure 10.
The diagram depicts the downloading of Auth0 identity provider meta data for SAML configuration

Figure 10. Downloading Auth0 identity provider metadata for SAML configuration

Amazon OpenSearch SAML configuration

  • Switch back to Amazon OpenSearch domain:
    • Navigate to Amazon OpenSearch console
    • Click on Actions, then click on Modify Security configuration
    • Select Enable SAML authentication check box
  • Under Import IdP metadata section (Figure 11):
    • Metadata from IdP: Import the Auth0 identity provider metadata from downloaded XML file
    • SAML master backend role: opensearch (Auth0 group). Provide a SAML backend role/group SAML assertion key for group SSO into Kibana
The diagram depicts the configuration of Amazon OpenSearch SAML parameters

Figure 11. Configuring Amazon OpenSearch SAML parameters

  • Under Optional SAML settings (Figure 12):
    • Leave Subject Key as blank, as Auth0 provides NameIdentifier
    • Role key should be http://schemas.xmlsoap.org/claims/Group. Auth0 lets you view a sample assertion during the configuration process by clicking on the DEBUG button on SAML2 WebApp. Tools like SAML-tracer can help you examine and troubleshoot the contents of real assertions.
    • Session time to live (mins): 60
The diagram depicts the configuration of Amazon OpenSearch optional SAML parameters

Figure 12. Configuring Amazon OpenSearch optional SAML parameters

Click on the Save changes button to complete Amazon OpenSearch SAML configuration for Kibana. We have successfully completed SAML configuration and are now ready for testing.

Validating access with Auth0 users

  • Access OpenSearch Dashboards from the previously created OpenSearch cluster. The OpenSearch Dashboards URL can be found as shown in Figure 13. The first access to the OpenSearch Dashboards URL redirects you to the Auth0 login screen.
The diagram depicts the validation of Auth0 users access with Amazon OpenSearch

Figure 13. Validating Auth0 users access with Amazon OpenSearch

  • Now copy and paste the OpenSearch Dashboards URL in your browser, and enter the user credentials.
  • If your OpenSearch domain is hosted within a private VPC, you will not be able to access your OpenSearch Dashboard over the public internet. But you can still use SAML as long as your browser can communicate with both your OpenSearch cluster and your identity provider.
  • You can create a Mac or Windows EC2 instance within the same VPC. This way you can access Amazon OpenSearch Dashboards from your EC2 instance’s web browser to validate your SAML configuration. You can also access Amazon OpenSearch Dashboards through Site-to-Site VPN from an on-premises environment.
  • After successful login, you will be redirected into the OpenSearch Dashboards home page. Explore our sample data and visualizations in OpenSearch Dashboards, as shown in Figure 14.
SAML authenticated Amazon OpenSearch Dashboards

Figure 14. SAML authenticated Amazon OpenSearch Dashboards

  • You now have successfully federated Amazon OpenSearch Dashboards with Auth0 as an identity provider. You can connect OpenSearch Dashboards by using your Auth0 credentials.

Cleaning up

After you test out this solution, remember to delete all the resources you created to avoid incurring future charges. Refer to these links:

Conclusion

In this blog post, we have demonstrated how to set up Auth0 as an identity provider over SAML authentication for Amazon OpenSearch Dashboards access. With this solution, you now have an OpenSearch Dashboard that uses Auth0 as the custom identity provider for your users. This reduces the customer login process to one set of credentials and improves employee productivity.

Get started by checking the Amazon OpenSearch Developer Guide, which provides guidance on how to build applications using Amazon OpenSearch for your operational analytics.

[$] Negative dentries, 20 years later

Post Syndicated from original https://lwn.net/Articles/890025/

Filesystems and the virtual filesystem layer are in the business of
managing files that actually exist, but the Linux “dentry cache”, which
remembers the results of file-name lookups, also keeps track of files that
don’t exist. This cache of “negative dentries” plays an important
role in the overall performance of the system but, if it is allowed to grow
too large, its role can become negative in its own right. As the 2022 Linux Storage, Filesystem,
and Memory-Management Summit
(LSFMM) approaches, the subject of negative
dentries has come up yet again; whether one can be positive about the
prospects for a resolution this time around remains unclear.

Integrate Amazon Redshift native IdP federation with Microsoft Azure AD using a SQL client

Post Syndicated from Maneesh Sharma original https://aws.amazon.com/blogs/big-data/integrate-amazon-redshift-native-idp-federation-with-microsoft-azure-ad-using-a-sql-client/

Amazon Redshift accelerates your time to insights with fast, easy, and secure cloud data warehousing at scale. Tens of thousands of customers rely on Amazon Redshift to analyze exabytes of data and run complex analytical queries.

The new Amazon Redshift native identity provider authentication simplifies administration by sharing identity and group membership information to Amazon Redshift from a third-party identity provider (IdP) service, such as Microsoft Azure Active Directory (Azure AD), and enabling Amazon Redshift to natively process third-party tokens, identities, and group permissions. This process is very easy to set up, provides a secure and smoother customer experience for managing identities and groups at a centralized external IdP, and integrates natively with Amazon Redshift.

In this post, we focus on Microsoft Azure AD as the IdP and provide step-by-step guidance to connect SQL clients like SQL Workbench/J and DBeaver with Amazon Redshift using a native IdP process. Azure AD manages the users and provides federated access to Amazon Redshift. You don’t need to create separate Amazon Redshift database users, AWS Identity and Access Management (IAM) roles, or IAM policies with this setup.

Solution overview

Using an Amazon Redshift native IdP has the following benefits:

  • Enables your users to be automatically signed in to Amazon Redshift with their Azure AD accounts
  • You can manage users and groups from a centralized IdP
  • External users can securely access Amazon Redshift without manually creating new user names or roles using their existing corporate directory credentials
  • External user group memberships are natively mirrored with Amazon Redshift roles and users

The following diagram illustrates the architecture of a native IdP for Amazon Redshift:

The workflow contains the following steps:

  1. You configure a JDBC or ODBC driver in your SQL client to use Azure AD federation and use Azure AD login credentials to sign in.
  2. Upon a successful authentication, Azure AD issues an authentication token (OAuth token) back to the Amazon Redshift driver.
  3. The driver forwards the authentication token to the Amazon Redshift cluster to initiate a new database session.
  4. Amazon Redshift verifies and validates the authentication token.
  5. Amazon Redshift calls the Azure Graph API to obtain the user’s group membership.
  6. Amazon Redshift maps the logged-in Azure AD user to the Amazon Redshift user and maps the Azure AD groups to Amazon Redshift roles. If the user and groups don’t exist, Amazon Redshift automatically creates those identities within the IdP namespace.

To implement the solution, you complete the following high-level steps:

  1. Set up your Azure application.
    1. Create OAuth Application
    2. Create Redshift Client application
    3. Create Azure AD Group
  2. Collect Azure AD information for the Amazon Redshift IdP.
  3. Set up the IdP on Amazon Redshift.
  4. Set up Amazon Redshift permissions to external identities.
  5. Configure the SQL client (for this post, we use SQL Workbench/J and DBeaver).

Prerequisites

You need the following prerequisites to set up this solution:

Set up your Azure application

For integrating with any SQL client or BI tool except Microsoft Power BI, we create two applications. The first application is used to authenticate the user and provide a login token. The second application is used by Amazon Redshift to retrieve user and group information.

Step 1: Create OAuth Application

  1. Sign in to the Azure portal with your Microsoft account.
  2. Navigate to the Azure Active Directory application.
  3. Under Manage in the navigation pane, choose App registrations and then choose New registration.
  4. For Name, enter a name (for example, oauth_application).
  5. For Redirect URI, choose Public client/native (mobile and desktop) and enter the redirect URL http://localhost:7890/redshift/. For this post, we are keeping the default settings for the rest of the fields.
  6. Choose Register.
  7. In the navigation pane, under Manage, choose Expose an API.

If you’re setting up for the first time, you can see Set to the right of Application ID URI.

  1. Choose Set and then choose Save.
  2. After the application ID URI is set up, choose Add a scope.
  3. For Scope name, enter a name (for example, jdbc_login).
  4. For Admin consent display name, enter a display name (for example, JDBC login).
  5. For Admin consent description, enter a description of the scope.
  6. Choose Add scope.

  7. After the scope is added, note down the application ID URI (for example, api://991abc78-78ab-4ad8-a123-zf123ab03612p) and API scope (api://991abc78-78ab-4ad8-a123-zf123ab03612p/jdbc_login) in order to register the IdP in Amazon Redshift later.

The application ID URI is known as <Microsoft_Azure_Application_ID_URI> in the following section.

The API scope is known as <Microsoft_Azure_API_scope_value> when setting up the SQL client such as DBeaver and SQL Workbench/J.

Step 2. Create Redshift Client Application

  1. Navigate to the Azure Active Directory application.
  2. Under Manage in the navigation pane, choose App registrations and then choose New registration.
  3. For Name, enter a name (for example, redshift_client). For this post, we are keeping the default settings for the rest of the fields.
  4. Choose Register.
  5. On the newly created application Overview page, locate the client ID and tenant ID and note down these IDs in order to register the IdP in Amazon Redshift later.
  6. In the navigation pane, choose Certificates & secrets.
  7. Choose New client secret.
  8. Enter a Description, and select an expiration for the secret or specify a custom lifetime. We are keeping the Microsoft-recommended default expiration of six months. Choose Add.
  9. Copy the secret value.

The secret value is displayed only once; after that, you cannot read it again.

  1. In the navigation pane, choose API permissions.
  2. Choose Add a permission and choose Microsoft Graph.
  3. Choose Application permissions.
  4. Search the directory and select the Directory.Read.All permission.
  5. Choose Add permissions.
  6. After the permission is created, choose Grant admin consent.
  7. In the pop-up box, choose Yes to grant the admin consent.

The status for the permission shows as Granted for with a green check mark.

Step 3. Create Azure AD Group

  1. On the Azure AD home page, under Manage, choose Groups.
  2. Choose New group.
  3. In the New Group section, provide the required information.
  4. Choose No members selected and then search for the members.
  5. Select the members and choose Select. For this example, you can search your username and click select.

You can see the number of members in the Members section.

  1. Choose Create.

Collect Azure AD information

Before we collect the Azure AD information, we need to identify the access token version from the application which you have created earlier on the Azure portal under Step 1. Create OAuth Application. In the navigation pane, under Manage, choose Manifest section, then view the accessTokenAcceptedVersion parameter: null and 1 indicate v1.0 tokens, and 2 indicates v2.0 tokens.

To configure your IdP in Amazon Redshift, collect the following parameters from Azure AD. If you don’t have these parameters, contact your Azure admin.

  1. issuer – This is known as <Microsoft_Azure_issuer_value> in the following sections. If you’re using the v1.0 token, use https://sts.windows.net/<Microsoft_Azure_tenantid_value>/. If you’re using the v2.0 token, use https://login.microsoftonline.com/<Microsoft_Azure_tenantid_value>/v2.0. To find your Azure tenant ID, complete the following steps:
    • Sign in to the Azure portal with your Microsoft account.
    • Under Manage, choose App registrations.
    • Choose any application which you have created in previous sections.
    • Click on the Overview (left panel) page and it’s listed in the Essentials section as Directory (tenant) ID.

  2. client_id – This is known as <Microsoft_Azure_clientid_value> in the following sections. An example of a client ID is 123ab555-a321-666d-7890-11a123a44890. To get your client ID value, locate the application you created earlier on the Azure portal under Step 2. Create Redshift Client Application. Click on the Overview (left panel) page and it’s listed in the Essentials section.
  3. client_secret – This is known as <Microsoft_Azure_client_secret_value> in the following sections. An example of a client secret value is KiG7Q~FEDnE.VsWS1IIl7LV1R2BtA4qVv2ixB. To create your client secret value, refer to the section under Step 2. Create Redshift Client Application.
  4. audience – This is known as <Microsoft_Azure_token_audience_value> in the following sections. If you’re using a v1.0 token, the audience value is the application ID URI (for example, api://991abc78-78ab-4ad8-a123-zf123ab03612p). If you’re using a v2.0 token, the audience value is the client ID value (for example, 991abc78-78ab-4ad8-a123-zf123ab03612p). To get these values, please refer to the application which you have created in Step 1: Create OAuth Application. Click on the Overview (left panel) page and it’s listed in the Essentials section.

Set up the IdP on Amazon Redshift

To set up the IdP on Amazon Redshift, complete the following steps:

  1. Log in to Amazon Redshift with a superuser user name and password using query editor v2 or any SQL client.
  2. Run the following SQL:
    CREATE IDENTITY PROVIDER <idp_name> TYPE azure 
    NAMESPACE '<namespace_name>' 
    PARAMETERS '{ 
    "issuer":"<Microsoft_Azure_issuer_value>", 
    "audience":["<Microsoft_Azure_token_audience_value>"],
    "client_id":"<Microsoft_Azure_clientid_value>", 
    "client_secret":"<Microsoft_Azure_client_secret_value>"
    }';

For example, the following code uses a v1.0 access token:

CREATE IDENTITY PROVIDER test_idp TYPE 
azure NAMESPACE 'oauth_aad' 
PARAMETERS '{
"issuer":"https://sts.windows.net/87f4aa26-78b7-410e-bf29-57b39929ef9a/", 
"audience":["api://991abc78-78ab-4ad8-a123-zf123ab03612p"],
"client_id":"123ab555-a321-666d-7890-11a123a44890", 
"client_secret":"KiG7Q~FEDnE.VsWS1IIl7LV1R2BtA4qVv2ixB"
}';

The following code uses a v2.0 access token:

CREATE IDENTITY PROVIDER test_idp TYPE 
azure NAMESPACE 'oauth_aad' 
PARAMETERS '{
"issuer":
"https://login.microsoftonline.com/87f4aa26-78b7-410e-bf29-57b39929ef9a/v2.0",
"audience":["991abc78-78ab-4ad8-a123-zf123ab03612p"], 
"client_id":"123ab555-a321-666d-7890-11a123a44890", 
"client_secret":"KiG7Q~FEDnE.VsWS1IIl7LV1R2BtA4qVv2ixB" 
}';
  1. To alter the IdP, use the following command (this new set of parameter values completely replaces the current values):
    ALTER IDENTITY PROVIDER <idp_name> PARAMETERS 
    '{
    "issuer":"<Microsoft_Azure_issuer_value>/",
    "audience":["<Microsoft_Azure_token_audience_value>"], 
    "client_id":"<Microsoft_Azure_clientid_value>", 
    "client_secret":"<Microsoft_Azure_client_secret_value>"
    }';

  2. To view a single registered IdP in the cluster, use the following code:
    DESC IDENTITY PROVIDER <idp_name>;

  3. To view all registered IdPs in the cluster, use the following code:
    select * from svv_identity_providers;

  4. To drop the IdP, use the following command:
    DROP IDENTITY PROVIDER <idp_name> [CASCADE];

Set up Amazon Redshift permissions to external identities

The users, roles, and role assignments are automatically created in your Amazon Redshift cluster during the first login using your native IdP unless they were manually created earlier.

Create and assign permission to Amazon Redshift roles

In this step, we create a role in the Amazon Redshift cluster based on the groups that you created on the Azure AD portal. This helps us avoid creating multiple user names manually on the Amazon Redshift side and assign permissions for multiple users individually.

The role name in the Amazon Redshift cluster looks like <namespace>:<azure_ad_group_name>, where the namespace is the one we provided in the IdP creation command and the group name is the one we specified when we were setting up the Azure application. In our example, it’s oauth_aad:rsgroup.

Run the following command in the Amazon Redshift cluster to create a role:

create role "<namespace_name>:<Azure AD groupname>";

For example:

create role "oauth_aad:rsgroup";

To grant permission to the Amazon Redshift role, enter the following command:

GRANT { { SELECT | INSERT | UPDATE | DELETE | DROP | REFERENCES } [,...]
 | ALL [ PRIVILEGES ] }
ON { [ TABLE ] table_name [, ...] | ALL TABLES IN SCHEMA schema_name [, ...] }
TO role "<namespace_name>:<Azure AD groupname>";

Then grant relevant permission to the role as per your requirement. For example:

grant select on all tables in schema public to role "oauth_aad:rsgroup";

Create and assign permission to an Amazon Redshift user

This step is only required if you want to grant permission to an Amazon Redshift user instead of roles. We create an Amazon Redshift user that maps to a Azure AD user and then grant permission to it. If you don’t want to explicitly assign permission to an Amazon Redshift user, you can skip this step.

To create the user, use the following syntax:

CREATE USER "<namespace_name>:<Azure AD username>" PASSWORD DISABLE;

For example:

CREATE USER "oauth_aad:[email protected]" PASSWORD DISABLE;

We use the following syntax to grant permission to the Amazon Redshift user:

GRANT { { SELECT | INSERT | UPDATE | DELETE | DROP | REFERENCES } [,...]
 | ALL [ PRIVILEGES ] }
ON { [ TABLE ] table_name [, ...] | ALL TABLES IN SCHEMA schema_name [, ...] }
TO "<namespace_name>:<Azure AD username>";

For example:

grant select on all tables in schema public to "oauth_aad:[email protected]";

Configure the SQL client

In this section, we provide instructions to set up a SQL client using either DBeaver or SQL Workbench/J.

Set up DBeaver

To set up DBeaver, complete the following steps:

  1. Go to Database and choose Driver Manager.
  2. Search for Redshift, then choose it and choose Copy.
  3. On the Settings tab, for Driver name, enter a name, such as Redshift Native IDP.
  4. Update the URL template to jdbc:redshift://{host}:{port}/{database}?plugin_name=com.amazon.redshift.plugin.BrowserAzureOAuth2CredentialsProvider
    Note: In this URL template, do not replace the template parameters with actual values. Keep the value exactly as shown in the screenshot below.
  5. On the Libraries tab, choose Add files. Keep only one set of the latest driver version (2.1.0.4 and upwards) and if you see any older versions, delete those files.
  6. Add all the files from the downloaded AWS JDBC driver pack .zip file and choose OK (remember to unzip the .zip file).

Note: Use the Amazon Redshift driver 2.1.0.4 onwards, because all previous Amazon Redshift driver versions don’t support the Amazon Redshift native IdP feature.

  1. Close the Driver Manager.
  2. Go to Database and choose New Database Connection.
  3. Search for Redshift Native IDP, then choose it and choose Next.
  4. For Host/Instance, enter your Amazon Redshift endpoint, for example, test-cluster.ab6yejheyhgf.us-east-1.redshift.amazonaws.com.
  5. For Database, enter the database name (for this post, we use dev).
  6. For Port, enter 5439.
  7. Please get the below parameter values (scope, client_id and idp_tenant) from the application which you have created in Step 1: Create OAuth Application. On the Driver properties tab, add the following properties:
    1. listen_port – 7890
    2. idp_response_timeout – 50
    3. scope – Enter the value for <Microsoft_Azure_API_scope_value>.
      • If you’re using a v1.0 token, then use the scope value (for example, api://991abc78-78ab-4ad8-a123-zf123ab03612p/jdbc_login).
      • If you’re using a v2.0 token, the scope value is the client ID value (for example, 991abc78-78ab-4ad8-a123-zf123ab03612p).
    4. client_id – Enter the value for <Microsoft_Azure_clientid_value>. For example, 991abc78-78ab-4ad8-a123-zf123ab03612p.
    5. idp_tenant – Enter the value for <Microsoft_Azure_tenantid_value>. For example, 87f4aa26-78b7-410e-bf29-57b39929ef9a.
  8. You can verify the connection by choosing Test Connection.

You’re redirected to the browser to sign in with your Azure AD credentials. If you get an SSL-related error, go to the SSL tab and select Use SSL.

  1. Log in to be redirected to a page showing the connection as successful.
  2. Choose Ok.

Congratulations! You have completed the Amazon Redshift native IdP setup with DBeaver.

Set up SQL Workbench/J

To set up SQL Workbench/J, complete the following steps:

  1. Create a new connection in SQL Workbench/J and choose Amazon Redshift as the driver.
  2. Choose Manage drivers and add all the files from the downloaded AWS JDBC driver pack .zip file (remember to unzip the .zip file).

Use the Amazon Redshift driver 2.1.0.4 onwards, because all previous Amazon Redshift driver versions don’t support the Amazon Redshift native IdP feature.

  1. For URL, enter jdbc:redshift://<cluster endpoint>:<port>/<database name>, for example, jdbc:redshift://test-cluster.ab6yejheyhgf.us-east-1.redshift.amazonaws.com:5439/dev.
  2. Please get the below parameter values (scope, client_id and idp_tenant) from the application which you have created in Step 1: Create OAuth Application. On the Driver properties tab, add the following properties:
    1. plugin_name – com.amazon.redshift.plugin.BrowserAzureOAuth2CredentialsProvider
    2. listen_port – 7890
    3. idp_response_timeout – 50
    4. scope – Enter the value for <Microsoft_Azure_API_scope_value>.
      • If you’re using a v1.0 token, then use the scope value (for example, api://991abc78-78ab-4ad8-a123-zf123ab03612p/jdbc_login).
      • If you’re using a v2.0 token, the scope value is the client ID value (for example, 991abc78-78ab-4ad8-a123-zf123ab03612p).
    5. client_id – Enter the value for <Microsoft_Azure_clientid_value>. For example, 991abc78-78ab-4ad8-a123-zf123ab03612p.
    6. idp_tenant – Enter the value for <Microsoft_Azure_tenantid_value>. For example, 87f4aa26-78b7-410e-bf29-57b39929ef9a.
  3. Choose OK.
  4. Choose Test from SQL Workbench/J.

You’re redirected to the browser to sign in with your Azure AD credentials.

  1. Log in to be redirected to a page showing the connection as successful.
  2. Choose Finish.

    sqlworkbenchj-test-successful
  3. With this connection profile, run the following query to test Amazon Redshift native IdP authentication.
    sqlworkbenchj-current-user

Congratulations! You have completed the Amazon Redshift native IdP setup with SQL Workbench/J.

Best Practices with Redshift native IdP:

  • Pre-create the Amazon Redshift roles based upon the groups which you have created on the Azure AD portal.
  • Assign permissions to Amazon Redshift roles instead of assigning them to each individual external user. This provides a smoother end-user experience because users have all the required permissions when they log in using the native IdP.

Troubleshooting

If your connection didn’t work, consider the following:

  • Enable logging in the driver. For instructions, see Configure logging.
  • Make sure you’re using Amazon Redshift JDBC driver version 2.1.0.4 or later, which supports Amazon Redshift native IdP authentication.
  • If you’re getting errors while setting up the application on Azure AD, make sure you have admin access.
  • If you can authenticate via the SQL client but get a permission issue or can’t see objects, grant the relevant permission to the role, as detailed earlier in this post.
  • If you get the error “claim value does not match expected value,” make sure you provided the correct parameters during Amazon Redshift IdP registration.
  • Check the stl_error or stl_connection_log system views on the Amazon Redshift cluster for authentication failures; a sketch of one such check follows this list.
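As a hypothetical sketch of that last check (not taken from the original post), an administrator can list recent connection events over any working connection and look for failed authentication attempts; adjust the columns and filtering to your needs.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical helper: print the most recent entries from stl_connection_log.
public final class ConnectionLogCheck {
    public static void printRecentEvents(Connection conn) throws Exception {
        String sql = "SELECT recordtime, event, username, remotehost "
                   + "FROM stl_connection_log ORDER BY recordtime DESC LIMIT 20";
        try (Statement stmt = conn.createStatement(); ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%s  %s  %s  %s%n",
                        rs.getTimestamp(1), rs.getString(2), rs.getString(3), rs.getString(4));
            }
        }
    }
}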

Conclusion

In this post, we provided step-by-step instructions to integrate Amazon Redshift with Azure AD and SQL clients (SQL Workbench/J and DBeaver) using Amazon Redshift native IdP authentication. We also showed how Azure group membership is mapped automatically with Amazon Redshift roles and how to set up Amazon Redshift permissions.

For more information about Amazon Redshift native IdP federation, see the Amazon Redshift documentation.


About the Authors

Maneesh Sharma is a Senior Database Engineer at AWS with more than a decade of experience designing and implementing large-scale data warehouse and analytics solutions. He collaborates with various Amazon Redshift Partners and customers to drive better integration.

Debu Panda is a Senior Manager, Product Management at AWS. He is an industry leader in analytics, application platform, and database technologies, and has more than 25 years of experience in the IT world.

Ilesh Garish is a Software Development Engineer at AWS. His role is to develop connectors for Amazon Redshift. Prior to AWS, he built database drivers for the Oracle RDBMS, TigerLogic XDMS, and OpenAccess SDK. He also worked on database internals at San Francisco Bay Area startups.

Dengfeng (Davis) Li is a Software Development Engineer at AWS. His passion is creating easy-to-use, secure, and scalable applications. In the past few years, he has worked on Amazon Redshift security, data sharing, and catalog optimization.

Backblaze Partner API: The Details

Post Syndicated from Elton Carneiro original https://www.backblaze.com/blog/backblaze-partner-api-the-details/

Last week, we announced enhancements to our partner program that make working with Backblaze even easier for current and future partners. We shared a deep dive into the first new offering—Backblaze B2 Reserve—on Friday, and today, we’re digging into another key element: the Backblaze Partner API. The Backblaze Partner API enables independent software vendors (ISVs) participating in Backblaze’s Alliance Partner program to add Backblaze B2 Cloud Storage as a seamless backend extension within their own platform.

Read on to learn more about the Backblaze Partner API and what it means for existing and potential Alliance Partners.

What Is the Backblaze Partner API?

With the Backblaze Partner API, ISVs participating in Backblaze’s Alliance Partner program can programmatically provision accounts, run reports, and create a bundled solution or managed service which employs B2 Cloud Storage on the back end while delivering a unified experience to their users.

By unlocking an improved customer experience, the Partner API allows Alliance Partners to build additional cloud services into their product portfolio to generate new revenue streams and/or grow existing margin.

Why Develop the Backblaze Partner API?

We heard frequently from our existing partners that they wanted to provide a more seamless experience for their customers when it came to offering a cloud storage tier. Specifically, they wanted to keep customers on their site rather than requiring them to go elsewhere as part of the sign up experience. We built the Partner API to deliver this enhanced customer experience while also helping our partners extend their services and expand their offerings.

“Our customers produce thousands of hours of content daily, and, with the shift to leveraging cloud services like ours, they need a place to store both their original and transcoded files. The Backblaze Partner API allows us to expand our cloud services and eliminate complexity for our customers—giving them time to focus on their business needs, while we focus on innovations that drive more value.”
—Murad Mordukhay, CEO, Qencode

What Does the Partner API Do, Specifically?

To create the Backblaze Partner API, we exposed existing functionality to allow partners to automate tasks like creating and ejecting member accounts, managing Groups, and leveraging system-generated reports to get granular billing and usage information—outlining user tasks individually so users can be billed more accurately for what they’ve used.

The API calls are:

  • Account creation (adding Group members).
  • Organizing accounts in Groups.
  • Listing Groups.
  • Listing Group members.
  • Ejecting Group members.

Once the Partner API is configured, developers can use the Backblaze S3 Compatible API or the Backblaze B2 Native API to manage Group members’ Backblaze B2 accounts, including: uploading, downloading, and deleting files, as well as creating and managing the buckets that hold files.
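As one hedged illustration (this snippet is not from Backblaze’s documentation), uploading an object to a Group member’s bucket through the Backblaze S3 Compatible API can look like the sketch below, using the AWS SDK for Java v2. The endpoint region, bucket name, and application key values are placeholders; use the S3 endpoint shown for your bucket and a key belonging to the member account.

import java.net.URI;

import software.amazon.awssdk.auth.credentials.AwsBasicCredentials;
import software.amazon.awssdk.auth.credentials.StaticCredentialsProvider;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class B2S3UploadSketch {
    public static void main(String[] args) {
        // Placeholder endpoint/region and credentials.
        String endpoint = "https://s3.us-west-004.backblazeb2.com";
        AwsBasicCredentials creds =
                AwsBasicCredentials.create("<applicationKeyId>", "<applicationKey>");

        S3Client s3 = S3Client.builder()
                .endpointOverride(URI.create(endpoint))
                .region(Region.of("us-west-004"))
                .credentialsProvider(StaticCredentialsProvider.create(creds))
                .build();

        // Upload a small text object to a hypothetical bucket in the member's account.
        s3.putObject(
                PutObjectRequest.builder().bucket("example-member-bucket").key("hello.txt").build(),
                RequestBody.fromString("hello from the partner integration"));

        s3.close();
    }
}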

How to Get Started With the Backblaze Partner API

If you’re familiar with Backblaze, getting started is straightforward:

  1. Create a Backblaze account.
  2. Enable Business Groups and B2 Cloud Storage.
  3. Contact Sales for access to the API.
  4. Create a Group.
  5. Create an Application Key and set up Partner API calls.

Check out our documentation for more detailed information on getting started with the Backblaze Partner API. You can also reach out to us via email at any time to schedule a meeting to discuss how the Backblaze Partner API can help you create an easier customer experience.


Security updates for Monday

Post Syndicated from original https://lwn.net/Articles/890936/

Security updates have been issued by Debian (gzip, libxml2, minidlna, openjpeg2, thunderbird, webkit2gtk, wpewebkit, xen, and xz-utils), Fedora (crun, unrealircd, and vim), Mageia (389-ds-base, busybox, flatpak, fribidi, gdal, python-paramiko, and usbredir), openSUSE (opera and seamonkey), Oracle (kernel and kernel-container), Red Hat (firefox), Scientific Linux (firefox), Slackware (libarchive), SUSE (389-ds, libsolv, libzypp, zypper, and python), and Ubuntu (python-django and tcpdump).
