Tag Archives: api

Backblaze B2 and the S3 Compatible API on Cloudflare

Post Syndicated from Tim Obezuk original https://blog.cloudflare.com/backblaze-b2-and-the-s3-compatible-api-on-cloudflare/

In May 2020, Backblaze, a founding Bandwidth Alliance partner, announced S3 compatible APIs for their B2 Cloud Storage service. As a refresher, the Bandwidth Alliance is a group of forward-thinking cloud and networking companies that are committed to discounting or waiving data transfer fees for shared customers. Backblaze has been a proud partner since 2018. We are excited to see Backblaze introduce a new level of compatibility in their Cloud Storage service.

History of the S3 API

First let’s dive into the history of the S3 API and why it’s important for Cloudflare users.

Prior to 2006, before the mass migration to the Cloud, if you wanted to store content for your company you needed to build your own expensive, highly available storage platform, one large enough to hold all your existing content with enough growth headroom for your business. AWS launched to eliminate this model by renting out its physical computing and storage infrastructure.

Amazon Simple Storage Service (S3) led the market by offering a scalable and resilient tool for storing unlimited amounts of data without building it yourself. It could be integrated into any application, but there was one catch: you couldn’t use any existing standard such as WebDAV, FTP, or SMB; your application needed to interface with Amazon’s bespoke S3 API.

Fast forward to 2020 and the storage provider landscape has become highly competitive with many providers capable of providing petabyte (and exabyte) scale content storage at extremely low cost-per-gigabyte. However, Amazon S3 has remained a dominant player despite heavy competition and not being the most cost-effective player.

The broad adoption of the S3 API by developers in their codebases and internal systems has turned the S3 API into what WebDAV once promised to be: the de facto standard HTTP file storage API.

Engineering costs of changing storage providers

With many codebases and legacy applications entrenched in the S3 API, switching to a more cost-effective storage provider is not so easy. Companies need to weigh the cost of engineering time spent programming against a new storage API, as well as the cost of physically moving their data.

This engineering overhead has led many storage providers to natively support the S3 API, leveling the playing field and allowing companies to focus on picking the most cost-effective provider.

First-mile bandwidth costs and the Bandwidth Alliance

Cloudflare caches content in Points of Presence located in more than 200 cities around the world. This cached content is then handed to your Internet service provider (ISP) over low cost and often free Internet exchange connections in the same facility using mutual fibre optic cables. This cost saving is fairly well understood as the benefit of content delivery networks and has become highly commoditized.

What is less well understood is the first-mile cost of moving data from a storage provider to the content delivery network. Typically storage providers expect traffic to route via the Internet and will charge the consumer per-gigabyte of data transmitted. This is not the case for Cloudflare as we also share facilities and mutual fibre optic cables with many storage providers.

These shared interconnects created an opportunity to waive the cost of first-mile bandwidth between Cloudflare and many providers, and this is what prompted us to create the Bandwidth Alliance.

Media and entertainment companies serving user-generated content have a continuous supply of new content being moved over the first-mile from the storage provider to the content delivery network. The first-mile bandwidth cost adds up, and using a Bandwidth Alliance partner such as Backblaze can eliminate it entirely.

Using the S3 API in Cloudflare Workers

The Solutions Engineering team at Cloudflare is tasked with providing strategic technical guidance for our enterprise customers.

It’s not uncommon for developers to connect Cloudflare’s global network directly to their storage provider and directly serve content such as live and on-demand video without an intermediate web server.

For security purposes engineers typically use Cloudflare Workers to sign each uncached request using the S3 API. Cloudflare Workers allows anyone to deploy code to our global network of 200+ Points of Presence in seconds and is built on top of Service Workers.

We’ve tested Backblaze B2’s S3 Compatible API in Cloudflare Workers using the same code we had tested against Amazon S3 buckets, and it works perfectly after changing only the target endpoint.

Creating an S3 Compatible Worker script

Here’s how it is done using Wrangler, the Cloudflare Workers CLI tool:

Generate a new project in Wrangler using a template intended for use with Amazon S3:

wrangler generate <projectname> https://github.com/obezuk/worker-signed-s3-template

This template uses aws4fetch, a fast, lightweight implementation of an S3 compatible signing library that is commonly used in Service Worker environments like Cloudflare Workers.

The template creates an index.js file with a standard request signing implementation:

import { AwsClient } from 'aws4fetch'

// Credentials and region are injected as environment variables.
const aws = new AwsClient({
    "accessKeyId": AWS_ACCESS_KEY_ID,
    "secretAccessKey": AWS_SECRET_ACCESS_KEY,
    "region": AWS_DEFAULT_REGION
});

addEventListener('fetch', function(event) {
    event.respondWith(handleRequest(event.request))
});

async function handleRequest(request) {
    var url = new URL(request.url);
    // Point the request at the storage bucket's S3 endpoint.
    url.hostname = AWS_S3_BUCKET;
    // Sign the request with AWS Signature Version 4.
    var signedRequest = await aws.sign(url);
    // Fetch from the origin and ask Cloudflare to cache the response.
    return await fetch(signedRequest, { "cf": { "cacheEverything": true } });
}

Environment Variables

Modify your wrangler.toml file to use your Backblaze B2 API Key ID and Secret:

[env.dev]
vars = { AWS_ACCESS_KEY_ID = "<BACKBLAZE B2 keyId>", AWS_SECRET_ACCESS_KEY = "<BACKBLAZE B2 secret>", AWS_DEFAULT_REGION = "", AWS_S3_BUCKET = "<BACKBLAZE B2 bucketName>.<BACKBLAZE B2 S3 Endpoint>" }

The AWS_S3_BUCKET environment variable is the combination of your bucket name, a period, and your S3 endpoint. For a Backblaze B2 bucket named example-bucket with the S3 endpoint s3.us-west-002.backblazeb2.com, use example-bucket.s3.us-west-002.backblazeb2.com.

The AWS_DEFAULT_REGION environment variable is derived from your S3 endpoint; in this example it is us-west-002.

We recommend using secret environment variables to store your AWS_SECRET_ACCESS_KEY when using this script in production.
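
For example, a minimal sketch of storing the secret with Wrangler instead of committing it to wrangler.toml (assuming a Wrangler version with secrets support, described later in this archive):

$ wrangler secret put AWS_SECRET_ACCESS_KEY --env dev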

Preview your Cloudflare Worker

Next, run wrangler preview --env dev to open a preview of your Worker script. My bucket contained a static website serving adaptive streaming video content.

Note: We permit caching of third party video content only for enterprise domains. Free/Pro/Biz users wanting to serve video content via Cloudflare may use Stream, which provides an end-to-end video delivery service.

Backblaze B2’s compatibility for the S3 API is an exciting update that has made their storage platform highly compatible with existing code bases and legacy systems. And, as a special offer to Cloudflare blog readers, Backblaze will pay the migration costs for transferring your data from S3 to Backblaze B2 (click here for more detail). With the cost of migration covered and compatibility for your existing workflows, it is now easier than ever to switch to a Bandwidth Alliance partner and save on first-mile costs. By doing so, you can slash your cloud bills, gain flexibility, and make no compromises to your performance.

To learn more, join us on May 14th for a webinar focused on getting you ultra fast worldwide content delivery.

Ready for changes with Hexagonal Architecture

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/ready-for-changes-with-hexagonal-architecture-b315ec967749

by Damir Svrtan and Sergii Makagon

As the production of Netflix Originals grows each year, so does our need to build apps that enable efficiency throughout the entire creative process. Our wider Studio Engineering Organization has built more than 30 apps that help content progress from pitch (aka screenplay) to playback: ranging from script content acquisition, deal negotiations and vendor management to scheduling, streamlining production workflows, and so on.

Highly integrated from the start

About a year ago, our Studio Workflows team started working on a new app that crosses multiple domains of the business. We had an interesting challenge on our hands: we needed to build the core of our app from scratch, but we also needed data that existed in many different systems.

Some of the data points we needed, such as data about movies, production dates, employees, and shooting locations, were distributed across many services implementing various protocols: gRPC, JSON API, GraphQL and more. Existing data was crucial to the behavior and business logic of our application. We needed to be highly integrated from the start.

Swappable data sources

One of the early applications for bringing visibility into our productions was built as a monolith. The monolith allowed for rapid development and quick changes at a time when knowledge of the space was scarce. At one point, more than 30 developers were working on it, and it had well over 300 database tables.

Over time, applications evolved from broad service offerings towards being highly specialized. This resulted in a decision to decompose the monolith into specific services. The decision was not driven by performance issues, but by the need to set boundaries around all of these different domains and enable dedicated teams to develop domain-specific services independently.

Large amounts of the data we needed for the new app were still provided by the monolith, but we knew that the monolith would be broken up at some point. We were not sure about the timing of the breakup, but we knew that it was inevitable, and we needed to be prepared.

Thus, we could leverage some of the data from the monolith at first as it was still the source of truth, but be prepared to swap those data sources to new microservices as soon as they came online.

Leveraging Hexagonal Architecture

We needed to support the ability to swap data sources without impacting business logic, so we knew we needed to keep them decoupled. We decided to build our app based on principles behind Hexagonal Architecture and Uncle Bob’s Clean Architecture.

The idea of Hexagonal Architecture is to put inputs and outputs at the edges of our design. Business logic should not depend on whether we expose a REST or a GraphQL API, and it should not depend on where we get data from — a database, a microservice API exposed via gRPC or REST, or just a simple CSV file.

The pattern allows us to isolate the core logic of our application from outside concerns. Having our core logic isolated means we can easily change data source details without significant impact or major rewrites to the codebase.

One of the main advantages we also saw in having an app with clear boundaries is our testing strategy — the majority of our tests can verify our business logic without relying on protocols that can easily change.

Defining the core concepts

Drawing on Hexagonal Architecture, the three main concepts that define our business logic are Entities, Repositories, and Interactors.

  • Entities are the domain objects (e.g., a Movie or a Shooting Location) — they have no knowledge of where they’re stored (unlike Active Record in Ruby on Rails or the Java Persistence API).
  • Repositories are the interfaces for getting entities, as well as creating and changing them. They keep a list of methods that are used to communicate with data sources and return a single entity or a list of entities (e.g., UserRepository).
  • Interactors are classes that orchestrate and perform domain actions — think of Service Objects or Use Case Objects. They implement complex business rules and validation logic specific to a domain action (e.g., onboarding a production).

With these three main types of objects, we are able to define business logic without any knowledge or care where the data is kept and how business logic is triggered. Outside of the business logic are the Data Sources and the Transport Layer:

  • Data Sources are adapters to different storage implementations.
    A data source might be an adapter to a SQL database (an Active Record class in Rails or JPA in Java), an Elasticsearch adapter, a REST API, or even an adapter to something simple such as a CSV file or a Hash. A data source implements methods defined on the repository and stores the implementation of fetching and pushing the data.
  • Transport Layer can trigger an interactor to perform business logic. We treat it as an input for our system. The most common transport layer for microservices is the HTTP API Layer and a set of controllers that handle requests. By having business logic extracted into interactors, we are not coupled to a particular transport layer or controller implementation. Interactors can be triggered not only by a controller, but also by an event, a cron job, or from the command line.

The dependency graph in Hexagonal Architecture goes inward.

With a traditional layered architecture, we would have all of our dependencies point in one direction, each layer above depending on the layer below: the transport layer would depend on the interactors, and the interactors would depend on the persistence layer.

In Hexagonal Architecture all dependencies point inward — our core business logic does not know anything about the transport layer or the data sources. Still, the transport layer knows how to use interactors, and the data sources know how to conform to the repository interface.

With this, we are prepared for the inevitable changes to other Studio systems, and whenever that needs to happen, the task of swapping data sources is easy to accomplish.

Swapping data sources

The need to swap data sources came earlier than we expected — we suddenly hit a read constraint with the monolith and needed to switch a certain read for one entity to a newer microservice exposed over a GraphQL aggregation layer. Both the microservice and the monolith were kept in sync and had the same data, so reading from one service or the other produced the same results.

We managed to transfer reads from a JSON API to a GraphQL data source within 2 hours.

The main reason we were able to pull it off so fast was the Hexagonal Architecture. We didn’t let any persistence specifics leak into our business logic. We created a GraphQL data source that implemented the repository interface, and a simple one-line change was all we needed to start reading from a different data source.
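
The post doesn’t show the app’s actual code, but a minimal JavaScript sketch of the idea, using hypothetical names (MovieRepository, config, and so on), could look like this:

// Repository interface: business logic depends only on this shape.
class MovieRepository {
  async byId(id) { throw new Error('not implemented') }
}

// Two interchangeable data sources conforming to the same interface.
class JsonApiMovieSource extends MovieRepository {
  async byId(id) { /* read from the monolith's JSON API */ }
}
class GraphQLMovieSource extends MovieRepository {
  async byId(id) { /* read from the GraphQL aggregation layer */ }
}

// The "one-line change": choose the data source via configuration.
const movieRepository = config.useGraphQL
  ? new GraphQLMovieSource()
  : new JsonApiMovieSource()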

With a proper abstraction it was easy to change data sources

At that point, we knew that Hexagonal Architecture worked for us.

The great part about a one-line change is that it mitigates risks to the release. It is very easy to roll back if a downstream microservice fails on initial deployment. This also enables us to decouple deployment from activation, as we can decide which data source to use through configuration.

Hiding data source details

One of the great advantages of this architecture is that we are able to encapsulate data source implementation details. We ran into a case where we needed an API call that did not yet exist — a service had an API to fetch a single resource but did not have bulk fetch implemented. After talking with the team providing the API, we realized this endpoint would take some time to deliver. So we decided to move forward with another solution to solve the problem while this endpoint was being built.

We defined a repository method that would grab multiple resources given multiple record identifiers — and the initial implementation of that method on the data source sent multiple concurrent calls to the downstream service. We knew this was a temporary solution, and that the second take at the data source implementation would use the bulk API once it was implemented.

Our business logic doesn’t need to be aware of specific data source limitations.

A design like this enabled us to move forward with meeting the business needs without accruing much technical debt or the need to change any business logic afterward.

Testing strategy

When we started experimenting with Hexagonal Architecture, we knew we needed to come up with a testing strategy. We knew that a prerequisite to great development velocity was to have a test suite that is reliable and super fast. We didn’t think of it as a nice to have, but a must-have.

We decided to test our app at three different layers:

  • We test our interactors, where the core of our business logic lives but is independent of any type of persistence or transportation. We leverage dependency injection and mock any kind of repository interaction. This is where our business logic is tested in detail, and these are the tests we strive to have most of.
  • We test our data sources to determine if they integrate correctly with other services, whether they conform to the repository interface, and check how they behave upon errors. We try to minimize the amount of these tests.
  • We have integration specs that go through the whole stack, from our Transport / API layer, through the interactors, repositories, data sources, and hit downstream services. These specs test whether we “wired” everything correctly. If a data source is an external API, we hit that endpoint and record the responses (and store them in git), allowing our test suite to run fast on every subsequent invocation. We don’t do extensive test coverage on this layer — usually just one success scenario and one failure scenario per domain action.

We don’t test our repositories as they are simple interfaces that data sources implement, and we rarely test our entities as they are plain objects with attributes defined. We test entities if they have additional methods (without touching the persistence layer).
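
As a sketch of the first kind of test, an interactor spec might inject a mocked repository like this (Jest-style syntax; OnboardProduction and its behavior are hypothetical):

test('onboarding rejects a production without shooting locations', async () => {
  // The mock conforms to the repository interface the interactor expects.
  const movieRepository = {
    byId: async () => ({ id: 1, title: 'Example Movie', shootingLocations: [] })
  }
  // Dependency injection: no persistence or transport involved.
  const interactor = new OnboardProduction(movieRepository)
  await expect(interactor.perform(1)).rejects.toThrow()
})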

We have room for improvement, such as not pinging any of the services we rely on but relying 100% on contract testing. With a test suite written in the above manner, we manage to run around 3000 specs in 100 seconds on a single process.

It’s lovely to work with a test suite that can easily be run on any machine, and our development team can work on their daily features without disruption.

Delaying decisions

We are in a great position when it comes to swapping data sources to different microservices. One of the key benefits is that we can delay some of the decisions about whether and how we want to store data internal to our application. Based on the feature’s use case, we even have the flexibility to determine the type of data store — whether it be Relational or Documents.

Uncle Bob put it well:

The purpose of a good architecture is to delay decisions. Why? Because when we delay a decision, we have more information when it comes time to make it.

At the beginning of a project, we have the least amount of information about the system we are building. We should not lock ourselves into an architecture with uninformed decisions leading to a project paradox.

The decisions we made make sense for our needs now and have enabled us to move fast. The best part of Hexagonal Architecture is that it keeps our application flexible for future requirements to come.



Introducing Secrets and Environment Variables to Cloudflare Workers

Post Syndicated from John Donmoyer original https://blog.cloudflare.com/workers-secrets-environment/

The Workers team here at Cloudflare has been hard at work shipping a bunch of new features in the last year and we’ve seen some amazing things built with the tools we’ve provided. However, as my uncle once said, with great serverless platform growth comes great responsibility.

One of the ways we can help is by ensuring that deploying and maintaining your Workers scripts is a low-risk endeavor. Rotating a set of API keys shouldn’t require risking downtime through code edits and redeployments, and in some cases it may not make sense for the developer writing the script to know the actual API key value at all. To help tackle this problem, we’re releasing Secrets and Environment Variables to the Wrangler CLI and Workers Dashboard.

Supporting secrets

As we started to design support for secrets in Workers, we had a sense that this was already a big concern for a lot of our users, but we wanted to learn about all of the use cases to ensure we were building the right thing. We headed to the community forums, Twitter, and the inbox of Louis Grace, business development representative extraordinaire, for some anecdotes about Secrets usage. We also sent out a survey to our existing users to learn about use cases and pain points.

We learned that even though there was already a way to store secrets without exposing them via Workers KV, the solution was not very intuitive, nor did it meet all the needs of our users. Many users didn’t even know we had an interim solution in place. Recognizing that we were not the first platform to encounter this problem, we surveyed the existing landscape of Platform as a Service offerings to get a better sense for what our users would expect of us.

Deciding on a solution

One of the first things we found was that not all environment variables are created equal. While the simplest use case for having a defined environment variable may be storing a piece of text that can be updated no matter where it is referenced in a script, sometimes those variables may have higher stakes associated with them. If you’re storing an API key that controls access to an important system, you may not want to allow anyone with dashboard access to see it, maybe not even the developers themselves.

With this in mind, we had to ensure the feature covered two different use cases: one for storing variables in plain text where you could see the variable being referenced and make edits to it and another where the variable would be encrypted as soon as you save it, never to be seen again. This way, we were able to serve both needs of our users, side by side, without one compromising for the other.

Testing our prototypes

Once we had a fairly good idea of what we wanted to build, we built some prototypes and rough implementations in staging environments so we would be able to perform some usability testing. We wrangled up some developers and observed them as they performed a series of tasks where they were asked to add some secrets and plain-text environment variables, reference them in one of their Workers, and bind their Worker to a Worker KV namespace.

Along the way we also asked questions to understand each developer’s professional background, familiarity with the product, and the use cases they’d had for using Workers in the past, along with any pain points they experienced.

While we were testing the new dashboard interface we also began testing the usability of the Wrangler CLI. We had Wrangler users perform the same tasks as the Workers dashboard users to help us find out whether users expected different things out of their command-line tooling.

Findings and fixes

Through our testing we were able to make a number of changes before the final release. Some of the smaller changes included things like adjusting the behavior of form fields to ensure users knew which variable would be associated with each value. We also made larger changes like electing to separate the KV namespace bindings from the other environment variables as a way to emphasize that KV namespace bindings are not the keys and values themselves but a reference to a namespace where those keys are stored.

Cina, one of our engineers, put together a proposal to align some of our terminology with the terms our developers were naturally using to describe their workflow. In Wrangler, users were accustomed to referencing their KV namespaces by adding a KV namespace binding, so when they came to the Workers dashboard interface and saw a field called “KV Variables” they were often confused, thinking they were adding keys and values to the namespace itself rather than establishing a variable that could be used to reference the namespace. As a fix, we decided to call it a “KV namespace binding” throughout the experience.

Try it out

Environment variables are available now with the Wrangler CLI and in the Workers Dashboard so go ahead and give them a shot today!

Adding a secret with Wrangler

Managing environment variables and KV bindings in the Workers Dashboard
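
To illustrate how a bound secret is consumed, here is a minimal sketch of a Worker that reads a secret named API_KEY (a hypothetical name; secrets and plain-text variables are both exposed to the script as global variables):

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  // API_KEY is a secret binding: its value is usable here,
  // but never appears in the source code or the dashboard.
  const response = await fetch('https://api.example.com/data', {
    headers: { 'Authorization': `Bearer ${API_KEY}` }
  })
  return response
}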

As we continue to build out the Workers platform, we’d love to hear from you. Let us know if you’re interested in participating in user research or if you just have something to say.

How we used our new GraphQL Analytics API to build Firewall Analytics

Post Syndicated from Nick Downie original https://blog.cloudflare.com/how-we-used-our-new-graphql-api-to-build-firewall-analytics/

Firewall Analytics is the first product in the Cloudflare dashboard to utilize the new GraphQL Analytics API. All Cloudflare dashboard products are built using the same public APIs that we provide to our customers, allowing us to understand the challenges they face when interfacing with our APIs. This parity helps us build and shape our products, most recently the new GraphQL Analytics API that we’re thrilled to release today.

By defining the data we want, along with the response format, our GraphQL Analytics API has enabled us to prototype new functionality and iterate quickly from our beta user feedback. It is helping us deliver more insightful analytics tools within the Cloudflare dashboard to our customers.

Our user research and testing for Firewall Analytics surfaced common use cases in our customers’ workflow:

  • Identifying spikes in firewall activity over time
  • Understanding the common attributes of threats
  • Drilling down into granular details of an individual event to identify potential false positives

We can address all of these use cases using our new GraphQL Analytics API.

GraphQL Basics

Before we look into how to address each of these use cases, let’s take a look at the format of a GraphQL query and how our schema is structured.

A GraphQL query is composed of a structured set of fields, for which the server provides corresponding values in its response. The schema defines which fields are available and their type. You can find more information about the GraphQL query syntax and format in the official GraphQL documentation.

To run some GraphQL queries, we recommend downloading a GraphQL client, such as GraphiQL, to explore our schema and run some queries. You can find documentation on getting started with this in our developer docs.

At the top level of the schema is the viewer field. This represents the top-level node of the user running the query. Within this, we can query the zones field to find zones the current user has access to, providing a filter argument with a zoneTag set to the identifier of the zone we’d like to narrow down to.

{
  viewer {
    zones(filter: { zoneTag: "YOUR_ZONE_ID" }) {
      # Here is where we'll query our firewall events
    }
  }
}

Now that we have a query that finds our zone, we can start querying the firewall events which have occurred in that zone, to help solve some of the use cases we’ve identified.

Visualising spikes in firewall activity

It’s important for customers to be able to visualise and understand anomalies and spikes in their firewall activity, as these could indicate an attack or be the result of a misconfiguration.

Plotting events in a timeseries chart, by their respective action, provides users with a visual overview of the trend of their firewall events.

Within the zones field in the query we created earlier, we can further query our firewall event aggregates using the firewallEventsAdaptiveGroups field and providing arguments for:

  • A limit for the count of groups
  • A filter for the date range we’re looking for (combined with any user-entered filters)
  • A list of fields to orderBy (in this case, just the datetimeHour field that we’re grouping by).

By adding the dimensions field, we’re querying for groups of firewall events, aggregated by the fields nested within dimensions. In this case, our query includes the action and datetimeHour fields, meaning the response will be groups of firewall events which share the same action, and fall within the same hour. We also add a count field, to get a numeric count of how many events fall within each group.

query FirewallEventsByTime($zoneTag: string, $filter: FirewallEventsAdaptiveGroupsFilter_InputObject) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      firewallEventsAdaptiveGroups(
        limit: 576
        filter: $filter
        orderBy: [datetimeHour_DESC]
      ) {
        count
        dimensions {
          action
          datetimeHour
        }
      }
    }
  }
}

Note – Each of our group queries requires a limit to be set. A firewall event can have one of 8 possible actions, and we are querying over a 72 hour period. At most, we’ll end up with 576 groups, so we can set that as the limit for our query.

This query would return a response in the following format:

{
  "viewer": {
    "zones": [
      {
        "firewallEventsAdaptiveGroups": [
          {
            "count": 5,
            "dimensions": {
              "action": "jschallenge",
              "datetimeHour": "2019-09-12T18:00:00Z"
            }
          }
          ...
        ]
      }
    ]
  }
}

We can then take these groups and plot each as a point on a time series chart. Mapping over the firewallEventsAdaptiveGroups array, we use each group’s count property for the y-axis, and the nested fields within the dimensions object for the rest: action as the unique series, and datetimeHour as the timestamp on the x-axis.
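
As a sketch, the mapping from the parsed query response to chart points might look like this in JavaScript (the chart library itself is omitted):

// `data` is the parsed response body of the query above.
const zone = data.viewer.zones[0]
const points = zone.firewallEventsAdaptiveGroups.map(group => ({
  x: group.dimensions.datetimeHour, // timestamp on the x-axis
  y: group.count,                   // event count on the y-axis
  series: group.dimensions.action   // one series per firewall action
}))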

Top Ns

After identifying a spike in activity, our next step is to highlight events with commonality in their attributes. For example, if a certain IP address or individual user agent is causing many firewall events, this could be a sign of an individual attacker, or could be surfacing a false positive.

Similarly to before, we can query aggregate groups of firewall events using the firewallEventsAdaptiveGroups field. However, in this case, instead of supplying action and datetimeHour to the group’s dimensions, we can add individual fields that we want to find common groups of.

By ordering by descending count, we’ll retrieve groups with the highest commonality first, limiting to the top 5 of each. We can add a single field nested within dimensions to group by it. For example, adding clientIP will give five groups with the IP addresses causing the most events.

We can also add a firewallEventsAdaptiveGroups field with no nested dimensions. This will create a single group which allows us to find the total count of events matching our filter.

query FirewallEventsTopNs($zoneTag: string, $filter: FirewallEventsAdaptiveGroupsFilter_InputObject) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      topIPs: firewallEventsAdaptiveGroups(
        limit: 5
        filter: $filter
        orderBy: [count_DESC]
      ) {
        count
        dimensions {
          clientIP
        }
      }
      topUserAgents: firewallEventsAdaptiveGroups(
        limit: 5
        filter: $filter
        orderBy: [count_DESC]
      ) {
        count
        dimensions {
          userAgent
        }
      }
      total: firewallEventsAdaptiveGroups(
        limit: 1
        filter: $filter
      ) {
        count
      }
    }
  }
}

Note – We can add the firewallEventsAdaptiveGroups field multiple times within a single query, each aliased differently. This allows us to fetch multiple different groupings by different fields, or with no groupings at all. In this case, we get a list of top IP addresses, top user agents, and the total events.

We can then reference each of these aliases in the UI, mapping over their respective groups to render each row with its count and a bar representing that row’s share of all events.
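
A sketch of that UI mapping, using the aliases from the query above:

// `data` is the parsed response body of the aliased query above.
const zone = data.viewer.zones[0]
const total = zone.total[0].count // the single group holding the overall count
const rows = zone.topIPs.map(group => ({
  clientIP: group.dimensions.clientIP,
  count: group.count,
  share: group.count / total // drives the width of the proportion bar
}))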

Are these firewall events false positives?

After users have identified spikes, anomalies and common attributes, we wanted to surface more information as to whether these have been caused by malicious traffic, or are false positives.

To do this, we wanted to provide additional context on the events themselves, rather than just counts. We can do this by querying the firewallEventsAdaptive field for these events.

Our GraphQL schema uses the same filter format for both the aggregate firewallEventsAdaptiveGroups field and the raw firewallEventsAdaptive field. This allows us to use the same filters to fetch the individual events which sum to the counts and aggregates in the visualisations above.

query FirewallEventsList($zoneTag: string, $filter: FirewallEventsAdaptiveFilter_InputObject) {
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      firewallEventsAdaptive(
        filter: $filter
        limit: 10
        orderBy: [datetime_DESC]
      ) {
        action
        clientAsn
        clientCountryName
        clientIP
        clientRequestPath
        clientRequestQuery
        datetime
        rayName
        source
        userAgent
      }
    }
  }
}

Once we have our individual events, we can render all of the individual fields we’ve requested, providing users the additional context they need to determine whether an event is a false positive or not.

That’s how we used our new GraphQL Analytics API to build Firewall Analytics, helping solve some of our customers’ most common security workflow use cases. We’re excited to see what you build with it, and the problems you can help tackle.

You can find out how to get started querying our GraphQL Analytics API using GraphiQL in our developer documentation, or learn more about writing GraphQL queries on the official GraphQL Foundation documentation.

Introducing the GraphQL Analytics API: exactly the data you need, all in one place

Post Syndicated from Filipp Nisenzoun original https://blog.cloudflare.com/introducing-the-graphql-analytics-api-exactly-the-data-you-need-all-in-one-place/

Today we’re excited to announce a powerful and flexible new way to explore your Cloudflare metrics and logs, with an API conforming to the industry-standard GraphQL specification. With our new GraphQL Analytics API, all of your performance, security, and reliability data is available from one endpoint, and you can select exactly what you need, whether it’s one metric for one domain or multiple metrics aggregated for all of your domains. You can ask questions like “How many cached bytes have been returned for these three domains?” Or, “How many requests have all the domains under my account received?” Or even, “What effect did changing my firewall rule an hour ago have on the responses my users were seeing?”

The GraphQL standard also has strong community resources, from extensive documentation to front-end clients, making it easy to start creating simple queries and progress to building your own sophisticated analytics dashboards.

From many APIs…

Providing insights has always been a core part of Cloudflare’s offering. After all, by using Cloudflare, you’re relying on us for key parts of your infrastructure, and so we need to make sure you have the data to manage, monitor, and troubleshoot your website, app, or service. Over time, we developed a few key data APIs, including ones providing information regarding your domain’s traffic, DNS queries, and firewall events. This multi-API approach was acceptable while we had only a few products, but we started to run into some challenges as we added more products and analytics. We couldn’t expect users to adopt a new analytics API every time they started using a new product. In fact, some of the customers and partners that were relying on many of our products were already becoming confused by the various APIs.

Following the multi-API approach was also affecting how quickly we could develop new analytics within the Cloudflare dashboard, which is used by more people for data exploration than our APIs. Each time we built a new product, our product engineering teams had to implement a corresponding analytics API, which our user interface engineering team then had to learn to use. This process could take up to several months for each new set of analytics dashboards.

…to one

Our new GraphQL Analytics API solves these problems by providing access to all Cloudflare analytics. It offers a standard, flexible syntax for describing exactly the data you need and provides predictable, matching responses. This approach makes it an ideal tool for:

  1. Data exploration. You can think of it as a way to query your own virtual data warehouse, full of metrics and logs regarding the performance, security, and reliability of your Internet property.
  2. Building amazing dashboards, which allow for flexible filtering, sorting, and drilling down or rolling up. Creating these kinds of dashboards would normally require paying thousands of dollars for a specialized analytics tool. You get them as part of our product and can customize them for yourself using the API.

In a companion post that was also published today, my colleague Nick discusses using the GraphQL Analytics API to build dashboards. So, in this post, I’ll focus on examples of how you can use the API to explore your data. To make the queries, I’ll be using GraphiQL, a popular open-source querying tool that takes advantage of GraphQL’s capabilities.

Introspection: what data is available?

The first thing you may be wondering: if the GraphQL Analytics API offers access to so much data, how do I figure out what exactly is available, and how can I ask for it? GraphQL makes this easy by offering “introspection,” meaning you can query the API itself to see the available data sets, the fields and their types, and the operations you can perform. GraphiQL uses this functionality to provide a “Documentation Explorer,” query auto-completion, and syntax validation. For example, the Documentation Explorer lets me see all the data sets available for a zone (domain).
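
Introspection itself is just another query. A minimal example, standard to any GraphQL API rather than specific to Cloudflare’s schema, asks the schema for its type names:

{
  __schema {
    types {
      name
    }
  }
}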

If I’m writing a query and I’m interested in data on firewall events, auto-complete helps me quickly find relevant data sets and fields.

Querying: examples of questions you can ask

Let’s say you’ve made a major product announcement and expect a surge in requests to your blog, your application, and several other zones (domains) under your account. You can check if this surge materializes by asking for the requests aggregated under your account, in the 30 minutes after your announcement post, broken down by the minute:

{
  viewer {
    accounts(filter: { accountTag: $accountTag }) {
      httpRequests1mGroups(
        limit: 30
        filter: { datetime_geq: "2019-09-16T20:00:00Z", datetime_lt: "2019-09-16T20:30:00Z" }
        orderBy: [datetimeMinute_ASC]
      ) {
        dimensions {
          datetimeMinute
        }
        sum {
          requests
        }
      }
    }
  }
}

The first part of the response shows requests for your account, broken down by the minute.

Now, let’s say you want to compare the traffic coming to your blog versus your marketing site over the last hour. You can do this in one query, asking for the number of requests to each zone:

{
  viewer {
    zones(filter: { zoneTag_in: [$zoneTag1, $zoneTag2] }) {
      httpRequests1hGroups(
        limit: 2
        filter: { datetime_geq: "2019-09-16T20:00:00Z", datetime_lt: "2019-09-16T21:00:00Z" }
      ) {
        sum {
          requests
        }
      }
    }
  }
}

The response contains the summed request count for each of the two zones.

Finally, let’s say you’re seeing an increase in error responses. Could this be correlated to an attack? You can look at error codes and firewall events over the last 15 minutes, for example:

{
  viewer {
    zones(filter: { zoneTag: $zoneTag }) {
      httpRequests1mGroups(
        limit: 100
        filter: { datetime_geq: "2019-09-16T21:00:00Z", datetime_lt: "2019-09-16T21:15:00Z" }
      ) {
        sum {
          responseStatusMap {
            edgeResponseStatus
            requests
          }
        }
      }
      firewallEventsAdaptiveGroups(
        limit: 100
        filter: { datetime_geq: "2019-09-16T21:00:00Z", datetime_lt: "2019-09-16T21:15:00Z" }
      ) {
        dimensions {
          action
        }
        count
      }
    }
  }
}

Notice that, in this query, we’re looking at multiple datasets at once, using a common zone identifier to “join” them.

By examining both data sets in parallel, we can see a correlation: 31 requests were “dropped” or blocked by the Firewall, which is exactly the same as the number of “403” responses. So, the 403 responses were a result of Firewall actions.

Try it today

To learn more about the GraphQL Analytics API and start exploring your Cloudflare data, follow the “Getting started” guide in our developer documentation, which also has details regarding the current data sets and time periods available. We’ll be adding more data sets over time, so take advantage of the introspection feature to see the latest available.

Finally, to make way for the new API, the Zone Analytics API is now deprecated and will be sunset on May 31, 2020. The data that Zone Analytics provides is available from the GraphQL Analytics API. If you’re currently using the API directly, please follow our migration guide to change your API calls. If you get your analytics using the Cloudflare dashboard or our Datadog integration, you don’t need to take any action.

One more thing….

In the API examples above, if you find it helpful to get analytics aggregated for all the domains under your account, we have something else you may like: a brand new Analytics dashboard (in beta) that provides this same information. If your account has many zones, the dashboard is helpful for knowing summary information on metrics such as requests, bandwidth, cache rate, and error rate. Give it a try and let us know what you think using the feedback link above the new dashboard.

What’s new with Workers KV?

Post Syndicated from Steve Klabnik original https://blog.cloudflare.com/whats-new-with-workers-kv/

The Storage team here at Cloudflare shipped Workers KV, our global, low-latency, key-value store, earlier this year. As people have started using it, we’ve gotten some feature requests, and have shipped some new features in response! In this post, we’ll talk about some of these use cases and how these new features enable them.

New KV APIs

We’ve shipped some new APIs, both via api.cloudflare.com, as well as inside of a Worker. The first one provides the ability to upload and delete more than one key/value pair at once. Given that Workers KV is great for read-heavy, write-light workloads, a common pattern when getting started with KV is to write a bunch of data via the API, and then read that data from within a Worker. You can now do these bulk uploads without needing a separate API call for every key/value pair. This feature is available via api.cloudflare.com, but is not yet available from within a Worker.

For example, say we’re using KV to redirect legacy URLs to their new homes. We have a list of URLs to redirect, and where they should redirect to. We can turn this list into JSON that looks like this:

[
  {
    "key": "/old/post/1",
    "value": "/new-post-slug-1"
  },
  {
    "key": "/old/post/2",
    "value": "/new-post-slug-2"
  }
]

And then POST this JSON to the new bulk endpoint, /storage/kv/namespaces/:namespace_id/bulk. This will add both key/value pairs to our namespace.

Likewise, if we wanted to drop support for these redirects, we could issue a DELETE that has this body:

[
    "/old/post/1",
    "/old/post/2"
]

to /storage/kv/namespaces/:namespace_id/bulk, and we’d delete both key/value pairs in a single call to the API.

The bulk upload API has one more trick up its sleeve: not all data is a string. For example, you may have an image as a value, which is just a bag of bytes. If you need to write some binary data, you’ll have to base64-encode the value’s contents so that it’s valid JSON. You’ll also need to set one more key:

[
  {
    "key": "profile-picture",
    "value": "aGVsbG8gd29ybGQ=",
    "base64": true
  }
]

Workers KV will decode the value from base64, and then store the resulting bytes.

Beyond bulk upload and delete, we’ve also given you the ability to list all of the keys you’ve stored in any of your namespaces, from both the API and within a Worker. For example, if you wrote a blog powered by Workers + Workers KV, you might have each blog post stored as a key/value pair in a namespace called “contents”. Most blogs have some sort of “index” page that lists all of the posts that you can read. To create this page, we need to get a listing of all of the keys, since each key corresponds to a given post. We could do this from within a Worker by calling list() on our namespace binding:

const value = await contents.list()

But what we get back isn’t only a list of keys. The object looks like this:

{
  keys: [
    { name: "Title 1” },
    { name: "Title 2” }
  ],
  list_complete: false,
  cursor: "6Ck1la0VxJ0djhidm1MdX2FyD"
}

We’ll talk about this “cursor” stuff in a second, but if we wanted to get the list of titles, we’d have to iterate over the keys property, and pull out the names:

const keyNames = value.keys.map(e => e.name)

keyNames would be an array of strings:

["Title 1", "Title 2", "Title 3", "Title 4", "Title 5"]

We could take keyNames and those titles to build our page.
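
Putting it together, a minimal sketch of a Worker that renders the index page from the key listing (assuming a namespace binding named contents, as above):

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  const value = await contents.list()
  // Build one list item per post title.
  const items = value.keys
    .map(key => `<li>${key.name}</li>`)
    .join('')
  return new Response(`<ul>${items}</ul>`, {
    headers: { 'Content-Type': 'text/html' }
  })
}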

So what’s up with the list_complete and cursor properties? Well, imagine that we’ve been a very prolific blogger, and we’ve now written thousands of posts. The list API is paginated, meaning that it will only return the first thousand keys. To see if there are more pages available, you can check the list_complete property. If it is false, you can use the cursor to fetch another page of results. The value of cursor is an opaque token that you pass to another call to list:

const value = await NAMESPACE.list()
const cursor = value.cursor
const next_value = await NAMESPACE.list({"cursor": cursor})

This will give us another page of results, and we can repeat this process until list_complete is true.
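
A sketch of that loop, collecting every key across pages:

async function listAllKeys(namespace) {
  let keys = []
  let cursor
  while (true) {
    // Pass the cursor from the previous page, if we have one.
    const page = await namespace.list(cursor ? { cursor } : {})
    keys = keys.concat(page.keys)
    if (page.list_complete) break
    cursor = page.cursor
  }
  return keys
}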

Listing keys has one more trick up its sleeve: you can also return only keys that have a certain prefix. Imagine we want to have a list of posts, but only the posts that were made in October of 2019. While Workers KV is only a key/value store, we can use the prefix functionality to do interesting things by filtering the list. In our original implementation, we had stored the titles of keys only:

  • Title 1
  • Title 2

We could change this to include the date in YYYY-MM-DD format, with a colon separating the two:

  • 2019-09-01:Title 1
  • 2019-10-15:Title 2

We can now ask for a list of all posts made in 2019:

const value = await NAMESPACE.list({"prefix": "2019"})

Or a list of all posts made in October of 2019:

const value = await NAMESPACE.list({"prefix": "2019-10"})

These calls will only return keys with the given prefix, which in our case, corresponds to a date. This technique can let you group keys together in interesting ways. We’re looking forward to seeing what you all do with this new functionality!

Relaxing limits

For various reasons, there are a few hard limits with what you can do with Workers KV. We’ve decided to raise some of these limits, which expands what you can do.

The first is the limit of the number of namespaces any account could have. This was previously set at 20, but some of you have made a lot of namespaces! We’ve decided to relax this limit to 100 instead. This means you can create five times the number of namespaces you previously could.

Additionally, we had a two megabyte maximum size for values. We’ve increased the limit for values to ten megabytes. With the release of Workers Sites, folks are keeping things like images inside of Workers KV, and two megabytes felt a bit cramped. While Workers KV is not a great fit for truly large values, ten megabytes gives you the ability to store larger images easily. As an example, a 4k monitor has a native resolution of 4096 x 2160 pixels. If we had an image at this resolution as a lossless PNG, for example, it would be just over five megabytes in size.

KV browser

Finally, you may have noticed that there’s now a KV browser in the dashboard! Needing to type out a cURL command just to see what’s in your namespace was a real pain, so we’ve given you the ability to check out the contents of your namespaces right on the web. When you look at a namespace, you’ll see a table of keys and values.

The browser has grown with a bunch of useful features since it initially shipped. You can not only see your keys and values, but also add new ones, edit existing ones, upload files, and even download them.

As we ship new features in Workers KV, we’ll be expanding the browser to include them too.

Wrangler integration

The Workers Developer Experience team has also been shipping some features related to Workers KV. Specifically, you can fully interact with your namespaces and the key/value pairs inside of them.

For example, my personal website is running on Workers Sites. I have a Wrangler project named “website” to manage it. If I wanted to add another namespace, I could do this:

$ wrangler kv:namespace create new_namespace
Creating namespace with title "website-new_namespace"
Success: WorkersKvNamespace {
    id: "<id>",
    title: "website-new_namespace",
}

Add the following to your wrangler.toml:

kv-namespaces = [
    { binding = "new_namespace", id = "<id>" }
]

I’ve redacted the namespace IDs here, but Wrangler let me know that the creation was successful, and provided me with the configuration I need to put in my wrangler.toml. Once I’ve done that, I can add new key/value pairs:

$ wrangler kv:key put "hello" "world" --binding new_namespace
Success

And read it back out again:

$ wrangler kv:key get "hello" --binding new_namespace
world

If you’d like to learn more about the design of these features, “How we design features for Wrangler, the Cloudflare Workers CLI” discusses them in depth.

More to come

The Storage team is working hard at improving Workers KV, and we’ll keep shipping new stuff every so often. Our updates will be more regular in the future. If there’s something you’d particularly like to see, please reach out!

Inside the Web Browser’s Performance API

Post Syndicated from Young Park original https://blog.cloudflare.com/browser-performance-api/

Building a beautiful, feature-rich website is easier than ever before. Not long ago, you’d have to fire up a text editor and hand-craft a lot of HTML, CSS, and JavaScript. Today, you can use WYSIWYG tools and third-party libraries that make development much simpler. The flip side of this is that it can be hard to see everything that’s going into your website — and the performance can suffer.

The good news is that modern web browsers expose lots of performance data that can help you understand how your web page performs. With the launch of Browser Insights today, we can analyze the performance from the perspective of the web browser and what the end user actually experiences. In this post, we’ll dive into how we think about performance and utilize the timing APIs in the web browser.

How web browsers measure performance

In the old days, the only way for a developer to profile performance was to intercept requests and measure the time from the beginning of the page load until the end of the load event.

Today, we can use Web APIs that are supported by modern browsers. This is part of the web standard called the Performance API. The Performance API consists of a collection of individual APIs that include:

  • Navigation Timing (for timing information that relates to the page and navigation)
  • Resource Timing (for timing data regarding the loading of an application’s resources)
  • Paint Timing (for timing information about paint operations during page construction)

In this blog post, we will primarily focus on the Navigation Timing API.

Inside the Performance API

To see what’s collected with the Performance API, you can open the Developer Tools in Chrome browser and type ‘performance’ in the console tab (or type in performance.timing to get direct access to the PerformanceTiming attribute).

Try expanding the Performance object by clicking on the arrow before the label, and then expand ‘timing’. This is called PerformanceTiming, and it includes all the timings that relate to the current page load as UNIX epoch timestamps (milliseconds). The timing attributes shown are not in the order of the load, so for a better understanding, let’s look at the illustration provided by the W3C.

Image from https://www.w3.org/TR/navigation-timing/

As we can see from the diagram, each element (represented as a box above), in order from left to right, represents the navigation flow of the page load. Each element has an attribute for its starting point and its end (and some have multiple attributes!) so that we can measure the elapsed time for each element. For example, to get the Request Time, you could type a command like the one shown below into the console; in this example it comes out to 60 milliseconds.
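
A sketch of that console command, using the attribute names from the diagram:

// Request time = responseStart - requestStart
performance.timing.responseStart - performance.timing.requestStart
// → 60 (milliseconds, in this example)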

How Cloudflare uses performance data

Once your website is proxied through Cloudflare and Browser Insights is enabled, we write and inject a JavaScript beacon into the web page. Our beacon collects metrics from the Performance API to send to our edge, where they can be used to understand where your website is slowing down or having network problems. The reported data is shown in the Cloudflare dashboard on the Speed page as averages of each timing metric.

The metrics we surface are listed below (a sketch computing each of them follows the list):

  • DNS (domainLookupEnd – domainLookupStart): How long the DNS query takes. This could appear as zero if the connection is reused or the content was stored in the local cache (memory or disk).
  • TCP (connectEnd – connectStart): How long it takes to establish a TCP connection the server. If HTTPS, this process includes TLS negotiation time.
  • Request (responseStart – requestStart): The time elapsed between making an HTTP request and receiving the first byte of the response.
  • Response (responseEnd – responseStart): The time elapsed between the first byte and the last byte of the response received. You can think of this as a resource download time.
  • Processing (domComplete – domLoading): How long it took to render the page. If this number is large, you can optimize your document architecture or resource sizes, or configure settings on the Speed page such as Auto Minify. This phase can be broken down further with domInteractive, domContentLoadedEventStart, domContentLoadedEventEnd, and domComplete; we plan to provide more detailed analytics on this later.
  • Load Event (loadEventEnd – loadEventStart): When the browser finishes loading its document and resources, it triggers a `load` event. This duration may be helpful if you run additional functions or logic on the load event.
  • Total Time: The sum of the timing metrics shown on the graph (a sketch of computing these deltas follows below).
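
As an illustration, here is a minimal sketch of how a beacon could compute these deltas from the legacy PerformanceTiming interface (the reporting endpoint is hypothetical, and a real beacon would batch and validate the values):

// Compute the metrics surfaced by Browser Insights from performance.timing.
const t = performance.timing;

const metrics = {
  dns: t.domainLookupEnd - t.domainLookupStart,
  tcp: t.connectEnd - t.connectStart, // includes TLS when over HTTPS
  request: t.responseStart - t.requestStart,
  response: t.responseEnd - t.responseStart,
  processing: t.domComplete - t.domLoading,
  loadEvent: t.loadEventEnd - t.loadEventStart,
};

// Hypothetical reporting call.
navigator.sendBeacon('/beacon-endpoint', JSON.stringify(metrics));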

If you see spikes or an unusual shape in the stacked line chart, you can start investigating each metric to see what is causing the problem.

For more about how to use Browser Insights, see our announcement blog post.

What’s next

In this blog post, we’ve focused on the Navigation Timing API, because it’s at the heart of our first version of Browser Insights. In the near future, we plan to incorporate metrics from some of the other APIs. For example, we can break down some of the longer timings by looking at individual resource loads, and pointing out what’s taking longer. In addition to that, we plan to track JavaScript errors, provide a way to measure A/B performance, set up monitoring/alerting based on the metrics, and so on. So stay tuned!

How Castle is Building Codeless Customer Account Protection

Post Syndicated from Guest Author original https://blog.cloudflare.com/castle-building-codeless-customer-account-protection/


This is a guest post by Johanna Larsson, of Castle, who designed and built the Castle Cloudflare app and the supporting infrastructure.

Strong security should be easy.

Asking your consumers again and again to take responsibility for their security through robust passwords and other security measures doesn’t work. The responsibility of security needs to shift from end users to the companies who serve them.

Castle is leading the way for companies to better protect their online accounts, with millions of consumers being protected every day. Uniquely, Castle extends threat prevention and protection both pre- and post-login, ensuring you can keep friction low but security high. With real-time responses and automated workflows for account recovery, overwhelmed security teams are given a hand. However, when you’re that busy, sometimes deploying new solutions takes more time than you have. Reducing time to deployment was a priority, so Castle turned to Cloudflare Workers.

User security and friction

When security is no longer optional and threats are not black or white, security teams are left with trying to determine how to allow end-user access and transaction completions when there are hints of risk, or when not all of the information is available. Keeping friction low is important to customer experience. Castle helps organizations be more dynamic and proactive by making continuous security decisions based on realtime risk and trust.

One challenge with traditional solutions is that they are often focused only on protecting the app, or only on the point of access, such as protecting against bot traffic. Tools specifically designed for securing user accounts, however, are fundamentally focused on protecting the accounts of end users, whether they are targeted by humans or bots. Being able to understand end-user behavior and devices both pre- and post-login is therefore critical to truly protecting each user. The key to protecting users is being able to distinguish between normal and anomalous activity on an individual account and device basis. You also need a playbook for responding to anomalies and attacks with dedicated flows that allow your end users to interact directly and provide feedback around security events.

By understanding the end user and their good behaviors, devices, and transactions, it is possible to automatically respond to account threats in real-time based on risk level and policy. This approach not only reduces end-user friction but enables security teams to feel more confident that they won’t ever be blocking a legitimate login or transaction.

Castle processes tens of millions of events every day through its APIs, including contextual information like headers, IP, and device types. The more information that can be associated with a request, the better, as it allows us to recognize abnormalities and protect the end user. This information is collected in two ways: on the web application’s backend through our SDKs, and on the client side using our mobile SDK or browser script. Our experience shows that any integration of a security service based on user behavior and anomaly detection can involve many different parties across an organization, and it affects multiple layers of the tech stack. On top of the security-related roles, it’s not unusual to also have to coordinate between backend, devops, and frontend teams. The information related to an end-user session is often spread widely over a code base.

The cost of security

One of the biggest challenges in implementing a user-facing security and risk management solution is the variety of people and teams it needs attention from, each with competing priorities. Security teams are often understaffed and overwhelmed, making it difficult to take on new projects. At the same time, it consumes time from product and engineering personnel on the application side, who are responsible for UX flows and performing continuous authentication post-login.

We’ve been experimenting with approaches where we can extract that complexity from your application code base, while also reducing the effort of integrating. At Castle, we believe that strong security should be easy.


With Cloudflare we found a service that enables us to create a friendlier, simpler, and ultimately safer integration process by placing the security layer directly between the end user and your application. Security-related logic shouldn’t pollute your app; it should reside in a separate service, or shield, that covers your app. Keeping the two environments separate reduces the time and cost of implementing complex systems, making integration and maintenance less stressful and much easier.

Our integration with Cloudflare aims to solve this implementation challenge, delivering end-to-end account protection for your users, both pre and post login, with the click of a button.

The codeless integration

In our quest for a purely codeless integration, a few key features were required. Every customer application is different, which means every integration is different. We wanted to solve this problem for you once and for all. To do this, we needed to move the security work away from the implementation details so that we could instead focus on describing the key interactions with the end user, like logins or bank transactions. We also wanted to empower key decision makers to recognize and handle crucial interactions in their systems. Creating a single solution that could be customized to fit each specific use case was a priority.

Building on top of Cloudflare’s platform, we made use of three unique and powerful products: Workers, Apps for Workers, and Workers KV.

Thanks to Workers we have full access to the interactions between the end user and your application. With their impressive performance, we can confidently run inline with website requests without adding noticeable latency. We will never slow down your site. And in order to achieve the flexibility required to match your specific use case, we created an internal configuration format that fully describes the interactions of devices and servers across HTTP, including web and mobile app traffic. It is in this Worker where we’ve implemented an advanced routing engine that matches requests and responses and turns them into events, collecting information directly at the edge. It also fully handles injecting the Castle browser script — one less thing to worry about.

All of this logic is kept separate from your application code, and through the Cloudflare App Store we are able to distribute this Worker, giving you control over when and where it is enabled, as well as what configurations are used. There’s no need to copy/paste code or manage your own Workers.

In order to achieve the required speed while running in distributed edge locations, we needed a high-performing, low-latency datastore, and we found one in the Cloudflare Workers KV store. Cloudflare Apps are not able to access the KV store directly, but we’ve solved this by exposing it through a separate Worker that the Castle App connects to. Because traffic between Workers never leaves the Cloudflare network, this is both secure and fast enough to match your requirements. The KV store allows us to maintain end-user sessions across the world, and also gives us a place to store and update the configurations and sessions that drive the Castle App.
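
As an illustration, a bridge Worker of this kind might look roughly like the sketch below (names are hypothetical; a real implementation would also authenticate the calling Worker):

// A minimal Worker exposing a KV namespace over HTTP, for callers
// (like a Cloudflare App) that cannot read KV directly.
declare const SESSIONS: KVNamespace; // KV binding configured on this Worker

addEventListener('fetch', (event: FetchEvent) => {
  event.respondWith(handleRequest(event.request));
});

async function handleRequest(request: Request): Promise<Response> {
  const key = new URL(request.url).pathname.slice(1); // e.g. /session:abc123

  if (request.method === 'GET') {
    const value = await SESSIONS.get(key);
    return new Response(value, { status: value === null ? 404 : 200 });
  }
  if (request.method === 'PUT') {
    await SESSIONS.put(key, await request.text(), { expirationTtl: 3600 });
    return new Response('ok');
  }
  return new Response('method not allowed', { status: 405 });
}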

In combining these products we have a complete and codeless integration that is fully configurable and that won’t slow you down.

How does it work?

The data flow is straightforward. After installing the Castle App, Cloudflare will route your traffic through the Castle App, which uses the Castle Data Store and our API to intelligently protect your end users. The impact to traffic latency is minimal because most work is done in the background, not blocking the requests. Let’s dig deeper into each technical feature:

Script injection

One of the tools we use to verify user identity is a browser script: Castle.js. It is responsible for gathering device information and UI interaction behavior, and although it is not required for our service to function, it helps improve our verdicts. This means it’s important that it is properly added to every page in your web application. The Castle App, running between the end user and your application, is able to unobtrusively add the script to each page as it is served. In order for the script to also track page interactions, it needs to be able to connect them to your users, which is done through a call to our script and works out of the box with the Cloudflare integration. This removes 100% of the integration work from your frontend teams.
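
For illustration only, one way to inject a script tag into every HTML page at the edge is the Workers HTMLRewriter API (the script URL below is a placeholder, and the actual Castle App may do this differently):

// Add a script tag to HTML responses; pass everything else through.
function injectScript(response: Response): Response {
  const contentType = response.headers.get('content-type') ?? '';
  if (!contentType.includes('text/html')) {
    return response; // non-HTML responses are left untouched
  }
  return new HTMLRewriter()
    .on('head', {
      element(head) {
        // Placeholder URL; the real script is served by Castle.
        head.append('<script src="https://example.com/castle.js"></script>', {
          html: true,
        });
      },
    })
    .transform(response);
}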

Collect contextual information

The second half of the information that forms the basis of our security analysis is the information related to the request itself, such as IP and headers, as well as timestamps. Gathering this information may seem straightforward, but our experience shows some recurring problems in traditional integrations. IP addresses are easily lost behind reverse proxies, as they need to be maintained in separate headers, like `X-Forwarded-For`, and the internal format of headers differs from platform to platform. Headers in general might get cut off based on whitelisting. The Castle App sees the original request as it comes in, with no outside influence or platform differences, enabling it to reliably create the context of the request. This saves your infrastructure and backend engineers from huge efforts debugging edge cases.

Advanced routing engine

Finally, in order to reliably recognize important events, like login attempts, we’ve built a fully configurable routing engine. It is fast enough to run inline with your web application, and supports near real-time configuration updates. It is powerful enough to translate requests into actual events in your system, like logins, purchases, profile updates, or transactions. Using information from the request, it is then able to send this information to Castle, where you are able to analyze, verify, and take action on suspicious activity. What’s even better is that at any point in the future, if you want Castle to protect a new critical user event – such as a withdrawal or transfer event – all it takes is adding a record to the configuration file. You never have to touch application code in order to expand your Castle integration across sensitive events.
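
To make that concrete, such a record might look something like the sketch below (the format is purely illustrative; Castle’s internal configuration format is not public):

// Hypothetical routing-engine record mapping an HTTP interaction to an event.
const transferEvent = {
  event: '$transaction.attempted', // the security event to report
  match: {
    method: 'POST', // request method to match
    path: '/api/transfers', // request path to match
    status: [200, 201], // response statuses that count as the event
  },
  collect: ['ip', 'headers.user-agent'], // context to attach to the event
};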

We’ve put together an example TypeScript snippet that naively implements the flow and features we’ve discussed. The details are glossed over so that we can focus on the functionality.

addEventListener('fetch', event => event.respondWith(handleEvent(event)));

const handleEvent = async (event: CloudflareEvent) => {
  // You configure the application with your Castle API key
  const { apiKey } = INSTALL_OPTIONS;
  const { request } = event;

  // Configuration is fetched from the KV Store
  const configuration = await getConfiguration(apiKey);

  // The session is also retrieved from the KV Store
  const session = await getUserSession(request);

  // Pass the request through and get the response
  let response = await fetch(request);

  // Using the configuration we can recognize events by running
  // the request+response and configuration through our matching engine
  const securityEvent = getMatchingEvent(request, response, configuration);

  if (securityEvent) {
    // With direct access to the raw request, we can confidently build the context
    // including a device ID generated by the browser script, IP, and headers
    const requestContext = getRequestContext(request);

    // Collecting the relevant information, the data is passed to the Castle API
    event.waitUntil(sendToCastle(securityEvent, session, requestContext));
  }

  // Because we have access to the response HTML page we can safely inject the browser
  // script. If the response is not an HTML page it is passed through untouched.
  response = injectScript(response, session);

  return response;
};

We hope we have inspired you and demonstrated how Workers can provide speed and flexibility when implementing end-to-end account protection for your end users with Castle. If you are curious about our service, learn more here.

Top Resources for API Architects and Developers

Post Syndicated from George Mao original https://aws.amazon.com/blogs/architecture/top-resources-for-api-architects-and-developers/

We hope you’ve enjoyed reading our series on API architecture and development. We wrote about best practices for REST APIs with Amazon API Gateway and GraphQL APIs with AWS AppSync. This post will cover the top resources that all API developers should be aware of.

Tech Talks, Webinars, and Twitch Live Stream

The technical staff at AWS have produced a variety of digital media that cover new service launches, best practices, and customer questions. Be sure to review these videos for tips and tricks on building APIs:

  • Happy Little APIs: This is a multi-part series produced by our awesome Developer Advocate, Eric Johnson. He leads a series of talks that demonstrate how to build a real-world API.
  • API Gateway’s WebSocket webinar: API Gateway now supports real-time APIs with WebSockets. This webinar covers how to use this feature and why you should let API Gateway manage your real-time APIs.
  • Best practices for building enterprise-grade APIs: API Gateway reduces the time it takes to build and deploy REST APIs, but there are strategies that can make development, security, and management easier.
  • An Intro to AWS AppSync and GraphQL: AppSync helps you build sophisticated data applications with realtime and offline capabilities.

Gain Experience With Hands-On Workshops and Examples

One of the easiest ways to get started with Serverless REST API development is to use the Serverless Application Model (SAM). SAM lets you run APIs and Lambda functions locally on your machine for easy development and testing.

For example, you can configure API Gateway as an event source for a Lambda function with just a few lines of code (a matching handler sketch follows the snippet):

Type: Api
Properties:
  Path: /photos
  Method: post
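
For context, a handler receiving that POST /photos event might look like the following minimal TypeScript sketch (it uses the standard aws-lambda types; the persistence step is omitted):

import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // Parse the photo metadata sent to POST /photos.
  const photo = JSON.parse(event.body ?? '{}');

  // ...persist the metadata (e.g. to DynamoDB) before responding...

  return {
    statusCode: 201,
    body: JSON.stringify({ received: photo }),
  };
};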

There are many great examples on our GitHub page to help you get started with authorization (IAM and Cognito), requests, responses, various policies, and CORS configurations for API Gateway.

If you’re working with GraphQL, you should review the Amplify Framework. This is an official AWS project that helps you quickly build web applications with built-in authentication (AuthN) and backend APIs using REST or GraphQL. With just a few lines of code, you can have Amplify add all required configurations for your GraphQL API. You have two options to integrate your application with an AppSync API:

  1. Directly using the Amplify GraphQL Client
  2. Using the AWS AppSync SDK

An excellent walk-through of the Amplify toolkit is available here, including an example showing how to create a single-page web app using ReactJS powered by an AppSync GraphQL API.

Finally, if you are interested in a full hands on experience, take a look at:

  • The Amazon API Gateway WildRydes workshop. This workshop teaches you how to build a functional single page web app with a REST backend, powered by API Gateway.
  • The AWS AppSync GraphQL Photo Workshop. This workshop teaches you how to use Amplify to quickly build a Photo sharing web app, powered by AppSync.

Useful Documentation

The official AWS documentation is the source of truth for architects and developers. Get started with the API Gateway developer guide. API Gateway currently has two APIs (V1 and V2) for managing the service. Here is where you can view the SDK and CLI reference.

Get started with the AppSync developer guide, and review the AppSync management API.

Summary

As an API architect, your job is not only to design and implement the best API for your use case, but also to figure out which type of API is most cost-effective for your product. For example, an application with high request volume (“chatty”) may benefit from a GraphQL implementation instead of REST.

API Gateway currently charges $3.50 / million requests and provides a free tier of 1 Million requests per month. There is tiered pricing that will reduce your costs as request volume rises. AppSync currently charges $4.00 / million for Query and Mutation requests.

While AppSync pricing per request is slightly higher, keep in mind that the nature of GraphQL APIs typically results in significantly fewer requests overall.

Finally, we encourage you to join us in the coming weeks — we will be starting a series of posts covering messaging best practices.

About the Author

George Mao is a Specialist Solutions Architect at Amazon Web Services, focused on the Serverless platform. George is responsible for helping customers design and operate Serverless applications using services like Lambda, API Gateway, Cognito, and DynamoDB. He is a regular speaker at AWS Summits, re:Invent, and various tech events. George is a software engineer and enjoys contributing to open source projects, delivering technical presentations at technology events, and working with customers to design their applications in the Cloud. George holds a Bachelor of Computer Science and Masters of IT from Virginia Tech.

Announcing the General Availability of API Tokens

Post Syndicated from Garrett Galow original https://blog.cloudflare.com/api-tokens-general-availability/


APIs at Cloudflare


Today we are announcing the general availability of API Tokens – a scalable and more secure way to interact with the Cloudflare API. As part of making a better internet, Cloudflare strives to simplify manageability of a customer’s presence at the edge. Part of the way we do this is by ensuring that all of our products and services are configurable by API. Customers ranging from partners to enterprises to developers want to automate management of Cloudflare. Sometimes that is done via our API directly, and other times it is done via open source software we help maintain like our Terraform provider or Cloudflare-Go library. It is critical that customers who are automating management of Cloudflare can keep their Cloudflare services as secure as possible.

Least Privilege and Why it Matters

Securing software systems is hard. Limiting what a piece of software can do is a good defense, preventing mistakes or malicious actions from having a greater impact than they should. The principle of least privilege helps guide how much access a given system should have to perform actions. As originally formulated by Jerome Saltzer: “Every program and every privileged user of the system should operate using the least amount of privilege necessary to complete the job.” In the case of Cloudflare, many customers have various domains routing traffic leveraging many different services. If a bad actor gets unauthorized access to a system, they can use whatever access that system has to cause further damage or steal additional information.

Let’s see how the capabilities of API Tokens fit into the principle of least privilege.

About API Tokens

API Tokens provide three main capabilities:

  1. Scoping API Tokens by Cloudflare resource
  2. Scoping API Tokens by permission
  3. The ability to provision multiple API Tokens

Let’s break down each of these capabilities.

Scoping API Tokens by Cloudflare Resource

Cloudflare separates service configuration by zone, which typically equates to a domain. Additionally, some customers have multiple accounts, each with many zones. It is important that, when granting API access to a service, it only has access to the accounts and zones that are pertinent to the job at hand. API Tokens can be scoped to cover only specific accounts and specific zones. One common use case is if you have a staging zone and a production zone: an API Token can be limited to only affect the staging zone, with no access to the production zone.

Scoping API Tokens by Permission

Being able to scope an API Token to a specific zone is great, but within one zone there are many different services that can be configured: firewall rules, page rules, and load balancers, just to name a few. If a customer has a service that should only be able to create new firewall rules in response to traffic patterns, then also allowing that service to change DNS records is a violation of least privilege. API Tokens allow you to scope each token to specific permissions. Multiple permissions can be combined to create custom tokens that fit specific use cases.

Multiple API Tokens

If you use Cloudflare to protect and accelerate multiple services, then you may be making API changes to Cloudflare from multiple locations – different servers, VMs, containers, or Workers. Being able to create an API Token per service means each service is insulated from changes to another. If one API Token is leaked or needs to be rolled, there won’t be any impact to the other services’ API Tokens. The capabilities mentioned previously also mean that each service can be scoped to exactly the actions and resources necessary. This allows customers to better realize the practice of least privilege for accessing Cloudflare by API.

Now let’s walk through how to create an API Token and use it.

Using API Tokens

To create your first API Token go to the ‘API Tokens’ section of your user profile which can be found here: dash.cloudflare.com/profile/api-tokens

1. On this page, you will find a list of all of your API Tokens, in addition to your Global API Key and Origin CA Key.

[Image: API Tokens Getting Started – Create Token]

To create your first API Token, select ‘Create Token’.


2. On the create screen there are two ways to create your token. You can create it from scratch through the ‘Custom’ option or you can start with a predefined template by selecting ‘Start with a template’.

[Image: API Token Template Selection]

For this case, we will use the ‘Edit zone DNS’ template to create an API Token that can edit a single zone’s DNS records.


3. Once the template is selected, we need to pick a zone for the API Token to be scoped to. Notice that the DNS Edit permission was already pre-selected.

[Image: Specifying the zone for which the token will be able to control DNS]

In this case, ‘garrettgalow.com’ is selected as the Cloudflare zone that the API Token will be able to edit DNS records for.


4. Once I select ‘Continue to summary’, I’m given a chance to review my selection. In this case the resources and permissions are quite simple, but this gives you a chance to make sure you are giving the API Token exactly the correct amount of privilege before creating it.

[Image: Token Summary – confirmation]


5. Once created, we are presented with the API Token. This screen is the only time you will be shown the secret, so be sure to put it in a safe place! Anyone with this secret can perform the granted actions on the resources specified, so protect it like a password. In the screenshot below I have blacked out the secret for obvious reasons. If you happen to lose the secret, you can always regenerate it from the API Tokens table, so you don’t have to configure all the permissions again.

[Image: Token creation completion screen with the token secret]

In addition to the secret itself, this screen provides an example curl request that can be used to verify that the token was successfully created, as well as an example of how the token should be used for any direct HTTP requests. With API Tokens, we now follow the standard Authorization: Bearer scheme (RFC 6750). Calling that API, we see a successful response telling us that the token is valid and active:

~$ curl -X GET "https://api.cloudflare.com/client/v4/user/tokens/verify" \
>      -H "Authorization: Bearer vh9awGupxxxxxxxxxxxxxxxxxxx" \
>      -H "Content-Type:application/json" | jq

{
  "result": {
    "id": "ad599f2b67cdccf24a160f5dcd7bc57b",
    "status": "active"
  },
  "success": true,
  "errors": [],
  "messages": [
    {
      "code": 10000,
      "message": "This API Token is valid and active",
      "type": null
    }
  ]
}

What’s coming next

For anyone using the Cloudflare API, we recommend moving to using API Tokens over their predecessor API Keys going forward. With this announcement, our Terraform provider, Cloudflare-Go library, and WordPress plugin are all updated for API Token compatibility. Other libraries will receive updates soon. Both API Tokens and API Keys will be supported for the time being for customers to be able to safely migrate. We have more planned capabilities for API Tokens to further safeguard how and when tokens are used, so stay tuned for future announcements!

Let us know what you think and what you’d like to see next regarding API security on the Cloudflare Community.

Things to Consider When You Build a GraphQL API with AWS AppSync

Post Syndicated from Steve Johnson original https://aws.amazon.com/blogs/architecture/things-to-consider-when-you-build-a-graphql-api-with-aws-appsync/

Co-authored by George Mao

When building a serverless API layer in AWS (one that provides a custom grammar for your serverless resources), your choices include Amazon API Gateway (REST) and AWS AppSync (GraphQL). We’ve discussed the differences between REST and GraphQL in our first post of this series and explored REST APIs in our second post. This post will dive deeper into GraphQL implementation with AppSync.

Note that the GraphQL specification is focused on grammar and expected behavior, and is light on implementation details. Therefore, each GraphQL implementation approaches these details in a different way. While this blog post speaks to architectural principles, it will also discuss specific features of AppSync for practical advice.

Schema vs. Resolver Complexity

All GraphQL APIs are defined by their schema. The schema contains operation definitions (Queries, Mutations, and Subscriptions), as well as data definitions (Data and Input Types). Since GraphQL tools provide introspection, your Schema also becomes your API documentation. So, as your Schema grows, it’s important that it’s consistent and adheres to best practices (such as the use of Input Types for mutations).

Clients will love using your API if they can do what they want with as little work as possible. A good rule of thumb is to put any necessary complexity in the resolver rather than in the client code. For example, if you know client applications will need “Book” information that includes the cover art and current sales ranking – all from different data sources – you can build a single data type that combines them:

[Image: a combined Book type assembled from multiple data sources]
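
As a sketch of the idea (the field names below are illustrative, not taken from the original diagram):

type Book {
  id: ID!
  title: String!
  coverArtUrl: String # resolved from an image service
  salesRank: Int # resolved from a sales-analytics service
}

type Query {
  getBook(id: ID!): Book
}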

In this case, the complexity associated with assembling that data should be handled in the resolver, rather than forcing the client code to make multiple calls and manipulate the returned data.

Mapping data storage to schema gets more complex when you are accessing legacy data services: internal APIs, external REST services, relational database SQL, and services with custom protocols or libraries. The other end of the spectrum is new development, where some architects argue they can map their entire schema to a single source. In practice, your mapping should consider the following:

  • Should slower data sources have a caching layer?
  • Do your most frequently used operations have low latency?
  • How many layers (services) does a request touch?
  • Can high latency requests be modeled asynchronously?

With AppSync, you have the option to use Pipeline Resolvers, which execute reusable functions within a resolver context. Each function in the pipeline can call one of the native resolver types.

Security

Public APIs (ones with an external endpoint) provide access to secured resources. The first line of defense is authorization – restricting who can call an operation in the GraphQL Schema. AppSync has four methods to authenticate clients:

  • API keys: Since the API key does not reference an identity and is easily compromised, we recommend it be used for development only.
  • IAM: These are standard AWS credentials that are often used for server-side processes. A common example is assigning an execution role to an AWS Lambda function that makes calls to AppSync.
  • OIDC Tokens: These are time-limited and suitable for external clients.
  • Cognito User Pool Tokens: These provide the advantages of OIDC tokens, and also allow you to use the @auth transform for authorization.

Authorization in GraphQL is handled in the resolver logic, which allows field-level access to your data, depending on any criteria you can express in the resolver. This allows flexibility, but also introduces complexity in the form of code.

AppSync Resolvers use a Request/Response template pattern similar to API Gateway. Like API Gateway, the template language is Apache Velocity. In an AppSync resolver, you have the ability to examine the incoming authentication information (such as the IAM username) in the context variable. You can then compare that username against an owner field in the data being retrieved.

AppSync provides declarative security using the @auth and @model transforms. Transforms are annotations you add to your schema that are interpreted by the Amplify Toolchain. Using the @auth transform, you can apply different authentication types to different operations. AppSync will automatically generate resolver logic for DynamoDB, based on the data types in your schema. You can also define field-level permissions based on identity. Get detailed information.
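
For instance, an Amplify schema using these transforms might look like the following sketch (the type and fields are illustrative): an owner-based rule that restricts access to the record’s creator.

type Post @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  title: String!
  owner: String
}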

Performance

To get a quantitative view of your API’s performance, you should enable field-level logging on your AppSync API. Doing so will automatically emit information into CloudWatch Logs. Then you can analyze AppSync performance with CloudWatch Logs Insights to identify performance bottlenecks and the root cause of operational problems, such as:

  • Resolvers with the maximum latency
  • The most (or least) frequently invoked resolvers
  • Resolvers with the most errors

Remember, choice of resolver type has an impact on performance. When accessing your data sources, you should prefer a native resolver type such as Amazon DynamoDB or Amazon Elasticsearch Service using VTL templates. The HTTP resolver type is very efficient, but latency depends on the performance of the downstream service. Lambda resolvers provide flexibility, but have the performance characteristics of your application code.

AppSync also has another resolver type, the Local Resolver, which doesn’t interact with a data source. Instead, this resolver invokes a mutation operation that will result in a subscription message being sent. This is useful in use cases where AppSync is used as a message bus, or in cases where the data has been modified by an external source, and notifications must be sent without modifying the data a second time.

[Image: a local resolver powering pub/sub-style notifications]
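
As a sketch, the schema side of that pattern can look like this (type and field names are illustrative; the mutation would be attached to a local, or “None”, data source):

type Status {
  message: String!
}

type Mutation {
  # Backed by a local data source: nothing is stored or modified.
  publishStatus(message: String!): Status
}

type Subscription {
  onStatus: Status
    @aws_subscribe(mutations: ["publishStatus"])
}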

GraphQL Subscriptions

One of the reasons customers choose GraphQL is the power of Subscriptions. These are notifications that are sent immediately to clients when data has been changed by a mutation. AppSync subscriptions are implemented using WebSockets, and are directly tied to a mutation in the schema. The AppSync SDKs and AWS Amplify Library allow clients to subscribe to these real-time notifications.

AppSync Subscriptions have many uses outside standard API CRUD operations. They can be used for inter-client communication, such as a mobile or web chat application. Subscription notifications can also be used to provide asynchronous responses to long-running requests. The initial request returns quickly, while the full result can be sent via subscription when it’s complete (local resolvers are useful for this pattern).

Subscriptions will only be received if the client is currently running, connected to the server, and is subscribed to the target mutation. If you have a mobile client, you may want to augment these notifications with mobile or web push notifications.

Summary

The choice of using a GraphQL API brings many advantages, especially for client developers. While basic considerations of Security and Performance are important (as they are in any solution), GraphQL APIs require some thought and planning around Schema and Resolver organization, and managing their complexity.

About the author

Steve Johnson

Steve Johnson is a Specialist Solutions Architect at Amazon Web Services, focused on Mobile Applications. Steve helps customers design Mobile and GraphQL applications using AWS Amplify, Amazon AppSync, Amazon Cognito, and the AWS Serverless Suite of products. He is a speaker at AWS Summits and various tech events. Steve is a software and systems engineer and enjoys tinkering with all things mechanical and cloud related. He holds a Bachelor of Mechanical Engineering and Masters in Software Systems Engineering. Steve lives and works near us-east-1, because the latency is good.


Scaling Kubernetes deployments with Amazon CloudWatch metrics

Post Syndicated from Ignacio Riesgo original https://aws.amazon.com/blogs/compute/scaling-kubernetes-deployments-with-amazon-cloudwatch-metrics/

This post is contributed by Kwunhok Chan | Solutions Architect, AWS


In an earlier post, AWS introduced Horizontal Pod Autoscaler and Kubernetes Metrics Server support for Amazon Elastic Kubernetes Service. These tools make it easy to scale your Kubernetes workloads managed by EKS in response to built-in metrics like CPU and memory.

However, one common use case for applications running on EKS is the integration with AWS services. For example, you administer an application that processes messages published to an Amazon SQS queue. You want the application to scale according to the number of messages in that queue. The Amazon CloudWatch Metrics Adapter for Kubernetes (k8s-cloudwatch-adapter) helps.


Amazon CloudWatch Metrics Adapter for Kubernetes

The k8s-cloudwatch-adapter is an implementation of the Kubernetes Custom Metrics API and External Metrics API with integration for CloudWatch metrics. It allows you to scale your Kubernetes deployment using the Horizontal Pod Autoscaler (HPA) with CloudWatch metrics.


Prerequisites

Before starting, you need the following:


Getting started

Before using the k8s-cloudwatch-adapter, set up a way to manage IAM credentials to Kubernetes pods. The CloudWatch Metrics Adapter requires the following permissions to access metric data from CloudWatch:

cloudwatch:GetMetricData

Create an IAM policy with the following template:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "cloudwatch:GetMetricData"
            ],
            "Resource": "*"
        }
    ]
}

For demo purposes, I’m granting admin permissions to my Kubernetes worker nodes. Don’t do this in your production environment. To associate IAM roles to your Kubernetes pods, you may want to look at kube2iam or kiam.

If you’re using an EKS cluster, you most likely provisioned it with AWS CloudFormation. The following command uses AWS CloudFormation stacks to update the proper instance policy with the correct permissions:

aws iam attach-role-policy \
--policy-arn arn:aws:iam::aws:policy/AdministratorAccess \
--role-name $(aws cloudformation describe-stacks --stack-name ${STACK_NAME} --query 'Stacks[0].Parameters[?ParameterKey==`NodeInstanceRoleName`].ParameterValue' | jq -r ".[0]")

Make sure to replace ${STACK_NAME} with the nodegroup stack name from the AWS CloudFormation console.

You can now deploy the k8s-cloudwatch-adapter to your Kubernetes cluster.

$ kubectl apply -f https://raw.githubusercontent.com/awslabs/k8s-cloudwatch-adapter/master/deploy/adapter.yaml


This deployment creates a new namespace—custom-metrics—and deploys the necessary ClusterRole, Service Account, and Role Binding values, along with the deployment of the adapter. Use the created custom resource definition (CRD) to define the configuration for the external metrics to retrieve from CloudWatch. The adapter reads the configuration defined in ExternalMetric CRDs and loads its external metrics. That allows you to use HPA to autoscale your Kubernetes pods.


Verifying the deployment

Next, query the metrics APIs to see if the adapter is deployed correctly. Run the following command:

$ kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1" | jq
{
  "kind": "APIResourceList",
  "apiVersion": "v1",
  "groupVersion": "external.metrics.k8s.io/v1beta1",
  "resources": [
  ]
}

There are no resources from the response because you haven’t registered any metric resources yet.


Deploying an Amazon SQS application

Next, deploy a sample SQS application to test out k8s-cloudwatch-adapter. The SQS producer and consumer are provided, together with the YAML files for deploying the consumer, metric configuration, and HPA.

Both the producer and consumer use an SQS queue named helloworld. If it doesn’t exist already, the producer creates this queue.

Deploy the consumer with the following command:

$ kubectl apply -f https://raw.githubusercontent.com/awslabs/k8s-cloudwatch-adapter/master/samples/sqs/deploy/consumer-deployment.yaml


You can verify that the consumer is running with the following command:

$ kubectl get deploy sqs-consumer
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
sqs-consumer   1         1         1            0           5s


Set up Amazon CloudWatch metric and HPA

Next, create an ExternalMetric resource for the CloudWatch metric. Take note of the Kind value for this resource. This CRD resource tells the adapter how to retrieve metric data from CloudWatch.

You define the query parameters used to retrieve the ApproximateNumberOfMessagesVisible for an SQS queue named helloworld. For details about how metric data queries work, see CloudWatch GetMetricData API.

apiVersion: metrics.aws/v1alpha1
kind: ExternalMetric
metadata:
  name: hello-queue-length
spec:
  name: hello-queue-length
  resource:
    resource: "deployment"
  queries:
    - id: sqs_helloworld
      metricStat:
        metric:
          namespace: "AWS/SQS"
          metricName: "ApproximateNumberOfMessagesVisible"
          dimensions:
            - name: QueueName
              value: "helloworld"
        period: 300
        stat: Average
        unit: Count
      returnData: true


Create the ExternalMetric resource:

$ kubectl apply -f https://raw.githubusercontent.com/awslabs/k8s-cloudwatch-adapter/master/samples/sqs/deploy/externalmetric.yaml


Then, set up the HPA for your consumer. Here is the configuration to use:

kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: sqs-consumer-scaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: sqs-consumer
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metricName: hello-queue-length
      targetValue: 30


This HPA rule starts scaling out when the number of messages visible in your SQS queue exceeds 30, and scales in when there are fewer than 30 messages in the queue.

Create the HPA resource:

$ kubectl apply -f https://raw.githubusercontent.com/awslabs/k8s-cloudwatch-adapter/master/samples/sqs/deploy/hpa.yaml


Generate load using a producer

Finally, you can start generating messages to the queue:

$ kubectl apply -f https://raw.githubusercontent.com/awslabs/k8s-cloudwatch-adapter/master/samples/sqs/deploy/producer-deployment.yaml

On a separate terminal, you can now watch your HPA retrieving the queue length and start scaling the replicas. SQS metrics generate at five-minute intervals, so give the process a few minutes:

$ kubectl get hpa sqs-consumer-scaler -w


Clean up

After you complete this experiment, you can delete the Kubernetes deployment and respective resources.

Run the following commands to remove the producer, consumer, external metric, HPA, and SQS queue:

$ kubectl delete deploy sqs-producer
$ kubectl delete hpa sqs-consumer-scaler
$ kubectl delete externalmetric hello-queue-length
$ kubectl delete deploy sqs-consumer

$ aws sqs delete-queue --queue-url <your-helloworld-queue-url>


Other CloudWatch integrations

AWS recently announced the preview for Amazon CloudWatch Container Insights, which monitors, isolates, and diagnoses containerized applications running on EKS and Kubernetes clusters. To get started, see Using Container Insights.


Get involved

This project is currently under development. AWS welcomes issues and pull requests, and would love to hear your feedback.

How could this adapter be best implemented to work in your environment? Visit the Amazon CloudWatch Metrics Adapter for Kubernetes project on GitHub and let AWS know what you think.

Announcing the New Cloudflare Partner Platform

Post Syndicated from Garrett Galow original https://blog.cloudflare.com/announcing-the-new-cloudflare-partner-platform/


When I first started at Cloudflare over two years ago, one of the first things I was tasked with was helping evolve our partner platform to support the changes in our service and the expanding needs of our partners and customers. Cloudflare’s existing partner platform was released in 2010. It is a testament to those who built it that it was, and still is, in use today, but it was also clear that the landscape had substantially changed.

Since the launch of the existing partner platform, we had built and expanded multi-user access, and launched many new products: Argo, Load Balancing, and Cloudflare Workers, to name a few. Retrofitting the existing offering was not practical. Cloudflare needed a new partner platform that could meet the needs of partners and their customers.

As the team started to develop a new solution, we needed to find a partner who could keep us on the right path. The number of hypotheticals was infinite and we needed a first customer to ground ourselves. Lo and behold, not long after I had begun putting pen to paper, we found the perfect partner for the new platform.

The IBM Partnership

IBM was looking for a partner to bring various edge services to market quickly, and our suite of capabilities was what they were looking for. If you are not familiar with our partnership with IBM, you can learn a bit more about it in our blog post and on the IBM Cloud Internet Services landing page. We signed the contract in November 2017, and we had to be ready to launch by IBM Think the following February. Given that IBM’s engineering team needed time to integrate with us, we were on a tight timeline to deliver.

A number of team members and I jumped on a plane and flew to Austin, Texas (Hook ‘em!) to work with IBM and determine the minimum viable product (MVP). Over kolaches (for the Czech readers at home: Klobásník), IBM and Cloudflare nailed down the MVP requirements. Briefly, they were as follows:

  1. Full API integration to provision the building blocks of using Cloudflare.
    • This included:
      1. Accounts: The container of resources – typically zones
      2. Users: The way in which we partition access to accounts
  2. The ability to sell and provision Cloudflare’s paid services and package them in a way that made sense for IBM’s customers.
    • Our existing partner platform only supported zone plans and none of our newer offerings, such as Argo or load balancing.
    • IBM had specific requirements around how they could package and sell to customers, so our solution needed to be flexible enough to support that.
  3. Ensure that what we built was re-usable.
    • Cloudflare makes it a point to solve problems for scale. While we were focused on ensuring our first partner would be successful, we knew that long term we would need to be able to scale this solution to additional partners. Nothing we built could prevent us from doing that.

Over the next couple of months, many teams at Cloudflare came together to deliver this solution at breakneck speed. Given that the midpoint of this effort happened over the holiday season, I’m personally proud that our company did not sacrifice employees’ time with their friends and families in order to deliver. Even when it feels like a sprint, it is still a marathon.

During this time, the engineering team we were working with at IBM felt like another team at Cloudflare. Their ability to move quickly, integrate, and validate our work was critical to the success of the project. At THINK in February 2018, we were able to announce the Beta of IBM CIS (Cloud Internet Services) powered by Cloudflare!

Following the initial release, we continued to add functionality to further enrich the IBM CIS offering, while behind the scenes we continued our work to redefine Cloudflare’s partner platform.

The New Partner Platform

Over the past year we have expanded the capabilities and completed the necessary work to enable more partners to be able to use what we initially built for the IBM partnership. Out of that comes our new partner platform we are announcing today. The new partner platform allows partners of Cloudflare to sell and provision Cloudflare for their customers in a scalable fashion.

Our new partner platform is the combination of two systems designed to fulfill specific needs:

1. Tenants: an abstraction on top of our existing accounts and users for easier management
2. Subscriptions: a new way of packaging and provisioning services

Tenants

An absolute necessity for partners is the ability to provision accounts for each of their customers. Normally the only way to get a Cloudflare account is to sign up on the dashboard. We needed a way for partners to be able to create end customer accounts at their discretion to support their specific onboarding needs. This also ensures proper separation of ownership between customers and allows end customers to access the Cloudflare dashboard directly.

With the introduction of tenants, our data model now looks like the following:

[Image: Cloudflare Resource Data Model]

Tenants provide partners the ability to create and manage accounts for their customers. Each account created is a separate container of resources (zones, Workers, etc.) for each customer. Users can be invited to each account as necessary for self-service management, while the partner retains control of the capabilities enabled for each account. How a partner manages those capabilities brings us to the second major system that makes up the new partner platform.

Subscriptions

While not as obvious as the need for account provisioning, the ability to package and provision services is critical to providing differentiated offerings for partners of Cloudflare. One drawback of our old partner platform was the difficulty in ensuring new products and services were available to those partners. As Cloudflare grew, it reached the point where new paid services could not be added into the existing partner platform.

With subscriptions, this is no longer the case. What started as just a way to provision services for IBM, has now grown into the standard of how all customer services are provisioned at Cloudflare. Whether you purchase services through IBM CIS or buy Cloudflare Workers in our dashboard, behind the scenes, Subscriptions is what ensures you get exactly the right services enabled.

Enough talk, let’s show things in action!

The Partner Platform in Action

The full details of using the new partner platform can be found in our Provisioning API docs, but here we provide a walkthrough of a typical use case.

Using the new partner platform involves 4 steps:

  1. Provisioning Customer Accounts
  2. Granting Customer Access
  3. Enabling Services
  4. Service Configuration

1) Provisioning Customer Accounts

When onboarding customers, you want each to have their own Cloudflare account. This ensures one customer cannot affect any resources belonging to another. By making a `POST /accounts` request, you can create an account for an individual customer.

Request:

curl -X POST \
    https://api.cloudflare.com/client/v4/accounts \
    -H 'Content-Type: application/json' \
    -H 'x-auth-email: <x-auth-email>' \
    -H 'x-auth-key: <x-auth-key>' \
    -d '{ "name": "Customer Account", 
          "type": "standard" 
        }'

Response:

{
    "result": {
        "id": "2bab6ace8c72ed3f09b9eca6db1396bb",
        "name": "Customer Account",
        "type": "standard",
        "settings": {
            "enforce_twofactor": false
        }
    },
    "success": true,
    "errors": [],
    "messages": []
}

This new account is owned by the partner. It can be managed by API, or in the UI by the partner or any additional administrators that are invited.

2) Granting Customer Access

Now that the customer’s account is created, let’s give them access to it. This step uses existing APIs and if you have shared access to a Cloudflare account before, then you have already done this.

Request:

curl -X POST \
    'https://api.cloudflare.com/client/v4/accounts/2bab6ace8c72ed3f09b9eca6db1396bb/members' \
    -H 'Content-Type: application/json' \
    -H 'x-auth-email: <x-auth-email>' \
    -H 'x-auth-key: <x-auth-key>' \
    -d '{ "email": "[email protected]",
          "roles": ["05784afa30c1afe1440e79d9351c7430"],
          "status": "accepted" 
        }'

Response:

{
    "result": {
        "id": "47bd8083af8516a20c410090d2f53655",
        "user": {
            "id": "fccad3c46f26dc2d6ba47ad19f639707",
            "first_name": null,
            "last_name": null,
            "email": "[email protected]",
            "two_factor_authentication_enabled": false
        },
        "status": "pending",
        "roles": [
            {
                "id": "05784afa30c1afe1440e79d9351c7430",
                "name": "Administrator",
                "description": "Can access the full account, except for membership management and billing.",
                "permissions": {
                    "organization": {
                        "read": true,
                        "edit": true
                    },
                    "zone": {
                        "read": true,
                        "edit": true
                    },
                    truncated...
                }
            }
        ]
    },
    "success": true,
    "errors": [],
    "messages": []
}

Alternatively, you can do this in the UI, from the Members section for the newly created account.

3) Enabling Services

Now the fun part! With the ability to provision subscriptions, you can enable paid services for your customers. Before we do that though, we will create a zone so we can attach a zone subscription to it.

Adding a zone as a partner is no different than adding a zone as a regular customer. It can also be done by the customer.

Request:

curl -X POST \
    https://api.cloudflare.com/client/v4/zones \
    -H 'Content-Type: application/json' \
    -H 'x-auth-email: <x-auth-email>' \
    -H 'x-auth-key: <x-auth-key>' \
    -d '{ "name": "theircompany.com",
            "account": { "id": "2bab6ace8c72ed3f09b9eca6db1396bb" }
        }'

Response:

{
    "result": {
        "id": "cae181e41197e2eb875d9bcb9396abe7",
        "name": "theircompany.com",
        "status": "pending",
        "paused": false,
        "type": "full",
        "development_mode": 0,
        "name_servers": [
            "lana.ns.cloudflare.com",
            "lynn.ns.cloudflare.com"
        ],
        "original_name_servers": null,
        "original_registrar": "cloudflare, inc.",
        "original_dnshost": null,
        "modified_on": "2019-05-30T17:51:08.510558Z",
        "created_on": "2019-05-30T17:51:08.510558Z",
        "activated_on": null,
        "meta": {
            "step": 4,
            "wildcard_proxiable": false,
            "custom_certificate_quota": 0,
            "page_rule_quota": 3,
            "phishing_detected": false,
            "multiple_railguns_allowed": false
        },
        "owner": {
            "id": null,
            "type": "user",
            "email": null
        },
        "account": {
            "id": "2bab6ace8c72ed3f09b9eca6db1396bb",
            "name": "Customer Account"
        },
        "permissions": [
            "#access:edit",
            "#access:read",
            ...truncated
        ],
        "plan": {
            "id": "0feeeeeeeeeeeeeeeeeeeeeeeeeeeeee",
            "name": "Free Website",
            "price": 0,
            "currency": "USD",
            "frequency": "",
            "is_subscribed": true,
            "can_subscribe": false,
            "legacy_id": "free",
            "legacy_discount": false,
            "externally_managed": false
        }
    },
    "success": true,
    "errors": [],
    "messages": []
}

For this customer we will provision a Pro plan for the newly created zone. If you are not familiar with our zone plans, then you can read about them here. For this, we make a call to the subscriptions service.

Request:

curl -X POST \
    https://api.cloudflare.com/client/v4/zones/cae181e41197e2eb875d9bcb9396abe7/subscription \
  -H 'Content-Type: application/json' \
  -H 'X-Auth-Email: <x-auth-email>' \
  -H 'X-Auth-Key: <x-auth-key>' \
  -d '{"rate_plan": {
          "id": "PARTNERS_PRO"}
      }'

Response:

{
    "success": true,
    "result": {
        "id": "ff563a93e11c46e7b278be46f49cdd2f",
        "product": {
            "name": "partners_cloudflare_zones",
            "period": "",
            "billing": "",
            "public_name": "CloudFlare Services",
            "duration": 0
        },
        "rate_plan": {
            "id": "partners_pro",
            "public_name": "Partners Professional Plan",
            "currency": "USD",
            "scope": "zone",
            "externally_managed": false,
            "sets": [
                "zone",
                "partner"
            ],
            "is_contract": true
        },
        "component_values": [
            {
                "name": "dedicated_certificates",
                "value": 0,
                "price": 0
            },
            {
                "name": "dedicated_certificates_custom",
                "value": 0,
                "price": 0
            },
            {
                "name": "page_rules",
                "value": 20,
                "default": 20,
                "price": 0
            },
            {
                "name": "zones",
                "value": 1,
                "default": 1,
                "price": 0
            }
        ],
        "zone": {
            "id": "cae181e41197e2eb875d9bcb9396abe7",
            "name": "theircompany.com"
        },
        "frequency": "monthly",
        "currency": "USD",
        "app": {
            "install_id": null
        },
        "entitled": true
    },
    "messages": null,
    "api_version": "2.0.0"
}

Now that the customer is set up with an account, zone, and zone subscription, the only thing left is configuring the resources appropriately.

4) Service Configuration

Service configuration can be done by either you, the partner, or the end customer. Most commonly, DNS records need to be added, security settings verified and updated, and customizations made. These can all be done either through our Client v4 APIs or the Cloudflare Dashboard.
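For example, adding a DNS record to the new zone is a single call to the DNS records endpoint. Here is a minimal sketch in Node.js (18+ for the built-in fetch; run it in an async context). The record values are placeholders, and the zone ID matches the example above:

const response = await fetch(
    "https://api.cloudflare.com/client/v4/zones/cae181e41197e2eb875d9bcb9396abe7/dns_records",
    {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "X-Auth-Email": process.env.CF_AUTH_EMAIL, // same credentials as the curl examples
            "X-Auth-Key": process.env.CF_AUTH_KEY
        },
        body: JSON.stringify({
            type: "A",
            name: "theircompany.com",
            content: "203.0.113.10", // placeholder origin IP
            proxied: true            // serve the record through Cloudflare
        })
    }
);
console.log((await response.json()).success);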

Once that is done, the customer is all set!

This is just the beginning

With our announcement today, partners can protect and accelerate their customers' Internet services with Cloudflare's partner platform. We have battle-tested the underlying systems over the last year and are excited to partner with others to help make a better Internet. We are not done yet, though. We will keep investing in the tenant and subscription services to expand their capabilities and simplify usage.

Some of the latest partners using the new partner platform

If you are interested in partnering with Cloudflare, then reach out to [email protected]. If building the future of how Cloudflare’s partners and customers use our service sounds interesting then take a look at our career page.



Android Rx onError Guidelines

Post Syndicated from Netflix Technology Blog original https://medium.com/netflix-techblog/android-rx-onerror-guidelines-e68e8dc7383f?source=rss----2615bd06b42e---4

Rx onError Guidelines

By Ed Ballot

“Creating a good API is hard.” — anyone who has created an API used by others

As with any API, wrapping your data stream in an Rx observable requires consideration of reasonable error handling and intuitive behavior. The following guidelines are intended to help developers create consistent and intuitive APIs.

Since we frequently create Rx Observables in our Android app, we needed a common understanding of when to use onNext() and when to use onError() to make the API more consistent for subscribers. The divergent understanding is partially because the name “onError” is a bit misleading. The item emitted by onError() is not a simple error, but a throwable that can cause significant damage if not caught. Our app has a global handler that prevents it from crashing outright, but an uncaught exception can still leave parts of the app in an unpredictable state.

TL;DR — Prefer onNext() and only use onError() for exceptional cases.

Considerations for onNext / onError

The following are points to consider when determining whether to use onNext() versus onError().

The Contract

First here are the definitions of the two from the ReactiveX contract page:

OnNext
conveys an item that is emitted by the Observable to the observer

OnError
indicates that the Observable has terminated with a specified error condition and that it will be emitting no further items

As pointed out in the above definition, a subscription is automatically disposed after onError(), just like after onComplete(). Because of this, onError() should only be used to signal a fatal error and never to signal an intermittent problem where more data is expected to stream through the subscription after the error.

Treat it like an Exception

Limit onError() to exceptional circumstances, when you'd also consider throwing an Error or Exception. The reasoning is that the onError() parameter is a Throwable. An example of the distinction: a database query returning zero results is typically not an exception, while the database returning zero results because it was forcibly closed (or otherwise put into a state that cancels the running query) would be an exceptional condition.
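To make the distinction concrete, here is a sketch of such a wrapper. The post targets RxJava, but the contract is identical across ReactiveX implementations, so for consistency with the other snippets in this archive the sketch uses RxJS in JavaScript; db.queryAsync is a hypothetical database call:

const { Observable } = require("rxjs");

function queryUsers(db, sql) {
    return new Observable((subscriber) => {
        db.queryAsync(sql, (err, rows) => { // hypothetical async query API
            if (err) {
                // The database was forcibly closed or the running query was
                // cancelled: an exceptional condition, so terminate with onError.
                subscriber.error(err);
            } else {
                // Zero rows is an ordinary outcome: emit it via onNext.
                subscriber.next(rows); // possibly an empty array
                subscriber.complete();
            }
        });
    });
}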

Be Consistent

Do not make your observable emit a mix of both deterministic and non-deterministic errors. Something is deterministic if the same input always results in the same output; dividing by 0, for example, will fail every time. Something is non-deterministic if the same inputs may result in different outputs; a network request may time out or may return results before the timeout. Rx has convenience methods built around error handling, such as retry() (and our retryWithBackoff()). The primary use of retry() is to automatically re-subscribe an observable that has non-deterministic errors. When an observable mixes the two types of errors, it makes retrying less obvious, since retrying a deterministic failure doesn't make sense: the retry is guaranteed to fail, so it is wasteful. (Two notes: 1. retry() can also be used in certain deterministic cases, such as user login attempts where the failure is caused by incorrectly entered credentials. 2. For mixed errors, retryWhen() could be used to retry only the non-deterministic errors.) If you find your observable needs to emit both types of errors, consider whether there is an appropriate separation of concerns. It may be that the observable can be split into several observables that each have a more targeted purpose.
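As an illustration of the retryWhen() approach, here is a sketch in RxJS 7 (again standing in for RxJava; request$ is an assumed observable wrapping a network call). A TimeoutError is treated as non-deterministic and retried with backoff; any other error is re-thrown immediately:

const { timer, throwError, TimeoutError } = require("rxjs");
const { retryWhen, mergeMap } = require("rxjs/operators");

const resilient$ = request$.pipe(
    retryWhen((errors) =>
        errors.pipe(
            mergeMap((err, attempt) =>
                err instanceof TimeoutError && attempt < 3
                    ? timer(1000 * (attempt + 1)) // back off, then resubscribe
                    : throwError(() => err)       // deterministic error: fail fast
            )
        )
    )
);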

Be Consistent with Underlying APIs

When wrapping an asynchronous API in Rx, consider maintaining consistency with the underlying API’s error handling. For example, if you are wrapping a touch event system that treats moving off the device’s touchscreen as an exception and terminates the touch session, then it may make sense to emit that error via onError(). On the other hand, if it treats moving off the touchscreen as a data event and allows the user to drag their finger back onto the screen, it makes sense to emit it via onNext().

Avoid Business Logic

Related to the previous point. Avoid adding business logic that interprets the data and converts it into errors. The code that the observable is wrapping should have the appropriate logic to perform these conversions. In the rare case that it does not, consider adding an abstraction layer that encapsulates this logic (for both normal and error cases) rather than building it into the observable.

Passing Details in onError()

If your code is going to use onError(), remember that the throwable it emits should include appropriate data for the subscriber to understand what went wrong and how to handle it.

For example, our Falcor response handler uses a FalcorError class that includes the Status from the callback. Repositories could also throw an extension of this class, if extra details need to be included.
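A hypothetical sketch of such an error type (the name mirrors the FalcorError described above, but the fields are illustrative, not Netflix's actual implementation):

class FalcorError extends Error {
    constructor(message, status) {
        super(message);
        this.name = "FalcorError";
        this.status = status; // the Status from the callback, for subscribers to inspect
    }
}

// A subscriber can then branch on the details instead of guessing:
// source$.subscribe({
//     error: (e) => { if (e instanceof FalcorError) handleStatus(e.status); }
// });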



Eating Dogfood at Scale: How We Build Serverless Apps with Workers

Post Syndicated from Jonathan Spies original https://blog.cloudflare.com/building-serverless-apps-with-workers/

Eating Dogfood at Scale: How We Build Serverless Apps with Workers


You’ve had a chance to build a Cloudflare Worker. You’ve tried KV Storage and have a great use case for your Worker. You’ve even demonstrated the usefulness to your product or organization. Now you need to go from writing a single file in the Cloudflare Dashboard UI Editor to source controlled code with multiple environments deployed using your favorite CI tool.

Fortunately, we have a powerful and flexible API for managing your workers. You can customize your deployment to your heart's content. Our blog has already featured many things made possible by that API.

These tools make deployments easier to configure, but they still take time to manage. The Serverless Framework Cloudflare Workers plugin removes that deployment overhead so you can spend more time working on your application and less on your deployment.

Focus on your application

Here at Cloudflare, we've been working to rebuild our Access product to run entirely on Workers. The move will allow Access to take advantage of the resiliency, performance, and flexibility of Workers. We'll publish a more detailed post about that migration once it's complete, but the experience required that we retool some of our processes to match our existing development experience as much as possible.

To us this meant:

  • Git
  • Easily deploy
  • Different environments
  • Unit Testing
  • CI Integration
  • Typescript/Multiple Files
  • Everything Must Be Automated

The Cloudflare Access team looked at three options for automating all of these tools in our pipeline. All of the options will work and could be right for you, but custom scripting can be a chore to maintain and Terraform lacked some extensibility.

  1. Custom Scripting
  2. Terraform
  3. Serverless Framework

We decided on the Serverless Framework. Serverless Framework provided a tool to mirror our existing process as closely as possible without too much DevOps overhead. Serverless is extremely simple and doesn't interfere with the application code. You can get a project set up and deployed in seconds. It's obviously less work than writing your own custom management scripts. But it also requires less boilerplate than Terraform because the Serverless Framework is designed for the "serverless" niche. However, if you are already using Terraform to manage other Cloudflare products, Terraform might be the best fit.

Walkthrough

Everything for the project happens in a YAML file called serverless.yml. Let’s go through the features of the configuration file.

To get started, we need to install serverless from npm and generate a new project.

npm install serverless -g
serverless create --template cloudflare-workers --path myproject
cd myproject
npm install

If you are an enterprise client, you want to use the cloudflare-workers-enterprise template as it will set up more than one worker (but don’t worry, you can add more to any template). Also, I’ll touch on this later, but if you want to write your workers in Rust, use the cloudflare-workers-rust template.

You should now have a project that feels familiar, ready to be added to your favorite source control. In the project should be a serverless.yml file like the following.

service:
  name: hello-world

provider:
  name: cloudflare
  config:
    accountId: CLOUDFLARE_ACCOUNT_ID
    zoneId: CLOUDFLARE_ZONE_ID

plugins:
  - serverless-cloudflare-workers

functions:
  hello:
    name: hello
    script: helloWorld  # there must be a file called helloWorld.js
    events:
      - http:
          url: example.com/hello/*
          method: GET
          headers:
            foo: bar
            x-client-data: value

The service block simply contains the name of your service. This will be used in your Worker script names if you do not overwrite them.

Under provider, name must be ‘cloudflare’ and you need to add your account and zone IDs. You can find them in the Cloudflare Dashboard.

The plugins section adds the Cloudflare specific code.

Now for the good part: functions. Each block under functions is a Worker script.

name: (optional) If left blank it will be STAGE-service.name-script.identifier. If I removed name from this file and deployed in production stage, the script would be named production-hello-world-hello.

script: the relative path to the JavaScript file with the worker script. I like to organize mine in a folder called handlers.

events: Currently Workers only support http events. We call these routes. The example provided says that GET https://example.com/hello/<anything here> will cause this worker to execute. The headers block is for testing invocations.
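The referenced script has to exist before you deploy. A minimal handler can be as simple as the standard Workers fetch listener (a sketch; the greeting is arbitrary):

addEventListener("fetch", (event) => {
    event.respondWith(new Response("Hello world"));
});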

At this point you can deploy your worker!

CLOUDFLARE_AUTH_EMAIL=<your-email> CLOUDFLARE_AUTH_KEY=XXXXXXXX serverless deploy

This is very easy to deploy, but it doesn't address all our requirements. Luckily, there are just a few simple modifications to make.

Maturing our YAML File

Here’s a more complex YAML file.

service:
  name: hello-world

package:
  exclude:
    - node_modules/**
  excludeDevDependencies: false

custom:
  defaultStage: development
  deployVars: ${file(./config/deploy.${self:provider.stage}.yml)}

kv: &kv
  - variable: MYUSERS
    namespace: users

provider:
  name: cloudflare
  stage: ${opt:stage, self:custom.defaultStage}
  config:
    accountId: ${env:CLOUDFLARE_ACCOUNT_ID}
    zoneId: ${env:CLOUDFLARE_ZONE_ID}

plugins:
  - serverless-cloudflare-workers

functions:
  hello:
    name: ${self:provider.stage}-hello
    script: handlers/hello
    webpack: true
    environment:
      MY_ENV_VAR: ${self:custom.deployVars.env_var_value}
      SENTRY_KEY: ${self:custom.deployVars.sentry_key}
    resources: 
      kv: *kv
    events:
      - http:
          url: "${self:custom.deployVars.SUBDOMAIN}.mydomain.com/hello"
          method: GET
      - http:
          url: "${self:custom.deployVars.SUBDOMAIN}.mydomain.com/alsohello*"
          method: GET

We can add a custom section where we can put custom variables to use later in the file.

defaultStage: We set this to development so that forgetting to pass a stage doesn’t trigger a production deploy. Combined with the stage option under provider we can set the stage for deployment.

deployVars: We use this custom variable to load another YAML file dependent on the stage. This lets us have different values for different stages. In development, this line loads the file ./config/deploy.development.yml. Here’s an example file:

env_var_value: true
sentry_key: XXXXX
SUBDOMAIN: dev

kv: Here we are showing off a feature of YAML. If you assign a name to a block using the ‘&’, you can use it later as a YAML variable. This is very handy in a multi script account. We could have named this variable anything, but we are naming it kv since it holds our Workers Key Value storage settings to be used in our function below.

Inside of the kv block we’re creating a namespace and binding it to a variable available in your Worker. It will ensure that the namespace “users” exists and is bound to MYUSERS.

kv: &kv
  - variable: MYUSERS
    namespace: users

provider: The only new part of the provider block is stage.

stage: ${opt:stage, self:custom.defaultStage}

This line sets stage to either the command line option or custom.defaultStage if opt:stage is blank. When we deploy, we pass --stage=production to serverless deploy.

Now under our function we have added webpack, resources, and environment.

webpack: If set to true, the plugin will simply bundle each handler into a single file for deployment. It will also take a string representing a path to a webpack config file so you can customize it. This is how we add Typescript support to our projects.

resources: This block is used to automate resource creation. In resources we’re linking back to the kv block we created earlier.

Side note: If you would like to include WASM bindings in your project, it can be done in a very similar way to how we included Workers KV. For more information on WASM, see the documentation.

environment: This is the butter for the bread that is managing configuration for different stages. Here we can specify values to bind to variables to use in worker scripts. Combined with YAML magic, we can store our values in the aforementioned config files so that we deploy different values in different stages. With environments, we can easily tie into our CI tool. The CI tool has our deploy.production.yml. We simply run the following command from within our CI.

sls deploy --stage=production

Finally, I added a route to demonstrate that a script can be executed on multiple routes.

At this point I’ve covered (or hinted) at everything on our original list except Unit Testing. There are a few ways to do this.

We have a previous blog post about Unit Testing that covers using cloudworker, a great tool built by Dollar Shave Club.

My team opted to use the classic node frameworks mocha and sinon. Because we are using Typescript, we can build for node or build for v8. You can also make mocha work for non-typescript projects if you use an experimental feature that adds import/export support to node.

--experimental-modules
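A test for a handler might then look like the following sketch, assuming the worker logic is factored into its own module (handlers/hello.js exporting handleRequest is hypothetical) and that a fetch-compatible Request/Response implementation is available (Node 18+ or a polyfill):

import assert from "assert";
import { handleRequest } from "../handlers/hello.js"; // hypothetical module

describe("hello handler", () => {
    it("responds with a greeting", async () => {
        const response = await handleRequest(new Request("https://example.com/hello"));
        assert.strictEqual(response.status, 200);
        assert.strictEqual(await response.text(), "Hello world");
    });
});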

We’re excited about moving more and more of our services to Cloudflare Workers, and the Serverless Framework makes that easier to do. If you’d like to learn even more or get involved with the project, see us on github.com. For additional information on using Serverless Framework with Cloudflare Workers, check out our documentation on the Serverless Framework.

Announcing WebSocket APIs in Amazon API Gateway

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/announcing-websocket-apis-in-amazon-api-gateway/

This post is courtesy of Diego Magalhaes, AWS Senior Solutions Architect – World Wide Public Sector-Canada & JT Thompson, AWS Principal Software Development Engineer – Amazon API Gateway

Starting today, you can build bidirectional communication applications using WebSocket APIs in Amazon API Gateway without having to provision and manage any servers.

HTTP-based APIs use a request/response model, with a client sending a request to a service and the service responding synchronously back to the client. WebSocket-based APIs are bidirectional in nature. This means that a client can send messages to a service and services can independently send messages to their clients.

This bidirectional behavior allows for richer types of client/service interactions because services can push data to clients without a client needing to make an explicit request. WebSocket APIs are often used in real-time applications such as chat applications, collaboration platforms, multiplayer games, and financial trading platforms.

This post explains how to build a serverless, real-time chat application using WebSocket API and API Gateway.

Overview

Historically, building WebSocket APIs required setting up fleets of hosts that were responsible for managing the persistent connections that underlie the WebSocket protocol. Now, with API Gateway, this is no longer necessary. API Gateway handles the connections between the client and service. It lets you build your business logic using HTTP-based backends such as AWS Lambda, Amazon Kinesis, or any other HTTP endpoint.

Before you begin, here are a couple of the concepts behind a WebSocket API in API Gateway. The first is a new resource type called a route. A route describes how API Gateway should handle a particular type of client request, and includes a routeKey parameter, a value that you provide to identify the route.

A WebSocket API is composed of one or more routes. To determine which route a particular inbound request should use, you provide a route selection expression. The expression is evaluated against an inbound request to produce a value that corresponds to one of your route’s routeKey values.

There are three special routeKey values that API Gateway allows you to use for a route:

  • $default—Used when the route selection expression produces a value that does not match any of the other route keys in your API routes. This can be used, for example, to implement a generic error handling mechanism.
  • $connect—The associated route is used when a client first connects to your WebSocket API.
  • $disconnect—The associated route is used when a client disconnects from your API. This call is made on a best-effort basis.

You are not required to provide routes for any of these special routeKey values.

Build a serverless, real-time chat application

To understand how you can use the new WebSocket API feature of API Gateway, I show you how to build a real-time chat app. To simplify things, assume that there is only a single chat room in this app. Here are the capabilities that the app includes:

  • Clients join the chat room as they connect to the WebSocket API.
  • The backend can send messages to specific users via a callback URL that is provided after the user is connected to the WebSocket API.
  • Users can send messages to the room.
  • Disconnected clients are removed from the chat room.

Here’s an overview of the real-time chat application:

A serverless real-time chat application using WebSocket API on Amazon API Gateway

The application is composed of the WebSocket API in API Gateway that handles the connectivity between the client and servers (1). Two AWS Lambda functions react when clients connect (2) or disconnect (5) from the API. The sendMessage function (3) is invoked when the clients send messages to the server. The server sends the message to all connected clients (4) using the new API Gateway Management API. To track each of the connected clients, use a DynamoDB table to persist the connection identifiers.

To help you see this in action more quickly, I’ve provided an AWS SAM application that includes the Lambda functions, DynamoDB table definition, and IAM roles. I placed it into the AWS Serverless Application Repository. For more information, see the simple-websockets-chat-app repo on GitHub.

Create a new WebSocket API

  1. In the API Gateway console, choose Create API, New API.
  2. Under Choose the protocol, choose WebSocket.
  3. For API name, enter My Chat API.
  4. For Route Selection Expression, enter $request.body.action.
  5. Enter a description if you’d like and click Create API.

The attribute described by the Route Selection Expression value should be present in every message that a client sends to the API. Here’s one example of a request message:

{
    "action": "sendmessage",
    "data": "Hello, I am using WebSocket APIs in API Gateway."
}

To route messages based on the message content, the WebSocket API expects JSON-formatted messages with a property serving as a routing key.

Manage routes

Now, configure your new API to respond to incoming requests. For this app, create three routes as described earlier. First, configure the sendmessage route with a Lambda integration.

  1. In the API Gateway console, under My Chat API, choose Routes.
  2. For New Route Key, enter sendmessage and confirm it.

Each route includes route-specific information such as its model schemas, as well as a reference to its target integration. With the sendmessage route created, use a Lambda function as an integration that is responsible for sending messages.

At this point, you’ve either deployed the AWS Serverless Application Repository backend or deployed it yourself using AWS SAM. In the Lambda console, look at the sendMessage function. This function receives the data sent by one of the clients, looks up all the currently connected clients, and sends the provided data to each of them. Here’s a snippet from the Lambda function:

DDB.scan(scanParams, function (err, data) {
  // some code omitted for brevity

  var apigwManagementApi = new AWS.ApiGatewayManagementApi({
    apiVersion: "2018-11-29",
    endpoint: event.requestContext.domainName + "/" + event.requestContext.stage
  });
  var postParams = {
    Data: JSON.parse(event.body).data
  };

  data.Items.forEach(function (element) {
    postParams.ConnectionId = element.connectionId.S;
    apigwManagementApi.postToConnection(postParams, function (err, data) {
      // some code omitted for brevity
    });
  });
});

When one of the clients sends a message with an action of “sendmessage,” this function scans the DynamoDB table to find all of the currently connected clients. For each of them, it makes a postToConnection call with the provided data.

To send messages properly, you must know the API to which your clients are connected. This means explicitly setting the SDK endpoint to point to your API.

What’s left is to properly track the clients that are connected to the chat room. For this, take advantage of two of the special routeKey values and implement routes for $connect and $disconnect.

The onConnect function inserts the connectionId value from requestContext into the DynamoDB table.

exports.handler = function(event, context, callback) {
  var putParams = {
    TableName: process.env.TABLE_NAME,
    Item: {
      connectionId: { S: event.requestContext.connectionId }
    }
  };

  DDB.putItem(putParams, function(err, data) {
    callback(null, {
      statusCode: err ? 500 : 200,
      body: err ? "Failed to connect: " + JSON.stringify(err) : "Connected"
    });
  });
};

The onDisconnect function removes the record corresponding with the specified connectionId value.

exports.handler = function(event, context, callback) {
  var deleteParams = {
    TableName: process.env.TABLE_NAME,
    Key: {
      connectionId: { S: event.requestContext.connectionId }
    }
  };

  DDB.deleteItem(deleteParams, function(err) {
    callback(null, {
      statusCode: err ? 500 : 200,
      body: err ? "Failed to disconnect: " + JSON.stringify(err) : "Disconnected."
    });
  });
};

As I mentioned earlier, the onDisconnect function may not be called. To keep from retrying connections that are never going to be available again, add some additional logic in case the postToConnection call does not succeed:

if (err.statusCode === 410) {
  console.log("Found stale connection, deleting " + postParams.ConnectionId);
  DDB.deleteItem({ TableName: process.env.TABLE_NAME,
                   Key: { connectionId: { S: postParams.ConnectionId } } });
} else {
  console.log("Failed to post. Error: " + JSON.stringify(err));
}

API Gateway returns a status of 410 GONE when the connection is no longer available. If this happens, delete the identifier from the DynamoDB table.

To make calls to your connected clients, your application needs a new permission: “execute-api:ManageConnections”. This is handled for you in the AWS SAM template.

Deploy the WebSocket API

The next step is to deploy the WebSocket API. Because it’s the first time that you’re deploying the API, create a stage, such as “dev,” and give a sample description.

The Stage editor screen displays all the information for managing your WebSocket API, such as Settings, Logs/Tracing, Stage Variables, and Deployment History. Also, make sure that you check your limits and read more about API Gateway throttling.

Make sure that you monitor your tests using the dashboard available for your newly created WebSocket API.

Test the chat API

To test the WebSocket API, you can use wscat, an open-source command line tool.

  1. Install NPM.
  2. Install wscat:
    $ npm install -g wscat
  3. On the console, connect to your published API endpoint by executing the following command:
    $ wscat -c wss://{YOUR-API-ID}.execute-api.{YOUR-REGION}.amazonaws.com/{STAGE}
  4. To test the sendMessage function, send a JSON message like the following example. The Lambda function sends it back using the callback URL:
    $ wscat -c wss://{YOUR-API-ID}.execute-api.{YOUR-REGION}.amazonaws.com/dev
    connected (press CTRL+C to quit)
    > {"action":"sendmessage", "data":"hello world"}
    < hello world

If you run wscat in multiple consoles, each will receive the "hello world" message.

You’re now ready to send messages and get responses back and forth from your WebSocket API!
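You can also script the same test instead of typing into wscat. Here is a minimal sketch using the ws package for Node.js; the endpoint placeholders mirror the wscat example above:

const WebSocket = require("ws");

const ws = new WebSocket("wss://{YOUR-API-ID}.execute-api.{YOUR-REGION}.amazonaws.com/dev");
ws.on("open", () =>
    ws.send(JSON.stringify({ action: "sendmessage", data: "hello world" })));
ws.on("message", (msg) => console.log("received:", msg.toString()));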

Conclusion

In this post, I showed you how to use WebSocket APIs in API Gateway, including a message exchange between client and server using routes and the route selection expression. You used Lambda and DynamoDB to create a serverless real-time chat application. While this post only scratched the surface of what is possible, you did all of this without needing to manage any servers at all.

You can get started today by creating your own WebSocket APIs in API Gateway. To learn more, see the Amazon API Gateway Developer Guide. You can also watch the Building Real-Time Applications using WebSocket APIs Supported by Amazon API Gateway webinar hosted by George Mao.

I’m excited to see what new applications come up with your use of the new WebSocket API. I welcome any feedback here, in the AWS forums, or on Twitter.

Happy coding!

About a donation

Post Syndicated from Bozho original https://blog.bozho.net/blog/3132

This week, the company I started six months ago donated licenses to the State e-Government Agency (DAEU) for the use, without time limitation, of our software, LogSentinel. In addition to the press releases and the Facebook announcements, I would like to give a bit more detail.

The idea for the product, and hence the company, was born a few months after I was no longer an adviser on e-government. Driving for a few hours and thinking about how to apply what I had learned over the previous two years (about blockchain and about the organizational, legal, and technical aspects of large institutions), I decided that the market lacked a solution for a secure audit trail: something to which you send all the events that have happened in a given system, and which stores them in a way that either does not allow tampering or allows tampering to be detected extremely quickly. I read a fair number of academic papers, wrote a prototype, and after a few months (which I spent in the Netherlands) we formalized the creation of the company.

The software uses blockchain in several ways: internally, as data structures, and optionally to record specific moments of the event history in Ethereum (it does not mine or sell cryptocurrencies, though). In that sense we can call it innovative, although that word has already become a cliché.

At some point my partners and I decided that the state would benefit from such a solution. A secure audit trail is good practice in any case, and quite a few European legal acts contain requirements for one. Not that it cannot be implemented in other ways; it can, but if every contractor writes such a solution separately, as has happened until now, that would be a waste of time, and it would not offer the same level of security. The pilot project is an integration with the system for data exchange between systems and registers (i.e., who requested access to which data, in the context of the GDPR), but other systems are yet to be integrated. Fortunately, the integration is easy and does not take much time (in case you are wondering how the 'math' works out for us).

When a journalist from Dnevnik asked me 'Why are you donating it?', my answer was 'Why not?'. We have already devoted enough time to helping the state with e-government, not only while we were in the Council of Ministers but before and after that as well, so it was only logical to help not just with opinions and documents but also with what we are building. I have no intention of taking part in public procurement tenders, which, even if won fairly, would always leave suspicions of having been rigged; people largely have justified negative expectations that 'this one, too, set himself up to profit from his government post'. That is not the case here, and we wanted no doubts on the matter. Our main market is the private sector, not public procurement.

Donating e-government software has happened before, for example in Estonia, where core software components were donated. Admittedly, the companies then received contracts for upgrades and maintenance (we have no such intention). But thanks to that interaction between the state and the private sector, things in Estonia 'took off'. Our solution is not a key component, so a single donation will not bring major change or let us catch up with Estonia, but it will certainly help.

Overall, the reaction to the donation was positive, which is great. There were also some reasonable concerns and criticisms, for example why we are not open-sourcing the code, given that we pushed through a legislative amendment on open source. As I have emphasized more than once, that requirement applies only to software whose development the state commissions and consequently owns. This is not such a case; these are licenses for an off-the-shelf solution. Still, all the components (libraries and so on) around the product are open source and can be used freely for integration.

I do not think we have performed an act of heroism, just a positive step. And it is a fact that, as a result of this step, the product will get a bit more publicity. But the idea of the chairman of the DAEU was for the act of donating itself to gain publicity and thus inspire other vendors. It would be great if companies with sustainable businesses donated something from their portfolios. Yes, working with the state is hard and full of unforeseen problems, and businesses work to make money, not to give things away. But contributing to a better environment is something that businesses around the world do. In the US, for example, large corporations temporarily 'donate' their best employees to the USDS, which became known as 'the startup in the White House'. We have an option for such an approach too (provided for in the e-Government Act), but until we get there, donating licenses is not a bad approach either.

It may not yet be visible from the outside, but after the amendments to the law adopted in 2016, e-government has been moving, albeit slowly, in the right direction: use of centralized components, use of the same solutions in several places (instead of everything being built from scratch every time), and central coordination of projects. Our solution fits this approach, and I hope it will contribute to the higher security of the administration's systems.

ECtHR: The Worst Judgment of 2017

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/06/02/echr-24/

Strasbourg Observers traditionally announce the best and the worst ECtHR judgment of each year. The 'worst judgment' of 2017 was the dissenting opinion in Bayev v. Russia, the case concerning Russia's anti-gay-propaganda law: 'The homophobic nature of the Russian judge's dissent in the so-called gay propaganda law case was so shocking to our readers that it won the award for worst judgment, even though technically it is not a separate judgment but only a dissenting opinion.'

This is a good reason to present the 2017 ECtHR judgment, as summarized by Strasbourg Observers:

The case concerns the applications of Russian gay-rights activists, each of whom was found guilty of the administrative offense of 'public activities aimed at the promotion of homosexuality among minors'. The first applicant held a demonstration in front of a secondary school with two banners reading 'Homosexuality is normal' and 'I am proud of my homosexuality'. The second and third applicants demonstrated in front of a children's library with banners saying that 'Russia has the world's highest rate of teenage suicide, including among homosexuals, who take this step because of the lack of information. Members of parliament are child murderers. Homosexuality is good!' and 'Children have the right to know. Grown-ups are also sometimes homosexual. Homosexual people also become great. Homosexuality is natural and normal.'

The applicants argued before the ECtHR that the Russian legislation violated Article 10 of the ECHR and was discriminatory, since no similar restrictions are applied to the heterosexual majority.

The judgment

There is an interference with freedom of expression; Article 10(2) of the ECHR allows interference on grounds of morals and health, and the ECtHR assesses whether the interference in this case pursues a legitimate aim.

The ECtHR sees no reason why social acceptance of homosexuality should be incompatible with upholding family values. As stated in the judgment in Kozak v. Poland, there is no single accepted way for individuals to lead their family or private lives.

Attempts to draw parallels between homosexuality and pedophilia are unacceptable. Even if the majority of Russians hold a negative opinion of homosexuality, it would be incompatible with the underlying values of the Convention for the exercise of rights by a minority group to be made conditional on its acceptance by the majority.

The government argued that the promotion of same-sex relationships should be banned because such relationships pose a risk to public health and demographic development. The ECtHR does not see how such a law could help achieve the desired demographic goals, or how the absence of such a law would adversely affect them.

Nor has the government shown how pedophilia and pornography among minors (regardless of the sexual orientation of those involved) are related to homosexuality and to this law.

The legal provisions in question do not serve the legitimate aims of protecting morals, protecting health, or protecting the rights of others. By adopting such laws, the authorities reinforce stigma and prejudice and encourage homophobia, which is incompatible with the notions of equality, pluralism, and tolerance inherent in a democratic society. Violation of Article 10 of the ECHR.

The dissenting opinion can be read on the ECtHR's website. According to it, children should consult primarily their parents or close family members instead of getting information about sex from posters in the street; it also argues that the ECtHR did not take seriously enough the fact that children's private lives are more important than homosexuals' freedom of expression.

If War Comes – 2: Enot's Advice

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2143


Under the previous post on this topic you can find several links. Two of them are heavy reading, but they prompt some thoughts. One is the account of a man who lived through occupation during the civil war in Yugoslavia. The other is the advice of a former officer of the GRU (Russian military intelligence), known by the nickname Enot, on how to survive a war.

In some other post, if I find the time, I will share some thoughts about them. In this one I translate part of Enot's advice. It is not fully applicable to Bulgaria, for a host of reasons. For example, knowing what our politicians are like, I would expect us to learn that we have been defeated and conquered not from military action but over the radio. Probably from those very politicians...

But I may be wrong: if a real war breaks out here, the situation will be much closer to the Russian one than to the Swedish one. As the society, so its reality. And Enot's advice is cynical and brutal, but it can sometimes save lives. You decide for yourselves which parts of it to accept and which not.

—-

The scenarios can vary. The essence is always the same: you must survive the first two weeks, and then 'we'll see'. In the first scenario, don't exert yourself too much. If Chechens or similar scum pour into Moscow, people will be found to 'deal' with them. You don't need that. And here rule number one is born: don't stick your nose anywhere, least of all into a fight. There will always be cutthroats for that. Your task is to 'preserve the labor force', that is, yourself.

In the second scenario you will have to fight, if you want to, naturally. In the third you may not fight anymore. We have pissed away the Motherland. You can go out 'with the death of the brave', or you can go on living and working, if they let you.

The main thing is that three degrees of trouble are possible: local fighting, full-scale war, or hopeless occupation followed by the dismemberment of the country. I explain these things so that you understand a very important point that people usually forget under stress: your actions depend on the situation. No improvisation, only common sense.

Shooting in the street does not mean the end of the world. Even if the casualty collection point is in your entryway and there is a 120 mm mortar with its crew in your yard, that does not mean you must run at once. (Well, if there is a mortar, change position urgently; it will certainly be shelled, together with you.)

Yes, yes, gunfire and corpses still mean nothing, however strange that may seem to you. An ill-timed maneuver of 'let's get as far away as possible' can cost you your life. Don't fuss, don't panic; instead, look carefully at WHO is shooting, at WHOM, and, most importantly, WHY.

While we are in the city

If, during the events and the chaos, you are struck by the decision to flee, let me briefly lay out your chances. In a city the size of Moscow or Saint Petersburg, your chances of survival are very small. Cities hold no sufficient stock of food, and during unrest nobody will be handing any out. Food exists only in the shops and warehouses (forget about those; the army or the bandits will be there instantly).

Buying food makes sense only during the first day, while it is still being sold. Then the shops will close and the staff will loot them. If you have missed the moment for the 'purchase', take your shotgun and go 'privatize'. I advise you to take along a neighbor, and not just one. First, you will carry away more food, and you will need someone to cover you against other fine fellows like yourself whom you will meet there or on the way back. Second, the firepower of your smoothbore is close to zero, so a few more barrels will only help. Remember, though: gather too many people and you become a 'group target'. Also, splitting the loot can turn quite sad. Three or four people, no more.

Naturally, you should have stocks of food and water at home. Water is the bigger problem; nobody is likely to truck it in. If the tap runs dry, there is water in the toilet cistern. Don't even think of flushing it! It is the same tap water, simply sitting in the cistern. You can live on it luxuriously for a whole week (or at least be certain not to die).

If you have the chance, grab a jerrycan or two and gut a petrol station. Fuel is very important. But do not keep it at home: it is money that catches fire easily. Make yourself a cache, ideally in the attic; the basement is where people will hide from the shelling.

Hardly anyone will kill you specifically. In troubled times nobody wastes ammunition on unarmed people. Naturally, that is no reason to stroll around whistling before bedtime, but remember: you are not target No. 1. The experience of the city of Grozny shows that men fighting at full throttle simply ignore civilians; they have no time for them. You can always be punished for stupidity, especially at dusk, but things are not entirely bad.

Remember: do not settle anywhere near a TV center or an infrastructure site. If armed men burst into your apartment and 'inform' you that it is now their machine-gun position, tell them 'OK, make yourselves at home' and vanish. None of this 'it's my property, I'm not going anywhere': you get a bullet in the head instantly. They have no time for you, and if you are in the way, they solve the problem in the easiest and quickest manner. Leave even if they don't force you to. Their opponents will very soon shell that position, and not with pebbles from a slingshot.

Avoid loitering near hospitals as well. Both sides will want to bring their wounded there; the building is strategic, and a battle may well start over it, which means gunfire. And once a bombardment starts, someone will inevitably hit the hospital, have no doubt. The people who wrote the Geneva Convention will not be on the battlefield, so compliance with it is somewhat conditional. As in 'Pirates of the Caribbean': 'It's not legislation. It's more like a set of guidelines.'

Don't forget: once such a mess begins, you no longer have property. And I strongly advise you not to acquire any. You kill if someone tries to get at your food and water; everything else is trifles. If you have managed to trade your car for an assault rifle at the nearest police station, well done. Even if it was a brand-new Mercedes for an old folding Kalashnikov with only two or three magazines, again, well done. You don't need the car anymore. There is zero chance of driving it out of the city, and the urge to riddle it with bullets will be serious.

While in the city, avoid camouflage clothing. It gets shot at.

So, street fighting has begun in our city. Given the circumstances, or for tactical reasons, we have decided to stay (although that idea is almost always a bad one). We know the shops will start being looted by the second day, that there are weapons at the nearest police station, and that there is some water in the toilet cistern (if you manage to get hold of a bottle or two of mineral water from a shop, even better). You no longer have property. Whoever is armed is right. Wherever there are armed men, you must not be. Whoever dresses like a soldier fights, whether he wants to or not. A fuel cache is a big plus; at some point fuel may be worth as much as weapons and ammunition. You don't even think of approaching important sites.

And one more thing. NEVER, EVER GO ANYWHERE JUST BECAUSE, LEAST OF ALL 'TO SEE WHAT'S HAPPENING'. In urban combat most things are done quietly, recon-and-sabotage style. If a reconnaissance group spots you, it will 100% move to cut your throat. In the movies they press a finger to their lips and move on; in real life they slit your throat on the spot. Their survival and their mission depend on there being no witnesses. If a group has taken up positions in the chaos of an urban battle, the moment you see those positions it will do the same. Even a machine-gun crew that has just dug in at an intersection will feel no warmth toward you. If they notice you from afar and beckon you over 'for a chat', run as fast as your legs will carry you. They may smile, look friendly, tempt you with provisions; once they get hold of you, you are silent forever. Eliminating the problem of 'locals saw us' is a well-rehearsed procedure. So no questions, and don't poke your little horns out of your shell.

If we are leaving the city

Проблемът е, че градът е обкръжен, или в него се водят боеве, или и двете. Ако сте пропуснали момента на началото на сраженията, това е много лошо, но още не сте обречени. Винаги има как да се излезе от града. Независимо от ситуацията, има два етапа. Първи – движение през града. Втори – изход от обкръжението.

Около големите населени пунктове има околовръстни шосета, и те са основният проблем. Мотострелковаци в БТР-и, ако са по асфалт, могат да обкръжат град за няколко часа. Станало ли е вече, забравете всяка мисъл „да се промъкнете незабелязано“. „Непонятно движение“ в бойни условия означава задължително ред от автомата или картечницата. Златното правило „каквото не виждам, по него не стрелям“ често не работи. Обкръжението се преодолява чрез доброволно предаване. Но още не сме стигнали до него.

Важно: не се качвайте на превози! Какъвто и да е транспорт в града задължително ще бъде обстрелян.

И така, имате раница с благинки за оцеляване. В идеалния случай – малогабаритно оръжие (сгъваем калашник плюс пистолет, стандартният набор на ченгетата). И още една торбичка със същото като раницата, но в доста по-скромни мащаби. Примерно в раницата имате храна за три дни, в торбичката за още един ден. Торбичката – завързана за тялото ви така, че да не личи, и не я сваляте. Много важно – вземете отделно, ако щете в гащите, всички скъпоценности, които намерите.

Раницата я замятате с бял чаршаф и го закрепвате на нея стабилно. Прави се, ако ви види някой военен (а такива ще има много, не се и надявайте да се промъкнете незабелязани), да разбере че сте ЦИВИЛЕН и да не си издава позицията заради вас. Ще ви съпроводят в кръстачката на мерника, но ще ви оставят да продължите. Разбира се, не правете манифестация по централната улица, но и не се мажете с кал в стил Шварценегер – ще ви мярнат и отстрелят, понеже няма да знаят кой и какъв сте. Камуфлажа го зарежете, даже ако ви е единствените читави дрехи.

Вие сте цивилен и трябва да изглеждате като цивилен, с бяла раница като бяло знаме, иначе ще ви отстрелят. С целия си вид трябва да показвате, че не представлявате никакъв интерес, просто се омитате. Естествено, оръжието си е с вас, просто го криете, вместо да го носите открито. Пистолетът – в джоба, готов за стрелба. Автомат, ако сте се снабдили – прикладът сгънат, и под куртката. Предпазителя му по-добре свалете предварително, може да е труден за превключване и да се подшашнете. Патрон в цевта, разбира се. На гърдите не носете нищо обемисто, най-много скрития автомат – наложи ли се да залегнете, лежите ли върху някаква торба, дето ви повдига над земята, ще сте по-лесни за целене.

Ако към вас върви открито въоръжен човек, спирайте и „без фокуси“, другарите му са на позиция. Вероятно ще иска да ви обира, ако искаше да ви стреля, щеше да ви е застрелял вече. Поиска ли раницата – давате му я (така или иначе при изхода от обкръжението ще ви я вземат). Молите го да ви остави чаршафа (ще си го метнете на гърба) и малката торбичка. Моментът е психологически, даваме спокойно големите неща и молим да ни оставят малките. Като правило хората се съгласяват, на това и разчитаме още отначало. Да се измъкнете с куп благини няма да ви остави никой, нужни са всекиму.

Питат ли за оръжие – казвате, че имате автомат (не вадите и не показвате, а казвате със спокоен глас), и молите да ви го оставят. Ще ви го вземат 100%, но това ще ви позволи да запазите пистолета. (За него не споменавайте – дадете ли автомата, надали ще ви тършуват.) Автомата ще го забележат винаги, а щом го давате спокойно и без съпротива, значи не сте буен. Един вид разменяте своите неща за свои. Ако нямате пистолет, взимате гладкостволка в разглобен вид, важното е да дадете „голямото и страшно“ оръжие. Изхождаме от допускането, че щом вече градът е обкръжен и са почнали улични боеве, трябва да не сте зяпали реклами по телевизията, а да сте напазарували вече в полицейския участък.

Ако успеете за ден да минете 10-15 километра, това е отлична скорост. Не забравяйте, ще се движите не по права линия, а с много заобиколки заради уличните боеве. Ако по карта домът ви е на 10 километра от околовръстното, няма гаранция, че ще успеете да ги минете за денонощие. Вървете ДЕНЕМ. Тръгне ли някой откаченяк нощем, куршумът му е в кърпа вързан. Вървите денем, с бял чаршаф, предавате се – тръгнете ли да се криете, ще има огън по вас.

Стигнете ли до обкръжението или до заградителните кордони, изхвърляте пистолета и с високо вдигнати ръце и викане, че се предавате, размахвате белия чаршаф и вървите към войниците. При възможност гледайте да отивате не къде да е, а към пропускателен пункт или охранителен пост. Ако трябва, вървете половин километър към него с вдигнати ръце. Идеята е, че пунктовете и постовете са укрепени и често оборудвани за „прием“, войниците се чувстват там по-комфортно и шансовете да ви гръмнат са по-малки. Ще ви претръскат до дупка. Оръжието сте го изхвърлили предварително, кротичък гражданин сте. Ще ви привика офицер, вероятно лейтентант, надали по-старши, не е нужно да биете чело в земята. Намирате начин да говорите на четири очи с него и му предлагате ценности срещу право на преминаване. Мине ли всичко успешно, сте излезли от града.

По пътя с пълна сигурност ще изгубите почти всичко ценно и всичкото оръжие, и ще затриете за това смешно разстояние поне 1-2 дни. ТОВА Е НОРМАЛНО. Обкръжен град е огромен пленнически лагер. Давайте всичко, само за да излезете. Вътре ще почне глад, и то скоро.

И така, вървите внимателно, но не се криете като „разузнавачи“. Облечени сме цивилно, имаме бял парцал на гърба (отпред се вижда, че не носим оръжие, но откъм гърба – не, трябва да се застраховаме). Имаме малка торбичка с най-необходими благини. Имаме скъпоценности (злато) като валута. Оръжие, с което да не забравим да се разделим преди да стигнем до военния пост (намерят ли го у вас, трудно ще минете за цивилен – ще ви пишат или дезертьор, или преоблечен враг). Ако сте успели да излезете с тези неща от града за 1 до 3 дни, преминавайки от район в район, това е нормално.

От личния опит: обикновените фъстъчени сникерси са много хранителни. 6 двойни са дневната норма калории за мъж. Удобства за притопляне на храна надали ще имате. Сникерсите не са шведска маса, но по време на война не бъдете претенциозни към храната. Идеята със сникерсите, честно казано, е открадната от чеченците. Те на тях изкарват войната. Може да се яде в движение и е чудесна идея – захар, глюкоза, повдига настроението (ще ви е много нужно, понеже ще сте физически и психологически в дупка).

Главното – разберете, че момчетата с автоматите са бая напрегнати, по тях стрелят. Адски лесно е да им дадете повод да ви надупчат. Така че внимателно и без да се дуете. В простичък текст, съгласявайте се с всичко.

А сега много кратко ще разкажа накъде и защо натам. Дотук специално разглеждахме най-хардкор сценариите. И сега ще е така. Надявам се да не е нужно да обяснявам защо.

И така, най-лошият вариант: оказваме се извън града, но почти без храна и оръжие. В идеалния случай всеки от вас ще е огледал предварително (примерно сега) картата и ще е нахвърлил няколко места, където може да отстъпи. НИКАКВИ ГЕРОЙСТВА! Първо да се успокои ситуацията, после ще се оправяте къде какво става. Избирайте места според посоките на света. Прост пример: Санкт-Петербург. На запад надали ще е вариант да се отстъпва. На юг също няма смисъл. Ще бягате или на север, към Карелия, или на изток, към Новгородската и Тверската области. И от Москва най-вероятно ще са северното (Архангелск) и източното (Урал) направление.

Помнете: към военни обекти НЕ се приближава! Мисълта, че там са свои, „руски войничета“, и ще ви приемат и нахранят, е глупост. В НАЙ-ДОБРИЯ случай офицерите ще ви напсуват и изритат, не им е до вас, това не е пункт за прием на бежанци. А това, че всеки миг може обектът да бъде бомбардиран, е обективна реалност. Не бива да забравяте и още нещо: срочната служба сега я карат близо до дома. Почне ли патардия, не си и представяйте даже какво става в главите на военните, на които роднините и близките може още да са в града. Помнете – всички са хора. Военните преживяват, нервничат и психясват точно както и цивилните, само дето го правят с оръжие в ръка. Затова идеята, че „войничетата ще помогнат“, не е най-добрата.

Мъдрата идея е да имате „къщичка на село“, където в мазето има склад с консерви, вода, лекарства и прочее, и ще отстъпвате там. Чеченците така правеха – отиваха по селата. Но тук обсъждаме най-лошия възможен сценарий, а много хора нямат въпросното недвижимо имущество.

Най-просто ми е да дам за пример Санкт Петербург. Я да видим картата. Във всяка посока трябва да имате набелязани ПОНЕ по две места, близко и далечно. За близкото препоръчвам някой туристически къмпинг, разположен до неголямо населено място. Ако сте били на излет край някое езеро или рекичка, на барбекю примерно, спокойно можете да идете там. Първо, ще знаете какво ви чака. Ще имате представа къде там има питейна вода и продоволствие. Второ, познавате мястото и това много ще ви крепи психологически.

Бежанците са тъжна картина, тежко е да ги гледаш. А бягството „на стадо“ може да не е организирано от никого, и ще бягате сами и без крайна точка, където да ви приеме някакъв „червен кръст“. А най-вероятно така и ще бъде, не се съмнявайте. Първите сериозни „благотворители“ се появиха в Чечня чак след първата война. Две години цивилните бяха оставени на собствените си грижи.

И така, вече имаме две точки за отстъпление близо до града. Сега са ни нужни две далечни точки. Ако ще отстъпвате на север, бих ви предложил Соловецкия манастир, на остров в Бяло море. Има там едно селище на име Рабочеостровск, от него курсират кораби. Естествено, те няма вече да курсират, но винаги може да „приватизирате“ лодка с гребла на пристанището. Бяло море е сравнително спокойно. Да се преплава е реално (сложно е, но е възможно, от хленчене файда няма, така че гребем). На изток бих отстъпвал към Иверския манастир в Тверска област. Той също е на островче насред езеро. Наблизо има хранителни складове и производства (по трасе М10).

Защо манастири? Тях няма да ги бомбардират първи (това не значи, че после приоритетите няма да се променят). И още нещо: идеите за християнските добродетели ги забравете веднага. Нито ще ви чака там някой, нито ще ви се радва. Отивате там реално да се продавате в робство. Ще работите за тях, селскостопанска работа, или охранителна, или там каквото – те ще ви дават по нещичко за ядене. Стигнете ли, казвате им: „Силен и здрав мъж съм, ще работя каквото кажете срещу храна.“ Темата за морално-нравствената отговорност на поповете пред миряните я забравете веднага, и гък не гъквайте по въпроса.

Разбира се, всичко е условно. Може да изберете друго място. Главното е принципът: собствеността ви вече я няма, доволни сте да сте в полу-робство, ако ще ви хранят. Между другото, липсата на вашата собственост значи липса на собствеността и на другите. Който не може да защити имуществото си с оръжие, няма имущество.

Сега по въпроса за транспорта. Обществен транспорт вече няма да има. Плюс е, че можете да се снабдите с кола – да я „приватизирате“ или намерите захвърлена. Захвърлени с празен резервоар не пипайте, гориво няма откъде да намерите. И да я бутате до бензиностанцията, там няма да ви огрее. Снабдите ли се с кола, накачете я с бели парцали, в идеалния случай лепнете на покрива кръст от червен скоч. Не е панацея, и такива бомбардират, но шансовете да ви пропуснат са малко по-големи.

Drive slowly! 50-60 km/h, no more. You may run into a military convoy, and if you are driving fast they will shoot you up "just in case". Civilians trying to stop you don't count; step on the gas. They will not share what is theirs with you, but they will certainly try to "share" what is yours. If it is a convoy, or even a single military vehicle: pull over onto the shoulder, crack open a window or a door and show your hands. Getting out is unwise; it invites a desire to frisk you. Sit calmly, without nerves, and pray silently. Staring them down is not recommended: look straight ahead or at the floor.

If everything has worked out, you have a roof over your head, work, food and people to talk to (and that matters). This way you can wait a week or two, see what happens, assess the situation in the country and make decisions.

And now some cynicism. If you have a "baggage train", meaning a family, you are a dead man. If you have a family, you are obliged to get out of the city and turn up at the dacha (with stocks of food and water) in the first seconds after people in the street start cursing the Great Pu. If you have nowhere to retreat to with your "train", you are coffin filler, and so is your train. Don't be stupid, prepare in advance: your loved ones MUST HAVE SOMEWHERE TO GO. And they must have provisions. After that, do as you please. If you want, go back and fight; if you want, go back and hang around the pubs while your wife is off digging potatoes. But think about them in advance; later it will be too late. Everything above is for loners with nothing to lose. If you have a family, prepare beforehand. History teaches that family is dearer than the Motherland, at least in the first stage.

----

These pieces of advice (along with more: how to fight, and what to equip yourself with if you intend to fight) can be found in hundreds of places across the Russian Net, since at least 2013, often with small but noticeable differences in the text. The substance, however, is always the same.

Are they realistic? Some of them strike me as not quite. I don't believe, for example, that when slipping out of an encirclement you would be declared a deserter or a disguised enemy over a single pistol (especially a model the military does not use). A deserter would have thrown away his weapon down to the last cartridge; an enemy would carry something far more businesslike than a pistol. (And neither of the two would openly admit to having it.) And if they need cannon fodder, they will hand you a weapon even if you have none.

Also, the advice about the rowboat and rowing to the Solovetsky Islands seems a stretch to me. From Rabocheostrovsk to Solovetsky it is something like 70-80 km in a straight line. In a rowboat of the kind you could find at a pier like the one in Rabocheostrovsk, even a world champion would hardly cover that in 24 hours, and an experienced rower not trained for racing would hardly do it in 48. Things improve a little if you have a whole boatful of strong rowers, but you would still need to be sure of at least 24 hours of good weather, and weather forecasts for Onega Bay will probably no longer be available by then.
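
A rough back-of-the-envelope check supports that skepticism. Assuming (my figures, not the author's) that a heavy harbor rowboat sustains about 3 km/h in open water at a strong rower's pace, and roughly half that for an untrained rower who has to rest:

\[
t = \frac{d}{v} \approx \frac{75\ \text{km}}{3\ \text{km/h}} = 25\ \text{h},
\qquad
\frac{75\ \text{km}}{1.5\ \text{km/h}} = 50\ \text{h},
\]

which lines up with the 24- and 48-hour figures above and leaves essentially no margin for weather, currents or navigation errors.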

Overall, my impression of Enot from his account is of a man with real experience of the war in Chechnya, but also with a certain dose of artistic imagination.

And I think to myself: may no one ever need this advice.

Which leads me to further thoughts, but I will write those down next time.

EC Recommendation on access to scientific information

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/05/31/access-9/

The new requirements for academic advancement, including the requirements for research output, have been made public. But research output is also tied to the conditions for scholarly work, including a systemic shift towards open science.

Commission Recommendation (EU) 2018/790 of 25 April 2018 on access to and preservation of scientific information has been published.

Member States should ensure that:

  • these dialogues strengthen the connected open science technological environment, covering all outputs from all phases of the research life cycle (data, publications, software, methods, protocols, etc.);
  • systemic change towards open science is progressively achieved, encompassing, beyond technological change and efficiency, the principle of reciprocity, culture change among researchers, and institutional change in research at academic institutions and funding bodies towards open science, including, where applicable, issues such as integrity and ethics.

Member States should inform the Commission within 18 months of the publication of this Recommendation in the Official Journal of the European Union, and every two years thereafter, of the actions taken in response to the elements set out in it. On that basis, the Commission should review progress across the Union to assess whether further action is needed to achieve the objectives proposed in this Recommendation.