Store your Cloudflare logs on R2

Post Syndicated from Tanushree Sharma original https://blog.cloudflare.com/store-your-cloudflare-logs-on-r2/


We’re excited to announce that customers will soon be able to store their Cloudflare logs on Cloudflare R2 storage. Storing your logs on Cloudflare gives CIOs and security teams an opportunity to consolidate their infrastructure, creating simplicity, savings, and additional security.

Cloudflare protects your applications from malicious traffic, speeds up connections, and keeps bad actors out of your network. The logs we produce from our products help customers answer questions like:

  • Why are requests being blocked by the Firewall rules I’ve set up?
  • Why are my users seeing disconnects from my applications that use Spectrum?
  • Why am I seeing a spike in Cloudflare Gateway requests to a specific application?

Storage on R2 adds to our existing suite of logging products. It fills a gap our customers have long asked us to close: a cost-effective way to store logs from any of our products for any period of time.

Goodbye to old school logging

Let’s rewind to the early 2000s. Most organizations were running their own self-managed infrastructure: network devices, firewalls, servers, and all the associated software. Each company had to manage logs coming from hundreds of sources in the IT stack. With dedicated storage needed to retain an endless volume of logs, specialized teams were required to build an ETL pipeline and make the data actionable.

Fast-forward to the 2010s. Organizations transitioned to managed services for their IT functions. As a result of this shift, the way customers collect logs for all their services has changed too. With managed services, much of the logging load shifts off the customer.

The challenge now: collecting logs from a combination of managed services, each of which has its own quirks. Logs can arrive at varying latencies and in different formats; some are too detailed, while others are not detailed enough. To gain a single-pane view of their IT infrastructure, companies need to build or buy a SIEM solution.

Cloudflare replaces these sets of managed services. When a customer onboards to Cloudflare, we make it easy to gain visibility into the traffic that hits our network. We’ve built analytics for many of our products, such as CDN, Firewall, Magic Transit, and Spectrum, to both view high-level trends and dig into patterns by slicing and dicing the data.

Analytics are a great way to see data at an aggregate level, but we know that raw logs are important to our customers as well, so we’ve built out a set of logging products.

Logging today

During Speed Week we announced Instant Logs to show customers traffic as it hits their domain. Instant Logs is perfect for live debugging and triage: monitor your traffic, make a config change, and instantly view its impact. In cases where you need to retroactively inspect your logs, we have Logpush.

We’ve built an impressive logging pipeline to get data from the 250+ cities that house our data centers to our customers in under a minute using Logpush. If your organization has existing practices for getting data across your stack into one place, we support Logpush to a variety of cloud storage or SIEM destinations. We also have partnerships in place with major SIEM platforms to surface Cloudflare data in ways that are meaningful to our customers.
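If your team automates its pipeline, Logpush jobs can also be managed through Cloudflare's API. The sketch below is a rough illustration only: the zone ID, token, bucket path, and field list are placeholders, and the exact job options (and the ownership-challenge step required for new destinations) should be confirmed against the Logpush documentation.

```python
# Illustrative sketch: create a Logpush job that delivers HTTP request logs
# to an S3-compatible destination. Zone ID, token, and bucket are placeholders.
# Note: new destinations typically require an ownership-challenge step first.
import requests

ZONE_ID = "YOUR_ZONE_ID"
API_TOKEN = "YOUR_API_TOKEN"

job = {
    "name": "http-requests-to-storage",
    "dataset": "http_requests",
    # destination_conf points at the storage bucket that will receive the logs
    "destination_conf": "s3://example-log-bucket/cloudflare?region=us-east-1",
    # logpull_options selects the fields and timestamp format to export
    "logpull_options": "fields=ClientIP,ClientRequestHost,EdgeResponseStatus,RayID&timestamps=rfc3339",
    "enabled": True,
}

resp = requests.post(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/logpush/jobs",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json=job,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```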

Last but not least is Logpull. Using Logpull, customers can access HTTP request logs using our REST API. Our customers like Logpull because it’s easy to configure, they don’t have to worry about storing logs with a third party, and they can pull data ad hoc for up to seven days.
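As a rough sketch of how Logpull is commonly used, the snippet below requests an hour of HTTP request logs for a zone. The zone ID, token, and field list are placeholders, and the endpoint and parameters should be checked against the current Logpull documentation.

```python
# Illustrative sketch: pull one hour of HTTP request logs via the Logpull API.
# Credentials and zone ID are placeholders; responses are newline-delimited JSON.
import json
from datetime import datetime, timedelta, timezone

import requests

ZONE_ID = "YOUR_ZONE_ID"
API_TOKEN = "YOUR_API_TOKEN"

end = datetime.now(timezone.utc) - timedelta(minutes=5)   # logs lag real time slightly
start = end - timedelta(hours=1)

resp = requests.get(
    f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/logs/received",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    params={
        "start": start.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "end": end.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "fields": "ClientIP,ClientRequestHost,EdgeResponseStatus,RayID",
    },
    stream=True,
    timeout=60,
)
resp.raise_for_status()

for line in resp.iter_lines():
    if line:
        print(json.loads(line))  # one JSON object per request log line
```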

Why Cloudflare storage?

The top four requests we’ve heard from customers when it comes to logs are:

  • I have tight budgets and need low-cost log storage.
  • My logs should be low effort to set up and maintain.
  • I should be able to store logs for as long as I need to.
  • I want to access my logs on Cloudflare for any product.

For many of our customers, Cloudflare is one of their most important data sources, and it also generates more data than other applications in their IT stack. R2 is significantly cheaper than other cloud providers, so our customers don’t need to compromise by sampling or leaving out logs from some products altogether in order to cut down on costs.

Just like the simplicity of Logpull, log storage on R2 will be quick and easy. With a one-click setup, we’ll store your logs, and you don’t have to worry about any configuration details. Retention is entirely in our customers’ control to match the security and compliance needs of their business. With R2, you can also store logs for any product we have logging for today (and we’re always adding more as our product line expands).

Log storage: we’re just getting started

With log storage on Cloudflare, we’re creating the building blocks that will let customers perform log analysis and forensics directly on Cloudflare. Whether conducting an investigation, responding to a support request, or addressing an incident, using analytics for a bird’s-eye view and inspecting logs to determine the root cause is a powerful combination.

If you’re interested in getting notified when you can store your logs on Cloudflare, sign up through this form.

We’re always looking for talented engineers to take on the challenges of working with data at an incredible scale. If you’re interested apply here.

Control input on suspicious sites with Cloudflare Browser Isolation

Post Syndicated from Tim Obezuk original https://blog.cloudflare.com/phishing-protection-browser/


Your team can now use Cloudflare’s Browser Isolation service to protect against phishing attacks and credential theft inside the web browser. Users can browse more of the Internet without taking on risk, and administrators can define Zero Trust policies to prohibit keyboard input and file transmission during high-risk browsing activity.

Earlier this year, Cloudflare Browser Isolation introduced data protection controls that take advantage of the remote browser’s ability to manage all input and output between a user and any website. We’re excited to extend that functionality with additional controls, such as prohibiting keyboard input and file uploads, to avert phishing attacks and credential theft on high-risk and unknown websites.

Challenges defending against unknown threats

Administrators protecting their teams from threats on the open Internet typically implement a Secure Web Gateway (SWG) to filter Internet traffic based on threat intelligence feeds. This is effective at mitigating known threats. In reality, not all websites fit neatly into malicious or non-malicious categories.

For example, a parked domain that is a typo away from an established web property could be legitimately registered for an unrelated product, or it could be weaponized for a phishing attack. Risk-averse administrators tolerate false positives, but they come at the cost of employee productivity. Finding the balance between these needs is a fine art: applied too aggressively, filtering leads to user frustration and the increased support burden of micromanaging exceptions for blocked traffic.

Legacy secure web gateways are blunt instruments that provide security teams limited options to protect their teams from threats on the Internet. Simply allowing or blocking websites is not enough, and modern security teams need more sophisticated tools to fully protect their teams without compromising on productivity.

Intelligent filtering with Cloudflare Gateway

Cloudflare Gateway provides a secure web gateway to customers wherever their users work. Administrators can build rules that include blocking security risks, scanning for viruses, or restricting browsing based on SSO group identity among other options. User traffic leaves their device and arrives at a Cloudflare data center close to them, providing security and logging without slowing them down.

Unlike the blunt instruments of the past, Cloudflare Gateway applies security policies based on the unique magnitude of data Cloudflare’s network processes. For example, Cloudflare sees just over one trillion DNS queries every day. We use that data to build a comprehensive model of what “good” DNS queries look like — and which DNS queries are anomalous and could represent DNS tunneling for data exfiltration, for example. We use our network to build more intelligent filtering and reduce false positives. You can review that research as well with Cloudflare Radar.

However, we know some customers want to allow users to navigate to destinations in a sort of “neutral” zone. Domains that are newly registered, or newly seen by DNS resolvers, can be the home of a great new service for your team or a surprise attack to steal credentials. Cloudflare works to categorize these as soon as possible, but in those initial minutes users have to request exceptions if your team blocks these categories outright.

Safely browsing the unknown

Cloudflare Browser Isolation shifts the risk of executing untrusted or malicious website code from the user’s endpoint to a remote browser hosted in a low-latency data center. Rather than aggressively blocking unknown websites, and potentially impacting employee productivity, Cloudflare Browser Isolation provides administrators control over how users can interact with risky websites.

Cloudflare’s network intelligence tracks higher-risk Internet properties such as typosquatting and new domains. Websites in these categories could be benign, or they could be phishing attacks waiting to be weaponized. Risk-averse administrators can protect their teams without introducing false positives by isolating these websites and serving them in a read-only mode that disables file uploads, downloads, and keyboard input.
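Conceptually, the decision works roughly like the sketch below. This is not Cloudflare's API or configuration syntax, only an illustration of how a risk category might map to isolation controls.

```python
# Conceptual sketch only (not Cloudflare's API): map higher-risk site categories
# to an isolated, read-only browsing session.
from dataclasses import dataclass

HIGH_RISK_CATEGORIES = {"typosquatting", "new_domain", "newly_seen_domain"}

@dataclass
class IsolationControls:
    isolate: bool = False
    disable_keyboard_input: bool = False
    disable_file_uploads: bool = False
    disable_file_downloads: bool = False

def policy_for(category: str) -> IsolationControls:
    """Return browsing controls for a given site category."""
    if category in HIGH_RISK_CATEGORIES:
        # Serve the site read-only in the remote browser.
        return IsolationControls(True, True, True, True)
    return IsolationControls()

print(policy_for("new_domain"))  # fully restricted, isolated session
print(policy_for("news"))        # normal browsing
```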


Users are able to safely browse the unknown website without risk of leaking credentials, transmitting files, or falling victim to a phishing attack. Should the user have a legitimate reason to interact with an unknown website, they can contact their administrator to obtain elevated permissions while browsing it.

See our developer documentation to learn more about remote browser policies.

Getting started

Cloudflare Browser Isolation is integrated natively into Cloudflare’s Secure Web Gateway and Zero Trust Network Access services, and unlike legacy remote browser isolation solutions does not require IT teams to piece together multiple disparate solutions or force users to change their preferred web browser.

The Zero Trust threat and data protection that Browser Isolation provides make it a natural extension for any company trusting a secure web gateway to protect their business. We’re currently including it with our Cloudflare for Teams Enterprise Plan at no additional charge.1 Get started at our Zero Trust web page.


1. For the first 2,000 seats until 31 Dec 2021

Introducing the Customer Metadata Boundary

Post Syndicated from Jon Levine original https://blog.cloudflare.com/introducing-the-customer-metadata-boundary/


Data localisation has gotten a lot of attention in recent years because a number of countries see it as a way of controlling or protecting their citizens’ data. Countries such as Australia, China, India, Brazil, and South Korea have adopted, or are currently considering, regulations that assert legal sovereignty over their citizens’ personal data in some fashion — health care data must be stored locally, public institutions may only contract with local service providers, and so on.

In the EU, the recent “Schrems II” decision resulted in additional requirements for companies that transfer personal data outside the EU. And a number of highly regulated industries require that specific types of personal data stay within the EU’s borders.

Cloudflare is committed to helping our customers keep personal data in the EU. Last year, we introduced the Data Localisation Suite, which gives customers control over where their data is inspected and stored.

Today, we’re excited to introduce the Customer Metadata Boundary, which expands the Data Localisation Suite to ensure that a customer’s end user traffic metadata stays in the EU.

Metadata: a primer

“Metadata” can be a scary term, but it’s a simple concept — it just means “data about data.” In other words, it’s a description of activity that happened on our network. Every service on the Internet collects metadata in some form, and it’s vital to user safety and network availability.

At Cloudflare, we collect metadata about the usage of our products for several purposes:

  • Serving analytics via our dashboards and APIs
  • Sharing logs with customers
  • Stopping security threats such as bot or DDoS attacks
  • Improving the performance of our network
  • Maintaining the reliability and resiliency of our network

What does that collection look like in practice at Cloudflare? Our network consists of dozens of services: our Firewall, Cache, DNS Resolver, DDoS protection systems, Workers runtime, and more. Each service emits structured log messages, which contain fields like timestamps, URLs, usage of Cloudflare features, and the identifier of the customer’s account and zone.

These messages do not contain the contents of customer traffic, and so they do not contain things like usernames, passwords, personal information, or other private details of customers’ end users. However, these logs may contain end-user IP addresses, which are considered personal data in the EU.
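For illustration, a single record of this kind might look something like the following. The field names are hypothetical and chosen only to show the shape of the data: identifiers and metadata, but no request or response content.

```python
# Hypothetical example of a structured log record emitted by an edge service.
# Field names are illustrative; note the absence of request/response content.
log_record = {
    "timestamp": "2021-12-06T14:03:21Z",
    "service": "firewall",
    "account_id": "abc123",          # identifies the Cloudflare customer
    "zone_id": "zone456",
    "client_ip": "203.0.113.7",      # end-user IP: personal data under the GDPR
    "action": "block",
    "rule_id": "fw-rule-42",
    "colo": "AMS",                   # data center that handled the request
}
```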

Data Localisation in the EU

The EU’s General Data Protection Regulation, or GDPR, is one of the world’s most comprehensive (and well known) data privacy laws. The GDPR does not, however, insist that personal data must stay in Europe. Instead, it provides a number of legal mechanisms to ensure that GDPR-level protections are available for EU personal data if it is transferred outside the EU to a third country like the United States. Data transfers from the EU to the US were, until recently, permitted under an agreement called the EU-U.S. Privacy Shield Framework.

Shortly after the GDPR went into effect, a privacy activist named Max Schrems filed suit against Facebook for their data collection practices. In July 2020, the Court of Justice of the EU issued the “Schrems II” ruling — which, among other things, invalidated the Privacy Shield framework. However, the court upheld other valid transfer mechanisms that ensure EU personal data won’t be accessed by U.S. government authorities in a way that violates the GDPR.

Since the Schrems II decision, many customers have asked us how we’re protecting EU citizens’ data. Fortunately, Cloudflare has had data protection safeguards in place since well before the Schrems II case, such as our industry-leading commitments on government data requests. In response to Schrems II in particular, we updated our customer Data Processing Addendum (DPA). We incorporated the latest Standard Contractual Clauses, which are legal agreements approved by the EU Commission that enable data transfer. We also added additional safeguards as outlined in the EDPB’s June 2021 Recommendations on Supplementary Measures. Finally, Cloudflare’s services are certified under the ISO 27701 standard, which maps to the GDPR’s requirements.

In light of these measures, we believe that our EU customers can use Cloudflare’s services in a manner consistent with GDPR and the Schrems II decision. Still, we recognize that many of our customers want their EU personal data to stay in the EU. For example, some of our customers in industries like healthcare, law, and finance may have additional requirements.  For that reason, we have developed an optional suite of services to address those requirements. We call this our Data Localisation Suite.

How the Data Localisation Suite helps today

Data Localisation is challenging for customers because of the volume and variety of data they handle. When it comes to their Cloudflare traffic, we’ve found that customers are primarily concerned about three areas:

  1. How do I ensure my encryption keys stay in the EU?
  2. How can I ensure that services like caching and WAF only run in the EU?
  3. How can I ensure that metadata is never transferred outside the EU?

To address the first concern, Cloudflare has long offered Keyless SSL and Geo Key Manager, which ensure that private SSL/TLS key material never leaves the EU. Keyless SSL ensures that Cloudflare never has possession of the private key material at all; Geo Key Manager uses Keyless SSL under the hood to ensure the keys never leave the specified region.

Last year we addressed the second concern with Regional Services, which ensures that Cloudflare will only be able to decrypt and inspect the content of HTTP traffic inside the EU. In other words, SSL connections will only be terminated in Europe, and all of our layer 7 security and performance services will only run in our EU data centers.

Today, we’re enabling customers to address the third and final concern, and keep metadata local as well.

How the Metadata Boundary Works

The Customer Metadata Boundary ensures, simply, that end user traffic metadata that can identify a customer stays in the EU. This includes all the logs and analytics that a customer sees.

How are we able to do this? All the metadata that can identify a customer flows through a single service at our edge, before being forwarded to one of our core data centers.

When the Metadata Boundary is enabled for a customer, our edge ensures that any log message that identifies that customer (that is, contains that customer’s Account ID) is not sent outside the EU. It will only be sent to our core data center in the EU, and not our core data center in the US.
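In simplified form, the routing decision at the edge looks roughly like this sketch. It is an illustration only, not Cloudflare's actual implementation, and the account IDs and destination names are made up.

```python
# Simplified illustration of the Metadata Boundary routing decision at the edge.
# Not Cloudflare's actual code; account IDs and destinations are placeholders.
EU_BOUNDARY_ACCOUNTS = {"abc123"}  # customers with the Metadata Boundary enabled

def core_destinations(log_record: dict) -> list[str]:
    """Decide which core data centers may receive this log message."""
    account_id = log_record.get("account_id")
    if account_id in EU_BOUNDARY_ACCOUNTS:
        return ["core-eu"]           # never forwarded to the US core
    return ["core-eu", "core-us"]    # default: either core may process it

print(core_destinations({"account_id": "abc123"}))   # ['core-eu']
print(core_destinations({"account_id": "xyz789"}))   # ['core-eu', 'core-us']
```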


What’s next

Today our Data Localisation Suite is focused on helping our customers in the EU localise data for their inbound HTTP traffic. This includes our Cache, Firewall, DDoS protection, and Bot Management products.

We’ve heard from customers that they want data localisation for more products and more regions. This means making all of our Data Localisation Products, including Geo Key Manager and Regional Services, work globally. We’re also working on expanding the Metadata Boundary to include our Zero Trust products like Cloudflare for Teams. Stay tuned!

Someone Is Running Lots of Tor Relays

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/12/someone-is-running-lots-of-tor-relays.html

Since 2017, someone has been running about a thousand — 10% of the total — Tor servers in an attempt to deanonymize the network:

Grouping these servers under the KAX17 umbrella, Nusenu says this threat actor has constantly added servers with no contact details to the Tor network in industrial quantities, operating servers in the realm of hundreds at any given point.

The actor’s servers are typically located in data centers spread all over the world and are typically configured as entry and middle points primarily, although KAX17 also operates a small number of exit points.

Nusenu said this is strange as most threat actors operating malicious Tor relays tend to focus on running exit points, which allows them to modify the user’s traffic. For example, a threat actor that Nusenu has been tracking as BTCMITM20 ran thousands of malicious Tor exit nodes in order to replace Bitcoin wallet addresses inside web traffic and hijack user payments.

KAX17’s focus on Tor entry and middle relays led Nusenu to believe that the group, which he described as “non-amateur level and persistent,” is trying to collect information on users connecting to the Tor network and attempting to map their routes inside it.

In research published this week and shared with The Record, Nusenu said that at one point there was a 16% chance that a Tor user would connect to the Tor network through one of KAX17’s servers, a 35% chance they would pass through one of its middle relays, and up to a 5% chance they would exit through one.

Slashdot thread.

Snapshots from the history of AI, plus AI education resources

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/machine-learning-education-snapshots-history-ai-hello-world-12/

In Hello World issue 12, our free magazine for computing educators, George Boukeas, DevOps Engineer for the Astro Pi Challenge here at the Foundation, introduces big moments in the history of artificial intelligence (AI) to share with your learners:

The story of artificial intelligence (AI) is a story about humans trying to understand what makes them human. Some of the episodes in this story are fascinating. These could help your learners catch a glimpse of what this field is about and, with luck, compel them to investigate further.                   

The imitation game

In 1950, Alan Turing published a philosophical essay titled Computing Machinery and Intelligence, which started with the words: “I propose to consider the question: Can machines think?” Yet Turing did not attempt to define what it means to think. Instead, he suggested a game as a proxy for answering the question: the imitation game. In modern terms, you can imagine a human interrogator chatting online with another human and a machine. If the interrogator does not successfully determine which of the other two is the human and which is the machine, then the question has been answered: this is a machine that can think.

A statue of Alan Turing on a park bench in Manchester.
The Alan Turing Memorial in Manchester

This imitation game is now a fiercely debated benchmark of artificial intelligence called the Turing test. Notice the shift in focus that Turing suggests: thinking is to be identified in terms of external behaviour, not in terms of any internal processes. Humans are still the yardstick for intelligence, but there is no requirement that a machine should think the way humans do, as long as it behaves in a way that suggests some sort of thinking to humans.

In his essay, Turing also discusses learning machines. Instead of building highly complex programs that would prescribe every aspect of a machine’s behaviour, we could build simpler programs that would prescribe mechanisms for learning, and then train the machine to learn the desired behaviour. Turing’s text provides an excellent metaphor that could be used in class to describe the essence of machine learning: “Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. We have thus divided our problem into two parts: the child-programme and the education process.”

A chess board with two pieces of each colour left.
Chess was among the games that early AI researchers like Alan Turing developed algorithms for.

It is remarkable how Turing even describes approaches that have since been evolved into established machine learning methods: evolution (genetic algorithms), punishments and rewards (reinforcement learning), randomness (Monte Carlo tree search). He even forecasts the main issue with some forms of machine learning: opacity. “An important feature of a learning machine is that its teacher will often be very largely ignorant of quite what is going on inside, although he may still be able to some extent to predict his pupil’s behaviour.”

The evolution of a definition

The term ‘artificial intelligence’ was coined in 1956, at an event called the Dartmouth workshop. It was a gathering of the field’s founders, researchers who would later have a huge impact, including John McCarthy, Claude Shannon, Marvin Minsky, Herbert Simon, Allen Newell, Arthur Samuel, Ray Solomonoff, and W.S. McCulloch.   

Go has vastly more possible moves than chess, and was thought to remain out of the reach of AI for longer than it did.

The simple and ambitious definition for artificial intelligence, included in the proposal for the workshop, is illuminating: ‘making a machine behave in ways that would be called intelligent if a human were so behaving’. These pioneers were making the assumption that ‘every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’. This assumption turned out to be patently false and led to unrealistic expectations and forecasts. Fifty years later, McCarthy himself stated that ‘it was harder than we thought’.

Modern definitions of intelligence are of a distinctly different flavour from the original one: ‘Intelligence is the quality that enables an entity to function appropriately and with foresight in its environment’ (Nilsson). Some even speak of rationality, rather than intelligence: ‘doing the right thing, given what it knows’ (Russell and Norvig).

A computer screen showing a complicated graph.
The amount of training data AI developers have access to has skyrocketed in the past decade.

Read the whole of this brief history of AI in Hello World #12

In the full article, which you can read in the free PDF copy of the issue, George looks at:

  • Early advances researchers made from the 1950s onwards while developing game algorithms, e.g. for chess.
  • The 1997 moment when Deep Blue, a purpose-built IBM computer, beat chess world champion Garry Kasparov using a search approach.
  • The 2011 moment when Watson, another IBM computer system, beat two human Jeopardy! champions using multiple techniques to answer questions posed in natural language.
  • The principles behind artificial neural networks, which have been around for decades and now underlie many AI/machine learning breakthroughs because of the growth in computing power and the availability of vast datasets for training.
  • The 2017 moment when AlphaGo, an artificial neural network–based computer program by Alphabet’s DeepMind, beat Ke Jie, the world’s top-ranked Go player at the time.

Stacks of server hardware behind metal fencing in a data centre.
Machine learning systems need vast amounts of training data, the collection and storage of which has only become technically possible in the last decade.

More on machine learning and AI education in Hello World #12

In your free PDF of Hello World issue 12, you’ll also find:

  • An interview with University of Cambridge statistician David Spiegelhalter, whose work shaped some of the foundations of AI, and who shares his thoughts on data science in schools and the limits of AI 
  • An introduction to Popbots, an innovative project by MIT to open AI to the youngest learners
  • An article by Ken Kahn, researcher in the Department of Education at the University of Oxford, on using the block-based Snap! language to introduce your learners to natural language processing
  • Unplugged and online machine learning activities for learners age 7 to 16 in the regular ‘Lesson plans’ section
  • And lots of other relevant articles

You can also read many of these articles online on the Hello World website.

Find more resources for AI and data science education

In Hello World issue 16, the focus is on all things data science and data literacy for your learners. As always, you can download a free copy of the issue. And on our Hello World podcast, we chat with practicing computing educators about how they bring AI, AI ethics, machine learning, and data science to the young people they teach.

If you want a practical introduction to the basics of machine learning and how to use it, take our free online course.

Drawing of a machine learning Mars rover trying to decide whether it is seeing an alien or a rock.

There are still many open questions about what good AI and data science education looks like for young people. To learn more, you can watch our panel discussion about the topic, and join our monthly seminar series to hear insights from computing education researchers around the world.

We are also collating a growing list of educational resources about these topics based on our research seminars, seminar participants’ recommendations, and our own work. Find the resource list here.

The post Snapshots from the history of AI, plus AI education resources appeared first on Raspberry Pi.

How to set up Amazon Quicksight dashboard for Amazon Pinpoint and Amazon SES engagement events

Post Syndicated from satyaso original https://aws.amazon.com/blogs/messaging-and-targeting/how-to-set-up-amazon-quicksight-dashboard-for-amazon-pinpoint-and-amazon-ses-events/

In this post, we will walk through using Amazon Pinpoint and Amazon QuickSight to create customizable messaging campaign reports. Amazon Pinpoint is a flexible and scalable outbound and inbound marketing communications service that allows customers to connect with users over channels like email, SMS, push, or voice. Amazon QuickSight is a scalable, serverless, embeddable, machine learning-powered business intelligence (BI) service built for the cloud. This solution allows event and user data from Amazon Pinpoint to flow into Amazon QuickSight, where customers can build their own reports that show campaign performance at a more granular level.

Engagement Event Dashboard

Customers want to view the results of their messaging campaigns at ever-increasing levels of granularity and ensure their users see value from the email, SMS, or push notifications they receive. Customers also want to analyze how different user segments respond to different messages, and how to optimize subsequent user communication. Previously, customers could only view this data in Amazon Pinpoint analytics, which offers robust reporting on events, funnels, and campaigns, but does not allow analysis across these different parameters or the building of custom reports. For example, it cannot show campaign revenue across different user segments, or which events were generated after a user viewed a campaign in a funnel analysis. Customers would need to extract this data themselves and do the analysis in Excel.

Prerequisites

  • The Digital User Engagement (DUE) Events Database solution must be set up first.
  • Customers should be prepared to pay for Amazon QuickSight, which has its own costs that are not covered by Amazon Pinpoint pricing.

Solution Overview

This solution uses the Athena tables created by the Digital User Engagement Events Database solution. The AWS CloudFormation template given in this post automatically sets up the different architecture components to capture detailed notifications about Amazon Pinpoint engagement events and surface them in Amazon Athena in the form of Athena views. You still need to manually configure the Amazon QuickSight dashboards to link to these newly generated Athena views; the steps below walk through how.

Use case(s)

The event dashboard solution supports the following use cases:

  • Deep dives into engagement insights (e.g., SMS events, email events, campaign events, journey events).
  • The ability to view engagement events at the individual user level.
  • Data/process mining to turn raw event data into useful marketing insights.
  • User engagement benchmarking and end-user event funneling.
  • Computing campaign conversions (post-campaign user analysis to show campaign effectiveness).
  • Building funnels that show user progression.

Getting started with solution deployment

Prerequisite tasks to be completed before deploying the logging solution

Step 1 – Create an AWS account, a Pinpoint project, and implement the Event Database solution.
As part of this step, customers need to implement the DUE Events Database solution, as the current solution (the DUE event dashboard) is an extension of it. The basic assumption here is that the customer has already configured an Amazon Pinpoint project or Amazon SES in the required AWS Region before implementing this step.

The steps required to implement an event dashboard solution are as follows.

a/ Follow the steps mentioned in the Event Database solution to implement the complete stack. Before installing the complete stack, copy and save the Athena events database name as shown in the diagram. In my case it is due_eventdb. The database name is required as an input parameter for the current Event Dashboard solution.

b/ Once the solution is deployed, navigate to the Outputs page of the CloudFormation stack, then copy and save the following information, which will be required as input parameters in Step 2 of the current Event Dashboard solution.

Step 2 – Deploy the CloudFormation template for the Event Dashboard solution
This step generates a number of new Amazon Athena views that will serve as a data source for Amazon QuickSight. Continue with the following actions.

  • Download the CloudFormation template (“Event-dashboard.yaml”) from AWS samples.
  • Navigate to the CloudFormation page in the AWS console, click “Create stack” at the top right, and select the option “With new resources (standard)”.
  • Leave “Prerequisite – Prepare template” set to “Template is ready” and, for the “Specify template” option, select “Upload a template file”. On the same page, click “Choose file”, browse to the “Event-dashboard.yaml” file and select it. Once the file is uploaded, click “Next” and deploy the stack.

  • Enter the following information under the section “Specify stack details”:
    • EventAthenaDatabaseName – as mentioned in Step 1-a.
    • S3DataLogBucket – as mentioned in Step 1-b.
    • This solution will create five additional Athena views (a sample query against one of them is sketched after this list):
      • All_email_events
      • All_SMS_events
      • All_custom_events (custom events can be mobile app/web app/push events)
      • All_campaign_events
      • All_journey_events
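These views can also be queried outside QuickSight, which is handy for quick spot checks. The sketch below runs a simple aggregation against the email events view with boto3; the Region, view name, and S3 output location are assumptions that should be adjusted to match your own deployment (the database name due_eventdb comes from Step 1-a).

```python
# Sketch: query one of the generated Athena views with boto3. The Region, view
# name, and results bucket are assumptions; adjust them to your deployment.
import time

import boto3

athena = boto3.client("athena", region_name="us-west-2")

query = """
SELECT event_type, COUNT(*) AS events
FROM all_email_events
GROUP BY event_type
ORDER BY events DESC
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "due_eventdb"},
    ResultConfiguration={"OutputLocation": "s3://YOUR-ATHENA-RESULTS-BUCKET/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then print the result rows.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```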

Step 3 – Create the Amazon QuickSight engagement dashboard
This step walks you through the process of creating an Amazon QuickSight dashboard for Amazon Pinpoint engagement events using the Athena views you created in Step 2.

  1. To set up Amazon QuickSight for the first time, please follow this link (this step is not needed if you have already set up Amazon QuickSight). Please make sure you are an Amazon QuickSight administrator.
  2. Search for Amazon QuickSight in the AWS console.
  3. Create a new analysis and then select “New dataset”.
  4. Select Athena as the data source.
  5. As a next step, select which analyses you need for the respective events. This solution provides the option to create five different sets of analyses, as mentioned in Step 2: a/ all email events, b/ all SMS events, c/ all custom events (mobile/web app, web push, etc.), d/ all campaign events, e/ all journey events. Dashboards can be created from QuickSight analyses and shared with stakeholders across the organization. The following are the steps to create analyses and dashboards for the different types of events.
  6. Email Events –
    • For all email events, name the analysis “All-emails-events” (any customer-preferred naming works), select “primary” as the Athena workgroup, and then create the data source.
    • Once you create the data source, QuickSight lists all the views and tables available under the specified database (in our case, due_eventdb). Select the email_all_events view as the data source.
    • Select where the event data will live for analysis. There are two main options: a/ import to SPICE for quicker analysis, or b/ directly query your data. Select the preferred option and then click “Visualize the data”.
    • Import to SPICE for quicker analysis – SPICE is the Amazon QuickSight Super-fast, Parallel, In-memory Calculation Engine. It is engineered to rapidly perform advanced calculations and serve data. In the Enterprise edition, data stored in SPICE is encrypted at rest. (1 GB of storage is available for free; extra storage is charged separately; please refer to the cost section in this document.)
    • Directly query your data – this option lets QuickSight query Athena (or another source database) directly, and QuickSight will not store any data.
    • Now that you have selected a data source, you will be taken to a blank QuickSight canvas (a blank analysis page) as shown in the following image. Drag and drop the visualization type you need onto the auto-graph pane. Note that Amazon QuickSight is a business intelligence platform, so you are free to choose whichever visualization types best suit your view of the individual engagement events.
    • As part of this blog, we show how to create some simple analysis graphs to visualize the engagement events.
    • As an initial step, select the tabular visualization as shown in the image.
    • Select all the event dimensions that you want to include as table columns. An Amazon QuickSight table can be extended to show many columns; how much data marketers want to visualize depends entirely on the business requirement.
    • Further filtering on the table can be done using QuickSight filters, which can be applied to specific granular values. For example, to filter on the destination email ID: 1/ select the filter from the left-hand menu, 2/ add the destination field as the filtering criterion, 3/ tick the destination value you want or search for the destination email ID, and 4/ the results in the table are filtered according to that criterion.
    • Next, add another visual from the top left corner (“Add -> Add Visual”), then select the donut chart from the visual types pane. Donut charts are well suited to displaying aggregations.
    • Then select “event_type” as the group to visualize the aggregated events. This helps marketers and business users see how many email events occurred and the aggregated success, click, complaint, or bounce ratios for the emails or campaigns sent to end users.
    • To create a QuickSight dashboard from the QuickSight analysis, click the Share menu option at the top right corner and select “Publish dashboard”. Provide the required dashboard name while publishing. The same dashboard can be shared with multiple audiences in the organization.
    • The following is the final version of the dashboard. As mentioned above, QuickSight dashboards can be shared with other stakeholders, and the complete dashboard can be exported as an Excel sheet.
  7. SMS events –
    • SMS events can be analyzed with QuickSight in the same way, and dashboards can be created from the analysis. Repeat all of the sub-steps listed in step 6. The following is a sample SMS dashboard.
  8. Custom events –
    • After you integrate your application (app) with Amazon Pinpoint, Amazon Pinpoint can stream event data about user activity, different types of custom events, and message deliveries for the app (e.g., _session.start, Product_page_view, _session.stop). Repeat all of the sub-steps listed in step 6 to create a custom events dashboard.
  9. Campaign events
    • As shown before, campaign events can be included in the same dashboard, or you can create a new dashboard only for campaign events.

Cost of the Event Dashboard solution
You are responsible for the cost of the AWS services used while running this solution. As of the date of publication, the cost of running this solution with default settings in the US West (Oregon) Region is approximately $65 a month. The estimate includes the cost of AWS Lambda, Amazon Athena, and Amazon QuickSight. It assumes querying 1 TB of data in a month, two authors managing Amazon QuickSight every month, four Amazon QuickSight readers viewing the events dashboard an unlimited number of times in a month, and a QuickSight SPICE capacity of 50 GB per month. Prices are subject to change. For full details, see the pricing webpage for each AWS service you will be using in this solution.

Clean up

When you’re done with this exercise, complete the following steps to delete your resources and stop incurring costs:

  1. On the CloudFormation console, select your stack and choose Delete. This cleans up all the resources created by the stack.
  2. Delete the Amazon QuickSight dashboards and datasets that you have created.

Conclusion

In this blog post, I have demonstrated how marketers, business users, and business analysts can use Amazon QuickSight dashboards to evaluate and exploit user engagement data from Amazon SES and Amazon Pinpoint event streams. Customers can also use this solution to understand how Amazon Pinpoint campaigns lead to business conversions, in addition to analyzing multi-channel communication metrics at the individual user level.

Next steps

This blog is aimed at both the tech team and the marketing analyst team, as it involves a code deployment to create very simple Athena views as well as the steps to create an Amazon QuickSight dashboard to analyze Amazon SES and Amazon Pinpoint engagement events at the individual user level. Customers may then create their own Amazon QuickSight dashboards to illustrate conversion ratios and propensity trends in real time by integrating campaign events with app-level events such as purchase conversions, order placement, and so on.

Extending the solution

You can download the AWS CloudFormation templates and code for this solution from our public GitHub repository and modify them to fit your needs.


About the Author


Satyasovan Tripathy works at Amazon Web Services as a Senior Specialist Solutions Architect. He is based in Bengaluru, India, and specialises in the AWS Digital User Engagement product portfolio. He likes reading and travelling outside of work.

Linux Foundation 2021 annual report

Post Syndicated from original https://lwn.net/Articles/877844/rss

For those who would like to catch up on what the Linux Foundation has been
doing, the 2021
annual report
is available as an 87-page PDF file.

In 2021, The Linux Foundation continued to see organizations
embrace open collaboration and open source principles, accelerating
new innovations, approaches, and best practices. As a community, we
made significant progress in the areas of cloud-native computing,
5G networking, software supply chain security, 3D gaming, and a
host of new industry and social initiatives.

Congrats to the winners of the 2021 Metasploit community CTF

Post Syndicated from Spencer McIntyre original https://blog.rapid7.com/2021/12/06/congrats-to-the-winners-of-the-2021-metasploit-community-ctf/


Thanks to everyone that participated in this year’s Metasploit community CTF! Like last year, this CTF ran over the past 4 days and invited community members to solve a series of challenges. This year saw 1,501 users registered across 727 teams. If you participated in the CTF, we have a feedback survey up here:

https://forms.gle/YmMR6Rrk9LCcrXzi8


Place  Team                        Score
1      average ctf enjoyers        1800
2      deadastronauts              1800
3      LeukeTeamNaam               1800
4      PoSTLTimes                  1800
5      EvilBunnyWrote              1700
6      BisonSquad                  1700
7      Social Engineering Experts  1700
8      B47Sec                      1700
9      AlphaSeal                   1700
10     L1T                         1600
11     APT593                      1500
12     IML-SEC                     1500
13     NYUSEC                      1500
14     Neutrino_Cannon             1500
15     just use bloodhound         1500

We’ll be contacting the captains of the winning teams this week to arrange prize delivery. We want to thank CTFd again for providing the scoreboard software. We also want to thank our partners at TryHackMe for sponsoring prizes for the top 3 teams. Finally, we want to give a huge thank you to everyone that participated in this year’s CTF!

CTF Statistics

  • This year there were 18 total challenges for a maximum possible score of 1800
  • Four teams solved and captured all possible flags
  • The 4-of-hearts challenge had the most solves with 260, while the 7-of-hearts had the fewest at 6 (about 2.3% of the 265 teams that made the scoreboard)
  • A total of 265 teams made it onto the scoreboard
  • There were 1,264 correct challenge submissions and 1,524 incorrect challenge submissions

Challenge         Solves
4 of Hearts       260
9 of Diamonds     201
2 of Spades       163
10 of Clubs       92
5 of Diamonds     70
4 of Diamonds     68
Jack of Hearts    61
9 of Spades       59
Ace of Hearts     56
2 of Clubs        48
8 of Clubs        43
3 of Hearts       29
Black Joker       29
5 of Clubs        27
4 of Clubs        18
Ace of Diamonds   18
3 of Clubs        16
7 of Hearts       6

As always, we’re excited to see the different approaches the hacker community took to solve our challenges as detailed in their writeups. There’s a #write-up-links channel dedicated for this in the public Slack workspace that we encourage users to check out.

Kubernetes Guardrails: Bringing DevOps and Security Together on Cloud

Post Syndicated from Alon Berger original https://blog.rapid7.com/2021/12/06/kubernetes-guardrails-bringing-devops-and-security-together-on-cloud/


Cloud and container technologies are being increasingly embraced by organizations around the globe because of the efficiency, superior visibility, and control they provide to DevOps and IT teams.

While DevOps teams see the benefits of cloud and container solutions, these tools create a learning curve for their security colleagues. Because of this, security teams often want to slow down adoption while they figure out a strategy for maintaining security and compliance in these new fast-moving environments.

Container and Kubernetes (K8s) environments are already fairly complex as it is, and layering multiple additional security tools into the mix makes it even more challenging from a management perspective. Organizations need to find a way to enable their DevOps teams to move quickly and take advantage of the benefits of containers and K8s, while staying within the parameters the security team needs to maintain compliance with organizational policy.

This challenge goes beyond technology. These teams need to find a solution that allows them to work together well, doesn’t over-complicate their working relationship, and lets both sides get what they want with minimal overhead.

A holistic approach to Kubernetes security

As an open-source container orchestration system for automating deployment, scaling, and management of containerized applications, Kubernetes is extremely powerful. However, organizations must carefully balance their eagerness to embrace the dynamic, self-service nature of Kubernetes with the real-life need to manage and mitigate security and compliance risk.

Rapid7’s recent introduction of InsightCloudSec intelligently unifies both CSPM and CWPP functionalities, thus enabling a holistic approach for protecting valuable assets in the cloud — one that includes Kubernetes and workload security.

Learn more about InsightCloudSec here

Built for DevOps, trusted by security

In retrospect, 2020 was a tipping point for the Kubernetes community, with a massive increase in adoption across the globe. Many companies, seeking an efficient, cost-effective way to make this huge shift to the cloud, turned to Kubernetes. But this in turn created a growing need to remove Kubernetes security blind spots. For this reason, we’ve introduced Kubernetes Guardrails.

With Kubernetes Security Guardrails, organizations are equipped with a multi-cluster vulnerability scanner that covers rich Kubernetes security best practices and compliance policies, such as CIS Benchmarks. As part of Rapid7’s InsightCloudSec solution, this new capability introduces a platform-based and easy-to-maintain solution for Kubernetes security that is deployed in minutes and is fully streamlined in the Kubernetes pipeline.

Securing Kubernetes with InsightCloudSec

Kubernetes Security Guardrails is the most comprehensive solution for all relevant Kubernetes security requirements, designed from a DevOps perspective with in-depth visibility for security teams.

InsightCloudSec is designed to be an agentless state machine, seamlessly applied to any computing environment — public cloud or private software-defined infrastructure.

InsightCloudSec continually interacts with the APIs to gather information about the state of the hosts and the Kubernetes clusters of interest. These hosts can be GCP, AWS, Azure, or a private data center that can expose infrastructure information via an API.

Integrated within minutes, the Kubernetes Guardrails functionality simplifies the security assessment for the entire Kubernetes environment and the CI/CD pipeline, while also creating baseline profiles for each cluster, and highlighting and scoring security risks, misconfigurations, and hygiene drifts.

Both DevOps and Security teams enjoy the continuous and dynamic analysis of their Kubernetes deployments, all while seamlessly complying with regulatory requirements for Kubernetes.

With Kubernetes Guardrails, Dev teams are able to create a snapshot of cluster risks, delivered with a detailed list of misconfigurations, while detecting real-time hygiene and conformance drifts for deployments running on any cloud environment. Some of the most common use cases include the following (a minimal example of one such check is sketched after the list):

  • Kubernetes vulnerability scanning
  • Hunting misplaced secrets and excessive secret access
  • Workload hardening (from pod security to network policies)
  • Istio security and configuration best practices
  • Ingress controllers security
  • Kubernetes API server access privileges
  • Kubernetes operators best practices
  • RBAC controls and misconfigurations
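To give a flavour of what one such check involves, here is a generic sketch using the Kubernetes Python client. It is not Rapid7's implementation, just a minimal example of a single workload-hardening check: flagging pods that run privileged containers.

```python
# Generic sketch of a single workload-hardening check (not Rapid7's code):
# flag pods that run privileged containers across all namespaces.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces().items:
    for container in pod.spec.containers:
        sc = container.security_context
        if sc is not None and sc.privileged:
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: "
                  f"container '{container.name}' runs privileged")
```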

Ready to drive cloud security forward?

Rapid7 is proud to introduce a Kubernetes security solution that encapsulates all-in-one capabilities and unmatched coverage for all things Kubernetes.

With a security-first approach and strict compliance adherence, Kubernetes Guardrails enable a better understanding and control over distributed projects, and help organizations maintain smooth business operations.

Want to learn more? Watch the on-demand webinar on InsightCloudSec and its Kubernetes protection.

[$] A reference-count tracking infrastructure

Post Syndicated from original https://lwn.net/Articles/877603/rss

Reference counts are a commonly used mechanism for tracking the life cycle
of objects in a computing system. As long as every user of an object
correctly maintains its references by incrementing and decrementing the
reference count, that object will persist for as long as it
is needed
and will be properly destroyed once the last user is done. The “correctly”
in that sentence is important, though; things do not work
as well in the presence of reference-counting errors. Networking
developer Eric Dumazet is working on a
reference-count tracking system
that could prove useful for finding
these errors in the networking subsystem and, someday, throughout the kernel.
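For readers new to the pattern, here is a toy illustration of manual reference counting, written in Python rather than kernel C. The bug classes a tracking infrastructure hunts for are the mismatched get/put pairs noted in the comments.

```python
# Toy illustration of manual reference counting (Python stand-in for kernel C).
# A tracking infrastructure looks for mismatched get/put pairs like those noted below.
class Object:
    def __init__(self):
        self.refcount = 1          # creator holds the first reference

    def get(self):
        self.refcount += 1         # a new user takes a reference
        return self

    def put(self):
        self.refcount -= 1         # a user drops its reference
        if self.refcount == 0:
            self.destroy()
        elif self.refcount < 0:
            raise RuntimeError("too many puts: object already freed")

    def destroy(self):
        print("object freed")

obj = Object()
ref = obj.get()   # a second user
obj.put()         # creator is done
ref.put()         # last user is done -> object freed
# Forgetting a put() leaks the object; an extra put() frees it while still in use.
```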

Thieves Using AirTags to “Follow” Cars

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2021/12/thieves-using-airtags-to-follow-cars.html

From Ontario and not surprising:

Since September 2021, officers have investigated five incidents where suspects have placed small tracking devices on high-end vehicles so they can later locate and steal them. Brand name “air tags” are placed in out-of-sight areas of the target vehicles when they are parked in public places like malls or parking lots. Thieves then track the targeted vehicles to the victim’s residence, where they are stolen from the driveway.

Thieves typically use tools like screwdrivers to enter the vehicles through the driver or passenger door, while ensuring not to set off alarms. Once inside, an electronic device, typically used by mechanics to reprogram the factory setting, is connected to the onboard diagnostics port below the dashboard and programs the vehicle to accept a key the thieves have brought with them. Once the new key is programmed, the vehicle will start and the thieves drive it away.

I’m not sure if there’s anything that can be done:

When Apple first released AirTags earlier this year, concerns immediately sprung up about nefarious use cases for the covert trackers. Apple responded with a slew of anti-stalking measures, but those are more intended for keeping people safe than cars. An AirTag away from its owner will sound an alarm, letting anyone nearby know that it’s been left behind, but it can take up to 24 hours for that alarm to go off — more than enough time to nab a car in the dead of night.

Security updates for Monday

Post Syndicated from original https://lwn.net/Articles/877821/rss

Security updates have been issued by Arch Linux (isync, lib32-nss, nss, opera, and vivaldi), Debian (gerbv and xen), Fedora (autotrace, chafa, converseen, digikam, dmtx-utils, dvdauthor, eom, kxstitch, libsndfile, nss, pfstools, php-pecl-imagick, psiconv, q, R-magick, rss-glx, rubygem-rmagick, seamonkey, skopeo, synfig, synfigstudio, vdr-scraper2vdr, vdr-skinelchihd, vdr-skinnopacity, vdr-tvguide, vim, vips, and WindowMaker), Mageia (golang, kernel, kernel-linus, mariadb, and vim), openSUSE (aaa_base, python-Pygments, singularity, and tor), Red Hat (nss), Slackware (mozilla), SUSE (aaa_base, kernel, openssh, php74, and xen), and Ubuntu (libmodbus, lrzip, samba, and uriparser).

Use a City Planning Analogy to Visualize and Create your Cloud Architecture

Post Syndicated from Marwan Al Shawi original https://aws.amazon.com/blogs/architecture/use-a-city-planning-analogy-to-visualize-and-create-your-cloud-architecture/

If you are new to creating cloud architectures, you might find it a daunting undertaking. However, there is an approach that can help you define a cloud architecture pattern by using a similar construct. In this blog post, I will show you how to envision your cloud architecture using this structured and simplified approach.

Such an approach helps you to envision the architecture as a whole. You can then create reusable architecture patterns that can be used for scenarios with similar requirements. It also will help you define the more detailed technological requirements and interdependencies of the different architecture components.

First, I will briefly define what is meant by an architecture pattern and an architecture component.

Architecture pattern and components

An architecture pattern can be defined as a mechanism used to structure multiple functional components of a software or a technology solution to address predefined requirements. It can be characterized by use case and requirements, and should be tested and reusable whenever possible.

Architecture patterns can be composed of three main elements: the architecture components, the specific functions or capabilities of each component, and the connectivity among those components.

A component in the context of a technology solution architecture is a building block. Modular architecture is composed of a collection of these building blocks.

To think modularly, you must look at the overall technology solution. What is its intended function as a complete system? Then, break it down into smaller parts or components. Think about how each component communicates with others. Identify and define each block or component and its specific roles and function. Consider the technical operational responsibilities each is expected to deliver.

Cloud architecture patterns and the city planning analogy

Let’s assume a content marketing company wants to provide marketing analytics to its partners. It proposes a SaaS solution, by offering an analytics dashboard on Amazon Web Services (AWS). This company may offer the same solution in other locations in the future.

How would you create a reusable architecture pattern for such a solution? To simplify the concept of a component and the architecture pattern, let’s use city planning as a frame of reference.

Subarchitectures or components

A city can be imagined as consisting of three organizing contexts or components:

  1. Overall City Architecture (the big picture)
  2. District Architecture
  3. Building Architecture

Let’s define each of these components or subarchitectures, and see how they correlate to an enterprise cloud architecture.

I. City Architecture consists of the city structures and the integrations of services required by the population, see Figure 1.

Figure 1. Oversimplified city layout


The overall anticipated capacity within a certain period must be calculated for roads, sewage, water, electricity grids, and the overall city layout. Typically, this structure should be derived from the intended purpose or vision of the city: the type of services it will offer, and the function of each district.

Think of City Architecture as the overall cloud architecture for your enterprise. Include the anticipated capacity, the layout (single Region or multi-Region), and the type and number of Amazon Virtual Private Clouds (VPCs). Decide how you will connect and integrate all these different architecture components.

The initial workflow that can be used to define the high-level architecture pattern layout of the SaaS solution example is analogous to the overall city architecture. We can define its three primary elements: architecture components, specific functions of each component, and the connectivity among those components.

  1. Production environment. The frontend and backend of your application; it serves the marketing data analytics dashboard.
  2. Testing and development environment. A replica of, but isolated from, the production application. User traffic doesn’t pass through the security inspection layer.
  3. Security layer. Provides perimeter security inspection; user traffic passes through it.

Translating this workflow into an AWS architecture, Figure 2 shows the analogous structure.

  • Single AWS Region (to be offered in a specific geographical area)
  • Amazon VPC to host the production application
  • Amazon VPC to host the test/dev application
  • Separate VPC (or a layer within a VPC) to provide security services for perimeter security inspection
  • Customer’s connectivity (for example, over public internet, or VPN)
  • AWS Transit Gateway (TGW) to connect and isolate the different components (VPCs and VPN)
Figure 2. Architecture pattern (high-level layout)
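
To make the layout concrete, here is a minimal provisioning sketch of the elements above using boto3. The Region, CIDR ranges, and names are placeholder assumptions rather than part of the pattern, and the Transit Gateway attachments would still need subnets created in each VPC.

```python
import boto3

# Single-Region layout (the Region choice is an assumption for this sketch).
ec2 = boto3.client("ec2", region_name="eu-west-1")

def create_vpc(cidr: str, name: str) -> str:
    """Create a VPC and tag it so it is identifiable in the console."""
    vpc_id = ec2.create_vpc(CidrBlock=cidr)["Vpc"]["VpcId"]
    ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": name}])
    return vpc_id

# Architecture components from the high-level layout.
prod_vpc = create_vpc("10.0.0.0/16", "saas-production")
test_vpc = create_vpc("10.1.0.0/16", "saas-test-dev")
security_vpc = create_vpc("10.2.0.0/16", "saas-security-inspection")

# Transit Gateway provides the connectivity (and isolation) among components.
tgw = ec2.create_transit_gateway(Description="SaaS analytics hub")["TransitGateway"]
print("Transit Gateway:", tgw["TransitGatewayId"])

# Next steps (not shown): create subnets in each VPC and call
# ec2.create_transit_gateway_vpc_attachment(...) per VPC, plus a VPN attachment
# for customer connectivity.
```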

Domain-driven design

At this stage, you may also consider a domain-driven design (DDD). This is an approach to software development that centers on a domain model. With your DDD, you can break the solution into different bounded contexts. You can translate the business functions/capabilities into logical domains, and then define how they communicate.

Let’s use the same SaaS example and further analyze the solution’s requirements with the DDD approach in mind. The SaaS solution serves two types of industries: regulated, with specific security compliance requirements, and non-regulated. By translating these into logical domains, we can optimize the design to offer a more modular architecture. This will minimize the blast radius of the solution, as illustrated in Figure 3. Watch How AWS Minimizes the Blast Radius of Failures.

Figure 3. DDD-based architecture pattern (high-level layout)

Now let’s think of governmental boundaries within a city and among its districts. These are analogous to AWS account structures and the trust boundaries among them. Applying this to the preceding example, the VPC with security compliance requirements can be placed in a separate AWS account. Read Design principles for organizing your AWS accounts.
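
Continuing the analogy, the compliance-scoped workload could live in its own member account. Below is a small sketch using the AWS Organizations API; the account name and email are placeholders, and account creation is asynchronous.

```python
import boto3

org = boto3.client("organizations")

# Create a dedicated member account for the regulated, compliance-scoped workload.
# Both values are placeholders; the email must be unique across all of AWS.
response = org.create_account(
    Email="regulated-workloads@example.com",
    AccountName="saas-regulated",
)

# Account creation runs asynchronously; poll its status before using the account.
status = org.describe_create_account_status(
    CreateAccountRequestId=response["CreateAccountStatus"]["Id"]
)
print(status["CreateAccountStatus"]["State"])
```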

II. District Architecture consists of the structures and integrations required within a district to manage its buildings (see Figure 4).

Figure 4. City structure with districts

A district’s architecture also defines how it connects and integrates back to the city-wide architecture, and it should account for the overall anticipated capacity within the district.

For instance, a district can be designed based on the type of function/service it provides, such as residential district, leisure district, or business district.

Mapping this to cloud architecture, you can envision it as the more specific functions/services you are expecting from a certain block, component, or domain. Your architecture can be within one or multiple VPCs, as shown in Figure 5. The structure of a domain or block can vary by number of Availability Zones and VPCs, type of external access, compliance requirements, and the hosted application requirements. Each of these blocks serves a different function and requires different specifications. However, they all need to integrate back to the overall cloud and network architecture to provide a cohesive design.

The architect must define and specify clearly the communication model among the architecture components. You may further break the application architecture at the module level into microservices using the DDD approach. An example is the use of Micro-frontend Architectures on AWS.

Figure 5. Architecture module structure

III. Building Architecture refers to the buildings’ structures and standards required to deliver the specific properties/services within a district. It also must integrate back with the district architecture.

To apply this to your architecture, envision the specialized functions/capabilities you are expecting from your application within a module (subcomponents). What are the requirements needed for the application tiers? In this example, let’s assume that the VPC without security compliance requirements will use a frontend web tier on Amazon EC2. Its backend database will be Amazon Relational Database Service (RDS).

Each of these subcomponents must integrate with other components and modules, as well as with the public internet. For example, an AWS Application Load Balancer could handle connection requests from external users, with AWS Web Application Firewall (AWS WAF) serving as the perimeter security layer. AWS Transit Gateway could connect to other modules (VPCs), and NAT gateways could provide internet connectivity for the internal systems in a VPC (shown in Figure 6).

Figure 6. Architecture module and its subcomponents structure
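
As a rough sketch of those subcomponents, the snippet below creates an internet-facing Application Load Balancer, a NAT gateway, and an RDS backend with boto3. All IDs, names, and sizes are placeholders from an assumed earlier VPC build-out.

```python
import boto3

elbv2 = boto3.client("elbv2")
ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Internet-facing load balancer in front of the web tier (IDs are placeholders).
alb = elbv2.create_load_balancer(
    Name="frontend-alb",
    Subnets=["subnet-0aaa1111bbbb2222c", "subnet-0ddd3333eeee4444f"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)

# NAT gateway so internal systems can reach the internet without being exposed.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0aaa1111bbbb2222c",
    AllocationId="eipalloc-0123456789abcdef0",  # pre-allocated Elastic IP
)

# Managed PostgreSQL backend for the application tier.
db = rds.create_db_instance(
    DBInstanceIdentifier="analytics-db",
    DBInstanceClass="db.t3.medium",
    Engine="postgres",
    AllocatedStorage=50,
    MasterUsername="appadmin",
    MasterUserPassword="change-me-immediately",  # placeholder; use Secrets Manager in practice
)
```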

Conclusion

The vision and goal of a city architecture can set the basis for districts’ architectures. In turn, the district architecture sets the basis of the building architecture within a district. Similarly, the targeted enterprise cloud architecture goal should set the key requirements of the building blocks (or functional components) of the architecture.

Each architecture block sets the requirements of the subcomponents. They collectively construct a system or module of a system, as illustrated in Figure 7.

Figure 7. Structure of cloud architecture requirements and interdependencies

As a next step, assess your architecture from both a scale and a reliability perspective. Designing for scale alone is not enough; reliable scalability should always be the targeted architectural attribute. Read Architecting for Reliable Scalability.

Environmentalists Sound the Alarm with “Clientelism and Mafia”: Ecologists on Asen Lichev and the Candidate Ministers of BSP and ITN

Post Syndicated from Николай Марченко original https://bivol.bg/%D0%BA%D0%BB%D0%B8%D0%B5%D0%BD%D1%82%D0%B5%D0%BB%D0%B8%D0%B7%D1%8A%D0%BC-%D0%B8-%D0%BC%D0%B0%D1%84%D0%B8%D1%8F-%D0%B5%D0%BA%D0%BE%D0%BB%D0%BE%D0%B7%D0%B8%D1%82%D0%B5-%D0%B7%D0%B0.html

Monday, December 6, 2021

Environmental organizations are not happy with the legacy of the caretaker Minister of Environment and Water, Asen Lichev, as their leaders’ comments for Bivol make clear. Last week, the coalition „За…

InsightCloudSec Supports 12 New AWS Services Announced at re:Invent

Post Syndicated from Chris DeRamus original https://blog.rapid7.com/2021/12/06/insightcloudsec-supports-12-new-aws-services-announced-at-re-invent/

In case you didn’t hear, Amazon hosted AWS re:Invent in Las Vegas last week. As has come to be expected at the annual mega-event, Amazon made a host of huge announcements and launched a significant number of improvements, brand-new services, and settings to enhance its public cloud platform, including an improved version of Amazon Inspector, S3 Object Ownership, Recycle Bin, EBS Archive Mode, and more.

Along with these announcements come plenty of excitement and fanfare from the developer community, which gets to take advantage of the new functionality. And that excitement is warranted. But these announcements also usually come with a hint of hesitation from colleagues in security, who are responsible for analyzing all of these new services and settings to ensure they are used properly and don’t introduce unintended consequences into the AWS environment. Yes, security is a factor here, but those unintended consequences also include the costs associated with rolling out these new services. Rightfully so: it can often take weeks or months for organizations to vet these services, define governance policies, and actually start taking advantage of them.

But to help extinguish some of that announcement-induced anxiety and let our customers start taking advantage of these services as quickly as possible, the InsightCloudSec team has worked day and night to deliver support for a dozen of the new services that AWS rolled out last week.

In all of these cases, InsightCloudSec gathers the data related to these services across all AWS accounts and regions and consolidates it, giving security teams a single place to see all of the information across the entire AWS footprint. In many cases, the support also enhances the services provided by Amazon by providing additional context about the service or the resources associated with it.

Rather than choosing between slowing down innovation or taking on unmitigated risk, our customers will have the ability to take full advantage of each of these services as soon as they are available.

The newly supported AWS services and settings cover a dozen of last week’s announcements.

Let’s take a look at a few of the most critical services and what they mean for DevOps and Security teams.

The new AWS Inspector

Amazon Inspector is a vulnerability management service that continually scans AWS workloads for software vulnerabilities and unintended network exposure. As an AWS-built service, Amazon Inspector is designed to exchange data and interact with other core AWS services not only to identify potential security findings, but also to automate addressing those findings.
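
For orientation, the sketch below enables the new Inspector and checks account status with boto3’s inspector2 client; the account ID is a placeholder, and the exact response shapes should be verified against the current SDK.

```python
import boto3

inspector = boto3.client("inspector2")
ACCOUNT_ID = "111122223333"  # placeholder account ID

# Turn on the new Amazon Inspector for EC2 and ECR scanning in this account.
inspector.enable(accountIds=[ACCOUNT_ID], resourceTypes=["EC2", "ECR"])

# Check enablement status, e.g. to spot accounts where Inspector is still off.
status = inspector.batch_get_account_status(accountIds=[ACCOUNT_ID])
for account in status["accounts"]:
    print(account["accountId"], account["state"]["status"])
```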

By joining insights from both AWS Inspector and Rapid7, customers benefit from immediate value in the form of multiple enhancements across the board. These include enhanced risk assessment of containers and workloads, unified visibility and control, and robust context across AWS environments.

By consolidating AWS’s vulnerability management solutions with Rapid7’s cloud security capabilities, organizations get a highly scalable service, equipped with optimized security controls to better handle their most valuable assets.

InsightCloudSec seamlessly complements the new and improved AWS Inspector, allowing customers to leverage enhanced capabilities including:

  • Identify regions, accounts, and compute instances where AWS Inspector is not enabled, along with a new bot action to turn on the capability across EC2/ECR
  • Identify compute resources by risk score and/or specific findings
  • Identify accounts and regions with the highest overall risk
  • Add Inspector as an agent type so customers can switch to the “Vulnerability View,” which provides a single pane of glass to view and naturally sort assets by risk/severity findings across their entire fleet of accounts
  • Use Inspector data to enrich existing Insights such as resources on a public subnet, resources with an IAM role attached, etc. to build new Insights (e.g., EC2 instance on a public subnet with a security group exposing SSH that has been identified as high-risk by Inspector)

VPC Network Access Analyzer

Another great new service that Amazon rolled out is the Amazon VPC Network Access Analyzer, which helps their customers identify network configurations that lead to unintended network access. The tool essentially allows you to create a scope or query — for example, you could create a scope to find all web apps that do not use a firewall to access internet resources — then analyze your AWS account against that scope. It then serves up a list of the unexpected network paths between resources defined in the scope.
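
The sketch below shows what running such an analysis could look like through the EC2 API with boto3, assuming a Network Access Scope has already been defined (its ID here is a placeholder).

```python
import boto3

ec2 = boto3.client("ec2")

# Kick off an analysis of an existing Network Access Scope (ID is a placeholder).
analysis = ec2.start_network_insights_access_scope_analysis(
    NetworkInsightsAccessScopeId="nis-0123456789abcdef0"
)
analysis_id = analysis["NetworkInsightsAccessScopeAnalysis"][
    "NetworkInsightsAccessScopeAnalysisId"
]

# After the analysis completes, list the unexpected network paths it found.
findings = ec2.get_network_insights_access_scope_analysis_findings(
    NetworkInsightsAccessScopeAnalysisId=analysis_id
)
for finding in findings.get("AnalysisFindings", []):
    print(finding.get("FindingId"))
```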

InsightCloudSec supports this new network analyzer by consuming all findings from the analyses our customers run in their entire AWS environment. This gives cloud security teams a single place to see all results, rather than having to jump from account to account to gather all the information across the entire AWS footprint. It also enriches the data provided by that network analysis with additional context about the resources, such as whether they have misconfigurations or overly permissive IAM policies attached to them, helping the user see the bigger picture and more effectively prioritize their work. Finally, the automation capabilities in InsightCloudSec allow our users to automatically schedule these network scans on a regular basis across target accounts, eliminating all manual effort.

S3 Object Ownership

The sheer scale of S3 makes access management a blind spot for a number of organizations. For years, customers who use S3 have had the ability to set object-level permissions, effectively superseding the access permissions established at the bucket level. While enhancements have been introduced over the years such as Block Public Access, which can help mitigate the chance of objects being made public via direct Access Control Lists (ACLs), not all customers leverage the capability. Amazon has gone a step further by introducing a capability known as S3 Object Ownership, which gives administrators the ability to completely disable object-level ACLs. This setting is now the default value on all newly created S3 buckets, and customers can now migrate their existing S3 buckets to leverage this capability.
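
For reference, migrating an existing bucket could look like the hedged sketch below, which enforces bucket-owner ownership and reads the setting back; the bucket name is a placeholder.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-existing-bucket"  # placeholder bucket name

# Disable object-level ACLs by enforcing bucket-owner ownership on the bucket.
s3.put_bucket_ownership_controls(
    Bucket=bucket,
    OwnershipControls={"Rules": [{"ObjectOwnership": "BucketOwnerEnforced"}]},
)

# Read the setting back to confirm the migration.
controls = s3.get_bucket_ownership_controls(Bucket=bucket)
print(controls["OwnershipControls"]["Rules"][0]["ObjectOwnership"])
```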

InsightCloudSec now detects the presence of this capability and renders it in the product, as well as through the API response via the `Object Ownership` property. A new filter was created to identify S3 buckets based on the value of this property, and the team has also expanded the core Insight Storage Container Without Uniform Bucket Level Access to work across AWS, AWS GovCloud, and AWS China.

Recycle Bin

Data Lifecycle Management can be challenging as customer cloud footprints grow from dozens of cloud accounts to hundreds or even thousands. At InsightCloudSec we’ve seen customers with millions of EBS Snapshots across their fleet of accounts. While many of our customers have embraced AWS Backup to help centralize their backup and retention management, there’s always concern with the accidental removal of an important snapshot while performing cleanup operations across accounts.

AWS offers a new Recycle Bin service that can be used to reduce the risk of accidental deletion. Think of it like the Recycle Bin on your own computer: when enabled, the capability stores deleted snapshots for a period of time defined by the customer and allows them to be recovered. Customers define how long these snapshots should remain in the Recycle Bin before being permanently deleted.
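
A minimal retention rule could be created as in the sketch below, assuming the boto3 rbin client; the seven-day retention period is just an example value.

```python
import boto3

rbin = boto3.client("rbin")

# Keep deleted EBS snapshots recoverable for seven days before permanent deletion.
rule = rbin.create_rule(
    ResourceType="EBS_SNAPSHOT",
    RetentionPeriod={"RetentionPeriodValue": 7, "RetentionPeriodUnit": "DAYS"},
    Description="Safety net for snapshot cleanup jobs",
)
print(rule["Identifier"], rule["Status"])
```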

InsightCloudSec now provides visibility into these Recycle Bin Rules directly within the Resources section of the product. We’ve also included filters to identify accounts/regions where snapshots exist and recycle bin rules are not in place. These filters can help InsightCloudSec customers continue to meet their evolving governance needs.

EBS Snapshot Archive

Going beyond Recycle Bin, AWS now offers a new archive mode storage class that, when enabled, can help customers reduce their storage costs. EBS Snapshot Archive is a new storage tier that is up to 75% cheaper than the standard storage tier. Converting to this tier is quite straightforward and can be done via the AWS Console or programmatic API.
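
Programmatically, moving a snapshot to the archive tier could look like the sketch below; the snapshot ID is a placeholder, and the response fields should be checked against the current EC2 API.

```python
import boto3

ec2 = boto3.client("ec2")
snapshot_id = "snap-0123456789abcdef0"  # placeholder snapshot ID

# Move an infrequently used snapshot to the cheaper archive storage tier.
ec2.modify_snapshot_tier(SnapshotId=snapshot_id, StorageTier="archive")

# Track tiering progress before relying on the change for cost reporting.
status = ec2.describe_snapshot_tier_status(
    Filters=[{"Name": "snapshot-id", "Values": [snapshot_id]}]
)
print(status["SnapshotTierStatuses"][0]["StorageTier"])
```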

To help our customers further reduce spend with this service, we’ve introduced visibility into this storage tier, along with a new filter and Bot action to help customers begin migrating to this new tier where applicable.

More to come

This is one of our team’s favorite weeks each year. It’s always great to see the new capabilities that the teams at Amazon have been hard at work on and how they take in customer feedback to mature their offering. The InsightCloudSec team will be introducing support for a number of these enhancements with our release this week (21.7.3) and will be working closely with our customers to add additional features and capabilities.

Replace your hardware firewalls with Cloudflare One

Post Syndicated from Ankur Aggarwal original https://blog.cloudflare.com/replace-your-hardware-firewalls-with-cloudflare-one/

Today, we’re excited to announce new capabilities to help customers make the switch from hardware firewall appliances to a true cloud-native firewall built for next-generation networks. Cloudflare One provides a secure, performant, and Zero Trust-enabled platform for administrators to apply consistent security policies across all of their users and resources. Best of all, it’s built on top of our global network, so you never need to worry about scaling, deploying, or maintaining your edge security hardware.

As part of this announcement, Cloudflare launched the Oahu program today to help customers leave legacy hardware behind; in this post we’ll break down the new capabilities that solve the problems of previous firewall generations and save IT teams time and money.

How did we get here?

In order to understand where we are today, it’ll be helpful to start with a brief history of IP firewalls.

Stateless packet filtering for private networks

The first generation of network firewalls were designed mostly to meet the security requirements of private networks, which started with the castle and moat architecture we defined as Generation 1 in our post yesterday. Firewall administrators could build policies around signals available at layers 3 and 4 of the OSI model (primarily IPs and ports), which was perfect for (e.g.) enabling a group of employees on one floor of an office building to access servers on another via a LAN.

This packet filtering capability was sufficient until networks got more complicated, including by connecting to the Internet. IT teams began needing to protect their corporate network from bad actors on the outside, which required more sophisticated policies.

Better protection with stateful & deep packet inspection

Firewall hardware evolved to include stateful packet inspection and the beginnings of deep packet inspection, extending basic firewall concepts by tracking the state of connections passing through them. This enabled administrators to (e.g.) block all incoming packets not tied to an already present outgoing connection.

These new capabilities provided more sophisticated protection from attackers. But the advancement came at a cost: supporting this higher level of security required more compute and memory resources. These requirements meant that security and network teams had to get better at planning the capacity they’d need for each new appliance and make tradeoffs between cost and redundancy for their network.

In addition to cost tradeoffs, these new firewalls only provided some insight into how the network was used. You could tell users were accessing 198.51.100.10 on port 80, but investigating further what those users were accessing would require a reverse lookup of the IP address. That alone would only land you at the front page of the provider, with no insight into what was accessed, the reputation of the domain/host, or any other information to help answer “Is this a security event I need to investigate further?”. Determining the source would be difficult as well; it would require correlating a private IP address handed out via DHCP with a device, and then with a user (assuming you remembered to set long lease times and never shared devices).

Application awareness with next generation firewalls

To address these challenges, the industry introduced the Next Generation Firewall (NGFW). These were the long-reigning corporate edge security devices, and in some cases they are still the industry standard. They adopted all the capabilities of previous generations while adding application awareness to help administrators gain more control over what passed through their security perimeter. NGFWs introduced the concept of vendor-provided or externally provided application intelligence: the ability to identify individual applications from traffic characteristics. That intelligence could then be fed into policies defining what users could and couldn’t do with a given application.

As more applications moved to the cloud, NGFW vendors started to provide virtualized versions of their appliances. These allowed administrators to no longer worry about lead times for the next hardware version and allowed greater flexibility when deploying to multiple locations.

Over the years, as the threat landscape continued to evolve and networks became more complex, NGFWs started to build in additional security capabilities, some of which helped consolidate multiple appliances. Depending on the vendor, these included VPN Gateways, IDS/IPS, Web Application Firewalls, and even things like Bot Management and DDoS protection. But even with these features, NGFWs had their drawbacks: administrators still needed to spend time designing and configuring redundant (at least primary/secondary) appliances, as well as choosing which locations had firewalls and incurring performance penalties from backhauling traffic there from other locations. And even then, careful IP address management was required when creating policies that used IP addresses as a stand-in for identity.

Adding user-level controls to move toward Zero Trust

As firewall vendors added more sophisticated controls, in parallel, a paradigm shift for network architecture was introduced to address the security concerns introduced as applications and users left the organization’s “castle” for the Internet. Zero Trust security means that no one is trusted by default from inside or outside the network, and verification is required from everyone trying to gain access to resources on the network. Firewalls started incorporating Zero Trust principles by integrating with identity providers (IdPs) and allowing users to build policies around user groups — “only Finance and HR can access payroll systems” — enabling finer-grained control and reducing the need to rely on IP addresses to approximate identity.

These policies have helped organizations lock down their networks and get closer to Zero Trust, but CIOs are still left with problems: what happens when they need to integrate another organization’s identity provider? How do they safely grant access to corporate resources for contractors? And these new controls don’t address the fundamental problems with managing hardware, which still exist and are getting more complex as companies go through business changes like adding and removing locations or embracing hybrid forms of work. CIOs need a solution that works for the future of corporate networks, instead of trying to duct tape together solutions that address only some aspects of what they need.

The cloud-native firewall for next-generation networks

Cloudflare is helping customers build the future of their corporate networks by unifying network connectivity and Zero Trust security. Customers who adopt the Cloudflare One platform can deprecate their hardware firewalls in favor of a cloud-native approach, making IT teams’ lives easier by solving the problems of previous generations.

Connect any source or destination with flexible on-ramps

Rather than managing different devices for different use cases, all traffic across your network — from data centers, offices, cloud properties, and user devices — should be able to flow through a single global firewall. Cloudflare One enables you to connect to the Cloudflare network with a variety of flexible on-ramp methods including network-layer (GRE or IPsec tunnels) or application-layer tunnels, direct connections, BYOIP, and a device client. Connectivity to Cloudflare means access to our entire global network, which eliminates many of the challenges with physical or virtualized hardware:

  • No more capacity planning: The capacity of your firewall is the capacity of Cloudflare’s global network (currently >100Tbps and growing).
  • No more location planning: Cloudflare’s Anycast network architecture enables traffic to connect automatically to the closest location to its source. No more picking regions or worrying about where your primary/backup appliances are — redundancy and failover are built in by default.
  • No maintenance downtimes: Improvements to Cloudflare’s firewall capabilities, like all of our products, are deployed continuously across our global edge.
  • DDoS protection built in: No need to worry about DoS attacks overwhelming your firewalls; Cloudflare’s network automatically blocks attacks close to their source and sends only the clean traffic on to its destination.

Configure comprehensive policies, from packet filtering to Zero Trust

Cloudflare One policies can be used to secure and route your organization’s traffic across all the various traffic on-ramps. These policies can be crafted using all the same attributes available through a traditional NGFW, while expanding to include Zero Trust attributes as well. These Zero Trust attributes can include one or more IdPs or endpoint security providers.

When looking strictly at layers 3 through 5 of the OSI model, policies can be based on IP, port, protocol, and other attributes in both a stateless and stateful manner. These attributes can also be used to build your private network on Cloudflare when used in conjunction with any of the identity attributes and the Cloudflare device client.

Additionally, to help relieve the burden of managing IP allow/block lists, Cloudflare provides a set of managed lists that can be applied to both stateless and stateful policies. And on the more sophisticated end, you can also perform deep packet inspection and write programmable packet filters to enforce a positive security model and thwart the largest of attacks.

Cloudflare’s intelligence helps power our application and content categories for our Layer 7 policies, which can be used to protect your users from security threats, prevent data exfiltration, and audit usage of company resources. This starts with our protected DNS resolver, which is built on top of our performance-leading consumer 1.1.1.1 service. Protected DNS allows administrators to protect their users from navigating to or resolving any known or potential security risks. Once a domain is resolved, administrators can apply HTTP policies to intercept, inspect, and filter a user’s traffic. And if those web applications are self-hosted or SaaS-enabled, you can even protect them using a Cloudflare Access policy, which acts as a web-based identity proxy.

Last but not least, to help prevent data exfiltration, administrators can lock down access to external HTTP applications by utilizing remote browser isolation. And coming soon, administrators will be able to log and filter which commands a user can execute over an SSH session.

Simplify policy management: one click to propagate rules everywhere

Traditional firewalls required deploying policies on each device or configuring and maintaining an orchestration tool to help with this process. In contrast, Cloudflare allows you to manage policies across our entire network from a simple dashboard or API, or use Terraform to deploy infrastructure as code. Changes propagate across the edge in seconds thanks to our Quicksilver technology. Users can get visibility into traffic flowing through the firewall with logs, which are now configurable.
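
As a rough illustration of API-driven policy management, the sketch below creates a single Gateway DNS block rule with Python’s requests library. The account ID and token are placeholders, and the endpoint path and request fields are assumptions to verify against Cloudflare’s current API documentation.

```python
import requests

ACCOUNT_ID = "your-account-id"  # placeholder
API_TOKEN = "your-api-token"    # placeholder; needs Zero Trust / Gateway permissions

# Create a DNS policy that blocks a known-bad domain for the whole organization.
resp = requests.post(
    f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/gateway/rules",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    json={
        "name": "Block known-bad domain",
        "action": "block",
        "enabled": True,
        "filters": ["dns"],
        "traffic": 'any(dns.domains[*] == "malicious.example")',
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["result"]["id"])
```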

Consolidating multiple firewall use cases in one platform

Firewalls need to cover a myriad of traffic flows to satisfy different security needs, including blocking bad inbound traffic, filtering outbound connections to ensure employees and applications are only accessing safe resources, and inspecting internal (“East/West”) traffic flows to enforce Zero Trust. Different hardware often covers one or multiple use cases at different locations; we think it makes sense to consolidate these as much as possible to improve ease of use and establish a single source of truth for firewall policies. Let’s walk through some use cases that were traditionally satisfied with hardware firewalls and explain how IT teams can satisfy them with Cloudflare One.

Protecting a branch office

Traditionally, IT teams needed to provision at least one hardware firewall per office location (multiple for redundancy). This involved forecasting the amount of traffic at a given branch and ordering, installing, and maintaining the appliance(s). Now, customers can connect branch office traffic to Cloudflare from whatever hardware they already have — any standard router that supports GRE or IPsec will work — and control filtering policies across all of that traffic from Cloudflare’s dashboard.

Step 1: Establish a GRE or IPsec tunnel
The majority of mainstream hardware providers support GRE and/or IPsec as tunneling methods. Cloudflare will provide an Anycast IP address to use as the tunnel endpoint on our side, and you configure a standard GRE or IPsec tunnel with no additional steps — the Anycast IP provides automatic connectivity to every Cloudflare data center.

Step 2: Configure network-layer firewall rules
All IP traffic can be filtered through Magic Firewall, which includes the ability to construct policies based on any packet characteristic — e.g., source or destination IP, port, protocol, country, or bit field match. Magic Firewall also integrates with IP Lists and includes advanced capabilities like programmable packet filtering.

Step 3: Upgrade traffic for application-level firewall rules
After it flows through Magic Firewall, TCP and UDP traffic can be “upgraded” for fine-grained filtering through Cloudflare Gateway. This upgrade unlocks a full suite of filtering capabilities including application and content awareness, identity enforcement, SSH/HTTP proxying, and DLP.

Protecting a high-traffic data center or VPC

Firewalls used for processing data at a high-traffic headquarters or data center location can be some of the largest capital expenditures in an IT team’s budget. Traditionally, data centers have been protected by beefy appliances that can handle high volumes gracefully, which comes at an increased cost. With Cloudflare’s architecture, because every server across our network can share the responsibility of processing customer traffic, no one device creates a bottleneck or requires expensive specialized components. Customers can on-ramp traffic to Cloudflare with BYOIP, a standard tunnel mechanism, or Cloudflare Network Interconnect, and process up to terabits per second of traffic through firewall rules smoothly.

Protecting a roaming or hybrid workforce

In order to connect to corporate resources or get secure access to the Internet, users in traditional network architectures establish a VPN connection from their devices to a central location where firewalls are located. There, their traffic is processed before it’s allowed to its final destination. This architecture introduces performance penalties and while modern firewalls can enable user-level controls, they don’t necessarily achieve Zero Trust. Cloudflare enables customers to use a device client as an on-ramp to Zero Trust policies; watch out for more updates later this week on how to smoothly deploy the client at scale.

What’s next

We can’t wait to keep evolving this platform to serve new use cases. We’ve heard from customers who are interested in expanding NAT Gateway functionality through Cloudflare One, who want richer analytics, reporting, and user experience monitoring across all our firewall capabilities, and who are excited to adopt a full suite of DLP features across all of their traffic flowing through Cloudflare’s network. Updates on these areas and more are coming soon (stay tuned).

Cloudflare’s new firewall capabilities are available for enterprise customers today. Learn more here and check out the Oahu Program to learn how you can migrate from hardware firewalls to Zero Trust — or talk to your account team to get started today.
