How to upgrade to Zabbix 5.2

Post Syndicated from Dmitry Lambert original https://blog.zabbix.com/how-to-upgrade-to-zabbix-5-2/12872/

Zabbix recently released version 5.2 with awesome new features overviewed in the video, such as HashiCorp Vault integration, IoT monitoring, improved performance, a more granular permission system, and much more. The upgrade to the latest Zabbix 5.2 is fast and easy.

Contents

I. Upgrade Zabbix on CentOS 8 (0:45)
II. New in Zabbix 5.2 (2:26)
III. Upgrade procedure (3:39)
IV. Conclusion (9:51)

Upgrade Zabbix on CentOS 8

To upgrade an existing Zabbix instance installed on CentOS 7, it is recommended to migrate to CentOS 8 first, as there are currently no official Zabbix 5.2 packages for CentOS 7. Since there is no clean way to upgrade a CentOS installation in place, it is recommended to create a new server on CentOS 8, migrate your database, and spin up Zabbix Server there. CentOS 7 is old and ships limited packages in its repositories, which are no longer updated. Even if you use all the latest software and packages from the official repo, it will not be enough to run all of the functionality of Zabbix 5.2. For instance, neither TLS 1.3 nor database encryption will be available.
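A minimal sketch of the database move, assuming the default MySQL/MariaDB backend, a database named zabbix, and a new CentOS 8 host reachable as new-zabbix-host (all of these names are placeholders, and the empty database and user must already exist on the new host):

# mysqldump --single-transaction zabbix | ssh root@new-zabbix-host "mysql zabbix"

Stop the Zabbix server before taking the dump so the data is consistent.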

If you run the default frontend and default server from the packages, and if you don’t have any customization patches, any custom functionality, or edited PHP source code that you want to save, then the upgrade is straightforward.

New in Zabbix 5.2

To find out what we can expect after the upgrade, on the Zabbix documentation page for the 5.2 release, go to Installation > Upgrade notes for 5.2.0.

  • The minimum required PHP version has been raised from 7.2.0 to 7.2.5, which might be an issue on CentOS 7.
  • User roles. You can now assign different roles to your existing users, and your existing permission system will not be broken.
  • Time zone definition. Time zones can now be configured natively in the frontend, so if you have been running multiple frontend virtual hosts just to serve different time zones, you can delete them.
  • The ‘Refreshing unsupported items’ setting has been removed from Administration > General > Other; each unsupported item is now rechecked at its own update interval.
  • Template screens have been converted to template dashboards, with the former screen elements now set up as dashboard widgets.
  • The session of the Zabbix frontend is now stored in a cookie.

After the upgrade, you will have to log in to the frontend again with your username and password.

Upgrade procedure

On the Zabbix documentation page, open Upgrade procedure > Upgrade from packages > 1 Red Hat Enterprise Linux/CentOS, where the upgrade notes are available; they are helpful if you are upgrading from an older version.

1. Stop Zabbix server.

# systemctl stop zabbix-server

2. Back up the existing Zabbix database. This is very important, as there is no rollback functionality. If you can’t test the upgrade in a dev environment first, a database backup is your only way back.
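A minimal local backup sketch, assuming a MySQL/MariaDB database named zabbix (adjust the database name, credentials, and target path for your environment):

# mysqldump --single-transaction zabbix | gzip > /root/zabbix-db-backup.sql.gz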

3. Back up configuration files, PHP files, and Zabbix binaries.

Configuration files:

# mkdir /opt/zabbix-backup/
# cp /etc/zabbix/zabbix_server.conf /opt/zabbix-backup/
# cp /etc/httpd/conf.d/zabbix.conf  /opt/zabbix-backup/

PHP files and Zabbix binaries:

# cp -R /usr/share/zabbix/ /opt/zabbix-backup/
# cp -R /usr/share/doc/zabbix-* /opt/zabbix-backup/

This step is optional if you run the official frontend, official server, etc., without any patches or customizations. In that case, you can always download the required official files for Zabbix 5.2 from the sources or install them from the repository as a package in case something goes wrong.

4. Update the repository configuration package. To proceed with the upgrade, you need to update your current repository package, since it does not provide Zabbix 5.2 packages yet.

# rpm -Uvh https://repo.zabbix.com/zabbix/5.2/rhel/8/x86_64/zabbix-release-5.2-1.el8.noarch.rpm

Then you can run:

# yum clean all

and

# yum makecache

to make sure that the repository will be picked up. Then you’ll see the new packages of Zabbix 5.2.

Metadata cache created

These commands are not mandatory, but without them you could update the repository configuration and still get a ‘not found’ error for the Zabbix 5.2 packages because of stale repository metadata.

5. Upgrade Zabbix components. Even if this is your first upgrade, you don’t need to do anything manually: there are no database upgrade scripts to run and no database schema changes to make by hand. All you have to do is upgrade your packages:

# yum upgrade zabbix-server-mysql zabbix-web-mysql zabbix-agent

NOTE. You can substitute the elements of this command depending on the software actually installed.

Append -y to the command (or type ‘y’ at the prompts) to confirm everything automatically. After the update is complete, you can check the installed version by running:

# zabbix_server -V

To upgrade the web frontend with Apache on RHEL 8 correctly, also run:

# yum install zabbix-apache-conf

NOTE. Type ‘y’ when asked for confirmation.

Now, all you have to do to start the automatic database upgrade is run:

# systemctl start zabbix-server

Then check the log file of the Zabbix server:

# tail -f /var/log/zabbix/zabbix_server.log

or

# less /var/log/zabbix/zabbix_server.log

After starting the new 5.2 binary, you will see in the log file that the current database version is 5.0, while the required mandatory version is 5.2. That’s why the Zabbix server package will start the automatic database upgrade.

Now we need to run:

# systemctl restart httpd php-fpm

as those were still running while we were upgrading the packages.

Then you need to log in again, because frontend sessions are now stored in cookies.

In the frontend, you’ll see the updated version and new features, such as new roles we can add in Administration > User roles.

User roles

6. Review component configuration parameters. Make sure to see the upgrade notes for details on mandatory changes.
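For example, you can compare the configuration backed up earlier with the one on disk after the upgrade to confirm that nothing unexpected changed and to spot parameters mentioned in the upgrade notes (paths follow the backup step above):

# diff /opt/zabbix-backup/zabbix_server.conf /etc/zabbix/zabbix_server.conf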

7. Start Zabbix processes.

# systemctl start zabbix-server
# systemctl start zabbix-proxy
# systemctl start zabbix-agent
# systemctl start zabbix-agent2

Conclusion

This is everything required for the upgrade to run successfully. The process is straightforward if you don’t have any complicated customizations of your frontend, Zabbix server binaries, etc. For production instances, though, it is recommended to test the upgrade in a dev environment first and to make backups.

I definitely recommend trying out the newest Zabbix 5.2.

Thanks for your attention! Like, comment, subscribe!

Beat – An Acoustics Inspired DDoS Attack

Post Syndicated from Omer Yoachimik original https://blog.cloudflare.com/beat-an-acoustics-inspired-ddos-attack/

On the week of Black Friday, Cloudflare automatically detected and mitigated a unique ACK DDoS attack, which we’ve codenamed “Beat”, that targeted a Magic Transit customer. Usually, when attacks make headlines, it’s because of their size. However, in this case, it’s not the size that is unique but the method that appears to have been borrowed from the world of acoustics.

Acoustic inspired attack

As can be seen in the graph below, the attack’s packet rate follows a wave-shaped pattern for over 8 hours. It seems as though the attacker was inspired by an acoustics concept called beat. In acoustics, a beat is the interference pattern produced by two waves of slightly different frequencies; it is the superposition of the two waves. As the waves drift in and out of phase, they alternately amplify the sound when they line up and cancel one another when they are out of sync, creating the beating effect.

[Figure: Beat DDoS attack packet rate over time]

Academo.org has a nice tool where you can create your own beat wave. As you can see in the screenshot below, the two waves in blue and red are out of phase and the purple wave is their superposition, the beat wave.

Source: https://academo.org/demos/wave-interference-beat-frequency/ 

Reverse engineering the attack

It looks like the attacker launched a flood of packets where the packet rate is determined by the equation of the beat wave: y_beat = y_1 + y_2. The two equations y_1 and y_2 represent the two waves.

Each wave is expressed as

y_i(t) = sin(2π f_i t)

where f_i is the frequency of the wave and t is time.

Therefore, the packet rate of the attack is determined by manipulating the equation

y_beat(t) = sin(2π f_1 t) + sin(2π f_2 t)

to achieve a packet rate that ranges from ~18M to ~42M pps.
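For readers who want to see where the beating envelope comes from, textbook trigonometry (not part of the original post) rewrites the sum of the two waves as a product:

sin(2π f_1 t) + sin(2π f_2 t) = 2 cos(π (f_1 - f_2) t) sin(π (f_1 + f_2) t)

The slow cosine factor, driven by the small difference f_1 - f_2, modulates the amplitude of the fast sine factor and produces the periodic rise and fall visible in the attack graph.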

To get to the scale of this attack, we need to multiply y_beat by a scaling factor a and also add a constant c, giving us a packet rate of the form a·y_beat + c. Now, it’s been a while since I played around with equations, so I’m only going to try to get an approximation of the equation.

By observing the attack graph, we can guesstimate approximate values for a and c.

By playing around with Desmos’s cool graph visualizer tool, if we set f_1 = 0.0000345 and f_2 = 0.00003455, we can generate a graph that resembles the attack graph. Plugging in those values, we get:

[Figure: generated beat wave resembling the attack graph]

Now this formula assumes just one node firing the packets. However, this specific attack was globally distributed, and if we assume that each node, or bot in this botnet, was firing an equal number of packets at an equal rate, then we can divide the equation by the size of the botnet, i.e., the number of bots b. Then the final equation is something in the form of:

y_bot(t) = (a (sin(2π f_1 t) + sin(2π f_2 t)) + c) / b

In the screenshot below, g = f_1. You can view this graph here.

[Screenshot: Desmos plot of the final per-bot packet rate equation]

Beating the drum

The attacker may have utilized this method in order to try and overcome our DDoS protection systems (perhaps thinking that the rhythmic rise and fall of the attack would fool our systems). However, flowtrackd, our unidirectional TCP state tracking machine, detected it as being a flood of ACK packets that do not belong to any existing TCP connection. Therefore, flowtrackd automatically dropped the attack packets at Cloudflare’s edge.

The attacker was beating the drum for over 19 hours with an amplitude of ~7 Mpps, a wavelength of ~4 hours, and peaking at ~42 Mpps. During the two days in which the attack took place, Cloudflare systems automatically detected and mitigated over 700 DDoS attacks that targeted this customer. The attack traffic accumulated to almost 500 terabytes out of a total of 3.6 petabytes of attack traffic that targeted this single customer in November alone. During those two days, the attackers utilized mainly ACK floods, UDP floods, SYN floods, Christmas tree floods (where all of the TCP flags are ‘lit’), ICMP floods, and RST floods.

The challenge of TCP based attacks

TCP is a stateful protocol, which means that in some cases, you’d need to keep track of a TCP connection’s state in order to know if a packet is legitimate or part of an attack, i.e. out of state. We were able to provide protection against out-of-state TCP packet attacks for our “classic” WAF/CDN service and Spectrum service because in both cases Cloudflare serves as a reverse-proxy seeing both ingress and egress traffic.


However, when we launched Magic Transit, which relies on an asymmetric routing topology with a direct server return (DSR), we couldn’t utilize our existing TCP connection tracking systems.


And so, being a software-defined company, we’re able to write code and spin up software when and where needed — as opposed to vendors that utilize dedicated DDoS protection hardware appliances. And that is what we did. We built flowtrackd, which runs autonomously on each server at our network’s edge. flowtrackd is able to classify the state of TCP flows by analyzing only the ingress traffic, and then drops, challenges, or rate-limits attack packets that do not correspond to an existing flow.


flowtrackd works together with our two additional DDoS protection systems, dosd and Gatebot, to ensure our customers are protected against DDoS attacks regardless of their size or sophistication — in this case, serving as a noise-canceling system for the Beat attack and reducing the headaches for our customers.

Read more about how our DDoS protection systems work here.

Add face recognition with Raspberry Pi | Hackspace 38

Post Syndicated from Andrew Gregory original https://www.raspberrypi.org/blog/add-face-recognition-with-raspberry-pi-hackspace-38/

It’s hard to comprehend how far machine learning has come in the past few years. You can now use a sub-£50 computer to reliably recognise someone’s face with surprising accuracy.

Although this kind of computing power is normally out of reach of microcontrollers, adding a Raspberry Pi computer to your project with the new High Quality Camera opens up a range of possibilities. From simple alerting applications (‘Mum’s arrived home!’), to dynamically adjusting settings based on the person using the project, there’s a lot of fun to be had.

Here’s a beginner’s guide to getting face recognition up and running.

Face recognition using machine learning is hard work, so the latest, greatest Raspberry Pi 4 is a must

1. Prepare your Raspberry Pi
For face recognition to work well, we’re going to need some horsepower, so we recommend a minimum of Raspberry Pi 3B+, ideally a Raspberry Pi 4. The extra memory will make all the difference. To keep as much resource as possible available for our project, we’ve gone for a Raspberry Pi OS Lite installation with no desktop.

Make sure you’re on the network, have set a new password, enabled SSH if you need to, and updated everything with sudo apt -y update && sudo apt -y full-upgrade. Finally, go into settings by running sudo raspi-config and enable the camera in ‘Interfacing Options’.

2. Attach the camera
This project will work well with the original Raspberry Pi Camera, but the new official HQ Camera will give you much better results. Be sure to connect the camera to your Raspberry Pi 4 with the power off. Connect the ribbon cable as instructed in hsmag.cc/HQCameraGetStarted. Once installed, boot up your Raspberry Pi 4 and test the camera is working. From the command line, run the following:
raspivid -o test.h264 -t 10000
This will record ten seconds of video to your microSD card. If you have an HDMI cable plugged in, you’ll see what the camera can see in real-time. Take some time to make sure the focus is correct before proceeding.

3. Install dependencies
The facial recognition library we are using is one that has been maintained for many years by Adam Geitgey. It contains many examples, including Python 3 bindings to make it really simple to build your own facial recognition applications. What is not so easy is the number of dependencies that need to be installed first. There are way too many to list here, and you probably won’t want to type them out, so head over to hsmag.cc/FacialRec so that you can cut and paste the commands. This step will take a while to complete on a Raspberry Pi 4, and significantly longer on a Model 3 or earlier.

4. Install the libraries
Now that we have everything in place, we can install Adam’s applications and Python bindings with a simple, single command:
sudo pip3 install face_recognition
Once installed, there are some examples we can download to try everything out.
cd
git clone --single-branch https://github.com/ageitgey/face_recognition.git
In this repository is a range of examples showing the different ways the software can be used, including live video recognition. Feel free to explore and remix.

5. Example images
The examples come with a training image of Barack Obama. To run the example:
cd ./face_recognition/examples
python3 facerec_on_raspberry_pi.py

On your smartphone, find an image of Obama using your favourite search engine and point it at the camera. Provided the focus and lighting are good, you will see:
“I see someone named Barack Obama!”
If you see a message saying it can’t recognise the face, then try a different image or try to improve the lighting if you can. Also, check the focus for the camera and make sure the distance between the image and camera is correct.

Who are you? What even is a name? Can a computer decide your identity?

6. Training time
The final step is to start recognising your own faces. Create a directory and, in it, place some good-quality passport-style photos of yourself or those you want to recognise. You can then edit the facerec_on_raspberry_pi.py script to use those files instead. You’ve now got a robust prototype of face recognition. This is just the beginning. These libraries can also identify ‘generic’ faces, meaning it can detect whether a person is there or not, and identify features such as the eyes, nose, and mouth. There’s a world of possibilities available, starting with these simple scripts. Have fun!
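As a starting point for your own edits, here is a minimal sketch of the comparison the library performs. It assumes two local files, known_person.jpg and test.jpg, which are placeholder names; the examples in the repository show how to feed live frames from the Raspberry Pi camera instead.

import face_recognition

# Load the reference photo and compute its face encoding
known_image = face_recognition.load_image_file("known_person.jpg")
known_encodings = face_recognition.face_encodings(known_image)

# Load the photo we want to check
test_image = face_recognition.load_image_file("test.jpg")
test_encodings = face_recognition.face_encodings(test_image)

if known_encodings and test_encodings:
    # compare_faces returns one True/False result per known encoding
    match = face_recognition.compare_faces([known_encodings[0]], test_encodings[0])[0]
    print("I see someone I know!" if match else "I don't recognise this face.")
else:
    print("No face found in one of the images.")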

Issue 38 of Hackspace Magazine is out NOW

Front cover of hack space magazine featuring a big striped popcorn bucket filled with maker tools and popcorn

Each month, HackSpace magazine brings you the best projects, tips, tricks and tutorials from the makersphere. You can get it from the Raspberry Pi Press online store, The Raspberry Pi store in Cambridge, or your local newsagents.

Each issue is free to download from the HackSpace magazine website.

The post Add face recognition with Raspberry Pi | Hackspace 38 appeared first on Raspberry Pi.

Building high-quality benchmark tests for Amazon Redshift using Apache JMeter

Post Syndicated from Asser Moustafa original https://aws.amazon.com/blogs/big-data/building-high-quality-benchmark-tests-for-amazon-redshift-using-apache-jmeter/

In the introductory post of this series, we discussed benchmarking benefits and best practices common across different open-source benchmarking tools. As a reminder of why benchmarking is important, Amazon Redshift allows you to scale storage and compute independently, and for you to choose an appropriately balanced compute layer, you need to profile the compute requirements of various production workloads. Existing Amazon Redshift customers also desire an approach to scale up with eyes wide open, and benchmarking different Amazon Redshift cluster configurations against various production workloads can help you appropriately accommodate workload expansion. In addition, you may also use benchmark tests to proactively monitor a production cluster’s performance in real time.

For prospective Amazon Redshift customers, benchmarking Amazon Redshift is often one of the main components of evaluation and a key source of insight into the price-to-performance ratio of different Amazon Redshift configurations.

Open-source tools, with their cost-efficiency and vendor neutrality, are often the preferred choice for profiling your production workloads and benchmark tests. However, best practices for using these tools are scarce, possibly resulting in flawed compute profiles, flawed benchmark results, customer frustration, or bloated timelines.

As mentioned, this series is divided into multiple installments, with the first installment discussing general best practices for benchmarking, and the subsequent installments discussing the strengths and challenges with different open-source tools such as SQLWorkbench, psql, and Apache JMeter. In this post, we discuss benchmarking Amazon Redshift with the Apache JMeter open-source tool.

One final point before we get started: there is a lot that could be said about benchmarking—more than can be accommodated in a single post. Analytics Specialist Solutions Architects such as myself frequently and happily engage with current and prospective customers to help you evaluate your benchmarking strategy and approach at no charge. I highly recommend you take advantage of that benefit by reaching out to your AWS account SA.

Apache JMeter

Apache JMeter is an open-source load testing application written in Java that you can use to load test web applications, backend server applications, databases, and more. You can run it on Windows and a number of different Linux/UNIX systems; for this post we run it in a Windows environment. To install Apache JMeter on a Windows EC2 machine, complete the following steps:

  1. Launch a Windows EC2 instance using a Windows Server AMI (such as Microsoft Windows Server 2019 Base).
  2. Connect via RDP to the Windows EC2 Instance (RDP for macOS can be downloaded from Apple’s App Store).
  3. Download and unzip the Apache JMeter .zip file from the Apache JMeter download page.
  4. Download the Amazon Redshift JDBC driver and add the driver .jar file to JMeter’s /lib folder (when setting up the JDBC connection in the JMeter GUI, use com.amazon.redshift.jdbc.Driver as the driver class name).
  5. Download the Apache Plugins Manager .jar file to JMeter’s /lib/ext folder. The Apache Plugins Manager enables additional crucial functionality in Apache JMeter for benchmark testing (such as the Ultimate Thread Group).
  6. Increase the JVM heap size for Apache JMeter by changing the corresponding JVM parameters in the jmeter.bat file located in the Apache JMeter /bin folder. For example, see the following code:
    edit C:\Dev\apache-jmeter-5.1.1\bin\jmeter.bat

    rem set HEAP=-Xms1g -Xmx1g -XX:MaxMetaspaceSize=256m
    set HEAP=-Xms5g -Xmx5g -XX:MaxMetaspaceSize=1g

  7. Choose the jmeter.bat file (double-click) to start Apache JMeter.

Apache JMeter supports both GUI and CLI modes, and although you may find the Apache JMeter GUI straightforward with a relatively small learning curve, it’s highly recommended that you use the Apache JMeter GUI primarily for defining benchmark tests, and perhaps running small-to-medium-sized benchmark tests. For large load tests, it’s highly recommended that you use the Apache JMeter CLI to minimize the risk of the Apache JMeter GUI exhausting its host’s compute resources, causing it to enter a non-responsive state or fail with an out-of-memory error. Using the CLI for large load tests also helps minimize any impact on the benchmark results.

In the following example, I demonstrate creating a straightforward load test using both the Apache JMeter GUI and CLI. The load test aims to measure query throughput while simulating 50 concurrent users with the following personas:

  • 20 users submit only small queries, which are of low complexity and typically have a runtime of 0–30 seconds in the current system, such as business intelligence analyst queries
  • 20 users submit only medium queries, which are of moderate complexity and typically have a runtime of 31–300 seconds in the current system, such as data engineer queries
  • 10 users submit only large queries, which are very complex and typically have a runtime over 5 minutes in the current system, such as data scientist queries

The load test is configured to run for 15 minutes, which is a pretty short test duration, so you can increase that setting to 30 minutes or more. We rely on JMeter’s query throughput calculation, but we can also manually compute query throughput from the runtime metadata that is gathered if we so desire.
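For illustration only (these numbers are hypothetical, not results from this post): if the small, medium, and large thread groups complete 1,800, 360, and 40 queries respectively during the 15-minute window, the overall query throughput is (1,800 + 360 + 40) / 15 = 2,200 / 15 ≈ 147 queries per minute, or about 8,800 queries per hour.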

For this post, I skip over discussing the possible Amazon Redshift cluster tweaks that you could use to squeeze every drop of performance out of Amazon Redshift, and instead rely on the strength of its default state to be optimized to achieve excellent query throughput on diverse workloads.

Apache JMeter has a number of building blocks, such as thread groups, that can be used to define a wide variety of benchmark tests, and each building block can have a number of community implementations (for example, Arrivals Thread Group or Ultimate Thread Group).

The following diagram provides a basic illustration of the various Apache JMeter building blocks to be leveraged in this load test, how they interact with each other, and the typical order in which they are created; in some cases, I mention the specific implementation of the building block to be used in parentheses (such as Ultimate Thread Group).

The following table delves deeper into the purpose that each building block serves in our load test.

Apache JMeter Component – Purpose
Test Plan – Represents an atomic test case (simulate 50 users concurrently querying a Redshift cluster with twice the baseline node count)
JDBC Connection Configuration – Represents all the JDBC information needed to connect to the Amazon Redshift cluster (such as JDBC URL, username, and password)
User Defined Variables – A collection of key-value pairs that can be used as parameters throughout the test plan and make it easier to maintain or change the test behavior
Listener – Captures and displays or writes test output such as SQL result sets
Thread Group – A simulated group of users that perform the test function (submit a SQL query)
JDBC Request – The action to be taken by the simulated users (SQL query text)

Apache JMeter (GUI)

The following screenshot is the resulting load test.


The following screenshot provides a close up of the building block tree.


In the following sections, we examine each building block in greater detail.

Test Plan

The test plan serves as the parent container for our entire benchmark test, and we can change its name in the visual tree that appears in the Apache JMeter GUI by editing the Name field.

I take advantage of the User Defined Variables section to set my own custom variables that hold values needed by all components in the test case, such as the JDBC URL, test duration, and number of users submitting small, medium, and large queries. The baseDir variable is actually a variable that is intended to be embedded in other variables, rather than directly referenced by other test components. I left all other settings at their default on this page.


JDBC Connection Configuration

We use the JDBC Connection Configuration building block to create a database connection pool that is used by the simulated users to submit queries to Amazon Redshift. The value specified in Variable Name for created pool is the identifier that is used to reference this connection pool in other JMeter building blocks. In this example, I named it RedshiftJDBCConfig.

By setting the Max Number of Connections to 0, the connection pool can grow as large as it needs to. That may not be the desired behavior for all test scenarios, so be sure to set it as you see fit.

In the Init SQL statements section, I provide an example of how to use SQL to disable the result set cache in Amazon Redshift for every connection created, or perform other similar initialization code.

Towards the end, I input the database JDBC URL (which is actually a variable reference to a variable defined in the test plan), JDBC driver class name, and database username and password. I left all other fields at their default on this page.


User Defined Variables

You can add a User Defined Variables building block in several places, and it’s best to use this capability to limit the scope of each variable. For this post, we use an instance of the User Defined Variables building block to hold the output file names of each listener in this test plan (if you look closely, you can see the values of these variables reference the baseDir variable, which was defined in our test plan). You can also notice three other instances of the User Defined Variables building block for the small, medium, and large thread groups—again so that the scope of variables is kept appropriately narrow.


Listeners

Listeners control where test output is written and how it’s processed. There are many different kinds of listeners that, for example, allow you to capture your test output as a tree, table, or graph. Other listeners can summarize and aggregate test metadata (such as the number of test samples submitted during the test). I choose to add several listeners in this test plan just for demonstration, but I have found the listeners Aggregate Report and View Results in Table to be most helpful to me. The following screenshot shows the View Results in Table output.


The following screenshot shows the Aggregate Report output.


You can also save output from listeners after a test run to a different file through the JMeter menu.

Thread group: Ultimate Thread Group

A thread group can be thought of as a group of simulated users, which is why for this post, I create three separate thread groups: one to represent each of three previously mentioned user personas being simulated (small, medium, and large). Each thread group is named accordingly.

We use the Thread Schedule section to control how many users should be created and at what time interval. In this test, I chose to have all 20 small users created at start time without any delays. This is achieved by a one-row entry in the Thread Schedule and setting the Start Threads Count thread group property to 20 users (or the matching variable, as we do in the following screenshot).


Alternatively, I could stagger user creation by creating multiple rows and setting the Initial Delay sec field to control each row’s startup delay. With the row entries in the following screenshot, an additional five users are created every 5 seconds.


Thread group: User Defined Variables

An additional User Defined Variables instance is added to each of the three thread groups to hold the variables in their individual scope, or that would preferably be configurable at an individual thread group level. For this post, I make the JDBC Connection Configuration a variable so that it’s customizable for each individual thread group (JDBC_Variable_Name_In_Pool). This allows me to, for example, rapidly switch between two different test clusters.


JDBC Request

The JDBC Request can be thought of as the benchmark query or SQL test query to be submitted non-stop by each simulated user in this thread group. To configure this JDBC Request, I specified the appropriate JDBC Connection Configuration and some very simple test SQL. I could have also used Apache JMeter’s ability to parameterize queries so that they vary from one iteration to another using a predetermined set of parameter values. For example, for the SQL statement select * from customer where cust_id=<some value>, Apache JMeter could be configured to set the value in the filter clause to a randomly chosen value from a pre-compiled list of filter values for each sample submission. I left all other settings at their default.


Apache JMeter (CLI)

The Apache JMeter GUI saves test plans in .jmx files that can be used to run the same test plan in Apache JMeter’s console mode. The following CLI command demonstrates how you can use the LoadTestExample.jmx file that was created in the previous steps using the GUI to run the same load test:

> <jmeter_install_dir>\bin\jmeter -n -t LoadTestExample.jmx -e -l test.out

The sample output is from a 30-second run of LoadTestExample.jmx.


After the test has completed, several output files are created, such as a JMeter application log, query output files from the listeners (if any), and test statistics from listeners (if any). For this post, the statistical metrics captured for the test run are located in a JSON file inside the report-output directory. See the following screenshot.


The \report-output\statistics.json file captures a lot of useful metrics, such as the total samples (like SQL queries) submitted during the test duration, achieved query throughput, and number of small, medium, and large queries and their individual throughput. The following screenshot shows a sampling of the data from statistics.json.
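As one way to pull the headline numbers out of that file programmatically, here is a small sketch (not from the original post); the sampleCount and throughput field names match recent JMeter dashboard reports, but verify them against your own statistics.json before relying on the script.

import json

with open("report-output/statistics.json") as f:
    stats = json.load(f)

# Each key is a transaction name (plus a "Total" entry); print samples and throughput
for name, entry in stats.items():
    print(name, entry.get("sampleCount"), "samples,", entry.get("throughput"), "requests/sec")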


Conclusion

In this series of posts, we discussed several recommended best practices for conducting high-quality benchmark tests. Some of the best practices represented core principles that span all the open-source tools discussed (such as consistency in testing methodology). In this particular post, we reviewed the strengths and appropriateness of Apache JMeter for conducting benchmark tests. I hope this series has been helpful, and strongly encourage current and prospective customers to reach out to me or other AWS colleagues if you wish to delve deeper.


About the Author

Asser Moustafa is an Analytics Specialist Solutions Architect at AWS based out of Dallas, Texas. He advises customers in the Americas on their Amazon Redshift and data lake architectures and migrations, starting from the POC stage to actual production deployment and maintenance.

Get up to speed with partial clone and shallow clone

Post Syndicated from Derrick Stolee original https://github.blog/2020-12-21-get-up-to-speed-with-partial-clone-and-shallow-clone/

As your Git repositories grow, it becomes harder and harder for new developers to clone and start working on them. Git is designed as a distributed version control system. This means that you can work on your machine without needing a connection to a central server that controls how you interact with the repository. This is only fully realizable if you have all reachable data in your local repository.

What if there was a better way? Could you get started working in the repository without downloading every version of every file in the entire Git history? Git’s partial clone and shallow clone features are options that can help here, but they come with their own tradeoffs. Each option breaks at least one expectation from the normal distributed nature of Git, and you might not be willing to make those tradeoffs.

If you are working with an extremely large monorepo, then these tradeoffs are more likely to be worthwhile or even necessary to interact with Git at that scale!

Before digging in on this topic, be sure you are familiar with how Git stores your data, including commits, trees, and blob objects. I presented some of these ideas and other helpful tips at GitHub Universe in my talk, Optimize your monorepo experience.

Quick Summary

There are three ways to reduce clone sizes for repositories hosted by GitHub.

  • git clone --filter=blob:none <url> creates a blobless clone. These clones download all reachable commits and trees while fetching blobs on-demand. These clones are best for developers and build environments that span multiple builds.
  • git clone --filter=tree:0 <url> creates a treeless clone. These clones download all reachable commits while fetching trees and blobs on-demand. These clones are best for build environments where the repository will be deleted after a single build, but you still need access to commit history.
  • git clone --depth=1 <url> creates a shallow clone. These clones truncate the commit history to reduce the clone size. This creates some unexpected behavior issues, limiting which Git commands are possible. These clones also put undue stress on later fetches, so they are strongly discouraged for developer use. They are helpful for some build environments where the repository will be deleted after a single build.

Full clones

As we discuss the different clone types, we will use a common representation of Git objects:

  • Boxes are blobs. These represent file contents.
  • Triangles are trees. These represent directories.
  • Circles are commits. These are snapshots in time.

We use arrows to represent a relationship between objects. Basically, if an OID B appears inside a commit or tree A, then the object A has an arrow to the object B. If we can follow a list of arrows from an object A to another object C, then we say C is reachable from A. The process of following these arrows is sometimes referred to as walking objects.

We can now describe the data downloaded by a git clone command! The client asks the server for the latest commits, then the server provides those objects and every other reachable object. This includes every tree and blob in the entire commit history!

In this diagram, time moves from left to right. The arrows between a commit and its parents therefore go from right to left. Each commit has a single root tree. The root tree at the HEAD commit is fully expanded underneath, while the rest of the trees have arrows pointing towards these objects.

This diagram is purposefully simple, but if your repository is very large you will have many commits, trees, and blobs in your history. Likely, the historical data forms a majority of your data. Do you actually need all of it?

These days, many developers always have a network connection available as they work, so asking the server for a little more data when necessary might be an acceptable trade-off.

This is the critical design change presented by partial clone.

Partial clone

Git’s partial clone feature is enabled by specifying the --filter option in your git clone command. The full list of filter options exists in the git rev-list documentation, since you can use git rev-list --filter=<filter> --all to see which objects in your repository match the filter. There are several filters available, but the server can choose to deny your filter and revert to a full clone.

On github.com and GitHub Enterprise Server 2.22+, there are two options available:

  1. Blobless clones: git clone --filter=blob:none <url>
  2. Treeless clones: git clone --filter=tree:0 <url>

Let’s investigate each of these options.

Blobless clones

When using the --filter=blob:none option, the initial git clone will download all reachable commits and trees, and only download the blobs for commits when you do a git checkout. This includes the first checkout inside the git clone operation. The resulting object model is shown here:

The important thing to notice is that we have a copy of every blob at HEAD but the blobs in the history are not present. If your repository has a deep history full of large blobs, then this option can significantly reduce your git clone times. The commit and tree data is still present, so any subsequent git checkout only needs to download the missing blobs. The Git client knows how to batch these requests to ask the server only for the missing blobs.

Further, when running git fetch in a blobless clone, the server only sends the new commits and trees. The new blobs are downloaded only after a git checkout. Note that git pull runs git fetch and then git merge, so it will download the necessary blobs during the git merge command.

When using a blobless clone, you will trigger a blob download whenever you need the contents of a file, but you will not need one if you only need the OID of a file. This means that git log can detect which commits changed a given path without needing to download extra data.

This means that blobless clones can perform commands like git merge-base, git log, or even git log -- <path> with the same performance as a full clone.

Commands like git diff or git blame <path> require the contents of the paths to compute diffs, so these will trigger blob downloads the first time they are run. However, the good news is that after that you will have those blobs in your repository and do not need to download them a second time. Most developers only need to run git blame on a small number of files, so this tradeoff of a slightly slower git blame command is worth the faster clone and fetch times.

Blobless clones are the most widely-used partial clone option. I’ve been using them myself for months without issue.

Treeless clones

In some repositories, the tree data might be a significant portion of the history. Using --filter=tree:0, a treeless clone downloads all reachable commits, then downloads trees and blobs on demand. The resulting object model is shown here:

Note that we have all of the data at HEAD, but otherwise only have commit data. This means that the initial clone can be much faster in a treeless clone than in a blobless or full clone. Further, we can run git fetch to download only the latest commits. However, working in a treeless clone is more difficult because downloading a missing tree when needed is more expensive.

For example, a git checkout command changes the HEAD commit, usually to a commit where we do not have the root tree. The Git client then asks the server for that root tree by OID, but also for all reachable trees from that root tree. Currently, this request does not tell the server that the client already has some root trees, so the server might send many trees the client already has locally. After the trees are downloaded, the client can detect which blobs are missing and request those in a batch.

It is possible to work in a treeless clone without triggering too many requests for extra data, but it is much more restrictive than a blobless clone.

For example, history operations such as git merge-base or git log (without extra options) only use commit data. These will not trigger extra downloads.

However, if you run a file history request such as git log -- <path>, then a treeless clone will start downloading root trees for almost every commit in the history!

We strongly recommend that developers do not use treeless clones for their daily work. Treeless clones are really only helpful for automated builds when you want to quickly clone, compile a project, then throw away the repository. In environments like GitHub Actions using public runners, you want to minimize your clone time so you can spend your machine time actually building your software! Treeless clones might be an excellent option for those environments.

⚠ Warning: While writing this article, we were putting treeless clones to the test beyond the typical limits. We noticed that repositories that contain submodules behave very poorly with treeless clones. Specifically, if you run git fetch in a treeless clone, then the logic in Git that looks for changed submodules will trigger a tree request for every new commit! This behavior can be avoided by running git config fetch.recurseSubmodules false in your treeless clones. We are working on a more robust fix in the Git client.

Shallow clones

Partial clones are relatively new to Git, but there is an older feature that does something very similar to a treeless clone: shallow clones. Shallow clones use the --depth=<N> parameter in git clone to truncate the commit history. Typically, --depth=1 signifies that we only care about the most recent commits. Shallow clones are best combined with the --single-branch --branch=<branch> options as well, to ensure we only download the data for the commit we plan to use immediately.
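As a concrete example of the kind of command a CI job might run (the URL and branch name here are placeholders, not from the original post):

git clone --depth=1 --single-branch --branch=main https://github.com/example/large-repo.git

This downloads only the tip commit of the chosen branch, together with its full tree and blobs, and nothing else from the history.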

The object model for a shallow clone is shown in this diagram:

Here, the commit at HEAD exists, but its connection to its parents and the rest of the history is severed. The commits whose parents are removed are called shallow commits and together form the shallow boundary. The commit objects themselves have not changed, but there is some metadata in the client repository directing the Git client to ignore those parent connections. All trees and blobs are downloaded for any commit that exists on the client.

Since the commit history is truncated, commands such as git merge-base or git log show different results than they would in a full clone! In general, you cannot count on them to work as expected. Recall that these commands work as expected in partial clones. Even in blobless clones, commands like git blame -- <path> will work correctly, if only a little slower than in full clones. Shallow clones don’t even make that a possibility!

The other major difference is how git fetch behaves in a shallow clone. When fetching new commits, the server must provide every tree and blob that is “new” to these commits, relative to the shallow commits. This computation can be more expensive than a typical fetch, partly because a typical fetch on a well-maintained server can make use of reachability bitmaps, while a shallow fetch cannot. Depending on how others are contributing to your remote repository, a git fetch operation in a shallow clone might end up downloading an almost-full commit history!

Here are some descriptions of things that can go wrong with shallow clones that negate their supposed benefits. For these reasons, we do not recommend shallow clones except for builds that delete the repository immediately afterwards. Fetching from shallow clones can cause more harm than good!

Remember the “shallow boundary” mentioned earlier? The client sends that boundary to the server during a git fetch command, telling the server that it doesn’t have all of the reachable commits behind that boundary. The client then asks for the latest commits and everything reachable from those until hitting a shallow commit in the boundary. If another user starts a topic branch below that boundary and then the shallow client fetches that topic (or worse, the topic is merged into the default branch), then the server needs to walk the full history and serve the client what amounts to almost a full clone! Further, the server needs to calculate that data without the advantage of performance features like reachability bitmaps.

Comparing Clone Options

Let’s recall each of our clone options. Instead of looking at them at a pure object level, let’s explore each category of object. The figures below group the data that is downloaded by each repository type. In addition to the data downloaded at clone time, let’s consider the situation where some time passes and then the client runs git fetch followed by git checkout to move to a new commit. For each of these options, how much data is downloaded?

Full clones download all reachable objects. Typically, blobs are responsible for most of this data.

In a partial clone, some data is not served immediately and is delayed until the client needs it. Blobless clones skip blobs except those needed at checkout time. Treeless clones skip all trees in the history in favor of downloading a full copy of the trees needed for each checkout.

[Figures: data downloaded by blobless, treeless, and shallow clones (git clone --depth=1 with git fetch, and with git fetch --depth=1)]

What do the numbers say?

Fellow GitHub engineer @solmazabbaspour designed and ran an experiment to compare these different clone options on a variety of open source repositories. She will post a blog post tomorrow giving full details and data for the experiment, but I’ll share the executive summary here. Here are some common themes we identified that could help you choose the right scenario for your own usage:

There are many different types of clones beyond the default full clone. If you truly need to have a distributed workflow and want all of the data in your local repository, then you should continue using full clones. If you are a developer focused on a single repository and your repository is reasonably-sized, the best approach is to do a full clone.

You might switch to a blobless partial clone if your repository is very large due to many large blobs, as that clone will help you get started more quickly. The trade-off is that some commands such as git checkout or git blame will require downloading new blob data when necessary.

In general, calculating a shallow fetch is computationally more expensive than a full fetch. Always use a full fetch instead of a shallow fetch, in both fully and shallow cloned repositories.

In workflows such as CI builds when there is a need to do a single clone and delete the repository immediately, shallow clones are a good option. Shallow clones are the fastest way to get a copy of the working directory at the tip commit with the additional cost that fetching from these repositories is much more expensive, so we do not recommend shallow clones for developers. If you need the commit history for your build, then a treeless partial clone might work better for you than a full clone.

In general, your mileage may vary. Now that you are armed with these different options and the object model behind them, you can go and play with these kinds of clones. You should also be aware of some pitfalls of these non-full clone options:

  • Shallow clones skip the commit history. This makes commands such as git log or git merge-base unavailable. Never fetch from a shallow clone!
  • Treeless clones contain commit history, but it is very expensive to download missing trees. Thus, git log (without a path) and git merge-base are available, but commands like git log -- <path> and git blame are extremely slow and not recommended in these clones.
  • Blobless clones contain all reachable commits and trees, so Git downloads blobs when it needs access to file contents. This means that commands like git log -- <path> are available but commands like git blame are a bit slower on their first run. However, this can be a great way to get started on a very large repository with a lot of old, large blobs.
  • Full clones work as expected. The only downside is the time required to download all of that data, plus the extra disk space for all those files.

Be sure to upgrade to the latest Git version so you have all the latest performance improvements!

[$] LWN’s 2020 Retrospective

Post Syndicated from original https://lwn.net/Articles/840446/rss

Predictions are hard, as they say, especially when they are about the
future. So perhaps your editor can be forgiven for not anticipating that
2020 would be the sort of year that makes one think nostalgically about
trips to the dentist, waiting in a crowded motor-vehicle office, or
crossing the Pacific in a row-47 middle seat. If only we had known how
good we had it. Be that as it may, this year is finally coming to an end.
Read on for a look back at the year, starting with the ill-advised predictions made in January.

How FanDuel Group secures personally identifiable information in a data lake using AWS Lake Formation

Post Syndicated from Damian Grech original https://aws.amazon.com/blogs/big-data/how-fanduel-group-secures-personally-identifiable-information-in-a-data-lake-using-aws-lake-formation/

This post is co-written with Damian Grech from FanDuel

FanDuel Group is an innovative sports-tech entertainment company that is changing the way consumers engage with their favorite sports, teams, and leagues. The premier gaming destination in the US, FanDuel Group consists of a portfolio of leading brands across gaming, sports betting, daily fantasy sports, advance-deposit wagering, and TV/media, including FanDuel, Betfair US, and TVG. FanDuel Group has a presence across 50 states and over 8.5 million customers. The company is based in New York with offices in California, New Jersey, Florida, Oregon, and Scotland. FanDuel Group is a subsidiary of Flutter Entertainment plc, the world’s largest sports betting and gaming operator with a portfolio of globally recognized brands and a constituent of the FTSE 100 index of the London Stock Exchange.

In this post, we discuss how FanDuel used AWS Lake Formation and Amazon Redshift Spectrum to restrict access to personally identifiable information (PII) in their data lake.

The challenge

In 2018, a series of mergers led to the creation of FanDuel Group, and the combined data engineering team found themselves operating three data warehouses running on Amazon Redshift. The team decided to create a single new platform to replace the three separate warehouses, consisting of a data warehouse containing the core business data model and a data lake to catalog and hold all other types of data. FanDuel’s vision was to create a unified data platform that served their data requirements. This included the ability to ingest and organize real-time and batch datasets, and secure and govern PII.

Because the end-users of the existing data warehouses were familiar with Amazon Redshift, it was critical that they be able to access the data lake using Amazon Redshift. Other important architecture considerations included a simplified user experience, the ability to scale to huge data volumes, and a robust security model to provision relevant data to analysts and data scientists.

To accomplish the vision, FanDuel decided to modernize the data platform and introduce Amazon Simple Storage Service (Amazon S3)-based data lakes. Data lakes are a logical construct that allows data to be stored in its native format using open data formats. With a data lake architecture, FanDuel can enable data analysts to analyze large volumes of data without significant modeling. Also, data lakes allow FanDuel to store structured and unstructured data.

Some of the data to be stored in the data lake was customer PII, so access to this category of data needed to be carefully restricted to only employees who required access to perform their job functions. To address these security challenges, FanDuel first tested out a tag-based approach on Amazon S3 to restrict access to the PII data. The idea was to write each dataset twice—one copy with PII and another without PII—apply tags to the files where PII is stored, and secure those files using AWS Identity and Access Management (IAM) policies. This approach was complex and needed 100–200 hours of development time for every data source that was ingested.

Solution overview

FanDuel decided to use Lake Formation and Redshift Spectrum to solve this challenge. The following architectural diagram shows how FanDuel secured their data lake.

The solution includes the following steps:

  1. The FanDuel team registered the S3 location in Lake Formation.

After the location is registered, Lake Formation takes control of the data lake, thereby eliminating the need to set up complicated policies in IAM.

  2. FanDuel built AWS Glue ETL jobs to extract data from sources, including MySQL databases and flat files. They used AWS Glue to cleanse and transform raw data to form refined datasets stored in Parquet-formatted files. They also used AWS Glue crawlers to register the cleansed datasets in the Data Catalog.
  3. The team used Lake Formation to set up column-based permissions using two roles (an illustrative CLI sketch of this kind of grant appears after the code examples below):
    1. LimitedPIIAnalyst – Granted access to all columns. Only analysts who needed access to PII data were assigned this role.
    2. NonPIIAnalyst – Granted access to non-PII columns. By default, analysts using the data lake were assigned this role.
  4. FanDuel created two external schemas using Redshift Spectrum: one using the NonPIIAnalyst role, and one using the LimitedPIIAnalyst role. The following code is an example of the DDL that uses the roles that were set up in Lake Formation:
    CREATE EXTERNAL SCHEMA nonpii_data_lake FROM DATA CATALOG
    DATABASE 'fanduel_data_lake' REGION 'us-east-1'
    IAM_ROLE 'arn:aws:iam::123456789012:role/NonPIIAnalyst';
    
    CREATE EXTERNAL SCHEMA limitedpii_data_lake FROM DATA CATALOG
    DATABASE 'fanduel_data_lake' REGION 'us-east-1'
    IAM_ROLE 'arn:aws:iam::123456789012:role/LimitedPIIAnalyst';
    

FanDuel could already manage access permissions by adding or removing users from a group in Amazon Redshift, so they already had a group consisting of only the analysts who should be permitted access to PII. The following code grants this group access to the limitedpii_data_lake schema, which effectively means only this group can query the data lake using the LimitedPIIAnalyst role:

GRANT USAGE ON SCHEMA nonpii_data_lake TO base_group;
GRANT SELECT ON ALL TABLES IN SCHEMA nonpii_data_lake TO base_group;
GRANT USAGE ON SCHEMA limitedpii_data_lake TO pii_permitted_group;
GRANT SELECT ON ALL TABLES IN SCHEMA limitedpii_data_lake TO pii_permitted_group;
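As a purely illustrative sketch of the column-level grant described in step 3 above (the database name matches the DDL, but the table name, the excluded columns, and the choice of the AWS CLI rather than the Lake Formation console are assumptions, not details from FanDuel's setup), a grant for the non-PII role could look something like:

aws lakeformation grant-permissions \
  --principal DataLakePrincipalIdentifier=arn:aws:iam::123456789012:role/NonPIIAnalyst \
  --permissions "SELECT" \
  --resource '{"TableWithColumns": {"DatabaseName": "fanduel_data_lake", "Name": "customers", "ColumnWildcard": {"ExcludedColumnNames": ["email", "phone_number"]}}}'

This grants SELECT on every column of the hypothetical customers table except the excluded PII columns.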

Benefits

The ability to extend queries to the data lake with Redshift Spectrum and have column-level access control provides superior control over the S3 tag-based permissions approach that was originally considered. This architecture provided the following benefits for FanDuel:

  • FanDuel could offer new capabilities to data analysts. For example, data analysts could quickly access raw data with PII and combine it with existing data in Amazon Redshift. Lake Formation provided a single view for monitoring the data access patterns.
  • Lake Formation column-level access control allowed them to secure PII data, which otherwise would have taken a complex S3 tag-based approach. This saved 100–200 hours of development time for every new data source and data footprint, because the original approach required creating two files (one with PII and another without PII), tagging files, and setting up permissions based on tags.
  • The ability to extend access from Amazon Redshift to the data lake with appropriate access control has allowed FanDuel to reduce data stored in Amazon Redshift.

Conclusion

FanDuel will leverage its new data platform to ingest additional data sources with real-time data so analysts and data scientists can gain insights and improve customer experience.

Questions or feedback? Send an email to [email protected].


About the Authors

Damian Grech is a Data Engineering Senior Manager at FanDuel. Damian has over 15 years of experience in software delivery and has worked with organizations ranging from large enterprises to start-ups at their infant stages. In his spare time, you can find him either experimenting in the kitchen or trailing the Scottish Highlands.

Shiv Narayanan is Global Business Development Manager for Data Lakes and Analytics solutions at AWS. He works with AWS customers across the globe to strategize, build, develop and deploy modern data platforms. Shiv loves music, travel, food and trying out new tech.

Sidhanth Muralidhar is a Senior Technical Account Manager at Amazon Web Services. He works with large enterprise customers who run their workloads on AWS. He is passionate about working with customers and helping them in their cloud journey. In his spare time, he loves to play and watch football.

Continuously building and delivering Maven artifacts to AWS CodeArtifact

Post Syndicated from Vinay Selvaraj original https://aws.amazon.com/blogs/devops/continuously-building-and-delivering-maven-artifacts-to-aws-codeartifact/

Artifact repositories are often used to share software packages for use in builds and deployments. Java developers using Apache Maven use artifact repositories to share and reuse Maven packages. For example, one team might own a web service framework that is used by multiple other teams to build their own services. The framework team can publish the framework as a Maven package to an artifact repository, where new versions can be picked up by the service teams as they become available. This post explains how you can set up a continuous integration pipeline with AWS CodePipeline and AWS CodeBuild to deploy Maven artifacts to AWS CodeArtifact. CodeArtifact is a fully managed pay-as-you-go artifact repository service with support for software package managers and build tools like Maven, Gradle, npm, yarn, twine, and pip.

Solution overview

The pipeline we build is triggered each time a code change is pushed to the AWS CodeCommit repository. The code is compiled using the Java compiler, unit tested, and deployed to CodeArtifact. After the artifact is published, it can be consumed by developers working in applications that have a dependency on the artifact or by builds running in other pipelines. The following diagram illustrates this architecture.

Architecture diagram of the solution

 

All the components in this pipeline are fully managed and you don’t pay for idle capacity or have to manage any servers.

 

Prerequisites

This post assumes you have the following tools installed and configured: Git, Apache Maven, a Java JDK, and the AWS CLI.

Creating your resources

To create the CodeArtifact domain, CodeArtifact repository, CodeCommit, CodePipeline, CodeBuild, and associated resources, we use AWS CloudFormation. Save the following CloudFormation template as codeartifact-cicd-pipeline.yaml, then create a stack with the aws cloudformation deploy command shown after the template:


---
Description: Code Artifact CI/CD Pipeline

Parameters:
  GitRepoBranchName:
    Type: String
    Default: main

Resources:

  ArtifactBucket:
    Type: AWS::S3::Bucket
  
  CodeArtifactDomain:
    Type: AWS::CodeArtifact::Domain
    Properties:
      DomainName: !Sub "${AWS::StackName}-domain"
  
  CodeArtifactRepository:
    Type: AWS::CodeArtifact::Repository
    Properties:
      DomainName: !GetAtt CodeArtifactDomain.Name
      RepositoryName: !Sub "${AWS::StackName}-repo"

  CodeRepository:
    Type: AWS::CodeCommit::Repository
    Properties:
      RepositoryDescription: Maven artifact code repository
      RepositoryName: !Sub "${AWS::StackName}-maven-artifact-repo"
  
  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: !Sub "${AWS::StackName}-CodeBuild"
      Artifacts:
        Type: CODEPIPELINE
      Environment:
        EnvironmentVariables:
          - Name: CODEARTIFACT_DOMAIN
            Type: PLAINTEXT
            Value: !GetAtt CodeArtifactDomain.Name
          - Name: CODEARTIFACT_REPO
            Type: PLAINTEXT
            Value: !GetAtt CodeArtifactRepository.Name
        Type: LINUX_CONTAINER
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/amazonlinux2-x86_64-standard:3.0
      ServiceRole: !GetAtt CodeBuildServiceRole.Arn
      Source:
        Type: CODEPIPELINE
        BuildSpec: buildspec.yaml
  
  Pipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket
      RoleArn: !GetAtt CodePipelineServiceRole.Arn
      Stages:
      - Name: Source
        Actions:
        - Name: SourceAction
          ActionTypeId:
            Category: Source
            Owner: AWS
            Version: '1'
            Provider: CodeCommit
          OutputArtifacts:
          - Name: SourceBundle
          Configuration:
            BranchName: !Ref GitRepoBranchName
            RepositoryName: !GetAtt CodeRepository.Name
          RunOrder: '1'

      - Name: Deliver
        Actions:
        - Name: CodeBuild
          InputArtifacts:
          - Name: SourceBundle
          ActionTypeId:
            Category: Build
            Owner: AWS
            Version: '1'
            Provider: CodeBuild
          Configuration:
            ProjectName: !Ref CodeBuildProject
          RunOrder: '1'

  CodeBuildServiceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Sid: ''
          Effect: Allow
          Principal:
            Service:
            - codebuild.amazonaws.com
          Action: sts:AssumeRole
      Policies:
      - PolicyName: CodePipelinePolicy
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Sid: CloudWatchLogsPolicy
            Effect: Allow
            Action:
            - logs:CreateLogGroup
            - logs:CreateLogStream
            - logs:PutLogEvents
            Resource:
            - "*"
          - Sid: CodeCommitPolicy
            Effect: Allow
            Action:
            - codecommit:GitPull
            Resource:
            - !GetAtt CodeRepository.Arn
          - Sid: S3GetObjectPolicy
            Effect: Allow
            Action:
            - s3:GetObject
            - s3:GetObjectVersion
            Resource:
            - !Sub "arn:aws:s3:::${ArtifactBucket}/*"
          - Sid: S3PutObjectPolicy
            Effect: Allow
            Action:
            - s3:PutObject
            Resource:
            - !Sub "arn:aws:s3:::${ArtifactBucket}/*"
          - Sid: BearerTokenPolicy
            Effect: Allow
            Action:
            - sts:GetServiceBearerToken
            Resource: "*"
            Condition:
              StringEquals:
                sts:AWSServiceName: codeartifact.amazonaws.com
          - Sid: CodeArtifactPolicy
            Effect: Allow
            Action:
            - codeartifact:GetAuthorizationToken
            Resource:
            - !Sub "arn:aws:codeartifact:${AWS::Region}:${AWS::AccountId}:domain/${CodeArtifactDomain.Name}"
          - Sid: CodeArtifactPackage
            Effect: Allow
            Action:
            - codeartifact:PublishPackageVersion
            - codeartifact:PutPackageMetadata
            - codeartifact:ReadFromRepository
            Resource:
            - !Sub "arn:aws:codeartifact:${AWS::Region}:${AWS::AccountId}:package/${CodeArtifactDomain.Name}/${CodeArtifactRepository.Name}/*"
          - Sid: CodeArtifactRepository
            Effect: Allow
            Action:
            - codeartifact:ReadFromRepository
            - codeartifact:GetRepositoryEndpoint
            Resource:
            - !Sub "arn:aws:codeartifact:${AWS::Region}:${AWS::AccountId}:repository/${CodeArtifactDomain.Name}/${CodeArtifactRepository.Name}"          

  CodePipelineServiceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Sid: ''
          Effect: Allow
          Principal:
            Service:
            - codepipeline.amazonaws.com
          Action: sts:AssumeRole
      Policies:
      - PolicyName: CodePipelinePolicy
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Action:
            - s3:GetObject
            - s3:GetObjectVersion
            - s3:GetBucketVersioning
            Resource: !Sub "arn:aws:s3:::${ArtifactBucket}/*"
            Effect: Allow
          - Action:
            - s3:PutObject
            Resource:
            - !Sub "arn:aws:s3:::${ArtifactBucket}/*"
            Effect: Allow
          - Action:
            - codecommit:GetBranch
            - codecommit:GetCommit
            - codecommit:UploadArchive
            - codecommit:GetUploadArchiveStatus
            - codecommit:CancelUploadArchive
            Resource:
              - !GetAtt CodeRepository.Arn
            Effect: Allow
          - Action:
            - codebuild:StartBuild
            - codebuild:BatchGetBuilds
            Resource: 
              - !GetAtt CodeBuildProject.Arn
            Effect: Allow
          - Action:
            - iam:PassRole
            Resource: "*"
            Effect: Allow
Outputs:
  CodePipelineArtifactBucket:
    Value: !Ref ArtifactBucket
  CodeRepositoryHttpCloneUrl:
    Value: !GetAtt CodeRepository.CloneUrlHttp
  CodeRepositorySshCloneUrl:
    Value: !GetAtt CodeRepository.CloneUrlSsh

aws cloudformation deploy                         \
  --stack-name codeartifact-pipeline               \
  --template-file codeartifact-cicd-pipeline.yaml  \
  --capabilities CAPABILITY_IAM

 

If you have a Maven project you want to use, you can use that. Otherwise, create a new one:


mvn archetype:generate        \
  -DgroupId=com.mycompany.app \
  -DartifactId=my-app         \
  -DarchetypeArtifactId=maven-archetype-quickstart \
  -DarchetypeVersion=1.4 -DinteractiveMode=false

 

Initialize a Git repository for the Maven project and add the CodeCommit repository that was created in the CloudFormation stack as a remote repository:


cd my-app
git init
CODECOMMIT_URL=$(aws cloudformation describe-stacks --stack-name codeartifact-pipeline --query "Stacks[0].Outputs[?OutputKey=='CodeRepositoryHttpCloneUrl'].OutputValue" --output text)
git remote add origin $CODECOMMIT_URL

 

Updating the POM file

The Maven project’s POM file needs to be updated with the distribution management section. This lets Maven know where to publish artifacts. Add the distributionManagement section inside the project element of the POM. Be sure to update the URL with the correct URL for the CodeArtifact repository you created earlier. You can find the CodeArtifact repository URL with the get-repository-endpoint CLI command:


aws codeartifact get-repository-endpoint --domain codeartifact-pipeline-domain  --repository codeartifact-pipeline-repo --format maven
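
If you would rather look the endpoint up from a script, the same call is available through boto3. This is just a sketch; the domain and repository names assume the codeartifact-pipeline stack name used earlier, following the ${AWS::StackName}-domain and ${AWS::StackName}-repo pattern from the template:

import boto3

codeartifact = boto3.client("codeartifact")

response = codeartifact.get_repository_endpoint(
    domain="codeartifact-pipeline-domain",
    repository="codeartifact-pipeline-repo",
    format="maven",
)
# Paste this URL into the distributionManagement section of the POM.
print(response["repositoryEndpoint"])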

 

Add the following to the Maven project’s pom.xml:


<distributionManagement>
  <repository>
    <id>codeartifact</id>
    <name>codeartifact</name>
    <url>Replace with the URL from the get-repository-endpoint command</url>
  </repository>
</distributionManagement>

Creating a settings.xml file

Maven needs credentials to authenticate with CodeArtifact when it performs the deployment. CodeArtifact uses temporary authorization tokens. To pass the token to Maven, a settings.xml file is created in the top level of the Maven project. During the deployment stage, Maven is instructed to use the settings.xml in the top level of the project instead of the settings.xml that normally resides in $HOME/.m2. Create a settings.xml in the top level of the Maven project with the following contents:


<settings>
  <servers>
    <server>
      <id>codeartifact</id>
      <username>aws</username>
      <password>${env.CODEARTIFACT_TOKEN}</password>
    </server>
  </servers>
</settings>

Creating the buildspec.yaml file

CodeBuild uses a build specification file with commands and related settings that are used during the build, test, and delivery of the artifact. In the build specification file, we specify the CodeBuild runtime, the pre-build actions (updating the AWS CLI), and the build actions (Maven build, test, and deploy). When Maven is invoked, it is provided the path to the settings.xml created in the previous step, instead of the default in $HOME/.m2/settings.xml. Create the buildspec.yaml as shown in the following code:


version: 0.2

phases:
  install:
    runtime-versions:
      java: corretto11

  pre_build:
    commands:
      - pip3 install awscli --upgrade --user

  build:
    commands:
      - export CODEARTIFACT_TOKEN=`aws codeartifact get-authorization-token --domain ${CODEARTIFACT_DOMAIN} --query authorizationToken --output text`
      - mvn -s settings.xml clean package deploy

 

Running the pipeline

The final step is to add the files in the Maven project to the Git repository and push the changes to CodeCommit. This triggers the pipeline to run. See the following code:


git checkout -b main
git add settings.xml buildspec.yaml pom.xml src
git commit -a -m "Initial commit"
git push --set-upstream origin main

 

Checking the pipeline

At this point, the pipeline starts to run. To check its progress, sign in to the AWS Management Console and choose the Region where you created the pipeline. On the CodePipeline console, open the pipeline that the CloudFormation stack created. The pipeline’s name is prefixed with the stack name. If you open the CodePipeline console before the pipeline is complete, you can watch each stage run (see the following screenshot).

CodePipeline Screenshot

If you see that the pipeline failed, you can choose the details in the action that failed for more information.

Checking for new artifacts published in CodeArtifact

When the pipeline is complete, you should be able to see the artifact in the CodeArtifact repository you created earlier. The artifact we published for this post is a Maven snapshot. CodeArtifact handles snapshots differently than release versions. For more information, see Use Maven snapshots. To find the artifact in CodeArtifact, complete the following steps:

  1. On the CodeArtifact console, choose Repositories.
  2. Choose the repository created earlier; with the stack name codeartifact-pipeline, it is named codeartifact-pipeline-repo.
  3. Search for the package named my-app.
  4. Choose the my-app package from the search results.
    CodeArtifact Assets
  5. Choose the Dependencies tab to bring up a list of Maven dependencies that the Maven project depends on.
    CodeArtifact Dependencies

 

Cleaning up

To clean up the resources you created in this post, you need to remove them in the following order:


# Empty the CodePipeline S3 artifact bucket
CODEPIPELINE_BUCKET=$(aws cloudformation describe-stacks --stack-name codeartifact-pipeline --query "Stacks[0].Outputs[?OutputKey=='CodePipelineArtifactBucket'].OutputValue" --output text)
aws s3 rm s3://$CODEPIPELINE_BUCKET --recursive

# Delete the CloudFormation stack
aws cloudformation delete-stack --stack-name codeartifact-pipeline

Conclusion

This post covered how to build a continuous integration pipeline to deliver Maven artifacts to AWS CodeArtifact. You can modify this solution for your specific needs. For more information about CodeArtifact or the other services used, see the following:

 

Visualizing GitHub’s global community

Post Syndicated from Tal Safran original https://github.blog/2020-12-21-visualizing-githubs-global-community/

This is the second post in a series about how we built our new homepage.

  1. How our globe is built
  2. How we collect and use the data behind the globe
  3. How we made the page fast and performant
  4. How We Illustrate at GitHub
  5. How we designed the homepage and wrote the narrative

In the first post, my teammate Tobias shared how we made the 3D globe come to life, with lots of nitty gritty details about Three.js, performance optimization, and delightful touches.

But there’s another side to the story—the data! We hope you enjoy the read. ✨

Data goals

When we kicked off the project, we knew that we didn’t want to make just another animated globe. We wanted the data to be interesting and engaging. We wanted it to be real, and most importantly, we wanted it to be live.

Luckily, the data was there.

The challenge then became designing a data service that addressed the following questions:

  1. How do we query our massive volume of data?
  2. How do we show you the most interesting bits?
  3. How do we geocode user locations in a way that respects privacy?
  4. How do we expose the computed data back to the monolith?
  5. How do we not break GitHub? 😊

Let’s begin, shall we?

Querying GitHub

So, how hard could it be to show you some recent pull requests? It turns out it’s actually very simple:

class GlobeController < ApplicationController
  def data
    pull_requests = PullRequest
      .where(open: true)
      .joins(:repositories)
      .where("repository.is_open_source = true")
      .last(10_000)

    render json: pull_requests
  end
end

Just kidding 😛

Because of the volume of data generated on GitHub every day, the size of our databases, as well as the importance of keeping GitHub fast and reliable, we knew we couldn’t query our production databases directly.

Luckily, we have a data warehouse and a fantastic team that maintains it. Data from production is fetched, sanitized, and packaged nicely into the data warehouse on a regular schedule. The data can then be queried using Presto, a distributed SQL query engine designed for large datasets.

We also wanted the data to be as fresh as possible. So instead of querying snapshots of our MySQL tables that are only copied over once a day, we were able to query data coming from our Apache Kafka event stream that makes it into the data warehouse much more regularly.

As an example, we have an event that is reported every time a pull request is merged. The event is defined in a format called protobuf, which stands for “protocol buffer.”

Here’s what the protobuf for a merged pull request event might look like:

message PullRequestMerge {
  github.v1.entities.User actor = 1;
  github.v1.entities.Repository repository = 2;
  github.v1.entities.User repository_owner = 3;
  github.v1.entities.PullRequest pull_request = 4;
  github.v1.entities.Issue issue = 5;
}

Each field corresponds to an “entity,” each of which is defined in its own protobuf file. Here’s a snippet from the definition of a pull request entity:

message PullRequest {
  uint64 id = 1;
  string global_relay_id = 2;
  uint64 author_id = 3;

  enum PullRequestState {
    UNKNOWN = 0;
    OPEN = 1;
    CLOSED = 2;
    MERGED = 3;
  }
  PullRequestState pull_request_state = 4;

  google.protobuf.Timestamp created_at = 5;
  google.protobuf.Timestamp updated_at = 6;
}

Including an entity in an event will pass along all of the attributes defined for it. All of that data gets copied into our data warehouse for every pull request that is merged.

This means that a Presto query for pull requests merged in the past day could look like:

SELECT
  pull_request.created_at,
  pull_request.updated_at,
  pull_request.id,
  issue.number,
  repository.id
FROM kafka.github.pull_request_merge
WHERE
  day >= CAST((CURRENT_DATE - INTERVAL '1' DAY) AS VARCHAR)

There are a few other queries we make to pull in all the data we need. But as you can see, this is pretty much standard SQL that pulls in merged pull requests from the last day in the event stream.

Surfacing interesting data

We wanted to make sure that whatever data we showed was interesting, engaging, and appropriate to be spotlighted on the GitHub homepage. If the data was good, visitors would be enticed to explore the vast ecosystem of open source being built on GitHub at that given moment. Maybe they’d even make a contribution!

So how do we find good data?

Luckily our data team came to the rescue yet again. A few years ago, the Data Science Team put together a model to rank the “health” of repositories based on 30-plus features weighted by importance. A healthy repository doesn’t necessarily mean having a lot of stars. It also takes into account how much current activity is happening and how easy it is to contribute to the project, to name a few.

The end result is a numerical health score that we can query against in the data warehouse.

SELECT repository_id
FROM data_science.github.repository_health_scores
WHERE 
  score > 0.75

Combining this query with the above, we can now pull in merged pull requests from repositories with health scores above a certain threshold:

WITH
healthy_repositories AS (
  SELECT repository_id
  FROM data_science.github.repository_health_scores
  WHERE 
    score > 0.75
)

SELECT
  a.pull_request.created_at,
  a.pull_request.updated_at,
  a.pull_request.id,
  a.issue.number,
  a.repository.id
FROM kafka.github.pull_request_merge a
JOIN healthy_repositories b
ON a.repository.id = b.repository_id
WHERE
  day >= CAST((CURRENT_DATE - INTERVAL '1' DAY) AS VARCHAR)

We do some other things to ensure the data is good, like filtering out accounts with spammy behavior. But repository health scores are definitely a key ingredient.

Geocoding user-provided locations

Your GitHub profile has an optional free text field for providing your location. Some people fill it out with their actual location (mine says “San Francisco”), while others use fake or funny locations (42 users have “Middle Earth” listed as theirs). Many others choose to not list a location. In fact, two-thirds of users don’t enter anything and that’s perfectly fine with us.

For users that do enter something, we try to map the text to a real location. This is a little harder to do than using IP addresses as proxies for locations, but it was important to us to only include data that users felt comfortable making public in the first place.

In order to map the free text locations to latitude and longitude pairs, we use Mapbox’s forward geocoding API and their Ruby SDK. Here’s an example of a forward geocoding of “New York City”:

MAPBOX_OPTIONS = {
  limit: 1,
  types: %w(region place country),
  language: "en"
}

Mapbox::Geocoder.geocode_forward("New York City", MAPBOX_OPTIONS)

=> [{
  "type" => "FeatureCollection",
  "query" => ["new", "york", "city"],
  "features" => [{
    "id" => "place.15278078705964500",
    "type" => "Feature",
    "place_type" => ["place"],
    "relevance" => 1,
    "properties" => {
      "wikidata" => "Q60"
    },
    "text_en" => "New York City",
    "language_en" => "en",
    "place_name_en" => "New York City, New York, United States",
    "text" => "New York City",
    "language" => "en",
    "place_name" => "New York City, New York, United States",
    "bbox" => [-74.2590879797556, 40.477399, -73.7008392055224, 40.917576401307],
    "center" => [-73.9808, 40.7648],
    "geometry" => {
      "type" => "Point", "coordinates" => [-73.9808, 40.7648]
    },
    "context" => [{
      "id" => "region.17349986251855570",
      "wikidata" => "Q1384",
      "short_code" => "US-NY",
      "text_en" => "New York",
      "language_en" => "en",
      "text" => "New York",
      "language" => "en"
    }, {
      "id" => "country.19678805456372290",
      "wikidata" => "Q30",
      "short_code" => "us",
      "text_en" => "United States",
      "language_en" => "en",
      "text" => "United States",
      "language" => "en"
    }]
  }],
  "attribution" => "NOTICE: (c) 2020 Mapbox and its suppliers. All rights reserved. Use of this data is subject to the Mapbox Terms of Service (https://www.mapbox.com/about/maps/). This response and the information it contains may not be retained. POI(s) provided by Foursquare."
}, {}]

There is a lot of data there, but let’s focus on text, relevance, and center for now. Here are those fields for “New York City”:

result = Mapbox::Geocoder.geocode_forward("New York City", MAPBOX_OPTIONS)
result[0]["features"][0].slice("text", "relevance", "center")

=> {"text"=>"New York City", "relevance"=>1, "center"=>[-73.9808, 40.7648]}

If you use the “NYC” query string, you get the exact same result:

result = Mapbox::Geocoder.geocode_forward("NYC", MAPBOX_OPTIONS)
result[0]["features"][0].slice("text", "relevance", "center")

=> {"text"=>"New York City", "relevance"=>1, "center"=>[-73.9808, 40.7648]}

Notice that the text is still “New York City” in this second example? That is because Mapbox is normalizing the results. We use the normalized text on the globe so viewers get a consistent experience. This also takes care of capitalization and misspellings.

The center field is an array containing the longitude and latitude of the location.

And finally, the relevance score is an indicator of Mapbox’s confidence in the results. A relevance score of 1 is the highest, but sometimes users enter locations that Mapbox is less sure about:

result = Mapbox::Geocoder.geocode_forward("Middle Earth", MAPBOX_OPTIONS)
result[0]["features"][0].slice("text", "relevance", "center")

=> {"text"=>"Earth City", "relevance"=>0.5, "center"=>[-90.4682, 38.7689]}

We discard anything with a score of less than 1, just to get confidence that the location we show feels correct.

Mapbox also provides a batch geocoding endpoint. This allows us to query multiple locations in one request:

MAPBOX_ENDPOINT = "mapbox.places-permanent"

query_string = "{San Francisco};{Berlin};{Dakar};{Tokyo};{Lima}"

Mapbox::Geocoder.geocode_forward(query_string, MAPBOX_OPTIONS, MAPBOX_ENDPOINT)

After we’ve geocoded and normalized all of the results, we create a JSON representation of the pull request and its locations so our globe JavaScript client knows how to parse it.

Here’s a pull request we recently featured that was opened in San Francisco and merged in Tokyo:

{
   "uml":"Tokyo",
   "gm":{
      "lat":35.68,
      "lon":139.77
   },
   "uol":"San Francisco",
   "gop":{
      "lat":37.7648,
      "lon":-122.463
   },
   "l":"JavaScript",
   "nwo":"mdn/browser-compat-data",
   "pr":7937,
   "ma":"2020-12-17 04:00:48.000",
   "oa":"2020-12-16 10:02:31.000"
}

We use short keys to shave off some bytes from the JSON we end up serving so the globe loads faster.

Airflow, HDFS, and Munger

We run our data warehouse queries and geocoding throughout the day to ensure that the data on the homepage is always fresh.

For scheduling this work, we use another system from Apache called Airflow. Airflow lets you run scheduled jobs that contain a sequence of tasks. Airflow calls these workflows Directed Acyclic Graphs (or DAGs for short), a term borrowed from graph theory in computer science. Basically, this means tasks run one at a time: a task is scheduled, executed, and when it is done, the next task is scheduled and eventually executed. Tasks can pass along information to each other.

At a high level, our DAG executes the following tasks:

  1. Query the data warehouse.
  2. Geocode locations from the results.
  3. Write the results to a file.
  4. Expose the results to the GitHub Rails app.

We covered the first two steps earlier. For writing the file, we use HDFS, which is a distributed file system that’s part of the Apache Hadoop project. The file is then uploaded to Munger, an internal service we use to expose results from the data science pipeline back to the GitHub Rails app that powers github.com.
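
As a rough illustration only (not GitHub's actual pipeline code), a DAG wiring those four steps together could look something like this; the schedule and the stub callables are assumptions, while the task IDs build_home_page_globe_table and write_to_hdfs are taken from the Airflow UI description below:

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator  # Airflow 2-style import


def query_warehouse():
    """Run the Presto queries that pull recent merged pull requests (details omitted)."""


def geocode_locations():
    """Forward-geocode the user-provided locations with Mapbox (details omitted)."""


def write_results_to_hdfs():
    """Write the combined results to a file in HDFS (details omitted)."""


def upload_to_munger():
    """Upload the file to Munger so the Rails app can read it (details omitted)."""


with DAG(
    dag_id="home_page_globe",
    start_date=datetime(2020, 12, 1),
    schedule_interval="@hourly",  # illustrative; the post only says "throughout the day"
    catchup=False,
) as dag:
    build_table = PythonOperator(task_id="build_home_page_globe_table", python_callable=query_warehouse)
    geocode = PythonOperator(task_id="geocode_locations", python_callable=geocode_locations)
    to_hdfs = PythonOperator(task_id="write_to_hdfs", python_callable=write_results_to_hdfs)
    to_munger = PythonOperator(task_id="upload_to_munger", python_callable=upload_to_munger)

    # Each task is scheduled only after the previous one finishes.
    build_table >> geocode >> to_hdfs >> to_munger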

Here’s what this might look like in the Airflow UI:

Each column in that screenshot represents a full DAG run of all of the tasks. The last column with the light green circle at the top indicates that the DAG is in the middle of a run. It’s completed the build_home_page_globe_table task (represented by a dark green box) and now has the next task write_to_hdfs scheduled (dark blue box).

Our Airflow instance runs more than just this one DAG throughout the day, so we may stay in this state for some time before the scheduler is ready to pick up the write_to_hdfs task. Eventually the remaining tasks should run. If everything ends up running smoothly, we should see all green:

Wrapping up

Hope that gives you a glimpse into how we built this!

Again, thank you to all the teams that made the GitHub homepage and globe possible. This project would not have been possible without years of investment in our data infrastructure and data science capabilities, so a special shout out to Kim, Jeff, Preston, Ike, Scott, Jamison, Rowan, and Omoju.

More importantly, we could not have done it without you, the GitHub community, and your daily contributions and projects that truly bring the globe to life. Stay tuned—we have even more in store for this project coming soon.

In the meantime, I hope to see you on the homepage soon. 😉

How we built the GitHub globe

Post Syndicated from Tobias Ahlin original https://github.blog/2020-12-21-how-we-built-the-github-globe/

GitHub is where the world builds software. More than 56 million developers around the world build and work together on GitHub. With our new homepage, we wanted to show how open source development transcends the borders we’re living in and to tell our product story through the lens of a developer’s journey.

Now that it’s live, we would love to share how we built the homepage-directly from the voices of our designers and developers. In this five-part series, we’ll discuss:

  1. How our globe is built
  2. How we collect and use the data behind the globe
  3. How we made the page fast and performant
  4. How our illustrators work with designers and engineers
  5. How we designed the homepage and wrote the narrative

At Satellite in 2019, our CEO Nat showed off a visualization of open source activity on GitHub over a 30-day span. The sheer volume and global reach were astonishing, and we knew we wanted to build on that story.

 

 The main goals we set out to achieve in the design and development of the globe were:

  • An interconnected community. We explored many different options, but ultimately landed on pull requests. It turned out to be a beautiful visualization of pull requests being opened in one part of the world and closed in another.
  • A showcase of real work happening now. We started by simply showing the pull requests’ arcs and spires, but quickly realized that we needed “proof of life.” The arcs could just as easily be design animations instead of real work. We iterated on ways to provide more detail and found the most resonance with clear hover states that showed the pull request, repo, timestamp, language, and locations. Nat had the idea of making each line clickable, which really upleveled the experience and made it much more immersive. Read more here.
  • Attention to detail and performance. It was extremely important to us that the globe not only looked inspiring and beautiful, but that it performed well on all devices. We went through many, many iterations of refinement, and there’s still more work to be done.

Rendering the globe with WebGL

At the most fundamental level, the globe runs in a WebGL context powered by three.js. We feed it data of recent pull requests that have been created and merged around the world through a JSON file. The scene is made up of five layers: a halo, a globe, the Earth’s regions, blue spikes for open pull requests, and pink arcs for merged pull requests. We don’t use any textures: we point four lights at a sphere, use about 12,000 five-sided circles to render the Earth’s regions, and draw a halo with a simple custom shader on the backside of a sphere.

To draw the Earth’s regions, we start by defining the desired density of circles (this will vary depending on the performance of your machine—more on that later), and loop through longitudes and latitudes in a nested for-loop. We start at the south pole and go upwards, calculate the circumference for each latitude, and distribute circles evenly along that line, wrapping around the sphere:

for (let lat = -90; lat <= 90; lat += 180/rows) {
  const radius = Math.cos(Math.abs(lat) * DEG2RAD) * GLOBE_RADIUS;
  const circumference = radius * Math.PI * 2;
  const dotsForLat = circumference * dotDensity;
  for (let x = 0; x < dotsForLat; x++) {
    const long = -180 + x*360/dotsForLat;
    if (!this.visibilityForCoordinate(long, lat)) continue;

    // Setup and save circle matrix data
  }
}

To determine if a circle should be visible or not (is it water or land?) we load a small PNG containing a map of the world, parse its image data through canvas’s context.getImageData(), and map each circle to a pixel on the map through the visibilityForCoordinate(long, lat) method. If that pixel’s alpha is at least 90 (out of 255), we draw the circle; if not, we skip to the next one.

After collecting all the data we need to visualize the Earth’s regions through these small circles, we create an instance of CircleBufferGeometry and use an InstancedMesh to render all the geometry.

Making sure that you can see your own location

As you enter the new GitHub homepage, we want to make sure that you can see your own location as the globe appears, which means that we need to figure out where on Earth you are. We wanted to achieve this effect without delaying the first render behind an IP look-up, so we set the globe’s starting angle to center over Greenwich, look at your device’s timezone offset, and convert that offset to a rotation around the globe’s own axis (in radians):

const date = new Date();
const timeZoneOffset = date.getTimezoneOffset() || 0;
const timeZoneMaxOffset = 60*12;
rotationOffset.y = ROTATION_OFFSET.y + Math.PI * (timeZoneOffset / timeZoneMaxOffset);

It’s not an exact measurement of your location, but it’s quick, and does the job.

Visualizing pull requests

The main act of the globe is, of course, visualizing all of the pull requests that are being opened and merged around the world. The data engineering that makes this possible is a different topic in and of itself, and we’ll be sharing how we make that happen in an upcoming post. Here we want to give you an overview of how we’re visualizing all your pull requests.

 

Let’s focus on pull requests being merged (the pink arcs), as they are a bit more interesting. Every merged pull request entry comes with two locations: where it was opened, and where it was merged. We map these locations to our globe, and draw a bezier curve between these two locations:

const curve = new CubicBezierCurve3(startLocation, ctrl1, ctrl2, endLocation);

We have three different orbits for these curves, and the farther apart the two points are, the farther out into space we pull the arc. We then use instances of TubeBufferGeometry to generate geometry along these paths, so that we can use setDrawRange() to animate the lines as they appear and disappear.

As each line animates in and reaches its merge location, we generate and animate in one solid circle that stays put while the line is present, and one ring that scales up and immediately fades out. The ease-out easings for these animations are created by multiplying a speed (here 0.06) by the difference between the target (1) and the current value (animated.dot.scale.x), and adding that to the existing scale value. In other words, for every frame we step 6% closer to the target, and as we come closer to that target, the animation naturally slows down.

// The solid circle
const scale = animated.dot.scale.x + (1 - animated.dot.scale.x) * 0.06;
animated.dot.scale.set(scale, scale, 1);

// The landing effect that fades out
const scaleUpFade = animated.dotFade.scale.x + (1 - animated.dotFade.scale.x) * 0.06;
animated.dotFade.scale.set(scaleUpFade, scaleUpFade, 1);
animated.dotFade.material.opacity = 1 - scaleUpFade;

Creative constraints from performance optimizations

The homepage and the globe need to perform well on a variety of devices and platforms, which early on created some creative restrictions for us and made us focus extensively on creating a well-optimized page. Although some modern computers and tablets could render the globe at 60 FPS with antialias turned on, that’s not the case for all devices, and we decided early on to leave antialias turned off and optimize for performance. This left us with a sharp and pixelated line running along the top left edge of the globe, as the globe’s highlighted edge met the darker color of the background:

This encouraged us to explore a halo effect that could hide that pixelated edge. We created one by using a custom shader to draw a gradient on the backside of a sphere that’s slightly larger than the globe, placed it behind the globe, and tilted it slightly on its side to emphasize the effect in the top left corner:

const halo = new Mesh(haloGeometry, haloMaterial);
halo.scale.multiplyScalar(1.15);
halo.rotateX(Math.PI*0.03);
halo.rotateY(Math.PI*0.03);
this.haloContainer.add(halo);

This smoothed out the sharp edge, while being a much more performant operation than turning on antialias. Unfortunately, leaving antialias off also produced a fairly prominent moiré effect as all the circles making up the world came closer and closer to each other as they neared the edges of the globe. We reduced this effect and simulated the look of a thicker atmosphere by using a fragment shader for the circles where each circle’s alpha is a function of its distance from the camera, fading out every individual circle as it moves further away:

if (gl_FragCoord.z > fadeThreshold) {
  gl_FragColor.a = 1.0 + (fadeThreshold - gl_FragCoord.z ) * alphaFallOff;
}

Improving perceived speed

We don’t know how quickly (or slowly) the globe is going to load on a particular device, but we wanted to make sure that the header composition on the homepage is always balanced, and that you got the impression that the globe loads quickly even if there’s a slight delay before we can render the first frame.

We created a bare version of the globe using only gradients in Figma and exported it as an SVG. Embedding this SVG in the HTML document adds little overhead, but makes sure that something is immediately visible as the page loads. As soon as we’re ready to render the first frame of the globe, we transition between the SVG and the canvas element by crossfading between and scaling up both elements using the Web Animations API. Using the Web Animations API enables us to not touch the DOM at all during the transition, ensuring that it’s as stutter-free as possible.

const keyframesIn = [
      { opacity: 0, transform: 'scale(0.8)' },
      { opacity: 1, transform: 'scale(1)' }
    ];
const keyframesOut = [
      { opacity: 1, transform: 'scale(0.8)' },
      { opacity: 0, transform: 'scale(1)' }
    ];
const options = { fill: 'both', duration: 600, easing: 'ease' };

this.renderer.domElement.animate(keyframesIn, options);
const placeHolderAnim = placeholder.animate(keyframesOut, options);
placeHolderAnim.addEventListener('finish', () => {
  placeholder.remove();
});

Graceful degradation with quality tiers

We aim at maintaining 60 FPS while rendering an as beautiful globe as we can, but finding that balance is tricky—there are thousands of devices out there, all performing differently depending on the browser they’re running and their mood. We constantly monitor the achieved FPS, and if we fail to maintain 55.5 FPS over the last 50 frames we start to degrade the quality of the scene.

 

There are four quality tiers, and for every degradation we reduce the amount of expensive calculations. This includes reducing the pixel density, how often we raycast (figure out what your cursor is hovering over in the scene), and the amount of geometry that’s drawn on screen—which brings us back to the circles that make up the Earth’s regions. As we traverse down the quality tiers, we reduce the desired circle density and rebuild the Earth’s regions, here going from the original ~12,000 circles to ~8,000:

// Reduce pixel density to 1.5 (down from 2.0)
this.renderer.setPixelRatio(Math.min(AppProps.pixelRatio, 1.5));
// Reduce the amount of PRs visualized at any given time
this.indexIncrementSpeed = VISIBLE_INCREMENT_SPEED / 3 * 2;
// Raycast less often (wait for 4 additional frames)
this.raycastTrigger = RAYCAST_TRIGGER + 4;
// Draw less geometry for the Earth’s regions
this.worldDotDensity = WORLD_DOT_DENSITY * 0.65;
// Remove the world
this.resetWorldMap();
// Generate world anew from new settings
this.buildWorldGeometry();

A small part of a wide-ranging effort

These are some of the techniques that we use to render the globe, but the creation of the globe and the new homepage is part of a longer story, spanning multiple teams, disciplines, and departments, including design, brand, engineering, product, and communications. We’ll continue the deep-dive in this 5-part series, so come back soon or follow us on Twitter @GitHub for all the latest updates on this project and more.

Next up: how we collect and use the data behind the globe.

In the meantime, don’t miss out on the new GitHub globe wallpapers from the GitHub Illustration Team to enjoy the globe from your desktop or mobile device:


Love the new GitHub homepage or any of the work you see here? Join our team

Optimizing data warehouse storage

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/optimizing-data-warehouse-storage-7b94a48fdcbe

By Anupom Syam

Background

At Netflix, our current data warehouse contains hundreds of Petabytes of data stored in AWS S3, and each day we ingest and create additional Petabytes. At this scale, we can gain a significant amount of performance and cost benefits by optimizing the storage layout (records, objects, partitions) as the data lands into our warehouse.

Such optimizations bring several benefits: savings on storage, faster query times, cheaper downstream processing, and an increase in developer productivity by removing additional ETLs written only for query performance improvement. On the other hand, these optimizations themselves need to be sufficiently inexpensive for the gains they bring to justify their own processing cost.

We built AutoOptimize to efficiently and transparently optimize the data and metadata storage layout while maximizing their cost and performance benefits.

This article will list some of the use cases of AutoOptimize, discuss the design principles that help enhance efficiency, and present the high-level architecture. Then deep dive into the merging use case of AutoOptimize and share some results and benefits.

Use cases

We found several use cases where a system like AutoOptimize can bring tons of value. Some of the optimizations are prerequisites for a high-performance data warehouse. Sometimes Data Engineers write downstream ETLs on ingested data to optimize the data/metadata layouts and make other ETL processes cheaper and faster. The goal of AutoOptimize is to centralize such optimizations, removing duplicate work while doing it more efficiently than vanilla ETLs.

Merge

As the data lands into the data warehouse through real-time data ingestion systems, it comes in different sizes. This results in a perpetually increasing number of small files across the partitions. Merging those numerous smaller files into a handful of larger files can make query processing faster and reduce storage space.

Sort

Presorted records and files in partitions make queries faster and save significant amounts of storage space, as they enable a higher level of compression. We already had some existing tables with sorting stages to reduce table storage and improve downstream query performance.

Compaction

Modern data warehouses allow updating and deleting pre-existing records. Iceberg plans to enable this in the form of delta files. Over time, the number of delta files grows, and compacting them to their source files can make the read operations more optimal.

Metadata optimization

In Iceberg, the physical partitioning is decoupled from logical partitioning by keeping a map to file locations in the metadata. This enables us to add additional indexes in the metadata to make point queries more optimal. We can also reorganize the metadata to make file scanning much faster.

Design Principles

For AutoOptimize to efficiently optimize the data layout, we’ve made the following choices:

  1. Just in time vs. periodic optimization
    Only optimize a given data set when required (based on what changed) instead of blind periodic runs.
  2. Essential vs. complete optimization
    Allow users to optimize at the point of diminishing returns instead of a binary setting. For example, we allow a partition to have a few small files instead of always merging files in perfect sizes.
  3. Minimum replacement vs. full overwrite
    Only replace the required minimum amount of files instead of a full sweep overwrite.

These principles reduce resource usage by being more efficient and effective while lowering the end-to-end latency in data processing.

Other than these principles, there are some other design considerations to support and enable:

  • Multi-tenancy with database and table prioritization.
  • Both automatic (event-driven) as well as manual (ad-hoc) optimization.
  • Transparency to end-users.

High-Level Design

AutoOptimize High-Level Design

AutoOptimize is split into 2 subsystems (Service and Actors) to decouple the decisions from the actions at a high level. This decoupling of responsibilities helps us to design, manage, use, and scale the subsystems independently.

AutoOptimize Service

The service is the decision-maker. It decides what to do and when to do it in response to an incoming event. It is responsible for listening to incoming events and requests and prioritizing different tables and actions to make the best use of the available resources.

The work done in the service can be further broken down into the following 3 steps:

Observe: Listen to changes in the warehouse in near real-time. Also, respond to ad-hoc requests created manually by end-users.

Orient: Gather tuning parameters for a particular table that changed. Also, adjust the resource allocation for the table or the number of actors depending on the backlog.

Decide: Determine the highest value action with the right parameters for this particular change and when to act depending on how the action falls in the global priority across all tables and actions.

In AutoOptimize, the service is a cluster of Java (Spring Boot) applications using Redis to keep the states.

AutoOptimize Actors

Actors in AutoOptimize are responsible for the actual work (merging/sorting/compaction etc.). The AutoOptimize Service sends commands to the actors that specify what to do. The job of Actors is to perform those commands in a distributed and fault-tolerant manner.

Actors in AutoOptimize are a pool of long-running Spark jobs managed by the AutoOptimize service.

This was not intentional but we found that the way we modularized AutoOptimize’s decision-making workflow is very similar to the OODA loop and decided to use the same taxonomy.

Other Components

Iceberg
We use Apache Iceberg as the table format. AutoOptimize relies on some of the Iceberg specific features such as snapshot and atomic operations to perform the optimizations in an accurate and scalable manner.

AutoAnalyze
In short, AutoAnalyze finds the best tuning/configuration parameters for a table. It uses “What-If” experiments and previous experiences and heuristics to find the most fitting attributes for a table. We will publish a follow-up blog post about AutoAnalyze in the future. For AutoOptimize, it may find if a table needs file merging or suggest a target file size and other parameters.

Deep Dive into File Merge

File merge is the first use case that we built for AutoOptimize. Previously we had our homegrown system called Ursula responsible for data ingestion into the Hive-based warehouse. The Ursula-based pipeline also performed file merges on the ingested table partitions periodically. Since then, we have moved our ingestion to Keystone and our table layout to Iceberg.

The migration out of Ursula to Keystone/Iceberg-based ingestion created the need for a replacement for Ursula file merge. File merging is necessary for a low-latency streaming ingestion pipeline, as data often arrives late and unevenly. The number of small files creeps up across partitions over time and can have some serious side effects, such as:

  1. Slower queries.
  2. Higher processing resource usage.
  3. Increased storage space.

The goal of File merge in AutoOptimize is to efficiently reduce the side effects while not adding additional latency to the data pipeline.

Solutions

This section will discuss some of the solutions that helped us achieve the previously stated goals.

Just in time optimization

AutoOptimize file merge gets triggered via table change events, which allows AutoOptimize to act right away with minimal lag. But the problem with being event-driven is that it’s expensive to scan the changed partitions every time they change. If we can determine “how noisy” a partition is from the changesets in a rolling manner, we can eliminate unnecessary full partition scans using early signals from snapshots.

Essential work

After a full partition scan, AutoOptimize gets a more comprehensive and accurate view of the state of the partition, which lets it avoid non-essential work.

Partition Entropy
We introduced a concept called Partition Entropy (PE) used for early pruning at each step to reduce actual work. It’s a set of stats about the state of the partition. We calculate this in a rolling manner after each snapshot scan and more exhaustively after each partition scan.

The parts of PE that deal with file sizes are called File Size Entropy (FSE). FSE of a partition is derived from the Mean Squared Error (MSE) of file sizes in a partition. We will use the terms FSE and MSE interchangeably.

We use the standard Mean Squared Error formula:
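
MSE = \frac{1}{N}\sum_{i=1}^{N}\left(\mathrm{Target} - \mathrm{Actual}_i\right)^2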

Where,

N = Number of files in the partition
Target = Target File Size
Actual = min(Actual File Size, Target)

When a partition is scanned, it’s easy to calculate the MSE using the above formula as we know the sizes of all files in that partition. We store the MSE and N for each partition in Redis for later use.

At the snapshot scan stage, we get a commit definition containing the list of files and their metadata (like size, number of records, etc.) that got added and deleted in the commit. We calculate the new MSE’ of a changed partition in a rolling manner from the snapshot information and the previously stored stats using this formula:
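
MSE' = \frac{N \cdot MSE + \sum_{i=1}^{M}\left(\mathrm{Target} - \mathrm{Actual}_i\right)^2}{N + M}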

Where,

M = Number of files added in the snapshot.
Target = Target File Size.
Actual = min(Actual File Size, Target)
N = Previously stored number of files in the partition.
MSE = Previously stored MSE.

We have a tolerance threshold (T) for each partition and skip further processing of the partition if MSE < T². This helps us significantly reduce the number of full partition scans at the snapshot scan step and the number of actual merges in the partition scan stage.

Entropy-Based Filtering

The actual formulas are a little more complicated than what is stated here, as we need to take care of deleted files and some other edge cases. We could also use Mean Absolute Error, but we want to be biased towards outliers: the goal is to have more even file sizes in a partition rather than a mixed bag of different sizes with a few perfectly sized files.
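
As an illustration only (this is not AutoOptimize's code), the rolling update and the T² check could be sketched like this:

def rolling_fse(prev_mse, prev_n, added_sizes, target):
    """Update File Size Entropy (the MSE of file sizes) from a snapshot's added files."""
    added_error = sum((target - min(size, target)) ** 2 for size in added_sizes)
    new_n = prev_n + len(added_sizes)
    new_mse = (prev_mse * prev_n + added_error) / max(new_n, 1)
    return new_mse, new_n


def needs_processing(mse, tolerance):
    """Skip further work when the partition is already even enough (MSE < T^2)."""
    return mse >= tolerance ** 2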

Minimum replacement

Once we start processing a partition, we find the minimum amount of work needed to reduce the File Size Entropy and thus reduce the number of small files.

We use 2 different packing algorithms to achieve this:

Knuth/Plass line breaking algorithm
We use this strategy when the sort order among files is important. With a correct error function (for example, Error²), this algorithm helps minimize MSE with an O(n²) bound on run time.

First Fit Decreasing bin packing algorithm
We use a modified version of the original FFD algorithm when we can ignore the sort order. This helps reduce the number of replacements, with an O(n log n) running time.

These methods help us smooth out the file size histogram while doing it optimally with minimal file replacement.
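
For intuition, here is the unmodified, textbook first-fit decreasing pass (AutoOptimize uses a modified version); it groups files into target-sized merge bins, largest files first:

def first_fit_decreasing(file_sizes, target):
    """Group files into bins of at most `target` bytes, placing the largest files first."""
    bins = []        # each bin is a list of file sizes to merge together
    free_space = []  # free capacity remaining in each bin
    for size in sorted(file_sizes, reverse=True):
        for i, free in enumerate(free_space):
            if size <= free:             # first existing bin the file fits into
                bins[i].append(size)
                free_space[i] -= size
                break
        else:                            # no bin fits, open a new one
            bins.append([size])
            free_space.append(max(target - size, 0))
    return bins


# Example: pack small files toward a target of 128 (think MB)
print(first_fit_decreasing([90, 70, 40, 30, 20, 10], target=128))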

Multi-tenancy

AutoOptimize is multi-tenant; that is, it runs on many different databases and tables. When running the optimizations, it also needs to prioritize and allocate resources at different levels for different tasks. It requires answering questions like: which table should be processed first, which should get more resource bandwidth, and which optimization gives the most ROI?

To support multi-tenancy and tasks prioritization, it needs to have the following properties:

  • Weighted resource sharing across different priorities.
  • Fair resource sharing across different tables and tasks with the same priority.
  • Handle bursts to prevent starvation.

We use different types of Weighted Fair Queue implementations inside AutoOptimize, including different combinations of the followings:

  1. Weighted Round Robin
  2. Deficit Weighted Round Robin
  3. Fixed Priority Preemptive

Reliable Priority Queue
To support prioritization and fair resource usage, we introduced a concept called Reliable Priority Queue (RPQ) in AutoOptimize. A reliable queue does not lose items if the subscriber fails to process the items after a dequeue. An RPQ also has a sense of prioritization across different items while being reliable. The concept is fairly similar to the default Redis RPOPLPUSH reliable queue pattern. But for AutoOptimize’s use case, we use Sorted Sets instead of lists to enable prioritization.
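
A stripped-down sketch of the idea (not the actual AutoOptimize implementation; a production version would make the pop-and-park step atomic, for example with a Lua script):

import time
from typing import Optional

import redis

r = redis.Redis()


def enqueue(queue: str, item: str, priority: float) -> None:
    # Lower score = higher priority in this sketch.
    r.zadd(queue, {item: priority})


def dequeue_reliably(queue: str, processing: str) -> Optional[bytes]:
    """Pop the highest-priority item and park it in a processing set until it is acked."""
    popped = r.zpopmin(queue, count=1)
    if not popped:
        return None
    item, _score = popped[0]
    r.zadd(processing, {item: time.time()})  # a reaper can requeue stale items from here
    return item


def ack(processing: str, item: bytes) -> None:
    r.zrem(processing, item)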

The goal of AutoOptimize is to optimize the warehouse with a holistic perspective. Making it multi-tenant with a notion of different priorities helps us make the most optimal resource allocation.

Results

  • 22% reduction in partition scans
  • 2% reduction in merge actions
  • 72% reduction in file replacements

These savings are stacked on top of each other as they are applied in sequence in the AutoOptimize pipeline. This results in a massive reduction in actual processing need while reducing the number of files by 80%.

  • 80% reduction in the number of files
  • 70% saving in compute: we are using 70% fewer compute instances than our previous merge implementation.

We also see up to 60% improvement in query performance and an additional 1% saving in storage.

Benefits

Increase processing efficiency: As AutoOptimize uses file replacement and can avoid processing by filtering early, it can save processing costs by skipping files that are not required to be merged.

Increase storage efficiency: AutoOptimize helps save storage costs by enabling AutoAnalyze recommendations to sort the records.

Reduce lag: Periodic overwrite ETLs take more time because they work in batches. AutoOptimize reduces end-to-end lag in data processing by optimizing as we go.

Faster queries: A smaller number of files results in less file scanning and fewer network calls, and makes queries faster.

Ease of use: AutoOptimize provides a frictionless way to set up optimization with minimal maintenance overhead for Data Engineering.

Developer productivity: Instead of adding an ETL per table for merging, which adds ongoing incremental maintenance cost, we have a single solution that can transparently scale to many tables.

Conclusion

We believe the problems we faced at Netflix are not unique, and some of the techniques and design considerations we made can be applied more generally. By laying out the data intelligently as they are ingested into the warehouse, we are removing complexities for Data Engineers and accelerating the end-to-end pipeline. At the same time, we are gaining a significant amount of performance and cost improvement by optimizing only when it makes sense. We plan to extend AutoOptimize into other use cases and integrate it more with the Iceberg ecosystem in the future.


Optimizing data warehouse storage was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Security updates for Monday

Post Syndicated from original https://lwn.net/Articles/840972/rss

Security updates have been issued by Debian (curl, influxdb, lxml, node-ini, php-pear, and postsrsd), Fedora (chromium, curl, firefox, matrix-synapse, mingw-jasper, phpldapadmin, and thunderbird), Mageia (openjpeg2), openSUSE (gcc7, openssh, PackageKit, python-urllib3, slurm_18_08, and webkit2gtk3), Oracle (fapolicyd, firefox, nginx:1.16, nodejs:12, and thunderbird), Red Hat (libpq, openssl, and thunderbird), and SUSE (curl, firefox, openssh, ovmf, slurm_17_11, slurm_18_08, slurm_20_02, and xen).

Cellebrite Can Break Signal

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/cellebrite-can-break-signal.html

Cellebrite announced that it can break Signal. (Note that the company has heavily edited its blog post, but the original — with lots of technical details — was saved by the Wayback Machine.)

News article. Slashdot post.

The whole story is puzzling. Cellebrite’s details will make it easier for the Signal developers to patch the vulnerability. So either Cellebrite believes it is so good that it can break whatever Signal does, or the original blog post was a mistake.

EDITED TO ADD (12/22): Signal’s Moxie Marlinspike takes serious issue with Cellebrite’s announcement. I have urged him to write it up, and will link to it when he does.

EDITED TO ADD (12/23): I need to apologize for this post. I finally got the chance to read all of this more carefully, and it seems that all Cellebrite is doing is reading the texts off of a phone they can already access. So this has nothing to do with Signal at all. So: never mind. False alarm. Apologies, again.

Configure identity-based policies in Cloudflare Gateway

Post Syndicated from Pete Zimmerman original https://blog.cloudflare.com/configure-identity-based-policies-in-cloudflare-gateway/

During Zero Trust Week in October, we released HTTP filtering in Cloudflare Gateway, which expands protection beyond DNS threats to threats at the HTTP layer as well. With this feature, Cloudflare WARP proxies all Internet traffic from an enrolled device to a data center in our network. Once there, Cloudflare Gateway enforces organization-wide rules to prevent data loss and protect team members.

However, rules are not one-size-fits-all. Corporate policies can vary between groups or even single users. For example, we heard from customers who want to stop users from uploading files to cloud storage services except for a specific department that works with partners. Beyond filtering, security teams asked for the ability to audit logs on a user-specific basis. If a user account was compromised, they needed to know what happened during that incident.

We’re excited to announce the ability for administrators to create policies based on a user’s identity and correlate that identity to activity in the Gateway HTTP logs. Your team can reuse the same identity provider integration configured in Cloudflare Access and start building policies tailored to your organization today.

Fine-grained rule enforcement

Until today, organizations could protect their users’ Internet-bound traffic by configuring DNS and HTTP policies that applied to every user. While that makes it simple to configure policies to enforce content restrictions and mitigate security threats, any IT administrator knows that for every policy there’s an exception to that policy.

For example, a corporate content policy might restrict users from accessing social media, which is not ideal for a marketing team that needs to manage digital marketing campaigns. Administrators can now configure a rule in Gateway to ensure a marketing team can always reach social media from their corporate devices.

To meet corporate policy requirements for the rest of the organization, the administrator can then build a second rule to block all social media. They can drag-and-drop that rule below the marketing team’s rule, giving it a lower precedence so that anyone not in marketing will instead be evaluated against this policy.
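
Conceptually, this is first-match evaluation over a precedence-ordered rule list. The generic sketch below (plain Python with made-up group names and a made-up content category; it is not Cloudflare's API) shows why the marketing allow rule must sit above the broader block rule:

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str
    precedence: int                     # lower number = evaluated first
    matches: Callable[[Dict], bool]     # request -> bool
    action: str                         # "allow", "block", or "isolate"

def evaluate(rules: List[Rule], request: Dict) -> str:
    # The first matching rule in precedence order wins.
    for rule in sorted(rules, key=lambda r: r.precedence):
        if rule.matches(request):
            return rule.action
    return "allow"                      # illustrative default when nothing matches

rules = [
    Rule("Marketing can reach social media", 10,
         lambda req: "Marketing Team" in req["groups"]
         and req["category"] == "social-media", "allow"),
    Rule("Block social media for everyone else", 20,
         lambda req: req["category"] == "social-media", "block"),
]

print(evaluate(rules, {"groups": ["Marketing Team"], "category": "social-media"}))  # allow
print(evaluate(rules, {"groups": ["Finance"], "category": "social-media"}))         # block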

Identity integration and filtering options

Cloudflare Gateway leverages the integration between your chosen identity provider (IdP) and Cloudflare Access to add identity to rules and logs. Customers can integrate one or more providers at the same time, including corporate providers like Okta and Azure AD, as well as public providers like GitHub and LinkedIn.

When users first launch the WARP client, they will be prompted to authenticate with one of the configured providers. Once they have logged in, Cloudflare Gateway can send their traffic through your organization's policies and attribute each connection to the user's identity.

Depending on what your IdP supports, you can create rules based on the following attributes:

Attribute          Example
User Name          John Doe
User Email         [email protected]
User Group Name*   Marketing Team
User Group Email*  [email protected]
User Group ID      1234

*Note: some IdPs use group email in place of a group name

Cloudflare Gateway gives teams the ability to create fine-grained rules that meet the real needs of IT administrators. But policy enforcement is only one side of the equation — protecting users and preventing corporate data loss requires visibility into Internet traffic across an organization, for auditing compliance or security incident investigations.

User-level visibility in activity logs

In addition to creating identity-based rules, IT administrators can use the Gateway activity logs to filter the HTTP traffic logs by specific users and device IDs. This is critical in scenarios of varying severity: at one end, an administrator can identify users attempting to bypass content security policies; at the other, they can identify users or devices that may be compromised.

Securing your team from Internet threats requires IT or security administrators to keep pace with evolving attackers and, just as importantly, maintain full visibility on what’s happening to your users and data. Cloudflare Gateway now allows you to do both, so your team can get back to what matters.

One more thing

At the end of Zero Trust Week, we announced our Cloudflare Isolated Browser to protect organizations from Internet threats unknown to threat intelligence (i.e., zero-day attacks). By integrating with Gateway, organizations can use the Remote Browser to provide higher levels of security to individual users who might be targets of spear phishing campaigns.

For example, consider an employee in the finance department who interfaces with systems handling procurements or fund disbursement. A security team might consider preventing this employee from accessing the public Internet with their native browser and forcing that traffic into an isolated remote browser. Any traffic destined to internal systems would use the native browser. To create this policy, an administrator could create two rules: a higher-precedence rule that isolates the finance team's public Internet traffic in the remote browser, and a rule that lets traffic to internal systems use the native browser.

While other Gateway rules protect you from known threats, the isolate rule can help guard against everything else. Your team can build rules that isolate traffic based on identity or content without requiring the user to switch between browsers or client applications.

Cloudflare Browser Isolation is available in private beta today; you can sign up to join the wait list here.

What’s next?

We’re excited to bring customers with us on our journey to providing a full Secure Web Gateway with features such as network-level rules, in-line anti-virus scanning, and data loss prevention. This feature is available to any Gateway Standard or Teams customer at no additional cost. We plan to extend these capabilities from individual remote users to branch offices and data centers.

Our goal is dead-simple integration and configuration of products that secure your users and data, so you can focus on bringing your own products into the world — we’re thrilled to help you do that. Follow this link to get started.

The Stargate | The MagPi 101

Post Syndicated from Rob Zwetsloot original https://www.raspberrypi.org/blog/the-stargate-the-magpi-101/

Fans of the Stargate SG-1 series, prepare to be inspired: a fellow aficionado has fashioned his own model of the show’s iconic portal. Nicola King takes an interstellar trip in the latest issue of The MagPi Magazine.

A mini version of the Stargate from TV sat on a table. Blue glowing light emits from the fake tunnel

When Kristian Tysse began making some projects on his new 3D printer, he soon became aware that the possibility of printing his own ‘working’ Stargate SG-1 model was within his grasp at last. “I suddenly realised I might now have enough knowledge about 3D printing, Raspberry Pi, motors, and programming to actually make a Stargate model of my own,” he tells us. “I wanted people who are familiar with the show to immediately know what it was, and tried to make it work as best I could, while staying as true as possible to the feeling and essence of the TV show.”

Raspberry Pi buried in the wires powering the mini stargate

Kristian also wanted to use a Raspberry Pi within this fully interactive, light-up, moving-parts project as “it is a powerful device with lots of flexibility. I do like that it functions as a full computer with an operating system with all the possibility that brings.”

Model minutiae

The back of the stargate controller with no lights on

You only have to look at the model to see just how much 3D printing was needed to get all of the parts ready to piece together, and Kristian created it in segments. But one of the key parts of his model is the DHD or Dial Home Device which viewers of the series will be familiar with. “The DHD functions as a USB keyboard and, when the keys are used, it sends signals to the (Python) program on Raspberry Pi that engages the different motors and lights in a proper Stargate way,” he enthuses. “If a correct set of keys/symbols are pressed on the DHD, the wormhole is established – illustrated on my Stargate with an infinity mirror effect.” 
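
To make the flow concrete, here is a minimal sketch of how such a dialing program might hang together. It is not Kristian's code: the GPIO pins, key-to-symbol mapping, and seven-symbol address are invented, and it reads one symbol per line for simplicity.

from gpiozero import LED

CHEVRON_PINS = [5, 6, 13, 19, 26, 20, 21]      # assumption: one GPIO per chevron light
chevrons = [LED(pin) for pin in CHEVRON_PINS]
ADDRESS = ["g", "h", "j", "k", "l", "n", "m"]  # hypothetical 7-symbol gate address

def dial() -> None:
    dialed = []
    while len(dialed) < len(ADDRESS):
        # The DHD appears as a USB keyboard, so key presses arrive as ordinary input.
        key = input("Press a DHD symbol key: ").strip().lower()
        dialed.append(key)
        if dialed == ADDRESS[: len(dialed)]:
            chevrons[len(dialed) - 1].on()     # lock the next chevron
        else:
            print("Wrong symbol, resetting gate")
            for led in chevrons:
                led.off()
            dialed.clear()
    print("Chevron seven locked: wormhole established")

if __name__ == "__main__":
    dial()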

“I wanted people who are familiar with the show to immediately know what it was”

Kristian Tysse

However, the DHD was a challenge, and Kristian is still tweaking it to improve how it works. He admits that writing the software for the project was also tricky, “but when I think back, the most challenging part was actually making it ‘functional’, and fitting all the wires and motors on it without destroying the look and shape of the Stargate itself.”

Dazzling detail

A close up of the stargate control panel with glowing orange touch buttons

Kristian admits to using a little artistic licence along the way, but he is keen to ensure the model replicates the original as far as possible. “I have taken a few liberties here and there. People on the social media channels are quick to point out differences between my Stargate and the one in the series. I have listened to most of those and done some changes. I will implement some more of those changes as the project continues,” he says. He also had to redesign the project several times, and had a number of challenges to overcome, especially in creating the seven lit, moving chevrons: “I tried many different approaches before I landed on the right one.”

The results of Kristian’s time-intensive labours are truly impressive, and show what you can achieve when you are willing to put in the hours and the attention to detail. Take a look at Kristian’s extremely detailed project page to see more on this super-stellar make.

Issue #101 of The MagPi Magazine out NOW

The front cover of the magazine featuring Raspberry Pi 400

Never want to miss an issue? Subscribe to The MagPi and we’ll deliver every issue straight to your door. Also, if you’re a new subscriber and get the 12-month subscription, you’ll get a completely free Raspberry Pi Zero bundle with a Raspberry Pi Zero W and accessories.

The post The Stargate | The MagPi 101 appeared first on Raspberry Pi.
