The Week in „Тоест“ (14–18 December)

Post Syndicated from Тоест original https://toest.bg/editorial-14-18-december-2020/

This is our last issue for this very peculiar 2020. We are sure that many cannot wait for it to pass and be forgotten, although it is probably better to remember it and draw the necessary conclusions.

And although we know that around Christmas and New Year the demands on your wallets grow sharply, we would like to remind you that we rely solely on your support to keep seeking out for you the essence, the meaning, and the lessons of what is happening in society. In this year, difficult for so many, we lost some of our donors, and we fully understand that it could not have been otherwise. But we thank everyone – both those who stepped back and those who continued their support. We also thank our newest donors, who decided that right now is the moment when it matters to back our efforts. We truly appreciate it!


Емилия Милчева

Let us see the year off with Емилия Милчева's overview of the most important domestic political events by which we will remember 2020. Her three words of the year are coronavirus, protests, and Божков, but around them cluster many subtopics, related or not, which are worth keeping in mind for 2021 – a year in which we will head to the polls at least twice.


Светла Енчева

In early October we wrote about the Interior Ministry (МВР) trainings on the „prevention of aggressive manifestations in society“, which boil down to... counting radicalised Roma. Светла Енчева continues the topic with the mental loop-the-loops involved in the search for radicalisation among Roma, analysing a study and a handbook by the agency „Тренд“ that overflow with errors, flawed approaches, and misleading interpretations.


Йоанна Елми

With this issue we are launching a brand-new column, „Училище на ХХI век“ (The School of the 21st Century), devoted to education and run jointly with „Заедно в час“ (Teach for Bulgaria). The column is led by Йоанна Елми, and her first piece focuses on the problems and solutions in teaching children whose mother tongue is not Bulgarian. Read more in „Езикът като дом“ (Language as Home).


The architectural year 2020 was marked by boredom, 15-minute cities, and the new normality, according to architect Анета Василева. In her view, the year brought one big pause, yet the two decades of the new century that have already passed do not look charged with any particular ambition for change and revolution in architecture either, unlike the first 20 years of the 20th century.


Марин Бодаков

The foreign policy tension around relations with North Macedonia remains on the agenda, but Марин Бодаков suggests that we look for common ground in literature and poetry. His interlocutor this week is the Macedonian poet, translator, and critic Владимир Мартиновски. From the interview you will also learn some curious things – for example, what the difference is between a translation and a poetic adaptation (препев) – and how little we know about the literature of our neighbours, with whom we otherwise claim to speak one and the same language.


And finally... Марин's last recommendations for this year of good books recently published in Bulgaria, in his column „По буквите“. This time they are the poetry collections „Кухата сърцевина на живота“ by Prof. Цочо Бояджиев and „Колсхил“ by Фиона Сампсън, as well as the novella „Спомени за коне“ by Йордан Радичков. With Christmas approaching, let us recall that Bulgarian publishers are among those experiencing the greatest difficulties in the current pandemic situation, and a good book makes a wonderful gift for any holiday – or even without an occasion. Give more books as we await a new, bright 2021, filled with optimism about the future!

We wholeheartedly wish you cosy holidays and good health!

P.S. Look out for our first issue of the new year on Saturday, 9 January.

„Тоест“ relies solely on the financial support of its readers.

[$] 5.11 Merge window, part 1

Post Syndicated from original https://lwn.net/Articles/840129/rss

When Linus Torvalds released the 5.10 kernel, he noted that the 5.11 merge window would run up against the holidays. He indicated strongly that maintainers should send him pull requests early as a result. Maintainers appear to have listened; over 10,000 non-merge changesets were pulled into the mainline in the first three days of the 5.11 merge window. Read on for a summary of the most significant changes in that flood of patches.

Mythbusting the Analytics Journey

Post Syndicated from Netflix Technology Blog original https://netflixtechblog.com/mythbusting-the-analytics-journey-58d692ea707e

Part of our series on who works in Analytics at Netflix — and what the role entails

by Alex Diamond

This Q&A aims to mythbust some common misconceptions about succeeding in analytics at a big tech company.

This isn’t your typical recruiting story. I wasn’t actively looking for a new job and Netflix was the only place I applied. I didn’t know anyone who worked there and just submitted my resume through the Jobs page 🤷🏼‍♀️ . I wasn’t even entirely sure what the right role fit would be and originally applied for a different position, before being redirected to the Analytics Engineer role. So if you find yourself in a similar situation, don’t be discouraged!

How did you come to Netflix?

Movies and TV have always been one of my primary sources of joy. I distinctly remember being a teenager, perching my laptop on the edge of the kitchen table to “borrow” my neighbor’s WiFi (back in the days before passwords 👵🏻), and streaming my favorite Netflix show. I felt a little bit of ✨magic✨ come through the screen each time, and that always stuck with me. So when I saw the opportunity to actually contribute in some way to making the content I loved, I jumped at it. Working in Studio Data Science & Engineering (“Studio DSE”) was basically a dream come true.

Not only did I find the subject matter interesting, but the Netflix culture seemed to align with how I do my best work. I liked the idea of Freedom and Responsibility, especially if it meant having autonomy to execute projects all the way from inception through completion. Another major point of interest for me was working with “stunning colleagues”, from whom I could continue to learn and grow.

What was your path to working with data?

My road-to-data was more of a stumbling-into-data. I went to an alternative high school for at-risk students and had major gaps in my formal education — not exactly a head start. I then enrolled at a local public college at 16. When it was time to pick a major, I was struggling in every subject except one: Math. I completed a combined math bachelors + masters program, but without any professional guidance, networking, or internships, I was entirely lost. I had the piece of paper, but what next? I held plenty of jobs as a student, but now I needed a career.

A visual representation of all the jobs I had in high school and college: From pizza, to gourmet rice krispie treats, to clothing retail, to doors and locks

After receiving a grand total of *zero* interviews from sending out my resume, the natural next step was…more school. I entered a PhD program in Computer Science and shortly thereafter discovered I really liked the coding aspects more than the theory. So I earned the honor of being a PhD dropout.

A visual representation of all the hats I’ve worn

And here’s where things started to click! I used my newfound Python and SQL skills to land an entry-level Business Intelligence Analyst position at a company called Big Ass Fans. They make — you guessed it — very large industrial ventilation fans. I was given the opportunity to branch out and learn new skills to tackle any problem in front of me, aka my “becoming useful” phase. Within a few months I’d picked up BI tools, predictive modeling, and data ingestion/ETL. After a few years of wearing many different proverbial hats, I put them all to use in the Analytics Engineer role here. And ever since, Netflix has been a place where I can do my best work, put to use the skills I’ve gathered over the years, and grow in new ways.

What does an ordinary day look like?

As part of the Studio DSE team, our work is focused on aiding the movie-making process for our Netflix Originals, leading all the way up to a title’s launch on the service. Despite the affinity for TV and movies that brought me here, I didn’t actually know very much about how they got made. But over time, and by asking lots of questions, I’ve picked up the industry lingo! (Can you guess what “DOOD” stands for?)

My main stakeholders are members of our Studio team. They’re experts on the production process and an invaluable resource for me, sharing their expertise and providing context when I don’t know what something means. True to the “people over process” philosophy, we adapt alongside our stakeholders’ needs throughout the production process. That means the work products don’t always fit what you might imagine a traditional Analytics Engineer builds — if such a thing even exists!

A typical production lifecycle

On an ordinary day, my time is generally split evenly across:

  • 🤝📢 Speaking with stakeholders to understand their primary needs
  • 🐱💻 Writing code (SQL, Python)
  • 📊📈 Building visual outputs (Tableau, memos, scrappy web apps)
  • 🤯✍️ Brainstorming and vision planning for future work

Some days have more of one than the others, but variety is the spice of life! The one constant is that my day always starts with a ridiculous amount of coffee. And that it later continues with even more coffee. ☕☕☕

My road-to-data was more of a stumbling-into-data.

What advice would you give to someone just starting their career in data?

🐾 Dip your toes in things. As you try new things, your interests will evolve and you’ll pick up skills across a broad span of subject areas. The first time I tried building the front-end for a small web app, it wasn’t very pretty. But it piqued my interest and after a few times it started to become second nature.

💪 Find your strengths and weaknesses. You don’t have to be an expert in everything. Just knowing when to reach out for guidance on something allows you to uplevel your skills in that area over time. My weakness is statistics: I can use it when needed but it’s just not a subject that comes naturally to me. I own that about myself and lean on my stats-loving peers when needed.

🌸 Look for roles that allow you to grow. As you grow in your career, you’ll provide impact to the business in ways you didn’t even expect. As a business intelligence analyst, I gained data science skills. And in my current Analytics Engineer role, I’ve picked up a lot of product management and strategic thinking experience.

This is what I look like.

☝️ One Last Thing

I started off my career with the vague notion of, “I guess I want to be a data scientist?” But what that’s meant in practice has really varied depending on the needs of each job and project. It’s ok if you don’t have it all figured out. Be excited to try new things, lean into strengths, and don’t be afraid of your weaknesses — own them.

If this post resonates with you and you’d like to explore opportunities with Netflix, check out our analytics site, search open roles, and learn about our culture. You can also find more stories like this here.


Mythbusting the Analytics Journey was originally published in Netflix TechBlog on Medium, where people are continuing the conversation by highlighting and responding to this story.

Metasploit Wrap-Up

Post Syndicated from Grant Willcox original https://blog.rapid7.com/2020/12/18/metasploit-wrap-up-92/


It’s the week of December 17th and that can only mean one thing: a week until Christmas! For those of you who don’t celebrate Christmas, a very happy Hanukkah/Chanukah, Kwanzaa, Diwali, Chinese New Year, Winter Solstice and Las Posadas to you all!

This is our last weekly wrap-up this year, but as always, we’ll be publishing an annual Metasploit wrap-up just after the new year that covers all the shells we got in 2020.

Without further ado, let’s jump into it!

CVE-2020-1054: I heard you still got Windows 7, so let’s play a game

Oh dear Windows 7, you just can’t catch a break. timwr continued his LPE contributions this week with an exploit for CVE-2020-1054, an OOB write vulnerability via the DrawIconEx() function in win32k.sys. This bug was originally found by bee13oy of Qihoo 360 Vulcan Team and by Netanel Ben-Simon and Yoav Alon of Check Point Research, and was reported to Microsoft in May 2020. The module targets Windows 7 SP1 x64 and grants SYSTEM-level code execution. Whilst Windows 7 is EOL, it was still being used by 17.68% of all Windows computers as of November 2020, according to some statistics. That is still a fair market share, even if its popularity has been gradually diminishing over time. Furthermore, although users can update Windows 7, it is now mostly a manual process unless you are on one of Windows' extended support plans. This increases the time needed to apply patches and also increases the possibility that users may forget to install specific patches. Hopefully none of your clients’ systems are still running Windows 7, but in case you are on a pen test and happen to encounter one, this exploit might provide the access you need to pivot further into the network.

Parse me to your shell

The second highlight of this week was a PR from our very own wvu-r7 targeting CVE-2020-14871, a buffer overflow within the parse_user_name() function of the PAM (Pluggable Authentication Module) component of Solaris SunSSH running on Oracle Solaris versions 10 and 11. The exploit supports SunSSH 1.1.5 running on Solaris 10u11 1/13 (x86) within either VMware or VirtualBox and grants unauthenticated users a shell as the root user. Pretty nifty stuff!

New modules (2)

Enhancements and features

Bugs fixed

Get it

As always, you can update to the latest Metasploit Framework with msfupdate
and you can get more details on the changes since the last blog post from
GitHub:

If you are a git user, you can clone the Metasploit Framework repo (master branch) for the latest.
To install fresh without using git, you can use the open-source-only Nightly Installers or the
binary installers (which also include the commercial edition).

Setting up automated data quality workflows and alerts using AWS Glue DataBrew and AWS Lambda

Post Syndicated from Romi Boimer original https://aws.amazon.com/blogs/big-data/setting-up-automated-data-quality-workflows-and-alerts-using-aws-glue-databrew-and-aws-lambda/

Proper data management is critical to successful, data-driven decision-making. An increasingly large number of customers are adopting data lakes to realize deeper insights from big data. As part of this, you need clean and trusted data in order to gain insights that lead to improvements in your business. As the saying goes, garbage in is garbage out—the analysis is only as good as the data that drives it.

Organizations today have continuously incoming data that may develop slight changes in schema, quality, or profile over a period of time. To ensure data is always of high quality, we need to consistently profile new data, evaluate that it meets our business rules, alert for problems in the data, and fix any issues. In this post, we leverage AWS Glue DataBrew, a visual data preparation tool that makes it easy to profile and prepare data for analytics and machine learning (ML). We demonstrate how to use DataBrew to publish data quality statistics and build a solution around it to automate data quality alerts.

Overview of solution

In this post, we walk through a solution that sets up a recurring profile job to determine data quality metrics and, using your defined business rules, report on the validity of the data. The following diagram illustrates the architecture.


The steps in this solution are as follows:

  1. Periodically send raw data to Amazon Simple Storage Service (Amazon S3) for storage.
  2. Read the raw data in Amazon S3 and generate a scheduled DataBrew profile job to determine data quality.
  3. Write the DataBrew profile job output to Amazon S3.
  4. Trigger an Amazon EventBridge event after job completion.
  5. Invoke an AWS Lambda function based on the event, which reads the profile output from Amazon S3 and determines whether the output meets data quality business rules.
  6. Publish the results to an Amazon Simple Notification Service (Amazon SNS) topic.
  7. Subscribe email addresses to the SNS topic to inform members of your organization.

Prerequisites

For this walkthrough, you should have the following prerequisites:

Deploying the solution

For a quick start of this solution, you can deploy the provided AWS CloudFormation stack. This creates all the required resources in your account (us-east-1 Region). Follow the rest of this post for a deeper dive into the resources.

  1. Choose Launch Stack:

  2. In Parameters, for Email, enter an email address that can receive notifications.
  3. Scroll to the end of the form and select I acknowledge that AWS CloudFormation might create IAM resources.
  4. Choose Create stack.

It takes a few minutes for the stack creation to complete; you can follow progress on the Events tab.

  5. Check your email inbox and choose Confirm subscription in the email from AWS Notifications.

The default behavior of the deployed stack runs the profile on Sundays. You can start a one-time run from the DataBrew console to try out the end-to-end solution.
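If you prefer to deploy the stack programmatically rather than through the Launch Stack button, a minimal boto3 sketch follows; the stack name, template URL, and email address are placeholders, since the actual template sits behind the Launch Stack link above.

    import boto3

    cfn = boto3.client('cloudformation')

    # placeholder values; point TemplateURL at the template behind the Launch Stack button
    cfn.create_stack(
        StackName='databrew-data-quality-alerts',
        TemplateURL='https://example.com/databrew-data-quality.template',
        Parameters=[{'ParameterKey': 'Email', 'ParameterValue': 'data-team@example.com'}],
        Capabilities=['CAPABILITY_IAM']  # the stack creates IAM resources
    )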

Setting up your source data in Amazon S3

In this post, we use an open dataset of New York City Taxi trip record data from The Registry of Open Data on AWS. This dataset represents a collection of CSV files defining trips taken by taxis and for-hire vehicles in New York City. Each record contains the pick-up and drop-off IDs and timestamps, distance, passenger count, tip amount, fair amount, and total amount. For the purpose of illustration, we use a static dataset; in a real-world use case, we would use a dataset that is refreshed at a defined interval.

You can download the sample dataset (us-east-1 Region) and follow the instructions for this solution, or use your own data that gets dumped into your data lake on a recurring basis. We recommend creating all your resources in the same account and Region. If you use the sample dataset, choose us-east-1.

Creating a DataBrew profile job

To get insights into the quality of our data, we run a DataBrew profile job on a recurring basis. This profile provides us with a statistical summary of our dataset, including value distributions, sparseness, cardinality, and type determination.

Connecting a DataBrew dataset

To connect your dataset, complete the following steps:

  1. On the DataBrew console, in the navigation pane, choose Datasets.
  2. Choose Connect new dataset.
  3. Enter a name for the dataset.
  4. For Enter your source from S3, enter the S3 path of your data source. In our case, this is s3://nyc-tlc/misc/.
  5. Select your dataset (for this post, we choose the medallions trips dataset FOIL_medallion_trips_june17.csv).

  6. Scroll to the end of the form and choose Create dataset.

Creating the profile job

You’re now ready to create your profile job.

  1. In the navigation pane, choose Datasets.
  2. On the Datasets page, select the dataset that you created in the previous step. The row in the table should be highlighted.
  3. Choose Run data profile.
  4. Select Create profile job.
  5. For Job output settings, enter an S3 path as destination for the profile results. Make sure to note down the S3 bucket and key, because you use it later in this tutorial.
  6. For Permissions, choose a role that has access to your input and output S3 paths. For details on required permissions, see DataBrew permission documentation.
  7. On the Associate schedule drop-down menu, choose Create new schedule.
  8. For Schedule name, enter a name for the schedule.
  9. For Run frequency, choose a frequency based on the time and rate at which your data is refreshed.
  10. Choose Add.

  11. Choose Create and run job.

The job run on sample data typically takes 2 minutes to complete.
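You can also start and monitor the profile job programmatically; the sketch below uses boto3 under the assumption that the job is named taxi-profile-job, and the response fields are worth verifying against the DataBrew API reference.

    import time
    import boto3

    databrew = boto3.client('databrew')

    # 'taxi-profile-job' is a placeholder; use the name of your profile job
    run = databrew.start_job_run(Name='taxi-profile-job')
    run_id = run['RunId']

    # poll until the run reaches a terminal state
    while True:
        status = databrew.describe_job_run(Name='taxi-profile-job', RunId=run_id)
        if status['State'] not in ('STARTING', 'RUNNING', 'STOPPING'):
            break
        time.sleep(30)

    print('Profile run finished with state:', status['State'])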

Exploring the data profile

Now that we’ve run our profile job, we can expose insightful characteristics about our dataset. We can also review the results of the profile through the visualizations of the DataBrew console or by reading the raw JSON results in our S3 bucket.

The profile provides analysis at both dataset-level and column-level granularity. Looking at our column analytics for String columns, we have the following statistics:

  • MissingCount – The number of missing values in the dataset
  • UniqueCount – The number of unique values in the dataset
  • Datatype – The data type of the column
  • CommonValues – The top 100 most common strings and their occurrences
  • Min – The length of the shortest String value
  • Max – The length of the longest String value
  • Mean – The average length of the values
  • Median – The middle value in terms of character count
  • Mode – The most common String value length
  • StandardDeviation – The standard deviation for the lengths of the String values

For numerical columns, we have the following:

  • Min – The minimum value
  • FifthPercentile – The value that represents 5th percentile (5% of values fall below this and 95% fall above)
  • Q1 – The value that represents 25th percentile (25% of values fall below this and 75% fall above)
  • Median – The value that represents 50th percentile (50% of values fall below this and 50% fall above)
  • Q3 – The value that represents 75th percentile (75% of values fall below this and 25% fall above)
  • NinetyFifthPercentile – The value that represents 95th percentile (95% of values fall below this and 5% fall above)
  • Max – The highest value
  • Range – The difference between the highest and lowest values
  • InterquartileRange – The range between the 25th percentile and 75th percentile values
  • StandardDeviation – The standard deviation of the values (measures the variation of values)
  • Kurtosis – The kurtosis of the values (measures the heaviness of the tails in the distribution)
  • Skewness – The skewness of the values (measures symmetry in the distribution)
  • Sum – The sum of the values
  • Mean – The average of the values
  • Variance – The variance of the values (measures divergence from the mean)
  • CommonValues – A list of the most common values in the column and their occurrence count
  • MinimumValues – A list of the 5 minimum values in the list and their occurrence count
  • MaximumValues – A list of the 5 maximum values in the list and their occurrence count
  • MissingCount – The number of missing values
  • UniqueCount – The number of unique values
  • ZerosCount – The number of zeros
  • Datatype – The datatype of the column
  • Min – The minimum value
  • Max – The maximum value
  • Median – The middle value
  • Mean – The average value
  • Mode – The most common value 

Finally, at a dataset level, we have an overview of the profile as well as cross-column analytics:

  • DatasetName – The name of the dataset the profile was run on
  • Size – The size of the data source in KB
  • Source – The source of the dataset (for example, Amazon S3)
  • Location – The location of the data source
  • CreatedBy – The ARN of the user that created the profile job
  • SampleSize – The number of rows used in the profile
  • MissingCount – The total number of missing cells
  • DuplicateRowCount – The number of duplicate rows in the dataset
  • StringColumnsCount – The number of columns that are of String type
  • NumberColumnsCount – The number of columns that are of numeric type
  • BooleanColumnsCount – The number of columns that are of Boolean type
  • MissingWarningCount – The number of warnings on columns due to missing values
  • DuplicateWarningCount – The number of warnings on columns due to duplicate values
  • JobStarted – A timestamp indicating when the job started
  • JobEnded – A timestamp indicating when the job ended
  • Correlations – The statistical relationship between columns

By default, the DataBrew profile is run on a 20,000-row First-N sample of your dataset. If you want to increase the limit and run the profile on your entire dataset, send a request to [email protected].
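To make the downstream validation concrete, here is a hedged sketch of reading the profile JSON from Amazon S3 and computing a per-column missing-value ratio; the field names (sampleSize, columns, missingValuesCount) mirror those consumed by the sample Lambda function later in this post, and the bucket and key shown are illustrative.

    import json
    import boto3

    s3 = boto3.client('s3')

    # illustrative location; use the output bucket/key you configured for the profile job
    obj = s3.get_object(Bucket='taxi-data', Key='profile-out/taxi-profile_123456789.json')
    profile = json.loads(obj['Body'].read().decode('utf-8'))

    sample_size = profile['sampleSize']
    for column in profile['columns']:
        missing_ratio = column['missingValuesCount'] / sample_size
        print(f"{column['name']}: {missing_ratio:.1%} missing")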

Creating an SNS topic and subscription

Amazon SNS allows us to deliver messages regarding the quality of our data reliably and at scale. For this post, we create an SNS topic and subscription. The topic provides us with a central communication channel that we can broadcast to when the job completes, and the subscription is then used to receive the messages published to our topic. For our solution, we use an email protocol in the subscription in order to send the profile results to the stakeholders in our organization.

Creating the SNS topic

To create your topic, complete the following steps:

  1. On the Amazon SNS console, in the navigation pane, choose Topics.
  2. Choose Create topic.
  3. For Type, select Standard.
  4. For Name, enter a name for the topic.


  5. Choose Create topic.
  6. Take note of the ARN in the topic details to use later.

Creating the SNS subscription

To create your subscription, complete the following steps:

  1. In the navigation pane, choose Subscriptions.
  2. Choose Create subscription.
  3. For Topic ARN, choose the topic that you created in the previous step.
  4. For Protocol, choose Email.
  5. For Endpoint, enter an email address that can receive notifications.


  6. Choose Create subscription.
  7. Check your email inbox and choose Confirm subscription in the email from AWS Notifications.
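If you prefer to script this step, the topic and email subscription can be created with a couple of boto3 calls, sketched below with placeholder names; the subscription still has to be confirmed from the recipient's inbox.

    import boto3

    sns = boto3.client('sns')

    # create the topic and keep its ARN for the Lambda destination and publish calls
    topic = sns.create_topic(Name='databrew-profile-topic')
    topic_arn = topic['TopicArn']

    # placeholder address; the recipient must confirm the subscription by email
    sns.subscribe(
        TopicArn=topic_arn,
        Protocol='email',
        Endpoint='data-team@example.com'
    )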

Creating a Lambda function for business rule validation

The profile has provided us with an understanding of the characteristics of our data. Now we can create business rules that ensure we’re consistently monitoring the quality of our data.

For our sample taxi dataset, we will validate the following:

  • The pu_loc_id and do_loc_id columns must meet a completeness rate of 90%.
  • If more than 10% of the data in either column is missing, we notify our team that the data needs to be reviewed.

Creating the Lambda function

To create your function, complete the following steps:

  1. On the Lambda console, in the navigation pane, choose Functions.
  2. Choose Create function.
  3. For Function name¸ enter a name for the function.
  4. For Runtime, choose the language you want to write the function in. If you want to use the code sample provided in this tutorial, choose Python 3.8.


  5. Choose Create function.

Adding a destination to the Lambda function

You now add a destination to your function.

  1. On the Designer page, choose Add destination.
  2. For Condition, select On success.
  3. For Destination type, choose SNS topic.
  4. For Destination, choose the SNS topic from the previous step.


  5. Choose Save.

Authoring the Lambda function

For the function code, enter the following sample code or author your own function that parses the DataBrew profile job JSON and verifies it meets your organization’s business rules.

If you use the sample code, make sure to fill in the values of the required parameters to match your configuration:

  • topicArn – The resource identifier for the SNS topic. You find this on the Amazon SNS console’s topic details page (for example, topicArn = 'arn:aws:sns:us-east-1:012345678901:databrew-profile-topic').
  • profileOutputBucket – The S3 bucket the profile job is set to output to. You can find this on the DataBrew console’s job details page (for example, profileOutputBucket = 'taxi-data').
  • profileOutputPrefix – The S3 key prefix the profile job is set to output to. You can find this on the DataBrew console’s job details page (for example, profileOutputPrefix = 'profile-out/'). If you’re writing directly to the root of an S3 bucket, keep this as an empty String (profileOutputPrefix = '').
    import json
    import boto3
    
    sns = boto3.client('sns')
    s3 = boto3.client('s3')
    s3Resource = boto3.resource('s3')
    
    # === required parameters ===
    topicArn = 'arn:aws:sns:<YOUR REGION>:<YOUR ACCOUNT ID>:<YOUR TOPIC NAME>'
    profileOutputBucket = '<YOUR S3 BUCKET NAME>'
    profileOutputPrefix = '<YOUR S3 KEY>'
    
    def verify_completeness_rule(bucket, key):
        # completeness threshold set to 10%
        threshold = 0.1
        
        # parse the DataBrew profile
        profileObject = s3.get_object(Bucket = bucket, Key = key)
        profileContent = json.loads(profileObject['Body'].read().decode('utf-8'))
        
        # verify the completeness rule is met on the pu_loc_id and do_loc_id columns
        for column in profileContent['columns']:
            if (column['name'] == 'pu_loc_id' or column['name'] == 'do_loc_id'):
                if ((column['missingValuesCount'] / profileContent['sampleSize']) > threshold):
                    # failed the completeness check
                    return False
    
        # passed the completeness check
        return True
    
    def lambda_handler(event, context):
        jobRunState = event['detail']['state']
        jobName = event['detail']['jobName'] 
        jobRunId = event['detail']['jobRunId'] 
        profileOutputKey = ''
    
        if (jobRunState == 'SUCCEEDED'):
            profileOutputPostfix = jobRunId[3:] + '.json'
    
            bucket = s3Resource.Bucket(profileOutputBucket)
            for object in bucket.objects.filter(Prefix = profileOutputPrefix):
                if (profileOutputPostfix in object.key):
                    profileOutputKey = object.key
            
            if (verify_completeness_rule(profileOutputBucket, profileOutputKey)):
                message = 'Nice! Your profile job ' + jobName + ' met business rules. Head to https://console.aws.amazon.com/databrew/ to view your profile.' 
                subject = 'Profile job ' + jobName + ' met business rules' 
            else:
                message = 'Uh oh! Your profile job ' + jobName + ' did not meet business rules. Head to https://console.aws.amazon.com/databrew to clean your data.'
                subject = 'Profile job ' + jobName + ' did not meet business rules'
        
        else:
            # State is FAILED, STOPPED, or TIMEOUT - intervention required
            message = 'Uh oh! Your profile job ' + jobName + ' is in state ' + jobRunState + '. Check the job details at https://console.aws.amazon.com/databrew#job-details?job=' + jobName
            subject = 'Profile job ' + jobName + ' in state ' + jobRunState
            
        response = sns.publish(
            TargetArn = topicArn,
            Message = message,
            Subject = subject
        )
    
        return {
            'statusCode': 200,
            'body': json.dumps(response)
        }
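Before wiring up the EventBridge trigger, you can exercise the handler locally with a stand-in event. The payload below contains only the detail fields the sample code reads; its shape is an assumption about the DataBrew job state change event, so verify it against a real event, and note that invoking the handler will make real S3 and SNS calls with the parameters you configured.

    # stand-in event for a quick local test of lambda_handler (event shape is an assumption)
    sample_event = {
        'detail': {
            'state': 'SUCCEEDED',
            'jobName': 'taxi-profile-job',       # placeholder job name
            'jobRunId': 'db_1234567890abcdef'    # placeholder run id
        }
    }

    print(lambda_handler(sample_event, None))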

Updating the Lambda function’s permissions

In this final step of configuring your Lambda function, you update your function’s permissions.

  1. In the Lambda function editor, choose the Permissions tab.
  2. For Execution role, choose the role name to navigate to the AWS Identity and Access Management (IAM) console.
  3. In the Role summary, choose Add inline policy.
  4. For Service, choose S3.
  5. For Actions, under List, choose ListBucket.
  6. For Actions, under Read, choose Get Object.
  7. In the Resources section, for bucket, choose Add ARN.
  8. Enter the bucket name you used for your output data in the create profile job step.
  9. In the modal, choose Add.
  10. For object, choose Add ARN.
  11. For bucket name, enter the bucket name you used for your output data in the create profile job step and append the key (for example, taxi-data/profile-out).
  12. For object name, choose Any. This provides read access to all objects in the chosen path.
  13. In the modal, choose Add.
  14. Choose Review policy.
  15. On the Review policy page, enter a name.
  16. Choose Create policy. 
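If you manage permissions from code instead of the console, a rough equivalent of this inline policy looks like the following sketch; the role name, bucket, and prefix are placeholders, and the console wizard may generate a slightly different policy document.

    import json
    import boto3

    iam = boto3.client('iam')

    # placeholders; substitute your Lambda execution role and profile output location
    role_name = 'databrew-profile-lambda-role'
    bucket = 'taxi-data'
    prefix = 'profile-out'

    policy = {
        'Version': '2012-10-17',
        'Statement': [
            {'Effect': 'Allow', 'Action': 's3:ListBucket', 'Resource': f'arn:aws:s3:::{bucket}'},
            {'Effect': 'Allow', 'Action': 's3:GetObject', 'Resource': f'arn:aws:s3:::{bucket}/{prefix}/*'}
        ]
    }

    iam.put_role_policy(
        RoleName=role_name,
        PolicyName='databrew-profile-read',
        PolicyDocument=json.dumps(policy)
    )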

We return to the Lambda function to add a trigger later, so keep the Lambda service page open in a tab as you continue to the next step, adding an EventBridge rule.

Creating an EventBridge rule for job run completion

EventBridge is a serverless event bus service that we can configure to connect applications. For this post, we configure an EventBridge rule to route DataBrew job completion events to our Lambda function. When our profile job is complete, the event triggers the function to process the results.

Creating the EventBridge rule

To create our rule in EventBridge, complete the following steps:

  1. On the EventBridge console, in the navigation pane, choose Rules.
  2. Choose Create rule.
  3. Enter a name and description for the rule.
  4. In the Define pattern section, select Event pattern.
  5. For Event matching pattern, select Pre-defined pattern by service.
  6. For Service provider, choose AWS.
  7. For Service name, choose AWS Glue DataBrew.
  8. For Event type, choose DataBrew Job State Change.
  9. For Target, choose Lambda function.
  10. For Function, choose the name of the Lambda function you created in the previous step.


  11. Choose Create.
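The same rule and target can be created with boto3 if you prefer code over the console; in the sketch below, the event pattern's source and detail-type strings are assumptions inferred from the console selections above, and the function name and ARN are placeholders.

    import json
    import boto3

    events = boto3.client('events')
    lambda_client = boto3.client('lambda')

    rule_name = 'databrew-job-state-change'
    function_name = 'databrew-profile-check'                                          # placeholder
    function_arn = 'arn:aws:lambda:us-east-1:111122223333:function:' + function_name  # placeholder

    # mirrors the console choices: service AWS Glue DataBrew, event type "DataBrew Job State Change"
    pattern = {'source': ['aws.databrew'], 'detail-type': ['DataBrew Job State Change']}

    rule = events.put_rule(Name=rule_name, EventPattern=json.dumps(pattern))

    # point the rule at the Lambda function
    events.put_targets(Rule=rule_name, Targets=[{'Id': function_name, 'Arn': function_arn}])

    # allow EventBridge to invoke the function (the console trigger does this for you)
    lambda_client.add_permission(
        FunctionName=function_name,
        StatementId='allow-eventbridge-invoke',
        Action='lambda:InvokeFunction',
        Principal='events.amazonaws.com',
        SourceArn=rule['RuleArn']
    )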

Adding the EventBridge rule as the Lambda function trigger

To add your rule as the function trigger, complete the following steps:

  1. Navigate back to your Lambda function configuration page from the previous step.
  2. In the Designer, choose Add trigger.
  3. For Trigger configuration, choose EventBridge (CloudWatch Events).
  4. For Rule, choose the EventBridge rule you created in the previous step.


  5. Choose Add.

Testing your system

That’s it! We’ve completed all the steps required for this solution to run periodically. To give it an end-to-end test, we can run our profile job once and wait for the resulting email to get our results.

  1. On the DataBrew console, in the navigation pane, choose Jobs.
  2. On the Profile jobs tab, select the job that you created. The row in the table should be highlighted.
  3. Choose Run job.
  4. In the Run job modal, choose Run job.

A few minutes after the job is complete, you should receive an email notifying you of the results of your business rule validation logic.


Cleaning up

To avoid incurring future charges, delete the resources created during this walkthrough.

Conclusion

In this post, we walked through how to use DataBrew alongside Amazon S3, Lambda, EventBridge, and Amazon SNS to automatically send data quality alerts. We encourage you to extend this solution by customizing the business rule validation to meet your unique business needs.


About the Authors

Romi Boimer is a Sr. Software Development Engineer at AWS and a technical lead for AWS Glue DataBrew. She designs and builds solutions that enable customers to efficiently prepare and manage their data. Romi has a passion for aerial arts; in her spare time she enjoys fighting gravity and hanging from fabric.

 

 

Shilpa Mohan is a Sr. UX designer at AWS and leads the design of AWS Glue DataBrew. With over 13 years of experience across multiple enterprise domains, she is currently crafting products for Database, Analytics and AI services for AWS. Shilpa is a passionate creator; she spends her time creating anything from content and photographs to crafts.

Sharing Amazon Redshift data securely across Amazon Redshift clusters for workload isolation

Post Syndicated from Harsha Tadiparthi original https://aws.amazon.com/blogs/big-data/sharing-amazon-redshift-data-securely-across-amazon-redshift-clusters-for-workload-isolation/

Amazon Redshift data sharing allows for a secure and easy way to share live data for read purposes across Amazon Redshift clusters. Amazon Redshift is a fast, fully managed cloud data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and your existing business intelligence (BI) tools. It allows you to run complex analytic queries against terabytes to petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance storage, and massively parallel query runs.

In this post, we discuss how to use Amazon Redshift data sharing to achieve workload isolation across diverse analytics use cases and achieve business-critical SLAs. For more information about this new feature, see Announcing Amazon Redshift data sharing (preview).

How to use Amazon Redshift data sharing

Amazon Redshift data sharing allows a producer cluster to share data objects to one or more Amazon Redshift consumer clusters for read purposes without having to copy the data. With this approach, workloads isolated to different clusters can share and collaborate frequently on data to drive innovation and offer value-added analytic services to your internal and external stakeholders. You can share data at many levels, including databases, schemas, tables, views, columns, and user-defined functions, to provide fine-grained access controls that can be tailored for different users and businesses that all need access to Amazon Redshift data.

Data sharing between Amazon Redshift clusters is a two-step process. First, the producer cluster administrator that wants to share data creates an Amazon Redshift data share, a new named object introduced with this release to serve as a unit of sharing. The producer cluster adds the needed database objects such as schemas, tables, and views to the data share and specifies a list of consumer clusters with which to share the data share. Following that, privileged users on consumer clusters create an Amazon Redshift local database reference from the data share made available to them and grant permissions on the database objects to appropriate users and groups. Users and groups can then list the shared objects as part of the standard metadata queries and start querying immediately.

Solution overview

For this post, we consider a use case in which the producer cluster is a central ETL cluster hosting enterprise sales data, a 3 TB Cloud DW benchmark dataset based on the TPC-DS benchmark dataset. This cluster serves multiple BI and data science clusters purpose-built for distinct business groups within the organization. One such group is the sales BI team, who runs BI reports using customer sales data created in the central ETL cluster and joined with the product reviews dataset that they loaded into the BI cluster they manage.

This approach helps the sales BI team isolate data lifecycle management between the enterprise sales dataset in the ETL producer from the product reviews data that they fully manage in the BI consumer cluster to simplify data stewardship. It also allows for agility, allows sizing clusters independently to provide workload isolation, and creates a simple cost charge-back model.

As depicted in the following diagram, the central ETL cluster etl_cluster hosts the sales data in a schema named sales. We demonstrate how to build the semantic layer later in this post. A superuser in etl_cluster then creates a data share named salesdatashare, adds the bi_semantic schema and all objects in that schema to the data share, and grants usage permissions to the BI consumer cluster named bi_cluster. Keep in mind that a data share is simply a metadata container and represents what data is shared from producer to consumer. No data is actually moved.


The superuser in the BI consumer cluster creates a local database reference named sales_semantic from the data share (step 2 in the preceding diagram). The BI users use the product reviews dataset in the local schema named product_reviews and join with bi_semantic data for reporting purposes (step 3).

You can find the script for the product reviews dataset, which we use in this post to load the dataset into bi_cluster. You can load the DW benchmark dataset into etl_cluster using this GitHub link. Loading these datasets into the respective Amazon Redshift clusters is outside the scope of this post, and is a prerequisite to following the instructions we outline.

The following diagram depicts the cloud DW benchmark data model used.


The following table summarizes the data.

Table Name Rows
STORE_SALES 8,639,936,081
CUSTOMER_ADDRESS 15,000,000
CUSTOMER 30,000,000
CUSTOMER_DEMOGRAPHICS 1,920,800
ITEM 360,000
DATE_DIM 73,049

Building a BI semantic layer

A BI semantic layer is a representation of enterprise data in a way that simplifies BI reporting requirements and offers better performance. In our use case, the BI semantic layer transforms sales data to create a customer denormalized dataset and another dataset for all store sales by product in a given year. The following queries are run on the etl_cluster to create the BI semantic layer.

  1. Create a new schema to host BI semantic tables with the following SQL:
    Create schema bi_semantic;

  2. Create a denormalized customer view with select columns required for the sales BI team:
    create view bi_semantic.customer_denorm 
    as
    select
    	c_customer_sk,
    	c_customer_id,
    	c_birth_year,
    	c_birth_country,
    	c_last_review_date_sk,
    	ca_city,
    	ca_state,
    	ca_zip,
    	ca_country,
    	ca_gmt_offset,
    	cd_gender,
    	cd_marital_status,
    	cd_education_status
    from sales.customer c, sales.customer_address ca, sales.customer_demographics cd
    where
    c.c_current_addr_sk=ca.ca_address_sk
    and c.c_current_cdemo_sk=cd.cd_demo_sk;

  3. Create a second view for all product sales with the columns required for the BI team:
    create view bi_semantic.product_sales
    as 
    select 
    	i_item_id,
    	i_product_name,
    	i_current_price,
    	i_wholesale_cost,
    	i_brand_id,
    	i_brand,
    	i_category_id,
    	i_category,
    	i_manufact,
    	d_date,
    	d_moy,
    	d_year,
    	d_quarter_name,
    	ss_customer_sk,
    	ss_store_sk,
    	ss_sales_price,
    	ss_list_price,
    	ss_net_profit,
    	ss_quantity,
    	ss_coupon_amt
    from sales.store_sales ss, sales.item i, sales.date_dim d
    where ss.ss_item_sk=i.i_item_sk
    and ss.ss_sold_date_sk=d.d_date_sk;

Sharing data across Amazon Redshift clusters

Now, let’s share the bi_semantic schema in the etl_cluster with the bi_cluster.

  1. Create a data share in the etl_cluster using the following command when connected to the etl_cluster. The producer cluster superuser and database owners can create data share objects. By default, PUBLICACCESSIBLE is false. If the producer cluster is publicly accessible, you can add PUBLICACCESSIBLE = true to the following command:
    CREATE DATASHARE SalesDatashare;

  2. Add the BI semantic views to the data share. To add objects to the data share, add the schema before adding objects. Use ALTER DATASHARE to share the entire schema; to share tables, views, and functions in a given schema; and to share objects from multiple schemas:
    ALTER DATASHARE SalesDatashare ADD SCHEMA bi_semantic;
    ALTER DATASHARE SalesDatashare ADD ALL TABLES IN SCHEMA bi_semantic;

The next step requires a cluster namespace GUID from the bi_cluster. One way to find the namespace value of a cluster is to run the SQL statement select current_namespace when connected to the bi_cluster. Another way is on the Amazon Redshift console: choose your Amazon Redshift consumer cluster, and find the value under Namespace located in the General information section.

  3. Add consumers to the data share using the following command:
    GRANT USAGE ON DATASHARE SalesDatashare TO NAMESPACE '1m137c4-1187-4bf3-8ce2-e710b7100eb2';

  4. View the list of the objects added to the share using the following command. The share type is outbound on the producer cluster.
    DESC DATASHARE salesdatashare;

The following screenshot shows our list of objects.


Consuming the data share from the consumer BI Amazon Redshift cluster

From the bi_cluster, let’s review, consume, and set permissions on the data share for end-user consumption.

  1. On the consumer BI cluster, view the data shares using the following command as any user:
    SHOW DATASHARES;

The following screenshot shows our results. Consumers should be able to see the objects within the incoming share but not the full list of consumers associated with the share. For more information about querying the metadata of shares, see DESC DATASHARE.


  2. Start the consumption by creating a local database from the salesdatashare. Cluster users with the permission to do so can create a database from the shares. We use the namespace from the etl_cluster.
    CREATE DATABASE Sales_semantic from DATASHARE SalesDatashare OF NAMESPACE '45b137c4-1287-4vf3-8cw2-e710b7138nd9'; 

Consumers should be able to see databases that they created from the share, along with the databases local to the cluster, at any point by querying SVV_REDSHIFT* tables. Data share objects aren’t available for queries until a local database reference is created using a create database statement.

  3. Run the following command to list the databases in bi_cluster:
    select * from svv_redshift_databases;

The following screenshot shows that both the local and shared databases are listed so that you can explore and navigate metadata for shared datasets.


  4. Grant usage on the database to bi_group, where bi_group is a local Amazon Redshift group with BI users added to that group:
    GRANT USAGE ON DATABASE sales_semantic TO bi_group;

Querying as the BI user

In this section, you connect as a user in the bi_group who got access to the shared data. The user is still connected to the local database on the bi_cluster but can query the shared data via the new cross-database query functionality in Amazon Redshift.

  1. Review the list of objects in the share by running the following SQL:
    SELECT schema_name, table_name, table_type FROM  svv_redshift_tables
         where database_name = 'sales_semantic';

The following screenshot shows our results.


  2. Review the list of columns in the customer_denorm view:
    SELECT * FROM  svv_redshift_columns 
       where database_name = 'sales_semantic' and table_name = 'customer_denorm';

The following screenshot shows our results.


  3. Query the shared objects using three-part notation just like querying any other local database object, using a notation <database>.<schema>.<view/table>:
    select count(*) from sales_semantic.bi_semantic.customer_denorm;

Following is your result:

28950139

  4. Analyze the local product reviews data by joining the shared customer_denorm data to identify the top ratings by customer states for this BI report:
    SELECT PR.product_category, c.ca_state AS customer_state,
                  count(PR.star_rating) AS cnt
          FROM product_reviews.amazon_reviews PR,               --local data
               sales_semantic.bi_semantic.customer_denorm  C    --shared data
          WHERE  PR.customer_id = C.c_customer_sk
             AND PR.marketplace = 'US'
          GROUP BY 1, 2
          order by cnt desc
          Limit 10;

The following screenshot shows our results.
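If the BI team wants to run the same kind of cross-database query outside a SQL client, one option is the Amazon Redshift Data API through boto3. The sketch below submits the three-part-notation count query from the earlier step; the cluster identifier, database, and database user are placeholders for your consumer (BI) cluster settings.

    import time
    import boto3

    redshift_data = boto3.client('redshift-data')

    # placeholders for the consumer (BI) cluster
    response = redshift_data.execute_statement(
        ClusterIdentifier='bi-cluster',
        Database='dev',
        DbUser='bi_user',
        Sql='select count(*) from sales_semantic.bi_semantic.customer_denorm;'
    )
    statement_id = response['Id']

    # wait for the statement to finish, then fetch the result set
    while redshift_data.describe_statement(Id=statement_id)['Status'] not in ('FINISHED', 'FAILED', 'ABORTED'):
        time.sleep(2)

    result = redshift_data.get_statement_result(Id=statement_id)
    print(result['Records'])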


Adding a data science consumer

Now, let’s assume the company has decided to spin up a data science team to help with new sales strategies, and this team performs analytics on the sales data. The data science team is new and has very different access patterns and SLA requirements compared to the BI team. Thanks to the data sharing feature, onboarding new use cases such as this is easy.

We add a data science consumer cluster named ds_cluster. Because the data science users need access to data in salesdatashare, the superuser in the etl_cluster can simply grant access to the ds_cluster by adding them as another consumer for the share without moving any data:

GRANT USAGE ON DATASHARE SalesDatashare TO NAMESPACE '1h137c4-1187-4w53-8de2-e710b7100es2';

The following diagram shows our updated architecture with the data science consumer (step 4).


This way, multiple clusters of different sizes can access the same dataset and isolate workloads to meet their SLA requirements. Users in these respective clusters are granted access to shared objects to meet their stringent security requirements. The producer keeps control of the data and at any point can remove certain objects from the share or remove access to the share for any of these clusters, and the consumers immediately lose access to the data. Also, as more data is ingested into the producer cluster, the consumer sees transactionally consistent data instantly.

Monitoring and security

Amazon Redshift offers comprehensive auditing capabilities using system tables and AWS CloudTrail to allow you to monitor the data sharing permissions and usage across all the consumers and revoke access instantly when necessary. The permissions are granted by the superusers from both the producer and the consumer clusters to define who gets access to what objects, similar to the grant commands used in the earlier scenario. You can use the following commands to audit the usage and activities for the data share.

Track all changes to the data share and the shared database imported from the data share with the following code:

Select username, share_name, recordtime, action, 
         share_object_type, share_object_name 
  from svl_datashare_change_log
   order by recordtime desc;

The following screenshot shows our results.


Track data share access activity (usage), which is relevant only on the producer, with the following code:

Select * from svl_datashare_usage;

The following screenshot shows our results.


Summary

Amazon Redshift data sharing provides workload isolation by allowing multiple consumers to share data seamlessly without the need to unload and load data. We also presented a step-by-step guide for securely sharing data from a producer to multiple consumer clusters.


About the Authors

Harsha Tadiparthi is a Specialist Sr. Solutions Architect, AWS Analytics. He enjoys solving complex customer problems in Databases and Analytics and delivering successful outcomes. Outside of work, he loves to spend time with his family, watch movies, and travel whenever possible.

 

 

Harshida Patel is a Specialist Sr. Solutions Architect, Analytics with AWS.

AWS publishes FINMA ISAE 3000 Type 2 attestation report for the Swiss financial industry

Post Syndicated from Niyaz Noor original https://aws.amazon.com/blogs/security/aws-publishes-finma-isae-3000-type-2-attestation-report-for-the-swiss-financial-industry/

Gaining and maintaining customer trust is an ongoing commitment at Amazon Web Services (AWS). Our customers’ industry security requirements drive the scope and portfolio of compliance reports, attestations, and certifications we pursue. Following up on our announcement in November 2020 of the new EU (Zurich) Region, AWS is pleased to announce the issuance of the Swiss Financial Market Supervisory Authority (FINMA) ISAE 3000 Type 2 attestation report.

The FINMA ISAE 3000 Type 2 report, conducted by an independent third-party audit firm, provides Swiss financial industry customers with the assurance that the AWS control environment is appropriately designed and implemented to address key operational risks, as well as risks related to outsourcing and business continuity management. Additionally, the report provides customers with important guidance on complementary user entity controls (CUECs), which customers should consider implementing as part of the shared responsibility model to help them comply with FINMA’s control objectives. The report covers the period from 4/1/2020 to 9/30/2020, with a total of 124 AWS services and 22 global Regions included in the scope. A full list of certified services and Regions is presented within the published FINMA report.

The report covers the five core FINMA circulars that are applicable to Swiss banks and insurers in the context of outsourcing arrangements to the cloud. These FINMA circulars are intended to assist regulated financial institutions in understanding approaches to due diligence, third-party management, and key technical and organizational controls that should be implemented in cloud outsourcing arrangements, particularly for material workloads. The report’s scope covers, in detail, the requirements of the following FINMA circulars:

  • 2018/03 “Outsourcing – banks and insurers” (31.10.2019);
  • 2008/21 “Operational Risks – Banks” – Principle 4 Technology Infrastructure (31.10.2019);
  • 2008/21 “Operational Risks – Banks” – Appendix 3 Handling of electronic Client Identifying Data (31.10.2019);
  • 2013/03 “Auditing” (04.11.2020) – Information Technology (21.04.2020);
  • Business Continuity Management (BCM) minimum standards proposed by the Swiss Insurance Association (01.06.2015) and Swiss Bankers Association (29.08.2013);

The alignment of AWS with FINMA requirements demonstrates our continuous commitment to meeting the heightened expectations for cloud service providers set by Swiss financial services regulators and customers. Customers can use the FINMA report to conduct their due diligence, which may minimize the effort and costs required for compliance. The FINMA report for AWS is now available free of charge to AWS customers within AWS Artifact. More information on how to download the FINMA report is available here.

Some useful resources related to FINMA:

As always, AWS is committed to bringing new services into the scope of our FINMA program in the future based on customers’ architectural and regulatory needs. Please reach out to your AWS account team if you have questions about the FINMA report.

If you have feedback about this post, submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Niyaz Noor

Niyaz is a Security Audit Program Manager at AWS, leading multiple security certification programs across the Asia Pacific, Japan, and Europe Regions. During his career, he has helped multiple cloud service providers obtain global and regional security certification. He is passionate about delivering programs that build customers’ trust and provide them assurance on cloud security.

NSA on Authentication Hacks (Related to SolarWinds Breach)

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/nsa-on-authentication-hacks-related-to-solarwinds-breach.html

The NSA has published an advisory outlining how “malicious cyber actors” are “manipulating trust in federated authentication environments to access protected data in the cloud.” This is related to the SolarWinds hack I have previously written about, and represents one of the techniques the SVR is using once it has gained access to target networks.

From the summary:

Malicious cyber actors are abusing trust in federated authentication environments to access protected data. The exploitation occurs after the actors have gained initial access to a victim’s on-premises network. The actors leverage privileged access in the on-premises environment to subvert the mechanisms that the organization uses to grant access to cloud and on-premises resources and/or to compromise administrator credentials with the ability to manage cloud resources. The actors demonstrate two sets of tactics, techniques, and procedures (TTP) for gaining access to the victim network’s cloud resources, often with a particular focus on organizational email.

In the first TTP, the actors compromise on-premises components of a federated SSO infrastructure and steal the credential or private key that is used to sign Security Assertion Markup Language (SAML) tokens (TA0006, T1552, T1552.004). Using the private keys, the actors then forge trusted authentication tokens to access cloud resources. A recent NSA Cybersecurity Advisory warned of actors exploiting a vulnerability in VMware Access and VMware Identity Manager that allowed them to perform this TTP and abuse federated SSO infrastructure. While that example of this TTP may have previously been attributed to nation-state actors, a wealth of actors could be leveraging this TTP for their objectives. This SAML forgery technique has been known and used by cyber actors since at least 2017.

In a variation of the first TTP, if the malicious cyber actors are unable to obtain an on-premises signing key, they would attempt to gain sufficient administrative privileges within the cloud tenant to add a malicious certificate trust relationship for forging SAML tokens.

In the second TTP, the actors leverage a compromised global administrator account to assign credentials to cloud application service principals (identities for cloud applications that allow the applications to be invoked to access other cloud resources). The actors then invoke the application’s credentials for automated access to cloud resources (often email in particular) that would otherwise be difficult for the actors to access or would more easily be noticed as suspicious (T1114, T1114.002).

This is an ongoing story, and I expect to see a lot more about TTP — nice acronym there — in coming weeks.

Related: Tom Bossert has a scathing op-ed on the breach. Jack Goldsmith’s essay is worth reading. So is Nick Weaver’s.

What’s New in InsightIDR: Q4 2020 in Review

Post Syndicated from Margaret Zonay original https://blog.rapid7.com/2020/12/18/whats-new-in-insightidr-q4-2020-in-review/


Throughout the year, we’ve provided roundups of what’s new in InsightIDR, our cloud-based SIEM tool (see the H1 recap post, and our most recent Q3 2020 recap post). As we near the end of 2020, we wanted to offer a closer look at some of the recent updates and releases in InsightIDR from Q4 2020.

Complete endpoint visibility with enhanced endpoint telemetry (EET)

With the addition of the enhanced endpoint telemetry (EET) add-on module, InsightIDR customers now have the ability to access all process start activity data (aka any events captured when an application, service, or other process starts on an endpoint) in InsightIDR’s log search. This data provides a full picture of endpoint activity, enabling customers to create custom detections, see the full scope of an attack, and effectively detect and respond to incidents. Read more about this new add-on in our blog here, and see our on-demand demo below.

Network Traffic Analysis: Insight Network Sensor for AWS now in general availability

In our last quarterly recap, we introduced our early access period for the Insight Network Sensor for AWS, and today we’re excited to announce its general availability. Now, all InsightIDR customers can deploy a network sensor on their AWS Virtual Private Cloud and configure it to communicate with InsightIDR. This new sensor generates the same data outputs as the existing Insight Network Sensor, and its ability to deploy in AWS cloud environments opens up a whole new way for customers to gain insight into what is happening within their cloud estates. For more details, check out the requirements here.


New Attacker Behavior Analytics (ABA) threats

Our threat intelligence and detection engineering (TIDE) team and SOC experts are constantly updating our detections as they discover new threats. Most recently, our team added 86 new Attacker Behavior Analytics (ABA) threats within InsightIDR. Each of these threats is a collection of three rules looking for one of 38,535 specific Indicators of Compromise (IoCs) known to be associated with a malicious actor’s various aliases.  

In total, we have 258 new rules, or three for each type of threat. The new rule types for each threat are as follows:

  • Suspicious DNS Request – <Malicious Actor Name> Related Domain Observed
  • Suspicious Web Request – <Malicious Actor Name> Related Domain Observed
  • Suspicious Process – <Malicious Actor Name> Related Binary Executed

New InsightIDR detections for activity related to recent SolarWinds Orion attack: The Rapid7 Threat Detection & Response team has compared publicly available indicators against our existing detections, deployed new detections, and updated our existing detection rules as needed. We also published in-product queries so that customers can quickly determine whether activity related to the breaches has occurred within their environment. Rapid7 is closely monitoring the situation, and will continue to update our detections and guidance as more information becomes available. See our recent blog post for additional details.

Custom Parser editing

InsightIDR customers leveraging our Custom Parsing Tool can now edit fields in their pre-existing parsers. With this new addition, you can update the parser name, extract additional fields, and edit existing extracted fields. For detailed information on our Custom Parsing Tool capabilities, check out our help documentation here.


Record user-driven and automated activity with Audit Logging

Available to all InsightIDR customers, our new Audit Logging service is now in Open Preview. Audit logging enables you to track user driven and automated activity in InsightIDR and across Rapid7’s Insight Platform, so you can investigate who did what, when. Audit Logging will also help you fulfill compliance requirements if these details are requested by an external auditor. Learn more about the Audit Logging Open Preview in our help docs here, and see step-by-step instructions for how to turn it on here.


New event source integrations: Cybereason, Sophos Intercept X, and DivvyCloud by Rapid7

With our recent event source integrations with Cybereason and Sophos Intercept X, InsightIDR customers can spend less time jumping in and out of multiple endpoint protection tools and more time focusing on investigating and remediating attacks within InsightIDR.

  • Cybereason: Cybereason’s Endpoint Detection and Response (EDR) platform detects events that signal malicious operations (Malops), which can now be fed as an event source to InsightIDR. With this new integration, every time an alert fires in Cybereason, it will get relayed to InsightIDR. Read more in our recent blog post here.
  • Sophos Intercept X: Sophos Intercept X is an endpoint protection tool used to detect malware and viruses in your environment. InsightIDR features a Sophos Intercept X event source that you can configure to parse alert types as Virus Alert events. Check out our help documentation here.
  • DivvyCloud: This past spring, Rapid7 acquired DivvyCloud, a leader in Cloud Security Posture Management (CSPM) that provides real-time analysis and automated remediation for cloud and container technologies. Now, we’re excited to announce a custom log integration where cloud events from DivvyCloud can be sent to InsightIDR for analysis, investigations, reporting, and more. Check out our help documentation here.

Stay tuned for more!

As always, we’re continuing to work on exciting product enhancements and releases throughout the year. Keep an eye on our blog and release notes as we continue to highlight the latest in detection and response at Rapid7.


Правителството на ГЕРБ се държи “пропутински”: България бави убежище за съратник на Навални

Post Syndicated from Николай Марченко original https://bivol.bg/%D0%B1%D1%8A%D0%BB%D0%B3%D0%B0%D1%80%D0%B8%D1%8F-%D0%B1%D0%B0%D0%B2%D0%B8-%D1%83%D0%B1%D0%B5%D0%B6%D0%B8%D1%89%D0%B5-%D0%B7%D0%B0-%D1%81%D1%8A%D1%80%D0%B0%D1%82%D0%BD%D0%B8%D0%BA-%D0%BD%D0%B0-%D0%BD.html

петък 18 декември 2020


41-годишният учител по английски и граждански активист от Русия Евгений Сергеевич Чупов почти 1,5 г. се опитва да получи политическо убежище в България. Чупов е доброволец от екипа на Иван Жданов – директор на Фонда за борба с корупцията (ФБК) на Алексей Навални и известен като неговата “дясна ръка”. В екипа на Навални Евгений е събирал подписи за него по време на предизборната кампания за Московската градска дума през 2019 г. Но на 28 май 2019 г. е арестуван от полицията, пребит и заплашен от началника на Центъра за борба с екстремизма към районното МВР на Москва Алексей Маскунов. „В Кавказ биха те убили“, казва полицейският шеф на активиста. Формалният повод за задържането е, че бил спукал гумите на колата на един от офицерите на полицията Игор Шепел, подчинен на Маскунов.

Задържането е станало 2 дни преди да дойде на традиционната си почивка в България, за която има издадени визи за цялото семейство, заедно със самолетни билети. В България обаче Чупов получава предупреждение, че ако се върне в Русия, го чака нов сигурен арест и репресивни действия от страна на властите. Освен това той е получил и заплахи за живота си по мобилно приложение от полицейски шефове в Москва. Поради тази причина руснакът решава да поиска политическо убежище в страната, заедно със семейството си.

Въпреки абсурдността на обвиненията и опасността за живота на активиста при завръщането му в Русия, Държавната агенция за бежанците (ДАБ) мотае месеци наред семейството на Чупов, а двамата със съпругата му имат четири деца. В ръководството на агенцията с казуса “Чупови” е заето политически-ангажирано лице –  общинският съветник от ГЕРБ Иван Миланов. Той е известен с това, че е гласувал за скандалната продажба на земя на безценица в Божурище, което предизвика арест и обвинение от прокуратурата на кмета Георги Димов през май на 2019 г. Скандалният отказ за убежище е подписан от председателката на ДАБ Петя Първанова, която е и бивш служебен министър на вътрешните работи. 

Снимка: “Биволъ”

Докато световноизвестните медии като Bellingcat, CNN, Spiegel и The Insider Russia се цитират из целия свят след разследването им за осмината агенти на Федералната служба за сигурност ФСБ, отровили Алексей Навални, в София политическо убежище очаква един от доброволците на „Партията на прогреса“ на лидера на руската опозиция, участвал в кампанията за местните избори в Москва през 2019 г.

И това се случва на фона на осъденото от министъра на външните работи Екатерина Захариева отравяне на Алексей Навални, заради което през октомври 2020 г. България наравно с другите страни-членки на ЕС гласува допълнителни санкции срещу Москва.

Причината Евгений Чупов и семейството му да не са екстрадирани към Русия е това, че зад тях застанаха НПО-та като „Гласът на България“, Българският червен кръст (БЧК), Атлантически съвет – България и руската правозащитна организация „Мемориал“. „Биволъ“ успя да говори с московския адвокат на активиста Владимир Воронин и да посети Евгений Чупов и семейството му, което даде ексклузивно интервю пред камерите на медията ни и предостави копия на цялата си документация по случая.

„Хванахме те!“

На 28 май 2019 г. московчанинът Евгений Чупов води децата си на детска градина. На връщане е задържан и прекарва 12 часа в арест към РПУ към Войковски район на гр. Москва. В интервю за „Биволъ“ той споделя, че задържането му е “без никакво официално обяснение защо”.

Снимка: Николай Марченко, “Биволъ”

Разследващият му показал някакъв протокол, според който той е нанесъл щети върху гумите на някаква кола „Киа Рио“, която по съвпадение е на сътрудник на същия Център за борба с екстремизма към МВР на Руската федерация.

Въпросният т. нар. Център „Е“ към МВР на Русия тормози от години екипа на Алексей Навални с външно наблюдение, следене преди и след протестите,  и задържане на активисти из цялата страна. Сътрудникът е майор в полицията Игор Шепел.

След като Чупов е заведен в стая без видеонаблюдение, полицаите искат да му отнемат смартфона. Но като отказва да го предостави без съответния протокол според процедурата, е ударен силно в корема.

Оставен е без телефон и без връзка с адвоката си, осигурен му от директора на Фонда за борба с корупцията (ФБК) на Алексей Навални – Иван Жданов.

Началникът на районния отдел за противодействие на екстремизъм Алексей Маскунов започва да преглежда съобщенията в телефона на активиста и контактите му.

Руският полицейски шеф Тимур Валиулин се прости с поста си заради недекларирани имоти в България

„Той ми каза: доброволец си в щаба на Иван Жданов, кандидата (на местните избори за Московската градска дума през лятото на 2019 г. – бел. ред.) на Навални. И сега те хванахме!“, припомня си Евгений.

След това полицейският началник записва и номерата на иззетите от него лични банкови карти.

„Каза ми: Сега ще видим кой те финансира!”

“Сред службите в Русия като КГБ (днес – ФСБ – бел. ред.) има мнение, че след като ние като активисти си защитаваме жилищата, дърветата, парковете, значи всички сме финансирани от Държавния департамент на САЩ“, разказва Чупов.

Маскунов също така го е заплашил, че полицията ще подхвърли наркотици в апартамента му и той ще трябва да лежи с години:

“Нали знаеш как действаме с наркотиците?“.

Обичайна практика на руските полицаи е да „натопяват“ лидерите на опозицията, НПО-та и журналисти с „намерените наркотици“. „Биволъ“ писа през 2019 г. как наркотици са подхвърлени на журналиста Иван Голунов, а това доведе до оставки след публикации на световни медии, които предизвикаха намесата на президента Владимир Путин: вижте материала „Репортерът на „Медуза“ в ареста, зам.-кмет на Москва – в пентхаус за 20 млн. евро“.

Репортерът на “Медуза” – в ареста, зам.-кмет на Москва – в пентхаус за €20 млн.

„Аз съм сигурен, че заплахите към мен са свързани с политическата ми активност, тъй като съм подкрепял кандидата за Московската градска дума Иван Жданов, който ръководи Фонда за борба с корупцията на Алексей Навални – тази НПО от години се занимава с разследвания на различни случаи на корупция, законови нарушения и други нередности в Русия“

Освен, че е доброволец по време на изборите през 2019 г., той е основател и координатор на московските квартални НПО-та «Отбрана на Головино», «Отбрана на Левобережни» и «Отбрана на района Аеропорт», които се борят с презастрояването, изсичането на дървета и др. нарушения от страна на общината.

Всичко това се случва само два дни преди ваканцията на Евгений Чупов и семейството му в България.  Те идват в страната ни всяко лято, за да прекарат отпуската си при приятелите им във Варна. Тъй като България не е в Шенген, визите на семейството са обяснението защо след три месеца не искат политическо убежище другаде.

Адвокатът: Ако се върне, го пращат в ареста!

Владимир Воронин (снимка: Twitter)

„Биволъ“ се свърза и с московския адвокат на Евгений Чупов Владимир Воронин, който от години сътрудничи с ФБК на Алексей Навални, за да изясни има ли опасност за активиста и семейството му, ако бъдат принудени да се върнат в Русия.

„Налице е абсолютната реалност, че по престъплението, което според законодателството на Руската федерация се смята за такова с неголяма тежест, наистина могат да го пратят в следствен арест и то за доста дълъг период от време“, коментира адв. Воронин пред медията ни.

„По силата на това, че най-вероятно ще сметнат, че се е укрил от руските власти,  това означава, че спрямо него може да бъде наложена най-строгата мярка за неотклонение – задържане под стража“.

„А след престой в следствения арест никой не става нито по-здрав, нито пък се оказва по-близо до семейството си. Затова разбира се, смятам, че ако се върне, ще бъде задържан под стража и смятам, че това е опасността“, категоричен бе защитникът на Евгений Чупов.

Адвокатът посочи и с какви доказателства разполага защитата за наличие на политическо преследване:

„Доказателствата ни са на базата на това, че Евгений е водил дейност в качеството си на доброволец”.

“Раздавал е флайъри, агитирал е в паркове, многократно е задържан и привличан към административна отговорност. Това, макар по руското законодателство и да не е много сурово подвеждане под отговорност, след това внезапно е задържан, след като е завел детето си до детска градина“.

Той изрази и възмущението си, че много продължително време не го допускали при подзащитния му в качеството на негов адвокат. „Задържаха го сутринта. Научих това едва по обед. И не по-рано от 6 ч. вечерта бях допуснат при него. През цялото това време той беше без адвокат и какво точно се е случвало с него тогава, не знам“, казва Владимир Воронин.

„И той ни разказва, че през цялото време с него са водени някакви неразбираеми разговори, които естествено не са предвидени от Процесуалния кодекс на РФ, казвали са му, че могат да му подхвърлят наркотици и да започнат срещу него по-сериозно наказателно преследване“.

„Затова тук виждам ясно, че по отношение на подзащитния е имало оказване на натиск. Той не е можел да се свърже с адвоката си, единственото което той е успял, е да напише на съпругата си, че е задържан. И тя да се свърже с адвоката му. През продължителния период от време не са му викали Спешна помощ, макар че той не се е чувствал добре. Иззели са му всичко, което е имал в себе си“, разказва още адвокатът.

Защитата е писала и съответните жалби с искане да бъдат върнати личните вещи на Евгений Чупов:

„Мобилният му телефон, камера, още нещо имаше там от вещите му…Досега нищо не е върнато“.

„И когато неотдавна подадох заявление в полицейското районно, получих абсолютно неясен отговор за това, че срещу него е започнало наказателно преследване и това е. Тоест, наборът от всичките тези фактори, според мен, свидетелстват, че това е политически мотивирано дело“, разказа за „Биволъ“ Владимир Воронин.

Срамният отказ на ДАБ

През август 2019 г., вече в България, Евгений Чупов и съпругата му кандидатстват за закрила от Държавната агенция за бежанците по политически причини.

И тук започва сагата им от почти година и половина с безкрайната бюрокрация в България – интервюта, кореспонденция и чакане. Най-сложна е необходимостта от подновяване на личните карти през 3 месеца.

В началото на 2020 г. е привикан в Регистрационно-приемателния център – София в кв. Овча Купел. “Там разбрахме, че ни е отказано убежище“, припомня Чупов. Отказът е подписан лично от председателката на ДАБ Петя Първанова, която е и бивш служебен министър на вътрешните работи в кабинета на Марин Райков през 2013 г.

Руската служба на Радио Свободна Европа – Радио Свобода, публикува част от решението, според което:

Снимка: Николай Марченко, “Биволъ”

„изложената фактическа обстановка не дава основание да се предполага, че заявителят е бил принуден да напусне родината си заради реална опасност от сериозни посегателства като смъртно наказание, мъчения, нечовешко или унизително отношение или наказание“.

В Административния съд – София град са заведени три дела от името на Евгений Чупов, съпругата му с малолетните деца и за непълнолетните. При обжалване казусът трябва да бъде решен окончателно от Върховния административен съд (ВАС).

От ДАБ абсолютно наивно и формално едновременно твърдят, че били отправяли “запитване” до руското МВР с цел да проверят дали е имало случай на “упражнено насилие” над Евгений Чупов при задържането му през 2019 г.

Но били получили отрицателен отговор, че било “направено разследване за превишаване на правомощия на служители”, но то не установило да е имало такова спрямо активиста. Нима в София са очаквали друг официален отговор от страна на Москва?

А през февруари инж. Иван Миланов, който е директор “Международна дейност” към ДАБ, пише доста странна 8-странична “справка” за ситуацията с правата на човека в Руската федерация, която е почти изцяло копипейст от руски медии. “Документът” съдържа доста повърхностен “анализ”, без да има достатъчно подробности за атаките срещу екипа на Алексей Навални, подложени на следене, обиски, задържания, блокиране на банкови карти и дори отвличания или арести.

Инж. Миланов отделя на темата едва един абзац, предпочитайки да разсъждава върху наличието на парламентарна “опозиция” в руската Държавна дума като партията ЛДПР на Владимир Жириновски, който самият от години не крие, че е управляван от Кремъл наравно с останалите формации в законодателния орган.

Общинският съветник в Божурище и директор в ДАБ Иван Миланов

На Иван Миланов обаче няма как да не е ясно за какъв сериозен казус става дума, след като се представя за човек с “дипломатическа кариера” и е политически ангажирано лице. А именно – общински съветник от управляващата партия ГЕРБ.

Ето и как определя себе си в партийната листовка по време на кампанията, публикувана на уебстраницата на тогава все още кандидат-кмет на Божурище от ГЕРБ Георги Димов:

“Решителна и динамична личност съм. Силно позитивен, спортен характер, обичащ литературата, музиката и технологиите. На мен може да се разчита не само, когато сме добре…Имам изключително ниско ниво на компромис по отношение на лъжа и неправда”.

През лятото на 2019 г. при задържането на кмета на Божурище Георги Димов, общинският съветник Иван Миланов се оправдава пред bTV заради гласа си за скандалната продажба на земя под пазарната цена, довела до арест на местния градоначалник:

“Подкрепих го, защото това е земя ливада”.

“Тя е таква земя, не може да бъде по-скъпа…”, коментира тогава “решителният” гербаджия Иван Миланов за мащабната измама в Община Божурище. Той обаче не се споменава сред задържаните или свидетели, продължавайки с “дипломатическата” си кариера в ръководството на ДАБ, където изготвя “справки” за случаите като този на семейство Чупови.

Медийният ефект 

Но въпреки мощната “аналитика” на ДАБ, медиите и НПО-тата се оказват далеч по-професионални и по-ефективни. През септември 2020 г. за отказа на ДАБ няколко пъти писа „Свободна Европа“. Известната в Русия правозащитна организация “Мемориал” предостави своето препоръчително писмо на Евгений Чупов, в което се опитва да увери властите в София, че за него е опасно да се връща в Русия.

Чупов

“Заплахите, с които Евгений Чупов ще се сблъска или би могъл да се сблъска в случай, че се върне в Русия, могат да бъдат окачествени като преследвания поради признака за принадлежност към определени политически убеждения”.

“Съответните заплахи могат да бъдат окачествени като “преследване” по смисъла на чл.1 на Конвенцията “За статута на бежанците” от 1951 г. и според допълнителния протокол към документа от 1967 г.”, гласи писмото на “Мемориал”, с което “Биволъ” разполага.

Публично изрази възмущението си и „Атлантическият съвет на България“, според който „българската държава трябва не с думи, а с конкретни действия и дела да доказва ежедневно, че е свободна, европейска, демократична и правова държава. 

Със съжаление научаваме, че все още не е уреден статутът на г-н Евгений Чупов. Отново ставаме свидетели на липсата на устойчиво справедливо решение. Атлантическият съвет на България настоява за незабавно решение на случая и предоставяне на политическо убежище на г-н Евгений Чупов. Вече има достатъчно публично оповестени сведения за упражнен в Русия полицейски натиск и насилие върху г-н Евгений Чупов заради това, че е бил част от доброволците, помагали в предизборната кампания на кандидат от екипа на руския опозиционен лидер Алексей Навални.  

Едва ли има в страната друг човек, който като Евгений Чупов да се чувства благодарен на българските медии. След публикациите на Каспаров.ру, Радио Свобода, сайтовете „Свободна Европа“, Фактор.бг и становището на сайта на Атлантическия съвет – София, изведнъж на 25 септември 2020 г. ДАБ оттегля решението си. Това стана ясно от определението на Софийския административен съд (САД), който прекрати делото, заведено от Евгений Чупов с цел оспорване на отказа на ДАБ за политическо убежище.

Агенцията на ЕС за основните права не може да помогне…

Оказа се, че ДАБ е оттеглила решението, с което отказва да даде закрила на Чупови. “Поради оттеглянето на оспорваното по делото решение, съдебното производство по делото е процесуално недопустимо”, пише в определението на съда. При обжалване казусът ще бъде решен окончателно от Върховния административен съд.

На 4 декември 2020 г. Агенцията на ЕС за основните права (FRA) отговаря на Евгений Чупов, че няма как да му помогне по казуса му в България.

Със съжаление Ви уведомяваме, че мандатът на FRA не й дава правомощия да разглежда отделни случаи или жалби. Освен това агенцията няма правомощия да наблюдава държавите-членки на ЕС за наличие на нарушения на човешките права. Следователно, ние не можем да предложим никакъв съвет или помощ по Вашия случай.

Че на практика не могат да въздействат върху властите в София, отговарят на Евгений Чупов и от Европарламента.

Европарламентът нямало как да помогне…

„Биволъ“ се опита да се свърже с ръководството на ДАБ и нейния председател Петя Първанова. Тъй като не е била на работното място според секретарката й, се наложи да бъде проведен разговор с ръководителката на „Връзки с обществеността“ Калина Йотова.

„Не, официални позиции ние конкретно по казуси нямаме, тъй като когато се касае за граждани, които пред нас търсят международна закрила и са настанени при нас, ние нямаме и по принцип не даваме информация за такива хора“.

„Вижте, агенцията не се оправдава. Агенцията, когато предоставя информация, това е информацията на самите чужденци, на тези, за които се работи по техния статут, и на трети страни. И съответно ние нямаме практика, международното право не позволява да се дава такава информация, съответно да се коментират на съда решенията не е наша работа като институция, нали ме разбирате?“, каза тя.

Според нея колегите й в ДАБ, написали абсурдната „справка“ за ситуацията в Русия „си вършат съвестно работата“:

„Не ми е работата да им давам оценка на колегите си“.

На писмените въпроси от ДАБ щели да реагират „в обичайния срок, в който отговаряме“.

Петя Първанова с бившия главен прокурор Сотир Цацаров (Снимка: “Утро Русе”)

Занимаващата се със случая на Чупови началник на отдел „Производство за международна защита“ в РПЦ в Овча купел към ДАБ Елеонора Йорданова отказа да коментира ситуацията пред „Биволъ“ с мотива, че „не е упълномощена“.

Това не попречи на същата служителка през 2016 г. да търси оправдания от името на ДАБ пред Нова ТВ по повод пуснатия да живее на квартири в София “терорист от Ансбах” Мохамед Далел, сириец, извършил атентат в германския град. Тогава тя твърди пред телевизията, че е бил “абсолютно уравновесен човек”. И допълва тогава също, че честите интервюта с очакващите убежище се провеждали тогава, когато се подготвял отказ за такова.

Официалният писмен отговор на пресслужбата на ДАБ от името на Петя Първанова също не носи каквато и да било сериозна информация, освен, че в институцията държат на дискретността и не искат никаква прозрачност по казусите като този на Евгений Чупов.

„В отговор на Вашето запитване, получено по електронната поща на 16 декември 2020 г., Ви информирам, че в момента лицето, от което се интересувате е в производство за международна закрила. Изследват се всички факти и обстоятелства, свързани с молбата му за международна закрила. В законоустановените срокове ще бъде постановено решение”.

Не става ясно и защо са толкова много провежданите интервюта, без да има яснота за бъдещето на семейството в България.

“С оглед спецификата на работата на Държавната агенция за бежанците при Министерския съвет (ДАБ при МС) и законовите актове, свързани с личните данни, ДАБ при МС не обсъжда конкретни казуси с трети лица. В рамките на производството търсещите закрила получават пълна информация за своите права и задължения, като могат да ангажират адвокати, които да ги представляват по време на процедурата и да защитават техните права“.

Докато отказва на едно семейство от Москва правото да се настани в България, самата Петя Ангелова Първанова през 2019 г. се сдобива с къща с двор (реална застроена площ – 105 кв. м) в с. Шипочане в Самоков срещу 56 хил. лв. Това показва данъчната й декларация за 2020 г., която може да се види в търсачката на “Биволъ” и сайта Bird.bg “Български политически лица”, според която ексминистърката от години трупа влогове в лева, евро и британски лири по сметките си в банка ДСК и в ПИБ.

На 15 декември 2020 г. Евгений Чупов уведомява „Биволъ“, че е поканен на поредно интервю в центъра в Овча купел. „Преди малко ми се обадиха от агенцията, ново интервю е насрочено за понеделник в 10 ч.“, съобщи активистът.

Дали обаче ще получи дългоочакваното решение за политическо убежище, остава да гадаем, тъй като икономическите облаги от страна на Кремъл като проекта за газопровод „Турски поток 2“ са основният приоритет за правителството на Бойко Борисов. Добрата новина е, че санкциите на САЩ след встъпването в длъжност на избрания президент Джо Байдън през януари 2021 г., са въпрос на време.

Дотогава източноевропейските лидери като Бойко Борисов, Александър Вучич и Виктор Орбан ще се стараят да задоволят егото на руския държавен глава Владимир Путин, имитирайки „довършването“ на корупционния проект и създавайки всякакъв вид пречки за тези, които се борят срещу режима в Москва.

Запитан от „Биволъ“ какво би казал на премиера Бойко Борисов, Евгений Чупов бе непреклонен:

„Мисля, че българският министър-председател е наясно с това, което се случва в Руската федерация“.

Снимка: “Биволъ”

„Бих му казал: Отворете си очите! Ако сте подписали Женевската конвенция, си я спазвайте, ако сте приели съответното законодателство, спазвайте го!“.

Оказа се, че Чупови не са единствените руснаци, които бягат от Русия и безуспешно търсят закрила от ДАБ. Флора Ахметова и синът й Даниил от Санкт Петербург са получили от агенцията отказ за убежище, въпреки че семейството е заплашено от престъпните групировки в Русия. А в края на февруари те трябва да напуснат общежитието за бежанци в столичния квартал „Овча купел“, писа Faktor.bg, уведомен от Евгений Чупов за казуса на сънародничката му.

 

Security updates for Friday

Post Syndicated from original https://lwn.net/Articles/840731/rss

Security updates have been issued by Arch Linux (blueman, chromium, gdk-pixbuf2, hostapd, lib32-gdk-pixbuf2, minidlna, nsd, pam, and unbound), CentOS (gd, openssl, pacemaker, python-rtslib, samba, and targetcli), Debian (kernel, lxml, and mediawiki), Fedora (mbedtls), openSUSE (clamav and openssl-1_0_0), Oracle (firefox and openssl), Red Hat (openssl, postgresql:12, postgresql:9.6, and thunderbird), Scientific Linux (openssl and thunderbird), and SUSE (cyrus-sasl, openssh, slurm_18_08, and webkit2gtk3).

US Schools Are Buying Cell Phone Unlocking Systems

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2020/12/us-schools-are-buying-cell-phone-unlocking-systems.html

Gizmodo is reporting that schools in the US are buying equipment to unlock cell phones from companies like Cellebrite:

Gizmodo has reviewed similar accounting documents from eight school districts, seven of which are in Texas, showing that administrators paid as much as $11,582 for the controversial surveillance technology. Known as mobile device forensic tools (MDFTs), this type of tech is able to siphon text messages, photos, and application data from students’ devices. Together, the districts encompass hundreds of schools, potentially exposing hundreds of thousands of students to invasive cell phone searches.

The eighth district was in Los Angeles.

Да оправим наредбата за електронните рецепти

Post Syndicated from Bozho original https://blog.bozho.net/blog/3673

Министерският съвет е приел изменение на наредбата за реда за отпускане на лекарствени продукти, добавяйки текстове за електронна рецепта. Изменението е обнародвано днес.

Това, за съжаление, се е случило в пълно нарушение на Закона за нормативните актове и не е минало през обществено обсъждане, така че мога да дам коментари само пост фактум, с идеята все пак някои неща да бъдат прецизирани. Разбирам, че ако можеш да извадиш 30 дни от срока за приемане на наредба в кризисна ситуация е разумно да го направиш, но пандемия има от 9 месеца, а за електронна рецепта се говори от десетилетия.

Ако пишеш изменения на наредбата чак сега, това показва безкрайна неподготвеност. А наредбата явно е писана „на коляно“, с много недомислени неща и терминологични грешки. Да, ще свърши работа, колкото да могат да се издават електронни рецепти, но ще трябва да се коригира в движение.

Ето и моите критики и предложения:

  • Наредбата казва, че „електронните предписания се издават, въвеждат, обработват и съхраняват чрез специализиран медицински и аптечен софтуер.“. Електронните рецепти трябва да се съхраняват в Националната здравна информационна система (НЗИС), а не в медицинския и аптечен софтуер. Там може да се съхраняват временно и за удобство, но централното място за съхранение трябва да бъде НЗИС. В следваща алинея наредбата урежда „висящото“ положение на прогресивно появяваща се НЗИС, което може да звучи добре, но няма как да е нормативен акт. Наредба не може да казва „като стават някакви неща, ще видим как ще ги ползвате“.
  • Наредбата казва, че „За издаването на електронно предписание лекарят или лекарят по дентална медицина се идентифицира чрез КЕП“. Ред за идентифициране с КЕП има временен по наредба към Закона за електронното управление и е редно тази наредба да препрати към нея, защото иначе „идентификация с КЕП“ не значи нищо. А е важно в този контекст как точно ще се извършва идентификацията. Правилният подход е от квалифицирания електронен подпис на лекаря да се вземе ЕГН-то, то да се провери в регистъра на лекарите (или фармацевтите, за които има сходен ред в следващия член) (вместо, например, да се изисква вписване на служебен номер в КЕП). Също така, идентификация трябва да може да се извършва и по реда на Закона за електронната идентификация. Но при издаване, процесът на идентификация може всъщност да е излишен – предвид, че рецепта се издава чрез софтуер при лекаря (и се проверява в аптеката), лекарят вече е влязъл в системата. Наредбата не казва пред кого се идентифицират, така че стъпката може да се премахне.
  • Според наредбата „При електронното предписване на лекарствения продукт се извършва автоматична проверка в регистъра [..]“ – тук е важно да се отбележат техническите параметри на тази проверка, т.е. посоченото по-горе – че на база на ЕГН се извлича УИН. Дали регистрите на БЛС и на фармацевтите поддържат такава справка? И да не поддържат, може бързо и лесно да се добави, тъй като я има. Но по-проблемното в тази алинея е, че тя не представлява норма, а прави разказ. Нормативните актове създават права и задължения. Не може да кажеш „се прави проверка“. Казваш кой е длъжен да направи тази проверка. Освен че е лош нормативен текст, наистина не става ясно кой е длъжен да я прави – дали доставчикът на болничен и аптечен софтуер, или, както е правилно – НЗИС, откъдето трябва да минават всички рецепти. Само че НЗИС не е лице, на което може да се вменят задължения, така че трябва ясно да се напише, че Министерството на здравеопазването прави тази проверка чрез НЗИС.
  • Подписване на рецептата с КЕП според наредбата се прави след проверка в регистрите, а е редно да е обратно – в момента на изпращане на подписаната рецепта към НЗИС, да се проверят всичките ѝ реквизити.
  • След като пациентът отиде в аптеката, „магистър-фармацевтът извършва действия по идентифициране на издаденото електронно предписание, при които като водещ критерий използва ЕГН на пациента“ – това тука е доста проблемно. Според публичните изказвания от преди месец идентифицирането на рецептата трябваше да става по ЕГН и последните 4 цифри от кода на рецептата. Този текст не казва как се прави проверка, „водещ критерий“ е неясно. Може ли само по този критерий (не би трябвало)? По кои други критерии може – ЕГН+номер на лична карта, ЕГН+4 цифри от номер на рецепта? По принцип е добре да се спести разпечатване на електронни рецепти (защото това би ги обезсмислило), така че предложението, което аз бях направил преди 8 месеца, беше ЕГН+номер на лична карта, или поне част от номера на личната карта. Фармацевтът може да вижда няколко активни рецепти и е добре да знае коя да изпълни. Дали това трябва да се опише в наредба е спорно, но предвиждам обърквания, поне в началото.
  • „При осигурени технически и организационни условия за това от Министерството на здравеопазването и НЗОК“ – това е много неясен критерий, за да го има в нормативен акт. Редно е държавата първо да създаде условията и тогава да налага срокове.
  • Липсват изисквания за сигурност и защита на данните – в какъв вид НЗИС обработва и съхранява рецептите? След колко време те се изтриват или анонимизират? Има ли проследимост кой какви справки е правил – напр. фармацевти по кои ЕГН-та са търсили и съответно има ли отпуснати след това лекарства. Кой до каква функционалност има достъп? Как е уреден достъпа до НЗИС в МЗ, включително за справочни функционалности? Комисията за защита на личните данни дала ли е становище по проекта на наредба?
  • Липсват ясни инструкции за доставчиците на болничен и аптечен софтуер – къде и какви номенклатури да ползват и имат ли гаранция, че те са актуални. Наредбата казва, че „Програмните интерфейси и номенклатурите за обмен на информация между медицинския и аптечен софтуер и НЗИС се актуализират текущо“, но това е неясно и неприемливо. Липсва препратка към чл. 14 от наредбата към Закона за електронно управление, която урежда поддържането на версии на програмните интерфейси – не трябва да може МЗ/НЗИС да смени от днес за утре един интерфейс и всичко да се счупи.
  • Не е уреден форматът на кода на рецептата. Това е малък проблем, но обикновено се урежда с нормативния акт, който въвежда дадена система. Бих предложил да следва предписанията на чл. 4, ал. 5 от наредбата към ЗЕУ, т.е. да ползва UUID (RFC 4122), особено ако няма да се налага да се цитират 4 цифри/букви от него (примерна скица – след списъка).
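Ето една минимална примерна скица (с libuuid; имената и изходът са илюстративни и не са част от наредбата) как би могъл да се генерира такъв код по RFC 4122:

#include <stdio.h>
#include <uuid/uuid.h>   /* libuuid – компилира се с -luuid */

int main(void)
{
    uuid_t id;
    char text[37];                 /* 36 знака + терминираща нула */
    uuid_generate_random(id);      /* UUID версия 4 съгласно RFC 4122 */
    uuid_unparse_lower(id, text);
    printf("Код на рецепта: %s\n", text);
    return 0;
}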

Дано електронните рецепти да заработят добре. Но това ще бъде въпреки тази наредба, която освен че идва в най-последния възможен момент, нарушавайки Закона за нормативните актове, е и доста непрецизна и неясна. С две думи – така не се прави. МЗ възлага на „Информационно обслужване“ НЗИС през юли. Оттогава имаше предостатъчно време да се подготви наредбата и да се приеме след обсъждане и изчистване на проблемите в нея.

Това е просто още един пример как се случва електронното управление у нас – на парче, на коляно, в последния момент и само под много силен обществен натиск. Все пак, хубавото е, че нещо се случва – че ще има електронни рецепти (и направления). Но Министерството на здравеопазването (и всяко друго) трябва да работи по-качествено и по-прозрачно.

Материалът Да оправим наредбата за електронните рецепти е публикуван за пръв път на БЛОГодаря.

Computing Euclidean distance on 144 dimensions

Post Syndicated from Marek Majkowski original https://blog.cloudflare.com/computing-euclidean-distance-on-144-dimensions/


Late last year I read a blog post about our CSAM image scanning tool. I remember thinking: this is so cool! Image processing is always hard, and deploying a real image identification system at Cloudflare is no small achievement!

Some time later, I was chatting with Kornel: “We have all the pieces in the image processing pipeline, but we are struggling with the performance of one component.” Scaling to Cloudflare needs ain’t easy!

The problem was in the speed of the matching algorithm itself. Let me elaborate. As John explained in his blog post, the image matching algorithm creates a fuzzy hash from a processed image. The hash is exactly 144 bytes long. For example, it might look like this:

00e308346a494a188e1043333147267a 653a16b94c33417c12b433095c318012
5612442030d14a4ce82c623f4e224733 1dd84436734e4a5d6e25332e507a8218
6e3b89174e30372d

The hash is designed to be used in a fuzzy matching algorithm that can find “nearby”, related images. The specific algorithm is well defined, but making it fast is left to the programmer — and at Cloudflare we need the matching to be done super fast. We want to match thousands of hashes per second, of images passing through our network, against a database of millions of known images. To make this work, we need to seriously optimize the matching algorithm.

Naive quadratic algorithm

The first algorithm that comes to mind has O(K*N) complexity: for each query, go through every hash in the database. In a naive implementation, this creates a lot of work. But how much work exactly?

First, we need to explain how fuzzy matching works.

Given a query hash, the fuzzy match is the “closest” hash in a database. This requires us to define a distance. We treat each hash as a vector containing 144 numbers, identifying a point in a 144-dimensional space. Given two such points, we can calculate the distance using the standard Euclidean formula.

For our particular problem, though, we are interested in the “closest” match in a database only if the distance is lower than some predefined threshold. Otherwise, when the distance is large,  we can assume the images aren’t similar. This is the expected result — most of our queries will not have a related image in the database.

The Euclidean distance equation used by the algorithm is standard:

d(p, q) = sqrt( (p1 − q1)² + (p2 − q2)² + … + (p144 − q144)² )

To calculate the distance between two 144-byte hashes, we take each byte, calculate the delta, square it, sum it to an accumulator, do a square root, and ta-dah! We have the distance!

Here’s how to count the squared distance in C:

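The function itself appears only as an image in the original post, so here is a minimal scalar sketch of what it could look like (the name and signature are illustrative, not Cloudflare's production code):

#include <stdint.h>

/* Squared Euclidean distance between two 144-byte hashes.
 * Scalar sketch of the description above: per-byte delta, square, accumulate.
 * The square root is deliberately skipped. */
static uint32_t distance_sq(const uint8_t a[144], const uint8_t b[144])
{
    uint32_t sum = 0;
    for (int i = 0; i < 144; i++) {
        int32_t d = (int32_t)a[i] - (int32_t)b[i];
        sum += (uint32_t)(d * d);
    }
    return sum;  /* worst case 144 * 255 * 255 = 9,363,600, fits in uint32 */
}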

This function returns the squared distance. We avoid computing the actual distance to save us from running the square root function – it’s slow. Inside the code, for performance and simplicity, we’ll mostly operate on the squared value. We don’t need the actual distance value, we just need to find the vector with the smallest one. In our case it doesn’t matter if we’ll compare distances or squared distances!

As you can see, fuzzy matching is basically a standard problem of finding the closest point in a multi-dimensional space. Surely this has been solved in the past — but let’s not jump ahead.

While this code might be simple, we expect it to be rather slow. Finding the smallest hash distance in a database of, say, 1M entries, would require going over all records, and would need at least:

  1. 144 * 1M subtractions
  2. 144 * 1M multiplications
  3. 144 * 1M additions

And more. This alone adds up to 432 million operations! How does it look in practice? To illustrate this blog post we prepared a full test suite. The large database of known hashes can be well emulated by random data. The query hashes can’t be random and must be slightly more sophisticated, otherwise the exercise wouldn’t be that interesting. We generated the test smartly by byte-swaps of the actual data from the database — this allows us to precisely control the distance between test hashes and database hashes. Take a look at the scripts for details. Here’s our first run of the first, naive, algorithm:

$ make naive
< test-vector.txt ./mmdist-naive > test-vector.tmp
Total: 85261.833ms, 1536 items, avg 55.509ms per query, 18.015 qps

We matched 1,536 test hashes against a database of 1 million random vectors in 85 seconds. It took 55ms of CPU time on average to find the closest neighbour. This is rather slow for our needs.

SIMD for help

An obvious improvement is to use more complex SIMD instructions. SIMD is a way to instruct the CPU to process multiple data points using one instruction. This is a perfect strategy when dealing with vector problems — as is the case for our task.

We settled on using AVX2, with 256 bit vectors. We did this for a simple reason — newer AVX versions are not supported by our AMD CPUs. Additionally, in the past, we were not thrilled by the AVX-512 frequency scaling.

Using AVX2 is easier said than done. There is no single instruction to count Euclidean distance between two uint8 vectors! The fastest way of counting the full distance of two 144-byte vectors with AVX2 we could find is authored by Vlad:

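That routine is also embedded as an image in the source post; the sketch below is not Vlad's exact code, just an illustration of the steps described in the next paragraph, using _mm256_madd_epi16 to square and accumulate into int32 partial sums:

#include <immintrin.h>
#include <stdint.h>

/* AVX2 sketch of the full 144-byte squared distance (compile with -mavx2). */
static uint32_t distance_sq_avx2(const uint8_t a[144], const uint8_t b[144])
{
    __m256i acc = _mm256_setzero_si256();   /* eight int32 partial sums */
    for (int i = 0; i < 144; i += 16) {
        __m256i va = _mm256_cvtepu8_epi16(_mm_loadu_si128((const __m128i *)(a + i)));
        __m256i vb = _mm256_cvtepu8_epi16(_mm_loadu_si128((const __m128i *)(b + i)));
        __m256i d  = _mm256_sub_epi16(va, vb);   /* 16 deltas as int16 */
        /* madd multiplies int16 pairs and adds adjacent products into int32,
         * i.e. squaring plus a first level of horizontal summing in one go. */
        acc = _mm256_add_epi32(acc, _mm256_madd_epi16(d, d));
    }
    /* Reduce the eight int32 lanes to a single sum. */
    __m128i s = _mm_add_epi32(_mm256_castsi256_si128(acc),
                              _mm256_extracti128_si256(acc, 1));
    s = _mm_add_epi32(s, _mm_shuffle_epi32(s, _MM_SHUFFLE(1, 0, 3, 2)));
    s = _mm_add_epi32(s, _mm_shuffle_epi32(s, _MM_SHUFFLE(2, 3, 0, 1)));
    return (uint32_t)_mm_cvtsi128_si32(s);
}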

It’s actually simpler than it looks: load 16 bytes, convert the vector from uint8 to int16, subtract the vectors, store intermediate sums as int32, repeat. At the end, we need to do four complex instructions to extract the partial sums into the final sum. This AVX2 code improves the performance around 3x:

$ make naive-avx2 
Total: 25911.126ms, 1536 items, avg 16.869ms per query, 59.280 qps

We measured 17ms per item, which is still below our expectations. Unfortunately, we can’t push it much further without major changes. The problem is that this code is limited by memory bandwidth. The measurements come from my Intel i7-5557U CPU, which has the max theoretical memory bandwidth of just 25GB/s. The database of 1 million entries takes 137MiB, so it takes at least 5ms to feed the database to my CPU. With this naive algorithm we won’t be able to go below that.

Vantage Point Tree algorithm

Since the naive brute force approach failed, we tried using more sophisticated algorithms. My colleague Kornel Lesiński implemented a super cool Vantage Point algorithm. After a few ups and downs, optimizations and rewrites, we gave up. Our problem turned out to be unusually hard for this kind of algorithm.

We observed “the curse of dimensionality”. Space partitioning algorithms don’t work well in problems with large dimensionality — and in our case, we have an enormous number of 144 dimensions. K-D trees are doomed. Locality-sensitive hashing is also doomed. It’s a bizarre situation in which the space is unimaginably vast, but everything is close together. The volume of the space is a 347-digit-long number, but the maximum distance between points is just 3060 – sqrt(255*255*144).

Space partitioning algorithms are fast, because they gradually narrow the search space as they get closer to finding the closest point. But in our case, the common query is never close to any point in the set, so the search space can’t be narrowed to a meaningful degree.

A VP-tree was a promising candidate, because it operates only on distances, subdividing space into near and far partitions, like a binary tree. When it has a close match, it can be very fast, and doesn’t need to visit more than O(log(N)) nodes. For non-matches, its speed drops dramatically. The algorithm ends up visiting nearly half of the nodes in the tree. Everything is close together in 144 dimensions! Even though the algorithm avoided visiting more than half of the nodes in the tree, the cost of visiting remaining nodes was higher, so the search ended up being slower overall.

Smarter brute force?

This experience got us thinking. Since space partitioning algorithms can’t narrow down the search, and still need to go over a very large number of items, maybe we should focus on going over all the hashes, extremely quickly. We must be smarter about memory bandwidth though — it was the limiting factor in the naive brute force approach before.

Perhaps we don’t need to fetch all the data from memory.

Short distance

The breakthrough came from the realization that we don’t need to count the full distance between hashes. Instead, we can compute only a subset of dimensions, say 32 out of the total of 144. If this distance is already large, then there is no need to compute the full one! Computing more points is not going to reduce the Euclidean distance.

The proposed algorithm works as follows (a scalar sketch follows the list):

1. Take the query hash and extract a 32-byte short hash from it

2. Go over all the 1 million 32-byte short hashes from the database. They must be densely packed in the memory to allow the CPU to perform good prefetching and avoid reading data we won’t need.

3. If the distance of the 32-byte short hash is greater than or equal to the best score so far, move on

4. Otherwise, investigate the hash thoroughly and compute the full distance.
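A scalar sketch of these four steps might look like the following (illustrative only: the short_db/full_db layout is an assumption, distance_sq is the scalar helper sketched earlier, and best is seeded with the squared threshold discussed a little further down):

#include <stddef.h>
#include <stdint.h>

/* short_db holds the first 32 bytes of every hash, densely packed;
 * full_db holds the full 144-byte hashes in the same order.
 * Returns the index of the best match, or -1 if nothing beats `best`. */
static long scan(const uint8_t *short_db, const uint8_t *full_db, size_t n,
                 const uint8_t query[144], uint32_t best /* squared threshold */)
{
    long best_idx = -1;
    for (size_t i = 0; i < n; i++) {
        const uint8_t *s = short_db + i * 32;
        uint32_t partial = 0;
        for (int j = 0; j < 32; j++) {          /* cheap 32-dimension check */
            int32_t d = (int32_t)query[j] - (int32_t)s[j];
            partial += (uint32_t)(d * d);
        }
        if (partial >= best)                    /* step 3: it can only get worse */
            continue;
        uint32_t full = distance_sq(query, full_db + (size_t)i * 144);  /* step 4 */
        if (full < best) {
            best = full;
            best_idx = (long)i;
        }
    }
    return best_idx;
}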

Even though this algorithm needs to do less arithmetic and memory work, it’s not faster than the previous naive one. See make short-avx2. The problem is: we still need to compute a full distance for hashes that are promising, and there are quite a lot of them. Computing the full distance for promising hashes adds enough work, both in ALU and memory latency, to offset the gains of this algorithm.

There is one detail of our particular application of the image matching problem that will help us a lot moving forward. As we described earlier, the problem is less about finding the closest neighbour and more about proving that the neighbour with a reasonable distance doesn’t exist. Remember — in practice, we don’t expect to find many matches! We expect almost every image we feed into the algorithm to be unrelated to image hashes stored in the database.

It’s sufficient for our algorithm to prove that no neighbour exists within a predefined distance threshold. Let’s assume we are not interested in hashes more distant than, say, 220, which squared is 48,400. This makes our short-distance algorithm variation work much better:

$ make short-avx2-threshold
Total: 4994.435ms, 1536 items, avg 3.252ms per query, 307.542 qps

Origin distance variation


At some point, John noted that the threshold allows additional optimization. We can order the hashes by their distance from some origin point. Given a query hash which has origin distance of A, we can inspect only hashes which are distant between |A-threshold| and |A+threshold| from the origin. This is pretty much how each level of Vantage Point Tree works, just simplified. This optimization — ordering items in the database by their distance from origin point — is relatively simple and can help save us a bit of work.
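As a rough sketch of that idea, assuming the database has been sorted by origin distance ahead of time (the array names are illustrative): by the triangle inequality, any hash within the threshold of the query must have an origin distance between A − threshold and A + threshold, so two binary searches bound the slice worth scanning.

#include <stddef.h>

/* Given origin distances sorted ascending, find the [*lo, *hi) index window
 * of hashes whose origin distance lies within `threshold` of `query_dist`. */
static void origin_window(const float *origin_dist, size_t n,
                          float query_dist, float threshold,
                          size_t *lo, size_t *hi)
{
    float min_d = query_dist - threshold, max_d = query_dist + threshold;
    size_t l = 0, r = n;
    while (l < r) {                       /* lower bound of min_d */
        size_t m = l + (r - l) / 2;
        if (origin_dist[m] < min_d) l = m + 1; else r = m;
    }
    *lo = l;
    r = n;
    while (l < r) {                       /* upper bound of max_d */
        size_t m = l + (r - l) / 2;
        if (origin_dist[m] <= max_d) l = m + 1; else r = m;
    }
    *hi = l;
}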

While great on paper, this method doesn’t introduce much gain in practice, as the vectors are not grouped in clusters — they are pretty much random! For the threshold values we are interested in, the origin distance algorithm variation gives us ~20% speed boost, which is okay but not breathtaking. This change might bring more benefits if we ever decide to reduce the threshold value, so it might be worth doing for production implementation. However, it doesn’t work well with query batching.

Transposing data for better AVX

But we’re not done with AVX optimizations! The usual problem with AVX is that the instructions don’t normally fit a specific problem. Some serious mind twisting is required to adapt the right instruction to the problem, or to reverse the problem so that a specific instruction can be used. AVX2 doesn’t have useful “horizontal” uint16 subtract, multiply and add operations. For example, _mm_hadd_epi16 exists, but it’s slow and cumbersome.

Instead, we can twist the problem to make use of fast available uint16 operands. For example we can use:

  1. _mm256_sub_epi16
  2. _mm256_mullo_epi16
  3. and _mm256_add_epu16.

The add would overflow in our case, but fortunately there is add-saturate _mm256_adds_epu16.

The saturated add is great and saves us conversion to uint32. It just adds a small limitation: the threshold passed to the program (i.e., the max squared distance) must fit into uint16. However, this is fine for us.

To effectively use these instructions we need to transpose the data in the database. Instead of storing hashes in rows, we can store them in columns:


So instead of:

  1. [a1, a2, a3],
  2. [b1, b2, b3],
  3. [c1, c2, c3],

We can lay it out in memory transposed:

  1. [a1, b1, c1],
  2. [a2, b2, c2],
  3. [a3, b3, c3],

Now we can load the first byte of 16 different hashes using one memory operation. In the next step, we can subtract the first byte of the querying hash using a single instruction, and so on. The algorithm stays exactly the same as defined above; we just make the data easier to load and easier to process for AVX.

The hot loop code even looks relatively pretty:

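The real hot loop is shown as an image in the original post; the sketch below only illustrates the pattern on the transposed layout (the db_t layout and the batch of 16 are assumptions for the example). One load covers dimension j of 16 candidate hashes, the query byte is broadcast, and squared deltas accumulate with the saturating add, so each uint16 lane ends up holding one candidate's saturated squared distance, ready to be compared against the squared threshold.

#include <immintrin.h>
#include <stdint.h>
#include <stddef.h>

/* db_t is the transposed database: row j holds byte j of every hash, so
 * db_t[j * n_hashes + base] starts dimension j of 16 consecutive hashes. */
static void batch16_distance_sq(const uint8_t *db_t, size_t n_hashes, size_t base,
                                const uint8_t query[144], uint16_t out[16])
{
    __m256i acc = _mm256_setzero_si256();
    for (int j = 0; j < 144; j++) {
        __m256i db = _mm256_cvtepu8_epi16(
            _mm_loadu_si128((const __m128i *)(db_t + (size_t)j * n_hashes + base)));
        __m256i q  = _mm256_set1_epi16((int16_t)query[j]);   /* broadcast query byte */
        __m256i d  = _mm256_sub_epi16(db, q);
        /* d*d < 65536, so the low 16 bits of the product are exact. */
        acc = _mm256_adds_epu16(acc, _mm256_mullo_epi16(d, d));
    }
    _mm256_storeu_si256((__m256i *)out, acc);  /* 16 saturated squared distances */
}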

With the well-tuned batch size and short distance size parameters we can see the performance of this algorithm:

$ make short-inv-avx2
Total: 1118.669ms, 1536 items, avg 0.728ms per query, 1373.062 qps

Whoa! This is pretty awesome. We started from 55ms per query, and we finished with just 0.73ms. There are further micro-optimizations possible, like memory prefetching or using huge pages to reduce page faults, but they have diminishing returns at this point.

Roofline model from Denis Bakhvalov’s book

If you are interested in architectural tuning such as this, take a look at the new performance book by Denis Bakhvalov. It discusses roofline model analysis, which is pretty much what we did here.

Do take a look at our code and tell us if we missed some optimization!

Summary

What an optimization journey! We jumped between memory and ALU bottlenecked code. We discussed more sophisticated algorithms, but in the end, a brute force algorithm — although tuned — gave us the best results.

To get even better numbers, I experimented with Nvidia GPU using CUDA. The CUDA intrinsics like vabsdiff4 and dp4a fit the problem perfectly. The V100 gave us some amazing numbers, but I wasn’t fully satisfied with it. Considering how many AMD Ryzen cores with AVX2 we can get for the cost of a single server-grade GPU, we leaned towards general purpose computing for this particular problem.

This is a great example of the type of complexities we deal with every day. Making even the best technologies work “at Cloudflare scale” requires thinking outside the box. Sometimes we rewrite the solution dozens of times before we find the optimal one. And sometimes we settle on a brute-force algorithm, just very very optimized.

The computation of hashes and image matching are challenging problems that require running very CPU-intensive operations. The CPU we have available on the edge is scarce and workloads like this are incredibly expensive. Even with the optimization work talked about in this blog post, running the CSAM scanner at scale is a challenge and has required a huge engineering effort. And we’re not done! We need to solve more hard problems before we’re satisfied. If you want to help, consider applying!

Save the date for Coolest Projects 2021

Post Syndicated from Helen Drury original https://www.raspberrypi.org/blog/save-the-date-coolest-projects-2021/

The year is drawing to a close, and we are so excited for 2021!

More than 700 young people from 39 countries shared their tech creations in the free Coolest Projects online showcase this year! We loved seeing so many young people shine with their creative projects, and we can’t wait to see what the world’s next generation of digital makers will present at Coolest Projects in 2021.

A Coolest Projects participant showing off their tech creation

Mark your calendar for registration opening

Coolest Projects is the world-leading technology fair for young people! It’s our biggest event, and we are running it online again next year so that young people can participate safely and from wherever they are in the world.

Through Coolest Projects, young people are empowered to show the world something they’re making with tech — something THEY are excited about! Anyone up to age 18 can share their creation at Coolest Projects.

On 1 February, we will open registrations for the 2021 online showcase. Mark the date in your calendar! All registered projects will get their very own spot in the Coolest Projects online showcase gallery, where the whole world can discover them.

Taking part is completely free and enormously fun

If a young person in your life — your family, your classroom, your coding club — is making something with tech that they love, we want them to register it for Coolest Projects. It doesn’t matter how small or big their project is, because the Coolest Projects showcase is about celebrating the love we all share for getting creative with tech.

A teenage girl presenting a digital making project on a tablet

Everyone who registers a project becomes part of a worldwide community of peers who express themselves and their interests with creative tech. We will also have special judges pick their favourite projects! Taking part in Coolest Projects is a wonderful way to connect with others, be inspired, and learn from peers.

So if you know a tech-loving young person, get them excited for taking part in Coolest Projects!

“We are so very happy to have reached people who love to code and are enjoying projects from all over the world… everyone’s contributions have blown our minds… we are so so happy! Thank you to Coolest Projects for hosting the best event EVER!”

– mother of a participant in the 2020 online showcase

Want inspiration for projects? You can still explore all the wonderful projects from the 2020 showcase gallery.

A Coolest Projects participant

Young people can participate with whatever they’re making

Everyone is invited to take part in Coolest Projects — the showcase is for young people with any level of experience. The project they register can be whatever they like, from their very first Scratch animation, to their latest robotics project, website, or phone app. And we invite projects at any stages of the creation process, whether they’re prototypes, finished products, or works-in-progress!

  • To make the youngest participants and complete beginners feel like they belong, we work hard to make sure that taking part is a super welcoming and inspiring experience! In the showcase, they will discover what is possible with technology and how they can use it to shape their world.
  • And for the young creators who are super tech-savvy and make advanced projects, showcasing their creation at Coolest Projects is a great way to get it seen by some amazing people in the STEM sector: this year’s special judges were British astronaut Tim Peake, Adafruit CEO Limor Fried, and other fabulous tech leaders!

Sign up for the latest Coolest Projects news

To be the first to know when registration opens, you only have to sign up for our newsletter:

We will send you regular news about Coolest Projects to keep you up to date and help you inspire the young tech creator in your life!

The post Save the date for Coolest Projects 2021 appeared first on Raspberry Pi.

