Security updates for Friday

Post Syndicated from original https://lwn.net/Articles/889983/

Security updates have been issued by Debian (wireshark), Fedora (389-ds-base), Mageia (golang, wavpack, and zlib), openSUSE (yaml-cpp), SUSE (expat and yaml-cpp), and Ubuntu (linux, linux-aws, linux-kvm, linux-lts-xenial, linux-aws-5.4, linux-azure, linux-gcp, linux-gcp-5.13, linux-gcp-5.4, linux-gke, linux-gke-5.4, linux-gkeop, linux-gkeop-5.4, linux-aws-hwe, linux-gcp-4.15, linux-oracle, linux-intel-5.13, and tomcat9).

The end of the road for Cloudflare CAPTCHAs

Post Syndicated from Reid Tatoris original https://blog.cloudflare.com/end-cloudflare-captcha/

There is no point in rehashing the fact that CAPTCHA provides a terrible user experience. It’s been discussed in detail before on this blog, and countless times elsewhere. One of the creators of the CAPTCHA has publicly lamented that he “unwittingly created a system that was frittering away, in ten-second increments, millions of hours of a most precious resource: human brain cycles.” We don’t like them, and you don’t like them.

So we decided we’re going to stop using CAPTCHAs. Using an iterative platform approach, we have already reduced the number of CAPTCHAs we choose to serve by 91% over the past year.

Before we talk about how we did it, and how you can help, let’s first start with a simple question.

Why in the world is CAPTCHA still used anyway?

If everyone agrees CAPTCHA is so bad, if there have been calls to get rid of it for 15 years, if the creator regrets creating it, why is it still widely used?

The frustrating truth is that CAPTCHA remains an effective tool for differentiating real human users from bots despite the existence of CAPTCHA-solving services. Of course, this comes with a huge trade off in terms of usability, but generally the alternatives to CAPTCHA are blocking or allowing traffic, which will inherently increase either false positives or false negatives. With a choice between increased errors and a poor user experience (CAPTCHA), many sites choose CAPTCHA.

CAPTCHAs are also a safe choice because so many other sites use them. They delegate abuse response to a third party, and remove the risk from the website with a simple integration. Using the most common solution will rarely get you into trouble. Plug, play, forget.

Lastly, CAPTCHA is useful because it has a long history of a known and stable baseline. We’ve tracked a metric called CAPTCHA (or Challenge) Solve Rate for many years. CAPTCHA solve rate is the number of CAPTCHAs solved, divided by the number of page loads. For our purposes, both failing and not attempting to solve the CAPTCHA count as a failure, since in either case a user cannot access the content they want. We find this metric to typically be stable for any particular website. That is, if the solve rate is 1%, it tends to remain at 1% over time. We also find that any change in solve rate – up or down – is a strong indicator of an attack in progress. Customers can monitor the solve rate and create alerts to notify them when it changes, then investigate what might be happening.
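
To make the metric concrete, here is a minimal sketch (not Cloudflare’s code) of how a solve rate could be computed and checked for drift; the tolerance and example numbers are illustrative assumptions.

def solve_rate(solved: int, page_loads: int) -> float:
    """Solve rate = CAPTCHAs solved / challenge page loads.
    Failing or abandoning the challenge both count as unsolved."""
    return solved / page_loads if page_loads else 0.0

def is_anomalous(current_rate: float, baseline_rate: float, tolerance: float = 0.5) -> bool:
    """Flag a large relative move away from the historical baseline,
    in either direction, as a possible attack in progress."""
    if baseline_rate == 0:
        return current_rate > 0
    return abs(current_rate - baseline_rate) / baseline_rate > tolerance

# Example: baseline 1% solve rate, but today only 100 of 50,000 loads were solved.
baseline = 0.01
today = solve_rate(solved=100, page_loads=50_000)   # 0.002
print(today, is_anomalous(today, baseline))          # 0.002 True -> investigate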

Many alternatives to CAPTCHA have been tried, including our own Cryptographic Attestation. However, to date, none have seen the amount of widespread adoption of CAPTCHAs. We believe attempting to replace CAPTCHA with a single alternative is the main reason why. When you replace CAPTCHA, you lose the stable history of the solve rate, and making decisions becomes more difficult. If you switch from deciphering text to picking images, you will get vastly different results. How do you know if those results are good or bad? So, we took a different approach.

Many solutions, not one

Rather than try to unilaterally deprecate and replace CAPTCHA with a single alternative, we built a platform to test many alternatives and see which had the best potential to replace CAPTCHA. We call this Cloudflare Managed Challenge.

Managed Challenge is a smarter solution than CAPTCHA. It defers the decision about whether to serve a visual puzzle to a later point in the flow, after more information is available from the browser. Previously, a Cloudflare customer could only choose between a CAPTCHA and a JavaScript Challenge as the action of a security or firewall rule. Now, the Managed Challenge option decides whether to show a visual puzzle or another means of proving humanness, based on the client behavior exhibited during the challenge and the telemetry we receive from the visitor. A customer simply tells us, “I want you (Cloudflare) to take appropriate actions to challenge this type of traffic as you see necessary.”

With Managed Challenge, we adapt the actual challenge outcome to the individual visitor/browser. As a result, we can fine-tune the difficulty of the challenge itself and avoid showing visual puzzles to more than 90% of human requests, while at the same time presenting harder challenges to visitors that exhibit non-human behaviors.

When a visitor encounters a Managed Challenge, we first run a series of small, non-interactive JavaScript challenges to gather more signals about the visitor/browser environment. This means we deploy in-browser detections and challenges at the time the request is made. Challenges are selected based on the characteristics the visitor exhibits and on the initial information we have about the visitor. Those challenges include, but are not limited to, proof-of-work, proof-of-space, probing for web APIs, and various challenges for detecting browser quirks and human behavior.
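
As an illustration of one of those challenge types, here is a generic proof-of-work sketch; it is not Cloudflare’s implementation, and the difficulty parameter is an arbitrary assumption.

import hashlib
import os

def solve_pow(seed: bytes, difficulty_bits: int = 16) -> int:
    """Find a nonce such that SHA-256(seed || nonce) starts with
    difficulty_bits zero bits: cheap to verify, tunably costly to solve."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_pow(seed: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    digest = hashlib.sha256(seed + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

seed = os.urandom(16)            # issued by the server with the challenge
nonce = solve_pow(seed)          # computed in the visitor's browser
assert verify_pow(seed, nonce)   # verified server-side with a single hash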

They also include machine learning models that detect common features of end visitors who were able to pass a CAPTCHA before. The computational hardness of those initial challenges may vary by visitor, but they are designed to run quickly. Managed Challenge is also integrated into the Cloudflare Bot Management and Super Bot Fight Mode systems, consuming signals and data from those bot detections.

After our non-interactive challenges have run, we evaluate the gathered signals. If the combination of those signals makes us confident that the visitor is likely human, no further action is taken, and the visitor is redirected to the destination page without any interaction required. However, if the signal is weak, we present a visual puzzle to the visitor to prove their humanness. In the context of Managed Challenge, we’re also experimenting with other privacy-preserving means of attesting humanness, to continue reducing the portion of time that Managed Challenge uses a visual puzzle step.

We started testing Managed Challenge last year, and initially, we chose from a rotating subset of challenges, one of them being CAPTCHA. At the start, CAPTCHA was still used in the vast majority of cases. We compared the solve rate for the new challenge in question, with the existing, stable solve rate for CAPTCHA. We thus used CAPTCHA solve rate as a goal to work towards as we improved our CAPTCHA alternatives, getting better and better over time. The challenge platform allows our engineers to easily create, deploy, and test new types of challenges without impacting customers. When a challenge turns out to not be useful, we simply deprecate it. When it proves to be useful, we increase how often it is used. In order to preserve ground-truth, we also randomly choose a small subset of visitors to always solve a visual puzzle to validate our signals.

Managed Challenge performs better than CAPTCHA

The Challenge Platform now has the same stable solve rate as previously used CAPTCHAs.

Using an iterative platform approach, we have reduced the number of CAPTCHAs we serve by 91%. This is only the start. By the end of the year, we will reduce our use of CAPTCHA as a challenge to less than 1%. By skipping the visual puzzle step for almost all visitors, we are able to reduce the visitor time spent in a challenge from an average of 32 seconds to an average of just one second to run our non-interactive challenges. We also see churn improvements: our telemetry indicates that visitors with human properties are 31% less likely to abandon a Managed Challenge than a traditional CAPTCHA.

Today, the Managed Challenge platform rotates between many challenges. A Managed Challenge instance consists of many sub-challenges: some of them are established and effective, whereas others are new challenges we are experimenting with. All of them are much, much faster and easier for humans to complete than CAPTCHA, and almost always require no interaction from the visitor.

Managed Challenge replaces CAPTCHA for Cloudflare

We have now deployed Managed Challenge across the entire Cloudflare network. Any time we show a CAPTCHA to a visitor, it’s via the Managed Challenge platform, and only as a benchmark to confirm our other challenges are performing as well.

All Cloudflare customers can now choose Managed Challenge as a response option to any Firewall rule instead of CAPTCHA. We’ve also updated our dashboard to encourage all Cloudflare customers to make this choice.

You’ll notice that we changed the name of the CAPTCHA option to ‘Legacy CAPTCHA’. This more accurately describes what CAPTCHA is: an outdated tool that we don’t think people should use. As a result, the usage of CAPTCHA across the Cloudflare network has dropped significantly, and usage of managed challenge has increased dramatically.

As noted above, today CAPTCHA represents 9% of Managed Challenge solves (light blue), but that number will decrease to less than 1% by the end of the year. You’ll also see the gray bar above, which shows when our customers have chosen to show a CAPTCHA as a response to a Firewall rule triggering. We want that number to go to zero, but the good news is that 63% of customers now choose Managed Challenge rather than CAPTCHA when they create a Firewall rule with a challenge response action.

We expect this number to increase further over time.

If you’re using the Cloudflare WAF, log into the Dashboard today and look at all of your Firewall rules. If any of your rules are using “Legacy CAPTCHA” as a response, please change it now! Select the “Managed Challenge” response option instead. You’ll give your users a better experience, while maintaining the same level of protection you have today. If you’re not currently a Cloudflare customer, stay tuned for ways you can reduce your own use of CAPTCHA.

ICYMI: Serverless Q1 2022

Post Syndicated from James Beswick original https://aws.amazon.com/blogs/compute/icymi-serverless-q1-2022/

Welcome to the 16th edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

In case you missed our last ICYMI, check out what happened last quarter here.

AWS Lambda

Lambda now offers larger ephemeral storage for functions, up to 10 GB. Previously, the storage was set to 512 MB. There are several common use cases that can benefit from expanded temporary storage, including extract, transform, load (ETL) jobs, machine learning inference, and data processing workloads. To see how to configure the amount of /tmp storage in AWS SAM, deploy this Serverless Land Pattern.
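
If you prefer to configure the setting outside AWS SAM, it can also be applied with the AWS SDK; the function name below is a placeholder.

import boto3

lambda_client = boto3.client("lambda")

# Raise the /tmp ephemeral storage of an existing function to 10 GB (10240 MB).
lambda_client.update_function_configuration(
    FunctionName="my-etl-function",       # placeholder function name
    EphemeralStorage={"Size": 10240},     # valid range: 512-10240 MB
)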

Ephemeral storage settings

For Node.js developers, Lambda now supports ES Modules and top-level await for Node.js 14. This enables developers to use a wider range of JavaScript packages in functions. When combined with Provisioned Concurrency, top-level await can improve cold-start performance for functions that use asynchronous initialization.

For .NET developers, Lambda now supports .NET 6 as both a managed runtime and container base image. You can now use new features of the runtime such as improved logging, simplified function definitions using top-level statements, and improved performance using source generators.

The Lambda console now allows you to share test events with other developers in your team, using granular IAM permissions. Previously, test events were only visible to the builder who created them. To learn about creating shareable test events, read this documentation.

Amazon EventBridge

Amazon EventBridge Schema Registry helps you create code bindings from event schemas for use directly in your preferred IDE. You can generate these code bindings for a schema by using the EventBridge console, APIs, or AWS SDK toolkits for JetBrains (IntelliJ, PyCharm, WebStorm, Rider) and VS Code. This feature now supports Go, in addition to Java, Python, and TypeScript, and is available at no additional cost.

AWS Step Functions

Developers can test state machines locally using Step Functions Local, and the service recently announced mocked service integrations for local testing. This allows you to define sample output from AWS service integrations and combine them into test cases to validate workflow control. This new feature introduces a robust way to test state machines in isolation.

Amazon DynamoDB

Amazon DynamoDB now supports limiting the number of items processed in a PartiQL operation, using an optional parameter on each request. The service also increased default Service Quotas, which can help simplify the use of large numbers of tables. The per-account, per-Region quota increased from 256 to 2,500 tables.
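
Here is a hedged example of the new limit parameter using the AWS SDK for Python; the table name, filter, and limit value are placeholders.

import boto3

dynamodb = boto3.client("dynamodb")

# Process at most 25 items per PartiQL request; page through with NextToken.
resp = dynamodb.execute_statement(
    Statement='SELECT * FROM "Orders" WHERE "status" = ?',  # placeholder table
    Parameters=[{"S": "PENDING"}],
    Limit=25,
)
items = resp["Items"]
next_token = resp.get("NextToken")  # pass back in to continue where you left off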

AWS AppSync

AWS AppSync added support for custom response headers, allowing you to define additional headers to send to clients in response to an API call. You can now use the new resolver utility $util.http.addResponseHeaders() to configure additional headers in the response for a GraphQL API operation.

Serverless blog posts

January

Jan 6 – Using Node.js ES modules and top-level await in AWS Lambda

Jan 6 – Validating addresses with AWS Lambda and the Amazon Location Service

Jan 20 – Introducing AWS Lambda batching controls for message broker services

Jan 24 – Migrating AWS Lambda functions to Arm-based AWS Graviton2 processors

Jan 31 – Using the circuit breaker pattern with AWS Step Functions and Amazon DynamoDB

Jan 31 – Mocking service integrations with AWS Step Functions Local

February

Feb 8 – Capturing client events using Amazon API Gateway and Amazon EventBridge

Feb 10 – Introducing AWS Virtual Waiting Room

Feb 14 – Building custom connectors using the Amazon AppFlow Custom Connector SDK

Feb 22 – Building TypeScript projects with AWS SAM CLI

Feb 24 – Introducing the .NET 6 runtime for AWS Lambda

March

Mar 6 – Migrating a monolithic .NET REST API to AWS Lambda

Mar 7 – Decoding protobuf messages using AWS Lambda

Mar 8 – Building a serverless image catalog with AWS Step Functions Workflow Studio

Mar 9 – Composing AWS Step Functions to abstract polling of asynchronous services

Mar 10 – Building serverless multi-Region WebSocket APIs

Mar 15 – Using organization IDs as principals in Lambda resource policies

Mar 16 – Implementing mutual TLS for Java-based AWS Lambda functions

Mar 21 – Running cross-account workflows with AWS Step Functions and Amazon API Gateway

Mar 22 – Sending events to Amazon EventBridge from AWS Organizations accounts

Mar 23 – Choosing the right solution for AWS Lambda external parameters

Mar 28 – Using larger ephemeral storage for AWS Lambda

Mar 29 – Using AWS Step Functions and Amazon DynamoDB for business rules orchestration

Mar 31 – Optimizing AWS Lambda function performance for Java

First anniversary of Serverless Land Patterns

Serverless Patterns Collection

The DA team launched the Serverless Patterns Collection in March 2021 as a repository of serverless examples that demonstrate integrating two or more AWS services. Each pattern uses an infrastructure as code (IaC) framework to automate the deployment. These can simplify the creation and configuration of the services used in your applications.

The Serverless Patterns Collection is both an educational resource to help developers understand how to join different services, and an aid for developers that are getting started with building serverless applications.

The collection has just celebrated its first anniversary. It now contains 239 patterns for CDK, AWS SAM, Serverless Framework, and Terraform, covering 30 AWS services. We have expanded example runtimes to include .NET, Java, Rust, Python, Node.js and TypeScript. We’ve served tens of thousands of developers in the first year and we’re just getting started.

Many thanks to our contributors and community. You can also contribute your own patterns.

Videos

YouTube: youtube.com/serverlessland

Serverless Office Hours – Tues 10 AM PT

Weekly live virtual office hours. In each session we talk about a specific topic or technology related to serverless and open it up to helping you with your real serverless challenges and issues. Ask us anything you want about serverless technologies and applications.

YouTube: youtube.com/serverlessland
Twitch: twitch.tv/aws

FooBar Serverless YouTube channel

The Developer Advocate team is delighted to welcome Marcia Villalba onboard. Marcia was an AWS Serverless Hero before joining AWS over two years ago, and she has created one of the most popular serverless YouTube channels. You can view all of Marcia’s videos at https://www.youtube.com/c/FooBar_codes.

AWS Summits

AWS Global Summits are free events that bring the cloud computing community together to connect, collaborate, and learn about AWS. This year, we have restarted in-person Summits at major cities around the world.

The next 4 Summits planned are Paris (April 12), San Francisco (April 20-21), London (April 27), and Madrid (May 4-5). To find and register for your nearest AWS Summit, visit the AWS Summits homepage.

Still looking for more?

The Serverless landing page has more information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials.

You can also follow the Serverless Developer Advocacy team on Twitter to see the latest news, follow conversations, and interact with the team.

Making the most of Hello World magazine | Hello World #18

Post Syndicated from Gemma Coleman original https://www.raspberrypi.org/blog/making-the-most-of-hello-world-18-five-years/

Hello World magazine, our free magazine written by computing educators for computing educators, has been running for 5 years now. In the newest issue, Alan O’Donohoe shares his top tips for educators to make the most out of Hello World.

Issues of Hello World magazine arranged to form a number five.

Alan has over 20 years’ experience teaching and leading technology, ICT, and computing in schools in England. He runs exa.foundation, delivering professional development to engage digital makers, supporting computing teaching, and promoting the appropriate use of technology.

Alan’s top tips

Years before there was a national curriculum for computing, Hello World magazines, or England’s National Centre for Computing Education (NCCE), I had ambitious plans to overhaul our school’s ICT curriculum with the introduction of computer science. Since the subject team I led consisted mostly of non-specialist teachers, it was clear I needed to be the one steering the change. To do this successfully, I realised I’d need to look for examples and case studies outside of our school, to explore exactly what strategies, resources and programming languages other teachers were using. However, I drew a blank. I couldn’t find any local schools teaching computer science. It was both daunting and disheartening not knowing anyone else I could refer to for advice and experience.

An educator holds up a copy of Hello World magazine in front of their face.
“Hello World helps me keep up with the current trends in our thriving computing community.” – Matt Moore

Thankfully, ten years later, the situation has significantly improved. Even with increased research and resources, though, there can still be a sense of isolation. With scarce opportunities to meet other computing teachers, there are fewer people to be inspired by, to bounce ideas off, to celebrate achievements with, or to share the challenges of teaching computing with. Some teachers habitually engage with online discussion forums and social media platforms to plug this gap, but these have their own drawbacks.

It’s great news then that there’s another resource that teachers can turn to. You all know by now that Hello World magazine offers another helping hand for computing teachers searching for richer experiences for their students and opportunities to hone their professional practice. In this Insider’s Guide, I offer practical suggestions for how you can use Hello World to its full potential.  

Put an article into practice  

Teachers have often told me that strategies like PRIMM and pair programming have had a positive impact on their teaching, after first reading about them in Hello World. Over the five years of its publication, there’s likely to have been an article or research piece that particularly struck a chord with you — so why not try putting the learnings from that article into practice?

An educator holds up a copy of Hello World magazine in front of their face.
“Hello World gives me loads of ideas that I’m excited to try out in my own classroom.” – Steve Rich

You may choose to go this route on your own, but you could persuade colleagues to join you. Not only is there safety in numbers, but there are also the shared rewards and motivation that come from teamwork. Start by choosing an article. This could be an approach that made an impression on you, or something related to a particular theme or topic that you and your colleagues have been seeking to address. You could then test out some of the author’s suggestions in the article; if they represent something very different from your usual approach, then why not try them first with a teaching group that is more open to new things? For reflection and analysis, consider conducting some pupil voice interviews with your classes to gather their opinions of the activity, or spend some time reflecting on the activity with your colleagues. Finally, you could make contact with the author to compare your experiences, seek further support, or ask questions.

Strike up a conversation

Authors generally welcome correspondence from readers, even those that don’t agree with their opinions! While it’s difficult to predict exactly what the outcome may be, it could lead to a productive professional correspondence. Here are some suggestions: 

  • Establish the best way to contact the author. Some have contact details or clues about where to find them in their articles. If not, you might try connecting with them on LinkedIn, or social media. Don’t be disappointed if they don’t respond promptly; I’ve often received replies many months after sending. 
  • Open your message with an introduction to yourself, then move on to some positive praise, describing your appreciation for the article and the points that resonated with you.
  • If you have already tried some of the author’s suggestions, you could share your experiences and pupil outcomes, where appropriate, with them.
An educator holds up a copy of Hello World magazine in front of their face.
“One of the things I love about Hello World is the huge number of interesting articles that represent a wide range of voices and experiences in computing education.” – Catherine Elliott
  • Try to maintain a constructive tone. Even if you disagree with the piece, the author will be more receptive to a supportive tone than criticism. If the article topic is a ‘work in progress’, the author may welcome your suggestions.
  • Enquire as to whether the author has changed their practice since writing the article or if their thinking has developed.
  • You might take the opportunity to direct questions at the author asking for further examples, clarity or advice.  
  • If the author has given you an idea for an article, project, or research on a similar theme, they’re likely to be interested in hearing more. Describe your proposal in a single sentence summary and see if they’d be interested in reading an early draft or collaborating with you.

Start a reading group

Take inspiration from book clubs, but rather than discussing works of fiction, invite members of your professional groups or curriculum teams to discuss content from issues of Hello World. This could become a regular feature of your meetings, where attendees are invited to contribute their own opinions. To achieve this, first identify a group that you’re part of where this is most likely to be received well. This may be your colleagues, or fellow computing teachers you’ve met at conferences or training days. To begin, you might prescribe one specific article or broaden it to include a whole issue. It makes sense to select an article likely to be popular with your group, or one that addresses a current or future area of concern.

An educator holds up a copy of Hello World magazine in front of their face.
“I love Hello World! I encourage my teaching students to sign up, and give out copies when I can. I refer to articles in my lectures.” – Fiona Baxter

To familiarise attendees with the content, share a link to the issue for them to read in advance of the meeting. If you’re reviewing a whole issue, suggest pages likely to be most relevant. If you’re reviewing a single article, make it clear whether you are referring to the page numbers as printed or those in the PDF. You could make it easier by removing all other pages from the PDF and sending it as an attachment. Remember that you can download back issues of Hello World as PDFs, which you can then edit or print. 

Encourage your attendees to share the aspects of the article that appealed to them, or areas where they disagreed with the author or struggled to see something working in their particular setting. Invite any points of issue for further discussion and explanation — somebody in the group might volunteer to strike up a conversation with the author by passing on the group’s feedback. Alternatively, you could invite the author of the piece to join your meeting via video conference to address questions and promote discussion of the themes. This could lead to a productive friendship or professional association with the author.

Propose an article

“I wish!” is a typical response I hear when I suggest to a teacher that they should seriously consider writing an article for Hello World. I often get the responses, “I don’t have enough time”, “Nobody would read anything I write”, or, “I don’t do anything worth writing about”. The most common concern I hear, though, is, “But I’m not a writer!”. So you’re not the only one thinking that! 

“We strongly encourage first-time writers. My job is to edit your work and worry about grammar and punctuation — so don’t worry if this isn’t your strength! Remember that as an educator, you’re writing all the time. Lesson plans, end-of-term reports, assessment feedback…you’re more of a writer than you think! If you’re not sure where to start, you could write a lesson plan, or contribute to our ‘Me and my Classroom’ feature.”

— Gemma Coleman, Editor of Hello World

Help and support is available from the editorial team. I for one have found this to be extremely beneficial, especially as I really don’t rate my own writing skills! Don’t forget, you’re writing about your own practice, something that you’ve done in your career — so you’ll be an expert on you. Each article starts with a proposal, the editor replies with some suggestions, then a draft follows and some more refinements. I ask friends and colleagues to review parts of what I’ve written to help me and I even ask non-teaching members of my family for their opinions. 

Writing an article for Hello World can really help boost your own professional development and career prospects. Writing about your own practice requires humility, analytical thinking and self-reflection. To ensure you have time to write an article, make it fit in with something of interest to you. This could be an objective from your own performance management or appraisal. This reduces the need for additional work and adds a level of credibility.

An educator reads a copy of Hello World magazine on public transport.
“Professionally, writing for Hello World provides recognition that you know what you’re talking about and that you share your knowledge in a number of different ways.” – Neil Rickus

If that isn’t enough to persuade you, for contributors based outside of the UK (who usually aren’t eligible for free print copies), Hello World will send you a complimentary print copy of the magazine that you feature in to say thank you. Picture the next Hello World issue arriving featuring an article written by you. How does this make you feel? Be honest — your heart flutters as you tear off the wrapper to go straight to your article. You’ll be impressed to see how much smarter it looks in print than the draft you did in Microsoft Word. You’ll then want to show others, because you’ll be proud of your work. It generates a tremendous sense of pride and achievement in seeing your own work published in a professional capacity. 

Hello World offers busy teachers a fantastic, free and accessible resource of shared knowledge, experience and inspiring ideas. When we feel most exhausted and lacking inspiration, we should treasure those mindful moments where we can sit down with a cup of tea and make the most of this wonderful publication created especially for us.

Celebrate 5 years of Hello World with us

We marked Hello World’s fifth anniversary with a recent Twitter Spaces event with Alan and Catherine Elliott as our guests. You can catch up with the event recording on the Hello World podcast. And the newest Hello World issue, with a focus on cybersecurity, is available as a free PDF download — dive in today.

Cover of Hello World issue 18.

How have you been using Hello World in your practice in the past five years? What do you hope to see in the magazine in the next five? Let us know on Twitter by tagging @HelloWorld_Edu.

The post Making the most of Hello World magazine | Hello World #18 appeared first on Raspberry Pi.

Bypassing Two-Factor Authentication

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2022/04/bypassing-two-factor-authentication.html

These techniques are not new, but they’re increasingly popular:

…some forms of MFA are stronger than others, and recent events show that these weaker forms aren’t much of a hurdle for some hackers to clear. In the past few months, suspected script kiddies like the Lapsus$ data extortion gang and elite Russian-state threat actors (like Cozy Bear, the group behind the SolarWinds hack) have both successfully defeated the protection.

[…]

Methods include:

  • Sending a bunch of MFA requests and hoping the target finally accepts one to make the noise stop.
  • Sending one or two prompts per day. This method often attracts less attention, but “there is still a good chance the target will accept the MFA request.”
  • Calling the target, pretending to be part of the company, and telling the target they need to send an MFA request as part of a company process.

FIDO2 multi-factor authentication systems are not susceptible to these attacks, because they are tied to a physical computer.

And even though there are attacks against these two-factor systems, they’re much more secure than not having them at all. If nothing else, they block pretty much all automated attacks.

Handy Tips #26: Displaying infrastructure status with the Geomap widget

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/handy-tips-26-displaying-infrastructure-status-with-the-geomap-widget/20012/

Secure your Zabbix logins from brute-force and dictionary attacks by defining password complexity requirements.

An organization-wide password policy is hard to uphold without a toolset to enforce it. By using Zabbix’s native password complexity settings, we can provide an additional layer of security and ensure that our users follow our organization’s password complexity policies.

Define custom Zabbix login password complexity rules:

  • Set the minimum password length in a range of 2 – 70 characters
  • Define password character set rules

  • A built-in password list secures users from dictionary attacks
  • Prevent usage of passwords containing first or last names and easy to guess words

Check out the video to learn how to configure Zabbix password complexity requirements.

How to configure Zabbix password complexity requirements:
 
  1. As a super admin navigate to Administration → Authentication
  2. Define the minimum password length
  3. Select the optional Password must contain requirements
  4. Mark Avoid easy-to-guess passwords option
  5. Navigate to Administration → Users
  6. Select the user whose password we will change
  7. Press the Change password button
  8. Try using easy-to-guess passwords like zabbix or password
  9. Observe the error messages
  10. Define a password that fits the password requirements
  11. Press the Update button

Tips and best practices:
  • It is possible to restrict access to the ui/data/top_passwords.txt file, which contains the Zabbix password deny list
  • Passwords longer than 72 characters will be truncated
  • Password complexity requirements are only applied to the internal Zabbix authentication
  • Users can change their passwords in the user profile settings

The post Handy Tips #26: Displaying infrastructure status with the Geomap widget appeared first on Zabbix Blog.

Supporting large campaigns at scale

Post Syndicated from Grab Tech original https://engineering.grab.com/supporting-large-campaigns-at-scale

Introduction

At Grab, we run large marketing campaigns every day. A typical campaign may require executing multiple actions for millions of users all at once. The actions may include sending rewards, awarding points, and sending messages. Here is what a campaign may look like: On 1st Jan 2022, send two ride rewards to all the users in the “heavy users” segment. Then, send them a congratulatory message informing them about the reward.

Years ago, Grab’s marketing team used to stay awake at midnight to manually trigger such campaigns. They would upload a file at 12 am and then wait for a long time for the campaign execution to complete. To solve this pain point and support more capabilities down this line, we developed a “batch job” service, which is part of our in-house real-time automation engine, Trident.

The following are some services we use to support Grab’s marketing teams:

  • Rewards: responsible for managing rewards.
  • Messaging: responsible for sending messages to users. For example, push notifications.
  • Segmentation: responsible for storing and retrieving segments of users based on certain criteria.

For simplicity, only the services above will be referenced for this article. The “batch job” service we built uses rewards and messaging services for executing actions, and uses the segmentation service for fetching users in a segment.

System requirements

Functional requirements

  • Apply a sequence of actions targeting a large segment of users at a scheduled time, display progress to the campaign manager and provide a final report.
    • For each user, the actions must be executed in sequence; the latter action can only be executed if the preceding action is successful.

Non-functional requirements

  • Quick execution and high turnover rate.
    • Definition of turnover rate: the number of scheduled jobs completed per unit time.
  • Maximise resource utilisation and balance server load.

For the sake of brevity, we will not cover the scheduling logic, nor the generation of the report. We will focus specifically on executing actions.

Naive approach

Let’s start thinking from the most naive solution, and improve from there to reach an optimised solution.

Here is the pseudocode of a naive action executor.

def executeActionOnSegment(segment, actions):
    for user in fetchUsersInSegment(segment):
        for action in actions:
            success = doAction(user, action)
            if not success:
                break
            recordActionResult(user, action)

def doAction(user, action):
    if action.type == "awardReward":
        return rewardService.awardReward(user, action.meta)
    elif action.type == "sendMessage":
        return messagingService.sendMessage(user, action.meta)
    else:
        ...  # other action types

One may be able to quickly tell that the naive solution does not satisfy our non-functional requirements for the following reasons:

  • Execution is slow:
    • The programme is single-threaded.
    • Actions are executed for users one by one in sequence.
    • Each call to the rewards and messaging services will incur network trip time, which impacts time cost.
  • Resource utilisation is low: The actions will only be executed on one server. When we have a cluster of servers, the other servers will sit idle.

Here are our alternatives for fixing the above issues:

  • Actions for different users should be executed in parallel.
  • API calls to other services should be minimised.
  • Distribute the work of executing actions evenly among different servers.

Note: Actions for the same user have to be executed in sequence. For example, if a sequence of required actions are (1) award a reward, (2) send a message informing the user to use the reward, then we can only execute action (2) after action (1) is successfully done for logical reasons and to avoid user confusion.

Our approach

A message queue is a well-suited solution to distribute work among multiple servers. We selected Kafka, among numerous message services, due to its following characteristics:

  • High throughput: Kafka can accept reads and writes at a very high speed.
  • Robustness: Events in Kafka are stored in a distributed and redundant manner, so there is no need to worry about data loss.
  • Pull-based consumption: Consumers can consume events at their own speed. This helps to avoid overloading our servers.

When a scheduled campaign is triggered, we retrieve the users from the segment in batches; each batch comprises around 100 users. We write the batches into a Kafka stream, and all our servers consume from the stream to execute the actions for the batches. The following diagram illustrates the overall flow.

Flow

Data in Kafka is stored in partitions. The partition configuration is important to ensure that the batches are evenly distributed among servers:

  1. Number of partitions: Ensure that the number of stream partitions is greater than or equal to the max number of servers we will have in our cluster. This is because one Kafka partition can only be consumed by one consumer. If we have more consumers than partitions, some consumers will not receive any data.
  2. Partition key: For each batch, assign a hash value as the partition key to randomly allocate batches into different partitions (a producer sketch follows this list).
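
As an illustration of both points, here is a minimal producer sketch using the kafka-python client; the broker address, topic name, batch size, and fetch_users_in_segment helper are assumptions for illustration, not Grab’s actual code. With a random key, the client’s default partitioner spreads batches evenly across partitions, and therefore across the consuming servers.

import json
import uuid
from kafka import KafkaProducer  # kafka-python client

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",  # placeholder broker address
    key_serializer=lambda k: k.encode("utf-8"),
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_segment(segment, actions, topic="campaign-batches", batch_size=100):
    batch = []
    for user_id in fetch_users_in_segment(segment):  # assumed segmentation helper
        batch.append(user_id)
        if len(batch) == batch_size:
            # A random key lets the default partitioner hash the batch
            # onto a random partition, balancing load across consumers.
            producer.send(topic, key=uuid.uuid4().hex,
                          value={"users": batch, "actions": actions})
            batch = []
    if batch:  # publish the final, partially filled batch
        producer.send(topic, key=uuid.uuid4().hex,
                      value={"users": batch, "actions": actions})
    producer.flush()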

Now that work is distributed among servers in batches, we can consider how to process each batch faster. If we follow the naive logic, for each user in the batch, we need to call the rewards or messaging service to execute the actions. This will create very high QPS (queries per second) to those services, and incur significant network round trip time.

To solve this issue, we decided to build batch endpoints in rewards and messaging services. Each batch endpoint takes in a list of user IDs and action metadata as input parameters, and returns the action result for each user, regardless of success or failure. With that, our batch processing logic looks like the following:

def processBatch(userBatch, actions):
    users = userBatch
    for action in actions:
        successUsers, failedUsers = doAction(users, action)
        recordFailures(failedUsers, action)
        users = successUsers

def doAction(users, action):
    resp = {}
    if action.type == "awardReward":
        resp = rewardService.batchAwardReward(users, action.meta)
    elif action.type == "sendMessage":
        resp = messagingService.batchSendMessage(users, action.meta)
    else:
        ...  # other action types

    return getSuccessUsers(resp), getFailedUsers(resp)

In the implementation of batch endpoints, we also made optimisations to reduce latency. For example, when awarding rewards, we need to write the records of a reward being given to a user in multiple database tables. If we make separate DB queries for each user in the batch, it will cause high QPS to DB and incur high network time cost. Therefore, we grouped all the users in the batch into one DB query for each table update instead.
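
As a hedged illustration of that optimisation (not Grab’s actual schema), the sketch below contrasts per-user inserts with a single multi-row insert, assuming a psycopg2-style DB cursor and a hypothetical user_rewards table.

# One round trip per user: high QPS against the database.
def insert_rewards_naive(cursor, reward_id, user_ids):
    for user_id in user_ids:
        cursor.execute(
            "INSERT INTO user_rewards (user_id, reward_id) VALUES (%s, %s)",
            (user_id, reward_id),
        )

# One round trip for the whole batch: a single multi-row INSERT.
def insert_rewards_batched(cursor, reward_id, user_ids):
    placeholders = ", ".join(["(%s, %s)"] * len(user_ids))
    args = []
    for user_id in user_ids:
        args.extend([user_id, reward_id])
    cursor.execute(
        "INSERT INTO user_rewards (user_id, reward_id) VALUES " + placeholders,
        args,
    )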

Benchmark tests show that using the batch DB query reduced API latency by up to 85%.

Further optimisations

As more campaigns started running in the system, we came across various bottlenecks. Here are the optimisations we implemented for some major examples.

Shard stream by action type

Two widely used actions are awarding rewards and sending messages to users. We came across situations where the sending of messages was blocked because a different campaign of awarding rewards had already started. If millions of users were targeted for rewards, this could result in a significant wait before messages were sent, by which point they might no longer be relevant.

We found out the API latency of awarding rewards is significantly higher than sending messages. Hence, to make sure messages are not blocked by long-running awarding jobs, we created a dedicated Kafka topic for messages. By having different Kafka topics based on the action type, we were able to run different types of campaigns in parallel.

Flow

Shard stream by country

Grab operates in multiple countries. We came across situations where a campaign of awarding rewards to a small segment of users in one country was delayed by another campaign that targeted a huge segment of users in another country. The campaigns targeting a small set of users are usually more time-sensitive.

Similar to the above solution, we added different Kafka topics for each country to enable the processing of campaigns in different countries in parallel.

Remove unnecessary waiting

We observed that in the case of chained actions, messaging actions are generally the last action in the action list. For example, after awarding a reward, a congratulatory message would be sent to the user.

We realised that it was not necessary to wait for a send-message action to complete before processing the next batch of users. Moreover, the latency of the send-message API is lower than that of awarding rewards. Hence, we made the send-message call asynchronous, so that the task of awarding rewards to the next batch of users can start while messages are being sent to the previous batch.
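
A minimal way to express that change, assuming the same hypothetical service clients and helpers as the earlier pseudocode, is to hand the message call to a background worker so the loop can move straight on to the next batch:

from concurrent.futures import ThreadPoolExecutor

message_pool = ThreadPoolExecutor(max_workers=8)

def process_batch(user_batch, reward_meta, message_meta):
    # Awarding rewards stays synchronous: a message should only be sent
    # to users whose reward was granted successfully.
    resp = rewardService.batchAwardReward(user_batch, reward_meta)
    success_users = getSuccessUsers(resp)

    # Sending messages becomes fire-and-forget: submit the call and return,
    # so awarding rewards to the next batch can start immediately.
    message_pool.submit(messagingService.batchSendMessage, success_users, message_meta)
    return success_users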

Conclusion

We have architected our batch job system so that it can be enhanced and optimised without reworking its foundations. For example, although we currently obtain the list of targeted users from a segmentation service, in the future we may obtain this list from a different source, such as all Grab Platinum tier members.

Join us

Grab is a leading superapp in Southeast Asia, providing everyday services that matter to consumers. More than just a ride-hailing and food delivery app, Grab offers a wide range of on-demand services in the region, including mobility, food, package and grocery delivery services, mobile payments, and financial services across over 400 cities in eight countries.
Powered by technology and driven by heart, our mission is to drive Southeast Asia forward by creating economic empowerment for everyone. If this mission speaks to you, join our team today!

ZTA doesn’t solve all problems, but partial implementations solve fewer

Post Syndicated from original https://mjg59.dreamwidth.org/59079.html

Traditional network access controls work by assuming that something is trustworthy based on some other factor – for example, if a computer is on your office network, it’s trustworthy because only trustworthy people should be able to gain physical access to plug something in. If you restrict access to your services to requests coming from trusted networks, then you can assert that any request is coming from a trusted device.

Of course, this isn’t necessarily true. A machine on your office network may be compromised. An attacker may obtain valid VPN credentials. Someone could leave a hostile device plugged in under a desk in a meeting room. Trust is being placed in devices that may not be trustworthy.

A Zero Trust Architecture (ZTA) is one where a device is granted no inherent trust. Instead, each access to a service is validated against some policy – if the policy is satisfied, the access is permitted. A typical implementation involves granting each device some sort of cryptographic identity (typically a TLS client certificate) and placing the protected services behind a proxy. The proxy verifies the device identity, queries another service to obtain the current device state (we’ll come back to that in a moment), compares the state against a policy and either passes the request through to the service or rejects it. Different services can have different policies (eg, you probably want a lax policy around whatever’s hosting the documentation for how to fix your system if it’s being refused access to something for being in the wrong state), and if you want you can also tie it to proof of user identity in some way.
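
As a conceptual sketch only (no particular product’s API is implied, and the freshness window and policy fields are arbitrary assumptions), the proxy’s decision might look roughly like this:

from datetime import datetime, timedelta, timezone

MAX_STATE_AGE = timedelta(minutes=15)   # arbitrary freshness window

def allow_request(device_cert, device_state, policy, now=None):
    """Decide whether the proxy should pass a request to the protected service."""
    now = now or datetime.now(timezone.utc)

    # 1. Identity: the client certificate must be valid and not revoked.
    if not device_cert.is_valid or device_cert.is_revoked:
        return False

    # 2. Freshness: stale device state is treated as untrustworthy.
    if now - device_state.reported_at > MAX_STATE_AGE:
        return False

    # 3. State vs. policy: e.g. disk encryption on, endpoint monitoring running.
    if policy.require_monitoring and not device_state.monitoring_running:
        return False
    if policy.require_disk_encryption and not device_state.disk_encrypted:
        return False

    return True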

From a user perspective, this is entirely transparent. The proxy is made available on the public internet, DNS for the services points to the proxy, and every time your users try to access the service they hit the proxy instead and (if everything’s ok) gain access to it no matter which network they’re on. There’s no need to connect to a VPN first, and there’s no worries about accidentally leaking information over the public internet instead of over a secure link.

It’s also notable that traditional solutions tend to be all-or-nothing. If I have some services that are more sensitive than others, the only way I can really enforce this is by having multiple different VPNs and only granting access to sensitive services from specific VPNs. This obviously risks combinatorial explosion once I have more than a couple of policies, and it’s a terrible user experience.

Overall, ZTA approaches provide more security and an improved user experience. So why are we still using VPNs? Primarily because this is all extremely difficult. Let’s take a look at an extremely recent scenario. A device used by customer support technicians was compromised. The vendor in question has a solution that can tie authentication decisions to whether or not a device has a cryptographic identity. If this was in use, and if the cryptographic identity was tied to the device hardware (eg, by being generated in a TPM), the attacker would not simply be able to obtain the user credentials and log in from their own device. This is good – if the attacker wanted to maintain access to the service, they needed to stay on the device in question. This increases the probability of the monitoring tooling on the compromised device noticing them.

Unfortunately, the attacker simply disabled the monitoring tooling on the compromised device. If device state was being verified on each access then this would be noticed before too long – the last data received from the device would be flagged as too old, and the requests would no longer satisfy any reasonable access control policy. Instead, the device was assumed to be trustworthy simply because it could demonstrate its identity. There’s an important point here: just because a device belongs to you doesn’t mean it’s a trustworthy device.

So, if ZTA approaches are so powerful and user-friendly, why aren’t we all using one? There’s a few problems, but the single biggest is that there’s no standardised way to verify device state in any meaningful way. Remote Attestation can both prove device identity and the device boot state, but the only product on the market that does much with this is Microsoft’s Device Health Attestation. DHA doesn’t solve the broader problem of also reporting runtime state – it may be able to verify that endpoint monitoring was launched, but it doesn’t make assertions about whether it’s still running. Right now, people are left trying to scrape this information from whatever tooling they’re running. The absence of any standardised approach to this problem means anyone who wants to deploy a strong ZTA has to integrate with whatever tooling they’re already running, and that then increases the cost of migrating to any other tooling later.

But even device identity is hard! Knowing whether a machine should be given a certificate or not depends on knowing whether or not you own it, and inventory control is a surprisingly difficult problem in a lot of environments. It’s not even just a matter of whether a machine should be given a certificate in the first place – if a machine is reported as lost or stolen, its trust should be revoked. Your inventory system needs to tie into your device state store in order to ensure that your proxies drop access.

And, worse, all of this depends on you being able to put stuff behind a proxy in the first place! If you’re using third-party hosted services, that’s a problem. In the absence of a proxy, trust decisions are probably made at login time. It’s possible to tie user auth decisions to device identity and state (eg, a self-hosted SAML endpoint could do that before passing through to the actual ID provider), but that’s still going to end up providing a bearer token of some sort that can potentially be exfiltrated, and will continue to be trusted even if the device state becomes invalid.

ZTA doesn’t solve all problems, and there isn’t a clear path to it doing so without significantly greater industry support. But a complete ZTA solution is significantly more powerful than a partial one. Verifying device identity is a step on the path to ZTA, but in the absence of device state verification it’s only a step.

MITRE Engenuity ATT&CK Evaluation: InsightIDR Drives Strong Signal-to-Noise

Post Syndicated from Sam Adams original https://blog.rapid7.com/2022/03/31/mitre-engenuity-att-ck-evaluation-insightidr-drives-strong-signal-to-noise/

Rapid7 is very excited to share the results of our participation in MITRE Engenuity’s latest ATT&CK Evaluation, which examines how adversaries abuse data encryption to exploit organizations.

With this evaluation, our customers and the broader security community get a deeper understanding of how InsightIDR helps protectors safeguard their organizations from destruction and ransomware techniques, like those used by the Wizard Spider and Sandworm APT groups modeled for this MITRE ATT&CK analysis.

What was tested

At the center of InsightIDR’s XDR approach is the included endpoint agent: the Insight Agent. Rapid7’s universal Insight Agent is a lightweight endpoint software that can be installed on any asset – in the cloud or on-premises – to collect data in any environment. The Insight Agent enables our EDR capabilities that are the focus of this ATT&CK Evaluation.

Across both Wizard Spider and Sandworm attacks, we saw strong results indicative of the high-fidelity endpoint detections you can trust to identify real threats as early as possible.

Building transparency and a foundation for dialogue with MITRE Engenuity ATT&CK evaluations

Since the launch of MITRE ATT&CK in May 2015, security professionals around the globe have leveraged this framework as the “go-to” catalog and reference for cyberattack tactics, techniques, and procedures (TTPs). With this guide in hand, security teams visualize detection coverage and gaps, map out security plans and adversary emulations to strengthen defenses, and quickly understand the criticality of threats based on where in the attack chain they appear. Perhaps most importantly, ATT&CK provides a common language with which to discuss breaches, share known adversary group behaviors, and foster conversation and shared intelligence across the security community.

MITRE Engenuity’s ATT&CK evaluation exercises offer a vehicle for users to “better understand and defend against known adversary behaviors through a transparent evaluation process and publicly available results — leading to a safer world for all.” The 2022 MITRE ATT&CK evaluation round focuses on how groups leverage “Data Encrypted for Impact” (encrypting data on targets to prevent companies from being able to access it) to disrupt and exploit their targets. These techniques have been used in many notorious attacks over the years, notably the 2015 and 2016 attacks on Ukrainian electric companies and the 2017 NotPetya attacks.

How to use MITRE Engenuity evaluations

One of the most compelling parts of the MITRE evaluations is the transparency and rich detail provided in the emulation, the steps of each attack, vendor configurations, and detailed read-outs of what transpired. But remember: These vendor evaluations do not necessarily reflect how a similar attack would play out in your own environment. There are nuances in product configurations, the sequencing of events, and the lack of other technologies or product capabilities that may exist within your organization but didn’t in this scenario.

It’s best to use ATT&CK Evaluations to understand how a vendor’s product, as configured, performed under specific conditions for the simulated attack. You can analyze how a vendor’s offering behaves and what it detects at each step of the attack. This can be a great start to dig in for your own simulation or to discuss further with a current or prospective vendor. Consider your program goals and metrics that you are driving towards. Is more telemetry a priority? Is your team driving toward a mean-time-to-respond (MTTR) benchmark? These and other questions will help provide a more relevant view into these evaluation results in a way that is most relevant and meaningful to your team.

InsightIDR delivers superior signal-to-noise

Throughout the evolution of InsightIDR, we have made customer input our “North Star” in guiding the direction of our product. While the technology and threat landscape continues to evolve, the direction and mission that our customers have set us on has remained constant: In a world of limitless noise and threats, we must make it possible to find and extinguish evil earlier, faster, and easier.

Simple to say, harder to do.

While traditional approaches give customers more buttons and levers to figure it out themselves, Rapid7’s approach is from a different angle. How do we provide sophisticated detection and response without creating more work for an already overworked SOC team? What started as a journey to provide (what was a new category at the time) user and entity behavior analytics (UEBA) evolved into a leading cloud SIEM, and it’s now ushering in the next era of detection and response with XDR.

MITRE Engenuity ATT&CK Evaluation: InsightIDR Drives Strong Signal-to-Noise
https://www.techvalidate.com/product-research/insightIDR/facts/CAA-CCB-F73

Key takeaways of the MITRE Engenuity ATT&CK Evaluation

  • Demonstrated strong visibility across ATT&CK, with telemetry, tactic, or technique coverage across 18 of the 19 phases covered across both simulations
  • Consistently indicated threats early in the cyber kill chain, with solid detection coverage across Initial Compromise in the Sandworm evaluation and both Initial Compromise and Initial Discovery in the Wizard Spider evaluation
  • Showcased our commitment to providing a strong signal-to-noise ratio within our detections library with targeted and focused detections across each phase of the attack (versus alerting on every small substep)

As our customers know, these endpoint capabilities are just the tip of the spear with InsightIDR. While not within the scope of this evaluation, we also fired several targeted alerts that didn’t map to MITRE-defined subtypes — offering additional coverage beyond the framework. We know that with our other native telemetry capabilities for user behavior analytics, network traffic analysis, and cloud detections, InsightIDR provides relevant signals and valuable context in a real-world scenario — not to mention the additional protection, intelligence, and accelerated response that the broader Insight platform delivers in such a use case.


Thank you!

We want to thank MITRE Engenuity for the opportunity to participate in this evaluation. While we are very proud of our results, we also learned a lot throughout the process and are actively working to implement those learnings to improve our endpoint capabilities for customers. We would also like to thank our customers and partners for their continued feedback. Your insights continue to inspire our team and elevate Rapid7’s products, making more successful detection and response accessible for all.

To learn more about how Rapid7 helps organizations achieve stronger signal-to-noise while still having defense in depth across the attack chain, join our webcast where we’ll be breaking down this evaluation and more.


Persist and analyze metadata in a transient Amazon MWAA environment

Post Syndicated from Praveen Kumar original https://aws.amazon.com/blogs/big-data/persist-and-analyze-metadata-in-a-transient-amazon-mwaa-environment/

Customers can harness sophisticated orchestration capabilities through the open-source tool Apache Airflow. Airflow can be installed on Amazon EC2 instances, or it can be containerized and deployed on AWS container services. Alternatively, customers can opt for Amazon Managed Workflows for Apache Airflow (Amazon MWAA).

Amazon MWAA is a fully managed service that enables customers to focus more of their efforts on high-impact activities such as programmatically authoring data pipelines and workflows, as opposed to maintaining or scaling the underlying infrastructure. Amazon MWAA offers auto-scaling capabilities where it can respond to surges in demand by scaling the number of Airflow workers out and back in.

With Amazon MWAA, there are no upfront commitments and you only pay for what you use based on instance uptime, additional auto-scaling capacity, and storage of the Airflow back-end metadata database. This database is provisioned and managed by Amazon MWAA and contains the necessary metadata to support the Airflow application.  It hosts key data points such as historical execution times for tasks and workflows and is valuable in understanding trends and behaviour of your data pipelines over time. Although the Airflow console does provide a series of visualisations that help you analyse these datasets, these are siloed from other Amazon MWAA environments you might have running, as well as the rest of your business data.

Data platforms encompass multiple environments. Typically, non-production environments are not subject to the same orchestration demands and schedule as those of production environments. In most instances, these non-production environments are idle outside of business hours and can be spun down to realise further cost-efficiencies. Unfortunately, terminating Amazon MWAA instances results in the purging of that critical metadata.

In this post, we discuss how to export, persist, and analyse Airflow metadata in Amazon S3, enabling you to perform pipeline monitoring and analysis. In doing so, you can spin down Airflow instances without losing operational metadata.

Benefits of Airflow metadata

Persisting the metadata in the data lake enables customers to perform pipeline monitoring and analysis in a more meaningful manner:

  • Airflow operational logs can be joined and analysed across environments
  • Trend analysis can be conducted to explore how data pipelines are performing over time, which specific stages are taking the most time, and how performance is affected as data scales
  • Airflow operational data can be joined with business data for improved record level lineage and audit capabilities

These insights can help customers understand the performance of their pipelines over time and guide focus towards which processes need to be optimised.

The technique described below to extract metadata is applicable to any Airflow deployment type, but we will focus on Amazon MWAA in this blog.

Solution overview

The below diagram illustrates the solution architecture. Please note, Amazon QuickSight is NOT included as part of the CloudFormation stack and is not covered in this tutorial. It has been placed in the diagram to illustrate that metadata can be visualised using a business intelligence tool.

As part of this tutorial, you will perform the following high-level tasks:

  • Run CloudFormation stack to create all necessary resources
  • Trigger Airflow DAGs to perform sample ETL workload and generate operational metadata in back-end database
  • Trigger Airflow DAG to export operational metadata into Amazon S3
  • Perform analysis with Amazon Athena

This post comes with an AWS CloudFormation stack that automatically provisions the necessary AWS resources and infrastructure, including an active Amazon MWAA instance, for this solution. The entire code is available in the GitHub repository.

The Amazon MWAA instance will already have three directed-acyclic graphs (DAGs) imported:

  1. glue-etl – This ETL workflow leverages AWS Glue to perform transformation logic on a CSV file (customer_activity.csv). This file will be loaded as part of the CloudFormation template into the s3://<DataBucket>/raw/ prefix.

The first task, glue_csv_to_parquet, converts the ‘raw’ data to Parquet format and stores the data in the location s3://<DataBucket>/optimised/. By converting the data to Parquet format, you can achieve faster query performance and lower query costs.

The second task, glue_transform, runs an aggregation over the newly created Parquet data and stores the aggregated data in the location s3://<DataBucket>/conformed/.

  2. db_export_dag – This DAG consists of one task, export_db, which exports the data from the back-end Airflow database into Amazon S3 in the location s3://<DataBucket>/export/. (A minimal sketch of such an export DAG is shown after this list.)

Please note that you may experience time-out issues when extracting large amounts of data. On busy Airflow instances, our recommendation is to set up frequent extracts in small chunks.

  3. run-simple-dag – This DAG does not perform any data transformation or manipulation. It is used in this blog for the purposes of populating the back-end Airflow database with sufficient operational data.
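
To make the export step more concrete, below is a minimal sketch of what such an export DAG could look like. The table list, chunk size, pandas dependency, and the <DataBucket> placeholder are assumptions for illustration; the actual db_export_dag provided by the CloudFormation stack may be implemented differently.

from datetime import datetime

import pandas as pd  # assumed to be available in the Airflow environment
from airflow.decorators import dag, task
from airflow.providers.amazon.aws.hooks.s3 import S3Hook
from airflow.settings import Session

TABLES = ["dag_run", "task_instance"]  # assumed subset; add more tables as required
DATA_BUCKET = "<DataBucket>"           # placeholder used throughout this post


@dag(schedule_interval=None, start_date=datetime(2022, 1, 1), catchup=False)
def metadata_export_sketch():
    @task
    def export_metadata_tables():
        """Read each metadata table in chunks and land it in S3 as CSV."""
        session = Session()
        s3 = S3Hook()
        dt = datetime.utcnow().strftime("%Y-%m-%d")
        try:
            for table in TABLES:
                # Matches the Athena locations used later, e.g. export/dagrun/dt=.../
                prefix = f"export/{table.replace('_', '')}/dt={dt}"
                # Chunked reads keep memory use low and help avoid the
                # time-outs mentioned above on busy Airflow instances.
                chunks = pd.read_sql_table(table, session.get_bind(), chunksize=5000)
                for i, chunk in enumerate(chunks):
                    s3.load_string(
                        chunk.to_csv(index=False),
                        key=f"{prefix}/part-{i:04d}.csv",
                        bucket_name=DATA_BUCKET,
                        replace=True,
                    )
        finally:
            session.close()

    export_metadata_tables()


metadata_export_sketch()

Triggering a DAG like this before spinning down the environment writes dated partitions that the Athena tables created later in this post can pick up with MSCK REPAIR TABLE.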

Prerequisites

To implement the solution outlined in this blog, you will need the following:

Steps to run a data pipeline using Amazon MWAA and save metadata to Amazon S3:

  1. Choose Launch Stack:
  2. Choose Next.
  3. For Stack name, enter a name for your stack.
  4. Choose Next.
  5. Keep the default settings on the ‘Configure stack options’ page, and choose Next.
  6. Acknowledge that the template may create AWS Identity and Access Management (IAM) resources.
  7. Choose Create stack. The stack can take up to 30 mins to complete.

The CloudFormation template generates the following resources:

    • VPC infrastructure that uses public routing over the internet.
    • Amazon S3 buckets required to support Amazon MWAA, detailed below:
      • The Data Bucket, referred to in this blog as s3://<DataBucket>, holds the data which will be optimised and transformed for further analytical consumption. This bucket will also hold the data from the Airflow back-end metadata database once extracted.
      • The Environment Bucket, referred to in this blog as s3://<EnvironmentBucket>, stores your DAGs, as well as any custom plugins and Python dependencies you may have.
    • Amazon MWAA environment that’s associated with the s3://<EnvironmentBucket>/dags location.
    • AWS Glue jobs for data processing that help generate Airflow metadata.
    • AWS Lambda-backed custom resources to upload the sample data, AWS Glue scripts, and DAG configuration files to Amazon S3.
    • AWS Identity and Access Management (IAM) users, roles, and policies.
  8. Once the stack creation is successful, navigate to the Outputs tab of the CloudFormation stack and make note of the DataBucket and EnvironmentBucket names. The Environment Bucket stores your Apache Airflow Directed Acyclic Graphs (DAGs), custom plugins in a plugins.zip file, and Python dependencies in a requirements.txt file.
  9. Open the Environments page on the Amazon MWAA console.
  10. Choose the environment created above. (The environment name will include the stack name.) Click on Open Airflow UI.
  11. Choose the glue-etl DAG, unpause it by clicking the button next to the name of the DAG, and click the play button on the right-hand side to trigger the DAG. It may take up to a minute for the DAG to appear.
  12. Leave Configuration JSON empty and hit Trigger.
  13. Choose the run-simple-dag DAG, unpause it, and click Trigger DAG.
  14. Once both DAG executions have completed, select the db_export_dag DAG, unpause it, and click Trigger DAG. Leave Configuration JSON empty and hit Trigger.

This step extracts the DAG and task metadata to an Amazon S3 location. This is a sample list of tables; more tables can be added as required. The exported metadata will be located in the s3://<DataBucket>/export/ folder.
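
Before moving on to Athena, you can quickly confirm that the export landed where expected. The following is a small boto3 sketch; <DataBucket> is the placeholder used throughout this post, so substitute the bucket name from your stack outputs.

import boto3

s3 = boto3.client("s3")

# List the exported metadata files under the export/ prefix.
response = s3.list_objects_v2(Bucket="<DataBucket>", Prefix="export/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])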

Visualise using Amazon QuickSight and Amazon Athena

Amazon Athena is a serverless interactive query service that can be used to run exploratory analysis on data stored in Amazon S3.

If you are using Amazon Athena for the first time, please find the steps here to set up the query result location. We can use Amazon Athena to explore and analyse the metadata generated from Airflow DAG runs.

  1. Navigate to the Athena console and click Explore the query editor.
  2. Hit View Settings.
  3. Click Manage.
  4. Set the query result location to s3://<DataBucket>/logs/athena/. Once completed, return to the query editor.
  5. Before we can perform our pipeline analysis, we need to create the below DDLs. Replace the <DataBucket> as part of the LOCATION clause with the parameter value as defined in the CloudFormation stack (noted in Step 8 above).
    CREATE EXTERNAL TABLE default.airflow_metadata_dagrun (
            sa_instance_state STRING,
            dag_id STRING,
            state STRING,
            start_date STRING,
            run_id STRING,
            external_trigger STRING,
            conf_name STRING,
            dag_hash STRING,
             id STRING,
            execution_date STRING,
            end_date STRING,
            creating_job_id STRING,
            run_type STRING,
            last_scheduling_decision STRING
       )
    PARTITIONED BY (dt string)
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LOCATION 's3://<DataBucket>/export/dagrun/'
    TBLPROPERTIES ("skip.header.line.count"="1");
    MSCK REPAIR TABLE default.airflow_metadata_dagrun;
    
    CREATE EXTERNAL TABLE default.airflow_metadata_taskinstance (
            sa_instance_state STRING,
            start_date STRING,
            job_id STRING,
            pid STRING,
            end_date STRING,
            pool STRING,
            executor_config STRING,
            duration STRING,
            pool_slots STRING,
            external_executor_id STRING,
            state STRING,
            queue STRING,
            try_number STRING,
            max_tries STRING,
            priority_weight STRING,
            task_id STRING,
            hostname STRING,
            operator STRING,
            dag_id STRING,
            unixname STRING,
            queued_dttm STRING,
            execution_date STRING,
            queued_by_job_id STRING,
            test_mode STRING
       )
    PARTITIONED BY (dt string)
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY ','
    LOCATION 's3://<DataBucket>/export/taskinstance/'
    TBLPROPERTIES ("skip.header.line.count"="1");
    MSCK REPAIR TABLE default.airflow_metadata_taskinstance;

  6. You can preview the table in the query editor of Amazon Athena.

  7. With the metadata persisted, you can perform pipeline monitoring and derive some powerful insights on the performance of your data pipelines over time. As an example to illustrate this, execute the below SQL query in Athena.

This query returns pertinent metrics at a monthly grain, including the number of executions of the DAG in that month, the success rate, the minimum/maximum/average duration for the month, and the variation compared to the previous month’s average.

Through the below SQL query, you will be able to understand how your data pipelines are performing over time.

select dag_run_prev_month_calcs.*
        , avg_duration - prev_month_avg_duration as var_duration
from
    (
select dag_run_monthly_calcs.*
            , lag(avg_duration, 1, avg_duration) over (partition by dag_id order by year_month) as prev_month_avg_duration
    from
        (
            select dag_id
                    , year_month
                    , sum(counter) as num_executions
                    , sum(success_ind) as num_success
                    , sum(failed_ind) as num_failed
                    , (cast(sum(success_ind) as double)/ sum(counter))*100 as success_rate
                    , min(duration) as min_duration
                    , max(duration) as max_duration
                    , avg(duration) as avg_duration
            from
                (
                    select dag_id
                            , 1 as counter
                            , case when state = 'success' then 1 else 0 end as success_ind
                            , case when state = 'failed' then 1 else 0 end as failed_ind
                            , date_parse(start_date,'%Y-%m-%d %H:%i:%s.%f+00:00') as start_date
                            , date_parse(end_date,'%Y-%m-%d %H:%i:%s.%f+00:00') as end_date
                            , date_parse(end_date,'%Y-%m-%d %H:%i:%s.%f+00:00') - date_parse(start_date,'%Y-%m-%d %H:%i:%s.%f+00:00') as duration
                            , date_format(date_parse(start_date,'%Y-%m-%d %H:%i:%s.%f+00:00'), '%Y-%m') as year_month
                    from "default"."airflow_metadata_dagrun"
                    where state <> 'running'
                )  dag_run_counters
            group by dag_id, year_month
        ) dag_run_monthly_calcs
    ) dag_run_prev_month_calcs
order by dag_id, year_month
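
If you would rather run this analysis on a schedule than in the console, the same query can be submitted through the Athena API. The following boto3 sketch is an example built on assumptions: it expects the query above to be saved locally as dag_run_monthly_trend.sql, uses the default database, and reuses the s3://<DataBucket>/logs/athena/ result location configured earlier.

import time

import boto3

athena = boto3.client("athena")

with open("dag_run_monthly_trend.sql") as f:  # the SQL shown above
    query = f.read()

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://<DataBucket>/logs/athena/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then page through the results.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    paginator = athena.get_paginator("get_query_results")
    for page in paginator.paginate(QueryExecutionId=query_id):
        for row in page["ResultSet"]["Rows"]:
            print([col.get("VarCharValue") for col in row["Data"]])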

  8. You can also visualize this data using your BI tool of choice. While step-by-step details of creating a dashboard are not covered in this blog, please refer to the dashboard below, built on Amazon QuickSight, as an example of what can be built based on the metadata extracted above. If you are using Amazon QuickSight for the first time, please find the steps here on how to get started.

Through QuickSight, we can quickly see that our data pipelines are completing successfully, but on average are taking longer to complete over time.

Clean up the environment

  1. Navigate to the S3 console and click on the <DataBucket> noted in step 8 above.
  2. Click on Empty bucket.
  3. Confirm the selection.
  4. Repeat the Empty bucket steps for the <EnvironmentBucket> bucket (noted in step 8 above).
  5. Run the below statements in the query editor to drop the two Amazon Athena tables. Run statements individually.
    DROP TABLE default.airflow_metadata_dagrun;
    DROP TABLE default.airflow_metadata_taskinstance;

  6. On the AWS CloudFormation console, select the stack you created and choose Delete.

Summary

In this post, we presented a solution to further optimise the costs of Amazon MWAA by tearing down instances whilst preserving the metadata. Storing this metadata in your data lake enables you to better perform pipeline monitoring and analysis. This process can be scheduled and orchestrated programmatically and is applicable to all Airflow deployments, such as Amazon MWAA, Apache Airflow installed on Amazon EC2, and even on-premises installations of Apache Airflow.

To learn more, please visit Amazon MWAA and Getting Started with Amazon MWAA.


About the Authors

Praveen Kumar is a Specialist Solution Architect at AWS with expertise in designing, building, and implementing modern data and analytics platforms using cloud-native services. His areas of interests are serverless technology, streaming applications, and modern cloud data warehouses.

Avnish Jain is a Specialist Solution Architect in Analytics at AWS with experience designing and implementing scalable, modern data platforms on the cloud for large scale enterprises. He is passionate about helping customers build performant and robust data-driven solutions and realise their data & analytics potential.

Use Amazon CodeGuru Profiler to monitor and optimize performance in Amazon Kinesis Data Analytics applications for Apache Flink

Post Syndicated from Praveen Panati original https://aws.amazon.com/blogs/big-data/use-amazon-codeguru-profiler-to-monitor-and-optimize-performance-in-amazon-kinesis-data-analytics-applications-for-apache-flink/

Amazon Kinesis Data Analytics makes it easy to transform and analyze streaming data and gain actionable insights in real time with Apache Flink. Apache Flink is an open-source framework and engine for processing data streams in real time. Kinesis Data Analytics reduces the complexity of building and managing Apache Flink applications using open-source libraries and integrating with other AWS services.

Kinesis Data Analytics is a fully managed service that takes care of everything required to run real-time streaming applications continuously and scale automatically to match the volume and throughput of your incoming data.

As you start building and deploying business-critical, highly scalable, real-time streaming applications, it’s important that you continuously monitor applications for health and performance, and optimize the application to meet the demands of your business.

With Amazon CodeGuru Profiler, developers and operations teams can monitor the following:

You can use CodeGuru Profiler to analyze the application’s performance characteristics and bottlenecks in the application code by capturing metrics such as CPU and memory utilization. You can use these metrics and insights to identify the most expensive lines of code; optimize for performance; improve stability, latency, and throughput; and reduce operational cost.

In this post, we discuss some of the challenges of running streaming applications and how you can use Amazon Kinesis Data Analytics for Apache Flink to build reliable, scalable, and highly available streaming applications. We also demonstrate how to set up and use CodeGuru Profiler to monitor an application’s health and capture important metrics to optimize the performance of Kinesis Data Analytics for Apache Flink applications.

Challenges

Streaming applications are particularly complex in nature. The data is continuously generated from a variety of sources with varying amounts of throughput. It’s critical that the application infrastructure scales up and down according to these varying demands without becoming overloaded or running into operational issues that might result in downtime.

As such, it’s crucial to constantly monitor the application for health, and identify and troubleshoot the bottlenecks in the application configuration and application code to optimize the application and the underlying infrastructure to meet the demands while also reducing the operational costs.

What Kinesis Data Analytics for Apache Flink and CodeGuru Profiler do for you

With Kinesis Data Analytics for Apache Flink, you can use Java, Scala, and Python to process and analyze real-time streaming data using open-source libraries based on Apache Flink. Kinesis Data Analytics provides the underlying infrastructure for your Apache Flink applications. It handles core capabilities such as provisioning compute resources, parallel computation, automatic scaling, and application backups (implemented as checkpoints and snapshots) to rapidly create, test, deploy, and scale real-time data streaming applications using best practices. This allows developers to focus more on application development and less on Apache Flink infrastructure management.

With CodeGuru Profiler, you can quickly and easily monitor Kinesis Data Analytics for Apache Flink applications to:

  • Identify and troubleshoot CPU and memory issues using CPU and memory (heap summary) utilization metrics
  • Identify bottlenecks and the application’s most expensive lines of code
  • Optimize application performance (latency, throughput) and reduce infrastructure and operational costs

Solution overview

In this post, we use a sample Java application deployed as a Kinesis Data Analytics application for Apache Flink, which consumes the records from Amazon Kinesis Data Streams and uses Apache Flink operators to generate real-time actionable insights. We use this sample to understand and demonstrate how to integrate with CodeGuru Profiler to monitor the health and performance of your Kinesis Data Analytics applications.

The following diagram shows the solution components.

At a high level, the solution covers the following steps:

  1. Set up, configure, and deploy a sample Apache Flink Java application on Kinesis Data Analytics.
  2. Set up CodeGuru Profiler.
  3. Integrate the sample Apache Flink Java application with CodeGuru Profiler.
  4. Use CodeGuru Profiler to analyze, monitor, and optimize application performance.

Set up a sample Apache Flink Java application on Kinesis Data Analytics

Follow the instructions in the GitHub repo and deploy the sample application that includes source code as well as AWS CloudFormation templates to deploy the Kinesis Data Analytics for Apache Flink application.

For this post, I deploy the stack in the us-east-1 Region.

After you deploy the sample application, you can test the application by running the following commands, and providing the correct parameters for the Kinesis data stream and Region.

The Java application has already been downloaded to an EC2 instance that has been provisioned by AWS CloudFormation; you just need to connect to the instance and run the JAR file to start ingesting events into the stream.

$ ssh ec2-user@«Replay instance DNS name»

$ java -jar amazon-kinesis-replay-*.jar -streamName «Kinesis data stream name» -streamRegion «AWS region» -speedup 3600

Set up CodeGuru Profiler

Set up and configure CodeGuru Profiler using the AWS Management Console. For instructions, see Set up in the CodeGuru Profiler console.

For this post, I create a profiling group called flinkappdemo in the us-east-1 Region.

In the next section, I demonstrate how to integrate the sample Kinesis Data Analytics application with the profiling group.

Integrate the sample Apache Flink Java application with CodeGuru Profiler

Download the source code that you deployed earlier and complete the following steps to integrate CodeGuru Profiler with the Java application:

  1. Include the CodeGuru Profiler agent in your application by adding the following dependencies to your pom.xml file:
    <project xmlns="http://maven.apache.org/POM/4.0.0" 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    ...
        <repositories>
            <repository>
                <id>codeguru-profiler</id>
                <name>codeguru-profiler</name>
                <url>https://d1osg35nybn3tt.cloudfront.net</url>
            </repository>
        </repositories>
        ... 
        <dependencies>
            <dependency>
                <groupId>com.amazonaws</groupId>
                <artifactId>codeguru-profiler-java-agent</artifactId>
                <version>1.2.1</version>
            </dependency>
        </dependencies>
    ...
    </project> 

  2. Add the CodeGuru Profiler agent configuration code to the Apache Flink Operators (functions), as shown in the following code.

Because multiple operators and operator instances can run on the same TaskManager JVM, and because one instance of the profiler can capture all events in a JVM, you just need to enable the profiler on an operator that is guaranteed to be present on all TaskManager JVMs. For this, you can pick the operator with the highest parallelism. In addition, you could instantiate the profiler as a singleton such that there is one instance per JVM.

public class CountByGeoHash implements WindowFunction<TripGeoHash, PickupCount, String, TimeWindow> {

  static {
    new Profiler.Builder()
            .profilingGroupName("flinkappdemo")
            .withHeapSummary(false) // optional - to start without heap profiling set to false or remove line
            .build()
            .start();
  }
  .....
}
public class TripDurationToAverageTripDuration implements WindowFunction<TripDuration, AverageTripDuration, Tuple2<String, String>, TimeWindow> {

  static {
    new Profiler.Builder()
            .profilingGroupName("flinkappdemo")
            .withHeapSummary(false) // optional - to start without heap profiling set to false or remove line
            .build()
            .start();
  }
  .....
}
  3. Build the application using the following command:
    mvn clean package

The preceding command packages the application into a JAR file.

  4. Copy and replace the JAR file in the Amazon Simple Storage Service (Amazon S3) bucket that was created as part of the CloudFormation stack.
  5. Choose Save changes to update the application.

This step allows the application to use the latest JAR file that contains the CodeGuru Profiler code to start profiling the application.
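
If you prefer to script the JAR upload from step 4 rather than use the console, a minimal boto3 sketch follows. The bucket name, object key, and JAR path below are placeholders and assumptions; use the values from your CloudFormation stack and your build output.

import glob

import boto3

# Hypothetical names: replace with the bucket and key referenced by your
# Kinesis Data Analytics application configuration.
jar_path = glob.glob("target/*.jar")[0]  # the JAR produced by mvn clean package
s3 = boto3.client("s3")
s3.upload_file(
    Filename=jar_path,
    Bucket="<flink-application-bucket>",
    Key="flink-app.jar",
)

Once the new object is in place, choosing Save changes in the console (step 5) points the application at the updated code.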

Use CodeGuru Profiler to analyze, monitor, and optimize application performance

Now that the application has been configured to use CodeGuru Profiler, you can use the metrics and visualizations to explore profiling data collected from the application.

Run the following commands, the same ones from when you set up your application, to start ingesting data into the Kinesis data stream and enable CodeGuru Profiler to profile the application and gather metrics:

$ ssh ec2-user@«Replay instance DNS name»

$ java -jar amazon-kinesis-replay-*.jar -streamName «Kinesis data stream name» -streamRegion «AWS region» -speedup 3600

On the CodeGuru console, navigate to flinkappdemo on the Profiling groups page.

The summary page displays the status of your profiling group as well as the relevant metrics gathered while profiling the application.

In the following sections, we discuss the metrics and reports on this page in more detail.

CPU summary

Use this summary and the associated metrics CPU utilization and Time spent executing code to understand how much of the instance’s CPU resources are consumed by the application and how frequently the application’s JVM threads were in the RUNNABLE state. This helps you measure the application’s time spent running operations on the CPU so you can tune your application code and configuration.

With the CPU utilization metric, a low value (such as less than 10%) indicates your application doesn’t consume a large amount of the system CPU capacity. This means there could be an opportunity to scale in the application parallelism to reduce cost. A high value (over 90%) indicates your application is consuming a large amount of system CPU capacity. This means there is likely value in looking at your CPU profiles and recommendations for areas of optimization.

When examining the time spent running code, a high percentage (over 90%) indicates most of your application’s time is spent running operations on the CPU. A very low percentage (under 1%) indicates that most of your application was spent in other thread states (such as BLOCKED or WAITING) and there may be more value in looking at the latency visualization, which displays all non-idle thread states, instead of the CPU visualization.

For more information on understanding the CPU summary, see CPU summary.

Latency summary

Use this summary and the metrics Time spent blocked and Time spent waiting to understand what sections of the code are causing threads to block and threads that are waiting to tune your application code and configuration. For more information, see Latency summary.

The CPU summary and latency visualization can help you analyze the thread blocking and wait operations to further identify bottlenecks and tune your application’s performance and configuration.

Heap usage

Use this summary and the metrics Average heap usage and Peak heap usage to understand how much of your application’s maximum heap capacity is consumed by your application and to spot memory leaks. If the graph grows continuously over time, that could be an indication of a memory leak.

With the average heap usage metric, a high percentage (over 90%) could indicate that your application is close to running out of memory most of the time. If you wish to optimize this, the heap summary visualization shows you the object types consuming the most space on the heap. A low percentage (less than 10%) may indicate that your JVM is being provided much more memory than it actually requires and cost savings may be available by scaling in the application parallelism, although you should check the peak usage too.

Peak heap usage shows the highest percentage of memory consumed by your application seen by the CodeGuru Profiler agent. This is based on the same dataset as seen in the heap summary visualization. A high percentage (over 90%) could indicate that your application has high spikes of memory usage, especially if your average heap usage is low.

For more information on the heap summary, see Understanding the heap summary.

Anomalies and recommendation reports

CodeGuru Profiler uses machine learning to detect and alert on anomalies in your application profile and code. Use this to identify parts of the code for performance optimization and potential savings.

The issues identified during analysis are included in the recommendations report. Use this report to identify potential outages, latency, and other performance issues. For more information on how to work with anomalies and recommendations, see Working with anomalies and recommendation reports.

Visualizations

You can use visualizations associated with the preceding metrics to drill down further to identify what parts of the application configuration and application code are impacting the performance, and use these insights to improve and optimize application performance.

CodeGuru Profiler supports three types of visualizations and a heap summary to display profiling data collected from applications:

Let’s explore the profiling data collected from the preceding steps to observe and monitor application performance.

CPU utilization

The following screenshot shows the snapshot of the application’s profiling data in a flame graph visualization. This view provides a bottom-up view of the application’s profiling data, with the X-axis showing the stack profile and the Y-axis showing the stack depth. Each rectangle represents a stack frame. This visualization can help you identify specific call stacks that lead to inefficient code by looking at the top block function on CPU. This may indicate an opportunity to optimize.

Recommendation report with opportunities to optimize the application

Use the recommendation report to identify and correlate the sections of the application code that can be improved to optimize the application performance. In our example, we can improve the application code by using StringBuilder instead of String.format and by reusing the loggers rather than reinitializing them repetitively, and also by selectively applying the debug/trace logging, as recommended in the following report.

Hotspot visualization

The hotspot visualization shows a top-down view of the application’s profiling data. The functions that consume the most CPU time are at the top of the visualization and have the widest block. You can use this view to investigate functions that are computationally expensive.

Latency visualization

In this mode, you can visualize frames with different thread states, which can help you identify functions that spent a lot of time being blocked on shared resources, or waiting for I/O or sleeping. You can use this view to identify threads that are waiting or dependent on other threads and use it to improve latency on all or parts of your application.

You can further analyze any frame in a visualization by selecting the frame, right-clicking it, and choosing Inspect.

Heap summary

This summary view shows how much heap space your application requires to store all objects required in memory after a garbage collection cycle. If this value continuously grows over time until it reaches total capacity, that could be an indication of a memory leak. If this value is very low compared to total capacity, you may be able to save money by reducing your system’s memory.

For more information on how to work and explore data with visualizations, refer to Working with visualizations and Exploring visualization data.

Clean up

To avoid ongoing charges, delete the resources you created from the previous steps.

  1. On the CodeGuru console, choose Profiling groups in the navigation pane.
  2. Select the flinkappdemo profiling group.
  3. On the Actions menu, choose Delete profiling group.
  4. On the AWS CloudFormation console, choose Stacks in the navigation pane.
  5. Select the stack you deployed (kinesis-analytics-taxi-consumer) and choose Delete.

Summary

This post explained how to configure, build, deploy, and monitor real-time streaming Java applications using Kinesis Data Analytics applications for Apache Flink and CodeGuru. We also explained how you can use CodeGuru Profiler to collect runtime performance data and metrics that can help you monitor application health and optimize your application performance.

For more information, see Build and run streaming applications with Apache Flink and Amazon Kinesis Data Analytics for Java Applications and the Amazon Kinesis Data Analytics Developer Guide.

Several customers are now using CodeGuru Profiler to monitor and improve application performance, and you too can start monitoring your applications by following the instructions in the product documentation. Head over to the CodeGuru console to get started today!


About the Author

Praveen Panati is a Senior Solutions Architect at Amazon Web Services. He is passionate about cloud computing and works with AWS enterprise customers to architect, build, and scale cloud-based applications to achieve their business goals. Praveen’s area of expertise includes cloud computing, big data, streaming analytics, and software engineering.

The children of Rodopi Municipality were compared to dogs by Georgi Tsankov, chairman of the Municipal Council, during the public discussion of the Rodopi Municipality budget

Post Syndicated from VassilKendov original http://kendov.com/%D0%B4%D0%B5%D1%86%D0%B0%D1%82%D0%B0-%D0%BE%D1%82-%D0%BE%D0%B1%D1%89%D0%B8%D0%BD%D0%B0-%D1%80%D0%BE%D0%B4%D0%BE%D0%BF%D0%B8-%D0%B1%D1%8F%D1%85%D0%B0-%D1%81%D1%80%D0%B0%D0%B2%D0%BD%D0%B5%D0%BD%D0%B8/

On 30 March 2022, the public discussion of the municipal budget took place in Rodopi Municipality.
Asked by the financier Vassil Kendov about the scholarships set aside for children in Rodopi Municipality, the chairman of the Municipal Council, Georgi Tsankov, replied:

“…the fact that you want scholarships for children, dogs, birds…”

Vassil Kendov reacted sharply to this remark and explained that children cannot be compared to dogs. This is neither normal nor ethical, and it says a lot.

At the meeting, the municipality made an audio recording, from which it can easily be verified exactly what was said and in what context.

A little later, Mr. Georgi Tsankov again used an inappropriate comparison when he explained why practically no support is envisaged for the small settlements out of the record municipal budget of BGN 44.5 million.

“…the budget is like the budget of a family. Everything goes into one common account and is then distributed to everyone…”

This statement also did not go unnoticed by the financier Vassil Kendov (chairman of the Revival of Bulgarian Villages Foundation):

“I understand that Mr. Tsankov’s comparisons are not working out today. A moment ago you compared children to dogs, and now you are explaining how, in a family, we should tell the younger child that it will not eat because we will give more to its older brother. This is not normal.”

The statements were recorded, and the audio files are kept at the municipality.

In practice, numerous violations were identified, and no journalists were present at the discussion.
There were also personal attacks against citizens over their clothing, which, according to Mayor Pavel Mihaylov, was disrespectful.

In this case, the citizen from the village of Belashtitsa replied that he had come straight from work, because he had only just learned about the discussion and had no time to change. He also noted that the period since the publication of the invitation was too short for him to have learned about the discussion earlier, and that it had been announced on an internal page of the municipality’s website, which makes it practically undiscoverable for the people of Rodopi Municipality.
This was exactly the first point in the statement of the financier Vassil Kendov.

We attach the full statement:

“In connection with the invitation for public discussion of the draft budget of Rodopi Municipality, I would like to draw attention to the following details:

1. The invitation was posted on 23.03.2022, while the date of the public discussion was set for 30.03.2022 at 10:00. The actual period allowed for review and discussion is 7 days.

My personal opinion is that this is too short a period.

According to the registry office of Rodopi Municipality, which cites the provisions of the Administrative Procedure Code and, respectively, the Law on Normative Acts, the deadline for responding to my inquiry to Rodopi Municipality is one month. This concerns a reply half an A4 page long.

On the other hand, the draft budget of Rodopi Municipality is 15 pages, and its summary is 14 pages.
That is, Rodopi Municipality, with all its administrative capacity, a budget of BGN 37.5 million (for 2021), and its staff, needs 30 days to reply to an inquiry from a citizen, while an individual citizen, with their personal resources, is expected to review, analyse, consult, and submit proposals on the budget and its summary within 7 days, and only if they come across the invitation for the public discussion on the very first day of its publication.
It is clear that the statutory period for discussion is one month, but how exactly, with whom, and how effectively do you expect the discussion to take place after the budget discussion meeting, which is 7 days after the announcement? The remaining 23 days of the statutory period will be pointless.

I think the above is technically impossible! I am not convinced that this is within the abilities of 99.99% of the residents of Rodopi Municipality, which in my view gives us reason to think about providing a REAL OPPORTUNITY FOR DEBATE on the budget of Rodopi Municipality.

In a previous letter of mine to Rodopi Municipality, I pointed out that the sessions of the Municipal Council in Rodopi Municipality are announced with less than 24 hours’ notice. The answer was that this applies to urgent and emergency sessions. Since the beginning of the year, 3 sessions have been held in Rodopi Municipality. Two of them, on 19.01.2022 and 07.03.2022, were announced with less than 24 hours’ notice, citing urgency.
As an example, at the first of these, decisions were taken on the remuneration of the Mayor of Rodopi Municipality (item 2 of the agenda), and an “Analysis of the needs for support for the personal development of children and pupils in Rodopi Municipality” was also adopted (item 3).

My personal opinion is that, alongside some “item” on the agenda that meets the requirement for convening an emergency session of the Municipal Council, another 10 items are voted on that do not meet this definition and are subject to public discussion.
The practice of including such items on the agenda of emergency sessions removes the opportunity for public discussion of the problems and actions of the municipal administration, and this is becoming standard practice for the administration of Rodopi Municipality.
The notice periods are too short for the public to familiarise itself with the agenda or to request participation in the sessions of the Municipal Council.

I have also learned that the online sessions cannot, in practice, be observed by citizens for “technical reasons”.
To me as a citizen, such a claim is untenable given that, for 2 years now, owing to the declared pandemic, all schools and administrations across the Republic of Bulgaria have introduced systems for online classes and meetings, and these systems are free. In the private sector such tools have been used for decades, and it is not serious to claim that there is “no technical possibility” of providing a link on the municipality’s website (using one of the dozens of free conferencing tools) so that the Municipal Council’s sessions can be observed online.

As a conclusion from the above, I would like to ask both the Municipal Council of Rodopi Municipality and the Regional Governor of Plovdiv Region to take a position on the case described and to guarantee the residents of Rodopi Municipality the ability to take part in, or at least a PRACTICAL ability to observe, the sessions of the Municipal Council.

2. Regarding the format of the information provided for Budget 2022.

The municipality’s website provides a link to the proposed budget for 2022 in PDF format. However, no such link exists for 2021. There, the budget is split across various PDF files, some of which are quite illegible. As an example, I can point to “Local activities – consolidated budget 2021”. For me, this cannot be processed and compared with the newly proposed “Local activities consolidated budget 2022 – draft”, simply because it cannot be read, even when printed.
Apart from having only 7 days to review the material and form an opinion, in practice the published data for 2021 and the draft for 2022 are incompatible, so we cannot determine a trend and/or recurrence of expenditures.

My personal opinion is that Excel spreadsheets exist for each of the published files. There is no need to publish paper copies converted to PDF format.
For me, even with the will and the qualifications, analysing the budget of Rodopi Municipality is PRACTICALLY impossible in the way the data is presented. I see no reason not to publish compatible data in an electronic format that allows it to be processed.

3. Capital expenditure for the gasification of the Neofit Rilski primary school in the village of Yagodovo – BGN 60,000 (p. 4, line 42). Given natural gas prices and the looming gas crisis, I propose that these funds be redirected towards installing table tennis tables around the community centres, which, combined with a free WiFi point, would turn the space around them into a favourite place for play and entertainment.

4. There is a drastic difference in the price of the snowploughs for Krumovo and Lilkovo (p. 7, lines 113 and 114). I think a basic principle has been violated, namely that more funds are allocated for a place with many times less snow, if any at all. This year, for example, there was no snow at all in Krumovo, while in Boykovo, Lilkovo, and Sitovo it snowed again yesterday (28.03.2022).
This year in the village of Boykovo I got stuck and had to be pulled out 3 times (I have 2 cars, both 4×4), and I myself took part in pulling out 4 other cars at the same spot, the stretch from the concreted road towards the Chatal Chuchur area. Last year the neighbours and I collected money to “fix” the steep part of the road, but the muddy streets turned out to be an even bigger problem.
I want to note that this is not about asphalting, but about laying crushed stone. It is cheap, fast, and effective for mountainous terrain.
How do we expect the village to revive if we cannot reach our properties? We understand that we must be active and help ourselves, but it would be good if the municipality also helped in this case. After all, we are talking about a budget of BGN 44.4 million.

5. This is shaping up to be the budget of the children’s playgrounds. However, a more serious problem is looming that is not being discussed: “the garbage problem”. A penal decree has been issued regarding the use of the landfill in the village of Tsalapitsa. I am sure that the Mayor of Rodopi Municipality, in his capacity as a lawyer, is aware of what this means. Please inform the public about it. Does Rodopi Municipality have a backup option for waste disposal, and when do you intend to set aside funds for it?
These are projects that take time and resources. It is not right for us to be “surprised” again.
I believe that work on such a project is already overdue. The budget speaks of creating conditions for investment, but realistically, investment will be impossible without a secure and lasting solution to the waste problem.

6. Page 3, items 1, 2, 3, 4, 9, 10, 11 – Purchase of equipment (mostly laptops) for a total of BGN 130,000. The foundation I represent has serious experience in building computer rooms, including within Rodopi Municipality: Boykovo, Lilkovo, Zlatitrap, Belashtitsa, Markovo. One is about to be built in the village of Sitovo. The problem there is the unsuitability of the premises; I see fundraising campaigns by the community centre’s activists to repair the roof.
On the other hand, I do not believe that purchasing this equipment will increase the administrative capacity of Rodopi Municipality. There is no good reason to buy laptops for the municipal councillors when they hold one meeting a month. The capacity of such equipment is considerably greater, and it is intended for quite different work.

This is most easily demonstrated with an example. How long does it currently take from the submission of an application to donate land to the municipality until a decision to accept the donation is made? And by how much will this time be shortened after the equipment is purchased?

I propose that these funds be directed towards the development of youth clubs and activities in Rodopi Municipality. As an example: BGN 30,800 and 2 staff positions are set aside for the youth clubs for the year, versus BGN 352,000 and 15.5 staff positions for the pensioners’ clubs.
We need to think carefully about what we actually want to develop and how that happens.

Without underestimating the pensioners and their needs, we must think about those who come after us: the children. And not only about their physical development, but about a suitable education.

Results from the 2021 independent external assessment in mathematics: the private school in the village of Markovo, 76th place; Markovo, 216th place; Brestovitsa, 916th place; Parvenets, 574th; Ustina, 738th; Yagodovo, 856th…
Let us not forget that these are settlements adjacent to the second-largest city in Bulgaria.
How do we expect to develop and attract business without providing a qualified workforce or guaranteed waste collection?
What are the municipality’s priorities for attracting business? What kind of business?
If we are going to develop nuclear energy, shouldn’t we have a secondary school with the corresponding specialty?
If we are going to develop tourism, wouldn’t it be good to have a school in the municipality with a hospitality class?
It is clear that we are close to a big city, but does that mean we should not develop education?
Usually the processes run the other way: the big city is the magnet for young people, not the villages around it.
In my opinion, the budget lacks a clear programme for developing economic activity in Rodopi Municipality. There is no budgetary provision for such intentions, nor are any specific economic activities declared as priorities for the municipality.

Not a single scholarship is provided for in the budget (unless scholarships are accounted for under another line item).

I would also like to note that over the past year the foundation I represent, Revival of Bulgarian Villages, has organised or taken part, with the help of the mayors and deputy mayors, in the following events (apart from donations) within Rodopi Municipality:
Planting roses at the entrance of the village of Boykovo; donating a professional computer room to the youth club in the village of Markovo; organising a talk, “The Professions of the Future”, with lecturers such as Svetlin Nakov (SoftUni), the blogger No Thanks, and the chairman of the Bulgarian Starcraft League, Lachezar Kamenov; organising 2 tournaments, Starcraft and CS; organising a contest for the best banitsa in the village of Boykovo; organising Enyovden celebrations in the village of Sitovo; and a trophy off-road run from the village of Boykovo to the village of Lilkovo. Every one of these events has the potential to become an event of national significance.
We must note with regret that, so far, not one of them has had an official presence or support from the administration of Rodopi Municipality! Yet there is potential in every one of them. I attach a link to some of the events: https://www.youtube.com/watch?v=Wb6rHJRsMxU

I propose that the budget provide funds for installing outdoor table tennis tables in front of the community centres in the settlements, combined with a free wifi point for each settlement, at the expense of the funds for building children’s playgrounds; and if playgrounds are built, they should be in the immediate vicinity of the community centres, forming “complexes” where children and young people can enjoy their free time.
I propose that the funds for the gasification of the school in the village of Yagodovo be reconsidered and redirected towards building a heat pump heating system, also drawing on funds from the national recovery and resilience plan.
It is absolutely essential that legible data be published on the Rodopi Municipality website in a format that allows comparison and processing across periods (for example, Excel).
I propose that earmarked funds be set aside for the creation of a festival of national significance, at the expense of the planned spending on purchasing IT equipment.
I propose that a strategy be drawn up for developing the business environment in Rodopi Municipality.
I propose that funds be set aside and work begin on building a new landfill within Rodopi Municipality.

Vassil Kendov
Resident of the village of Boykovo, chairman of the Revival of Bulgarian Villages Foundation.”

The post “The children of Rodopi Municipality were compared to dogs by Georgi Tsankov, chairman of the Municipal Council, during the public discussion of the Rodopi Municipality budget” appeared first on Kendov.com.

How GitHub does take home technical interviews

Post Syndicated from Andy McKay original https://github.blog/2022-03-31-how-github-does-take-home-technical-interviews/

There are many ways to evaluate an engineering candidate’s skills. One way is to ask them to solve a problem or write some code. We have been striving for a while to make this experience better at GitHub. This blog post talks about how candidates at GitHub do the “take home” portion of their interview—a technical challenge done independently—and how we improved on that process.

We believe the technical interview should be as similar as possible to the way we work at GitHub. That means:

  • Writing code on GitHub and submitting a pull request.
  • Using your preferred editor, operating system, and tools.
  • Using the internet for documentation and help.
  • Respecting time limits.

In order to make this process seamless for candidates, we automate with a GitHub app called Interview-bot. This app uses the GitHub API and existing GitHub features.

First, candidates get to choose the programming language they’ll use to take the interview. They’ll get an email asking them to take the interview at their convenience by signing into Interview-bot.

Each interview is aimed at being similar to the day-to-day problems that we solve at GitHub. These aren’t problems to trick or test obscure knowledge. Interviews come with a clear set of instructions and a time limit. We place this time limit because we respect your time and want to ensure that we don’t bias toward candidates who have more time to invest in the solution.

The exercise is contained in a repository in a separate organization on GitHub. When the candidate signs in, we make a new repository, grab a copy of the exercise and copy the files, issues, and pull requests into the repository. This is done as a copy and not a fork or clone because we can alter the files in the process to fix things up. It also allows us to remove any Git history that might hide embarrassing clues on how to complete the exercise. 😉
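
To illustrate the idea (this is not Interview-bot's actual code), here is a hedged sketch of copying an exercise into a fresh candidate repository without carrying over Git history, using the PyGithub library. The token, organization, and repository names are hypothetical.

from github import Github

gh = Github("<app-or-personal-access-token>")
org = gh.get_organization("interview-exercises-org")   # hypothetical organization

base = org.get_repo("exercise-sample")                  # hypothetical base exercise
candidate = org.create_repo("candidate-1234", private=True)


def copy_tree(path=""):
    """Recursively copy files via the contents API, so no Git history comes along."""
    for item in base.get_contents(path):
        if item.type == "dir":
            copy_tree(item.path)
        else:
            # Text files assumed; files can be adjusted here before they land in
            # the new repository, which is one reason to copy rather than fork or clone.
            content = item.decoded_content.decode()
            candidate.create_file(item.path, "Add exercise files", content)


copy_tree()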

Diagram showing that the candidate exercise is copied from the base repository to the candidate repository

The candidate is given access to the repository, and a timer starts. They can now clone the exercise to their local machine, or use GitHub Codespaces. They are able to use whatever editor, tooling, and operating system they want. Again, we hope to make this as close as possible to how the candidate will be working in a day-to-day environment at GitHub.

When the candidate is satisfied with their pull request, they can submit it for review. The application will listen for the pull request via webhooks and will confirm that the pull request has been submitted.

Screenshot of pull request confirmation that candidate will receive

At this point, we anonymize the pull request and copy it back to the base repository.

Diagram that shows how the anonymized candidate response is sent back to the base repository

The pull request contains the code changes and comments from the candidate. To further reduce bias, the system anonymizes the submission (as best as it can) by removing the title. The Git commits and pull request will display Interview-bot as the author. To the reviewer, the pull request comes from Interview-bot and not the candidate.
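
As a rough illustration of that flow (again, a sketch under assumptions, not Interview-bot's real implementation), a webhook receiver could react to the candidate's pull request and strip identifying details before recreating it. The endpoint, the anonymized title, and the placeholder helpers below are all hypothetical.

from flask import Flask, request

app = Flask(__name__)


def copy_to_base_repository(pr_data):
    # Placeholder: the real flow would recreate the pull request in the base
    # repository with Interview-bot as the author.
    print("Would copy anonymized PR:", pr_data)


def confirm_submission(pr_number):
    # Placeholder: the real flow would post a confirmation on the candidate's PR.
    print("Would confirm submission of PR", pr_number)


@app.route("/webhook", methods=["POST"])
def handle_webhook():
    event = request.headers.get("X-GitHub-Event")
    payload = request.get_json()

    # React only when a pull request is opened in the candidate's repository.
    if event == "pull_request" and payload.get("action") == "opened":
        pr = payload["pull_request"]
        anonymized = {
            # Strip the title and author so reviewers see Interview-bot only.
            "title": "Candidate submission",
            "body": pr.get("body") or "",
            "head": pr["head"]["ref"],
            "base": pr["base"]["ref"],
        }
        copy_to_base_repository(anonymized)
        confirm_submission(pr["number"])

    return "", 204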

Sample pull request from interview-bot, showing anonymized ID rather than candidate name

The pull request includes automated tests and a rubric so that interviewers know how to mark it and each submission is evaluated objectively and consistently. The tests run through GitHub Actions and provide a base level for the reviewer.

For each language, we’ve got teams of engineers who review the exercise. Using GitHub’s code review team feature, an engineer at GitHub is assigned to review the code. To mark the code, we provide a clear scorecard on the pull request as a comment. This clear set of marking criteria helps limit any personal bias the interviewer might have. They’ll mark the review based on given technical criteria and apply an “Approve” or “Request changes” status to give the candidate a pass or fail, respectively.

Finally, Interview-bot tracks for changes on the pull request review and then informs the assigned staff member so they can follow up with the candidate, who hopefully moves on to the next stage of the GitHub interview process. At the start of the interview process, Interview-bot associates each candidate with an issue in an internal repository. This means that staff can track candidates and their progress all within GitHub.

Sample status update from Interview-bot

Using the existing GitHub APIs and tooling, we created an interview process that mirrors as closely as possible how you’ll work at GitHub, focused on reducing bias and improving the candidate’s experience.

If you’re interested in applying at GitHub, please check out our careers page!

How to Talk to Your Family About Backups

Post Syndicated from original https://www.backblaze.com/blog/how-to-talk-to-your-family-about-backups/

Talking to your family can be hard. Especially when it comes to topics that are as uncomfortable as backups. Today, March 31st, is World Backup Day, and we want to reduce the number of April Fools this year by making sure everyone is backed up. Do your family and friends have a good backup strategy in place? If not, we have a few different approaches you can take when broaching the conversation and some key concepts that will arm you with the knowledge to fight backup negligence, one friend and family member at a time.

The Subtle Nudge

Sometimes a simple reminder is the easiest way to go. Here are a couple of simple prompts that you might want to utilize if you think a simple reminder might do the trick:

  • Fun fact: Did you know that today is World Backup Day? You have a backup right? I use Backblaze, and it’s pretty great.
  • Don’t be an April Fool, back up your data! Today is World Backup Day, and Backblaze is a great service if you aren’t using one.
  • Backblaze is a great service for backing up your computer, and it’s World Backup Day today, so you know what to do.
  • I lost my data once. It was horrible. Don’t be like me—use Backblaze. (Oh, you’ve never lost data? Eh. A little white lie never hurt anyone when it comes to backing up.)

Oh, and don’t forget to send them to Backblaze.com!

The Intervention

Sometimes a simple nudge just won’t suffice and you need to really sit someone down and explain things to them. If that happens, we have a few different talking points that you may want to utilize about the benefits of backing up online:

  • Think of backing up as insurance for your data. In case something happens to the computer you are using, your data can still be protected.
  • If you have an online backup, all the data that’s backed up from your computer is available online, so you can access it even if your computer is offline, lost, or stolen.
  • Online backup services like Backblaze have mobile apps that allow you to access your backed up data on the go, from anywhere you have an internet connection.
  • Ransomware is on the rise, and having an off-site backup like Backblaze can help you recover from a malicious attack because your data will still be intact elsewhere, even if your computer is infected with ransomware or malware.

Full-on Family IT Management

Taking matters into your own hands is also an option. With Backblaze, our Groups feature allows you to take control and get your family backed up. Creating a Group that you manage is a piece of cake:

  • Log in to www.backblaze.com.
  • Go into your Account Settings and enable Business Groups.
  • Create a Group (you can find instructions here).
  • Invite your family to the Group.
  • Make sure they install the Backblaze service on their computer (that’s the only manual step on their machine), and we’ll handle the rest!

One thing to note: your Group can be managed or unmanaged. In an unmanaged Group, people create individual Backblaze accounts and can recover data on their own, without the Group manager being able to access it. In a managed Group, both the individual and the Group manager can access and recover data from the backed up accounts!

Knowledge Is Power

Before going into these conversations, it’s also important to be prepared with the cold hard facts about backing up and best practices in general. Below, we’ve listed a few things that are important to know and could be helpful in the discussions above:

Refer-a-friend

Backblaze has a refer-a-friend program that gives you a free month of backup for every person you refer who signs up for an account and purchases a license. Plus, they also get a month for free—this is a great way to get your friends and family started!

The 3-2-1 Backup Strategy

This is a concept we wholeheartedly love at Backblaze and have written a lot about. The gist is that everyone should have at least three copies of their data: two on-site and one off-site. The on-site copies can include the original, but make sure the second copy lives on a different medium, like an external hard drive. The off-site copy should be in an accessible location, ideally a cloud-based system like Backblaze.
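
For the more technically inclined family member, here is a toy Python sketch of the 3-2-1 idea (it is not how the Backblaze backup client works): it makes a second on-site copy on an external drive and an off-site copy in a Backblaze B2 bucket. The paths, bucket name, and credential environment variables are placeholders, and it assumes the b2sdk package is installed.

```python
import os
import shutil
from pathlib import Path

from b2sdk.v2 import B2Api, InMemoryAccountInfo

SOURCE = Path.home() / "Documents"                  # placeholder: the data you care about
EXTERNAL = Path("/Volumes/ExternalDrive/backup")    # placeholder: second on-site copy
BUCKET_NAME = "family-offsite-backup"               # placeholder: your B2 bucket


def copy_to_external_drive() -> None:
    """Second on-site copy, on a different medium than the original."""
    for path in SOURCE.rglob("*"):
        if path.is_file():
            dest = EXTERNAL / path.relative_to(SOURCE)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)


def upload_to_b2() -> None:
    """Off-site copy, stored in cloud object storage."""
    api = B2Api(InMemoryAccountInfo())
    api.authorize_account("production", os.environ["B2_KEY_ID"], os.environ["B2_APP_KEY"])
    bucket = api.get_bucket_by_name(BUCKET_NAME)
    for path in SOURCE.rglob("*"):
        if path.is_file():
            bucket.upload_local_file(local_file=str(path),
                                     file_name=str(path.relative_to(SOURCE)))


if __name__ == "__main__":
    copy_to_external_drive()
    upload_to_b2()
```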

Extended Version History

Many services that sync your data have limited retention history, so if you remove or change something on your computer, it gets removed or changed in the other locations as well. Backblaze keeps 30 days of version history by default, but we offer Extended Version History of one year or forever, to keep your data backed up for longer, just in case!

Password Best Practices

This is a general internet tip, but make sure you use a different password for every website or service you have an account with. This can absolutely get unruly, so we recommend using a password manager like Bitwarden, LastPass, or 1Password. They’re all great and can help you keep things organized and secure.

Two-factor Verification

Having strong passwords is a great first step for internet and account security. The next best thing to do is to enable two-factor verification. The most common form is a time-based one-time password (TOTP), generated by an app, either a dedicated one like Google Authenticator or one of the password managers mentioned above. Another option is to receive one-time codes by SMS, but that’s considered less secure, since phone numbers can be hijacked through attacks like SIM swapping.
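
If you’re curious what a TOTP app actually does, it derives a short code from a shared secret and the current 30-second time window, as described in RFC 6238. Here is a minimal Python sketch using only the standard library; the base32 secret shown is a made-up example, and real secrets come from the QR code a service gives you when you enroll.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current time-based one-time password for a base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period                 # current 30-second window
    msg = struct.pack(">Q", counter)                     # counter as big-endian 64-bit int
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


if __name__ == "__main__":
    # Example secret for illustration only.
    print(totp("JBSWY3DPEHPK3PXP"))
```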

Hopefully this overview of how to talk to your friends and family about backing up for World Backup Day was helpful, and maybe you learned something new in the process! If you’ve had this “talk” before and have an interesting angle that worked to get folks across the finish line and backing up, let us know in the comments below!

4 Fallacies That Keep SMBs Vulnerable to Ransomware, Pt. 2

Post Syndicated from Ryan Weeks original https://blog.rapid7.com/2022/03/31/4-fallacies-that-keep-smbs-vulnerable-to-ransomware-pt-2/

This post is co-authored by Chris Henderson, Senior Director of Information Security at Datto, Inc.

Welcome back for the second and final installment of our two-part series on the fallacies and biases that perpetuate ransomware risk for SMBs. In part one, we examined how flawed thinking and a sense of helplessness are obstacles to taking action against ransomware. In this final part, we will examine fallacies 3 and 4: the ways SMBs often fail to “trust but verify” the security of their critical business partners, and how prior investments affect their forward-looking mitigation decisions.

3. Failing to trust but verify

“You seem like someone I can trust to help support and grow my business.”

Stranger danger

When SMBs create business partnerships, they do so with a reasonable expectation that the other party will do the right things to keep both sides safe. In effect, SMBs are placing trust in strangers. As humans, we (often unconsciously) decide whom to trust based on how they make us feel or whether they remind us of a past positive experience. Rarely have SMBs done a deep enough examination to determine whether that level of trust is truly warranted, especially when it comes to protecting against ransomware.

We reasonably — but perhaps incorrectly — expect a few key things from these business partners, namely that they will:

  • Be rational actors that can be relied on to make informed decisions that maximize benefits for us
  • Exercise rational choice in our best interests
  • Operate with the same level of due care that a reasonable, prudent person would use under the same or similar circumstances, in decisions that affect our business – akin to a fiduciary

Rational actor model

According to rational choice theory in economics, a rational actor maximizes benefits for themselves first and chooses whichever option is best for them. That raises the question: To what extent do SMBs understand whether business objectives are aligned, such that what is right for their business partners’ cyber protection is also right for them? In the SMB space, too often the answer is based on trust alone and not on any sort of verification, or what mature security programs call third-party due diligence.

If I harm you, I harm myself

Increasingly, ransomware attacks rely on our business relationships (a.k.a. supply chains) to facilitate attacks on targets. End targets may be meticulously selected, but they could instead be targets of opportunity, and sometimes they are simply collateral damage. In any case, in this ransomware environment, it is critical for SMBs to reassess the level of trust they place in their business partners, whose cyber posture is now effectively part of their own. The risk is shared.

Trust is a critical component of business relationships, but trust in a business partner’s security must be verified when the relationship is established and reaffirmed periodically thereafter. Given this ransomware environment, it is reasonable to expect business partners to prove that they treat your protection and theirs as a mutual best interest. They must be able to speak to, and demonstrate, how they work toward that objective.

Acknowledge and act

Trust is no longer enough — SMBs have to verify. Unfortunately, there is no one-size-fits-all process for diligence, but a good place to start is with a serious conversation about your business partners’ attitudes, beliefs, current readiness, and their investments in cyber resilience, ransomware prevention, and recovery. During that conversation, ask a few key questions:

  1. Do you have cyber insurance coverage for a ransomware incident that affects both you and your customers? Tip: Ask them to provide you proof of coverage.
  2. What cybersecurity program framework do you follow, and to what extent have you accomplished operating effectiveness against that framework? Tip: Ask to see materials from audits or assessments as evidence.
  3. Has your security posture been validated by an independent third party? Tip: Ask to see materials from audits or assessments as evidence.
  4. When was the last time you, or a customer of yours, suffered a cybersecurity incident, and how did you respond? Tip: Ask for a reference from a customer they’ve helped recover from a ransomware incident.

4. “We can’t turn back now; we’ve come too far”

“We have already spent so much time and made significant investments in IT solutions to achieve our business objectives. It wouldn’t make sense at this point to abandon our solutions, given what we’ve already invested.”

Sunk cost

Ransomware threat actors seek out businesses whose IT solutions, when improperly developed, deployed, configured, or maintained, make compromise and infection easy. Such solutions are currently a primary access vector for ransomware, because security can be difficult to retrofit into them. When retrofitting isn’t practical, we are faced with a decision to migrate platforms, which can be costly and disruptive.

This decision point is one of the most difficult for SMBs, as it’s very easy to fall into a sunk cost fallacy — the tendency to follow through on an endeavor if we’ve already invested time, effort, or money into it, whether or not the current costs outweigh the benefits.

It’s easy to look backward at all the work done to get an IT solution to this point and exceedingly difficult to accept a large part of that investment as a sunk cost. The reality is that it doesn’t matter how much time has been invested in IT solutions. If security is not a core feature of the solution, then the long-term risk to an SMB’s business is greater than any sunk cost.

Acknowledge and act

Sunk costs burn because they feel like a failure: knowing what we know now, we should have made a different decision. But new information is always presenting itself, and the security landscape is changing constantly around us. It’s impossible to foresee every shift, so our best defense is to remain agile and pivot when and as necessary. Acknowledge that there will be sunk costs on this journey; allowing them to stand in the way of reasonable action is the real failure.

Moving forward

“There’s a brighter tomorrow that’s just down the road. Do not look back; you are not going that way” – Mary Engelbreit

Realizing your SMB has real cyber risk exposure to ransomware requires overcoming a series of logical fallacies and cognitive biases. Once you understand and accept that reality, it’s imperative not to buy into learned helplessness, because you need not be a victim. An SMB’s size and agility can be an advantage.

From here, re-evaluate your business partnerships and the level of trust you place in them when it comes to cybersecurity. Be willing to accept that prior investments may simply be sunk costs, and that in the long run the benefits of becoming more cyber resilient outweigh the risks of not changing.


WAF mitigations for Spring4Shell

Post Syndicated from Michael Tremante original https://blog.cloudflare.com/waf-mitigations-sping4shell/

A set of high-profile vulnerabilities has been identified affecting the popular Java Spring Framework and related software components, collectively referred to as Spring4Shell.

Four CVEs have been released so far and are being actively updated as new information emerges. In the worst case, these vulnerabilities can result in full remote code execution (RCE) compromise. Each CVE is covered in its own section below.

Customers using Java Spring and related software components, such as the Spring Cloud Gateway, should immediately review their software and update to the latest versions by following the official Spring project guidance.

The Cloudflare WAF team is actively monitoring these CVEs and has already deployed a number of new managed mitigation rules. Customers should review the rules listed below to ensure they are enabled while also patching the underlying Java Spring components.

CVE-2022-22947

A new rule has been developed and deployed for this CVE with an emergency release on March 29:

Managed Rule Spring – CVE:CVE-2022-22947

  • WAF rule ID: e777f95584ba429796856007fbe6c869
  • Legacy rule ID: 100522

Note that the above rule is disabled by default and may cause some false positives. We advise customers to review rule matches or to deploy the rule with a LOG action before switching to BLOCK.
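
As a rough sketch of what enabling that rule in log mode could look like via the Rulesets API (this is illustrative only, not official Cloudflare guidance: the zone ID, API token, and managed ruleset ID are placeholders, and deploying a phase entrypoint ruleset replaces any existing rules in that phase, so adapt it to your configuration):

```python
import os

import requests

ZONE_ID = os.environ["CF_ZONE_ID"]              # placeholder: your zone ID
TOKEN = os.environ["CF_API_TOKEN"]              # placeholder: token with zone WAF edit rights
MANAGED_RULESET_ID = "<managed-ruleset-id>"     # find it with GET /zones/{zone_id}/rulesets
SPRING_RULE_ID = "e777f95584ba429796856007fbe6c869"  # Spring - CVE:CVE-2022-22947

url = (f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}"
       "/rulesets/phases/http_request_firewall_managed/entrypoint")

payload = {
    "rules": [
        {
            "action": "execute",
            "expression": "true",
            "description": "Managed ruleset with the Spring4Shell rule enabled in log mode",
            "action_parameters": {
                "id": MANAGED_RULESET_ID,
                "overrides": {
                    "rules": [
                        # Enable the normally-disabled rule, but only log matches for now.
                        {"id": SPRING_RULE_ID, "enabled": True, "action": "log"},
                    ]
                },
            },
        }
    ]
}

resp = requests.put(url, headers={"Authorization": f"Bearer {TOKEN}"}, json=payload)
resp.raise_for_status()
print("Deployed phase entrypoint ruleset:", resp.json()["result"]["id"])
```

Once you are satisfied that the logged matches are legitimate attack traffic, the same override can be redeployed with the action set to "block".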

CVE-2022-22950

Currently, available PoCs are blocked by the following rule:

Managed Rule PHP – Code Injection

  • WAF rule ID: 55b100786189495c93744db0e1efdffb
  • Legacy rule ID: PHP100011

CVE-2022-22963

Currently, available PoCs are blocked by the following rule:

Managed Rule Plone – Dangerous File Extension

  • WAF rule ID: aa3411d5505b4895b547d68950a28587
  • Legacy WAF ID: PLONE0001

We also deployed a new rule via an emergency release on March 31 (today at time of writing) to cover additional variations attempting to exploit this vulnerability:

Managed Rule Spring – Code Injection

  • WAF rule ID: d58ebf5351d843d3a39a4480f2cc4e84
  • Legacy WAF ID: 100524

Note that the newly released rule is disabled by default and may cause some false positives. We advise customers to review rule matches or to deploy the rule with a LOG action before switching to BLOCK.

Additionally, customers can receive protection against this CVE by deploying the Cloudflare OWASP Core Ruleset with default or better settings on our new WAF. Customers using our legacy WAF will have to configure a high OWASP sensitivity level.

CVE-2022-22965

We are currently investigating this recent CVE and will provide an update to our Managed Ruleset as soon as possible if an applicable mitigation strategy or bypass is found. Please review and monitor our public-facing change log.

[$] Indirect branch tracking for Intel CPUs

Post Syndicated from original https://lwn.net/Articles/889475/

“Control-flow integrity” (CFI) is a set of technologies intended to prevent an attacker from redirecting a program’s control flow and taking it over. One of the approaches taken by CFI is called “indirect branch tracking” (IBT); its purpose is to prevent an attacker from causing an indirect branch (a function call via a pointer variable, for example) to go to an unintended place. IBT for Intel processors has been under development for some time; after an abrupt turn, support for protecting the kernel with IBT has been merged for the upcoming 5.18 release.
