When students go back to school mobile usage goes down

Post Syndicated from João Tomé original https://blog.cloudflare.com/when-students-go-back-to-school-mobile-usage-goes-down/

For many people (especially in the Northern Hemisphere, where about 87% of humans live), September is the “get back to school” (or work) month after a summer break, and that shift is also reflected in Internet traffic, particularly in mobile usage.

Looking at our data (you can see many of these insights in Cloudflare Radar), there’s a global trend: mobile traffic lost share (compared with desktop traffic) in September. The next chart shows that the percentage of Internet traffic coming from mobile devices fell after Monday, September 6, 2021, with a difference of -2% on some days compared with the previous four weeks (August), and of more than -3% in late September.

We can also see that the percentage of desktop traffic increased in September compared to August (we compare complete weeks in both months because there are significant differences between weekdays and weekends).

A few weeks ago, we saw that there are considerable differences between countries regarding the importance of mobile usage. Getting back to work (or office hours) usually means an increase in desktop traffic. In that blog, we highlighted the advantages that mobile devices brought to developing countries, where many people had their first contact with the Internet via a smartphone.

Different calendars to consider

Looking at September 2021, those shifts in Internet trends depend mostly on the countries that start their school year at this time of the year; there are also the effects of COVID lockdowns (more limited this year) to consider.

In the Northern Hemisphere, many countries start school in September after a break during the summer.

Europe: Back to school brings less time to be mobile

Europe is mostly consistent, which makes it easier to check for mobile traffic patterns there. Most countries start school in the first 14 days of September, although Finland, Norway, Sweden and Denmark start in late August (like some states in the US, for example).

In some European countries, the September drop in mobile traffic is clearer (the overall picture on the continent is similar to the worldwide situation we described). Poland, Malta, Portugal, Italy and Spain registered drops of more than 5% in mobile’s share of total Internet traffic during specific periods of a few days in September.

Let’s ‘travel’ to Spain, a country where mobile traffic usually represents 45% of Internet traffic (in August this number was higher). Spanish schools officially opened for the new school year on Monday, September 6, and mobile’s share of traffic fell by more than 5% on some days of that week, a trend that deepened the following week.

Portugal: A public holiday makes mobile usage go up

Portugal shows the same trend as other European countries, but, as the following chart shows, there was a clear increase in mobile traffic percentage on October 5, 2021.

That Tuesday, Cloudflare’s Lisbon office was closed; the same happened across the country, because it was a public holiday, Republic Day. With most people not having to work in the middle of the week, the percentage of mobile traffic rose (most visibly at 19:00 local time).

Downs and ups

In Italy, we can see the same pattern, and it was also in the second week of school that mobile traffic percentage dropped by up to 8%. But by the end of September, it began to normalise to the values of late August.

The trend of mobile traffic returning to the same level as late August is clearer in the Netherlands.

Japan, where the school year starts in April but there’s a summer break through July and August (this year with some COVID-related changes), also shows the same decrease in mobile traffic that we saw in the Netherlands after school resumed on September 6, 2021.

US: Start of the school year influenced by COVID

The United States had an atypical start of the school year because of COVID. Many states pushed the return to school from August to September (New York City started on September 13), and several schools held online classes because of the pandemic, but there was still a drop in mobile traffic percentage, especially after Monday, September 6.

Further north on the continent, Canada (where the school year officially started on September 1) saw mobile traffic lose more of its share after September 6, a trend that grew towards the end of the month.

China saw a decrease in mobile traffic percentage right at the beginning of September (when the school year started), but mobile recovered in the last week of the month.

Russia with different patterns

Then there are countries where the trend goes the other way around. Russia saw an increase (not a decrease, as in most countries of the Northern Hemisphere) in mobile traffic percentage a few days before the school year started. But news reports show that many schools were closed because of COVID and only started to open by September 20 (the next chart shows precisely a decrease of mobile traffic percentage in that week).

The same trend is observed in Cyprus — the only EU country where mobile traffic percentage increases after the first week of school. That could be related to COVID-driven school closures in the past few weeks.

Nigeria: COVID impact

Moving to Africa: Nigeria sits just above the equator and is the most populous country on the continent (population: 206 million), and its school year was officially scheduled to start on September 13. But reports from UNICEF show that school reopening was postponed by a few weeks because of the pandemic situation in Nigeria.

That seems to be in line with what our data shows: mobile traffic percentage grew in the week of September 13 and only started to come down at the end of September and the beginning of October.

Conclusion: September, September, the back to school/work centre

September brings shifts in Internet traffic trends that affect the way people access the Internet, and that goes beyond mobile usage. We can also see it worldwide: Internet traffic grew significantly in September compared to August, by more than 10% on some days (as the graph shows).

It’s not that surprising when you realise that most people on Earth live in the Northern Hemisphere, where August is a summer and vacation month for many (although countries like India have the rainy monsoon season in August and mid-September before autumn, for example). So September is not only the month when, in some countries, students go back to school, but also the month when many people go back to work.

Tracking

Post Syndicated from original http://dni.li/2021/11/05/tracking/

On Friday we placed some online orders.

  1. Groceries from the neighborhood supermarket. Arrived in less than a day, on Saturday morning.
  2. Vitamins and supplements from a store in Sofia. Received on Monday.
  3. Air conditioner filters from Hong Kong. DHL delivered them on Tuesday! From China!!! The other end of the world!
  4. Cat food from Germany. It arrived today, after exactly one week, having passed through Romania as a shortcut.
  5. Winter gear for the kid from Decathlon. They haven’t even confirmed the order yet, let alone shipped the goods…

For the record, Decathlon is 6 kilometers away from us.

On the other hand… Last month I ordered raw nuts from a store I’ve been a regular customer of for almost two years. We waited two weeks without a peep; in the end we called them to ask what was going on, and they said: “We are temporarily not making deliveries, turn to our competitors.” So yes, merchants can be even worse.

GitLab servers are being exploited in DDoS attacks (The Record)

Post Syndicated from original https://lwn.net/Articles/875154/rss

The Record is reporting on massive exploitation of an oldish vulnerability in GitLab instances.

While the purpose of these attacks remained unclear for HN Security, yesterday, Google’s Menscher said the hacked servers were part of a botnet comprising of “thousands of compromised GitLab instances” that was launching large-scale DDoS attacks.

The vulnerability was fixed in April, but evidently a lot of sites have not updated.

GitHub Availability Report: October 2021

Post Syndicated from Scott Sanders original https://github.blog/2021-11-04-github-availability-report-october-2021/

In October, we experienced one incident resulting in significant impact and degraded state of availability for the GitHub Codespaces service.

October 8 17:16 UTC (lasting 1 hour and 36 minutes)

A core Codespaces API response was inadvertently restructured as part of our Codespaces public API launch, impacting existing API clients dependent on a stable schema.

For the duration of the incident, new Codespaces could not be initiated from the Visual Studio Code Desktop client. Connections to the web editor and pre-existing desktop sessions continued to work, but in a degraded state, with the extension displaying an error message while omitting Codespaces metadata from the Remote Explorer view.

The incident was mitigated once we rolled back the regression, at which point all clients could connect again, including with new Codespaces created during the incident. As our monitoring systems did not initially detect the impact of the regression, a subsequent and unrelated deployment was initiated, delaying our ability to revert the change. To ensure similar breaking changes are not introduced in the future, we are investing in tooling to support more rigorous end-to-end testing with the extension’s use of our API. Additionally, we are expanding our monitoring to better align with the user experience across the relevant internal service boundaries.

In summary

We will continue to keep you updated on the progress and investments we’re making to ensure the reliability of our services. To learn more about what we’re working on, check out the GitHub engineering blog.

Introducing cross-account Amazon ECR access for AWS Lambda

Post Syndicated from Eric Johnson original https://aws.amazon.com/blogs/compute/introducing-cross-account-amazon-ecr-access-for-aws-lambda/

This post is written by Brian Zambrano, Enterprise Solutions Architect and Indranil Banerjee, Senior Solution Architect.

In December 2020, AWS announced support for packaging AWS Lambda functions using container images. Customers use the container image packaging format for workloads like machine learning inference made possible by the 10 GB container size increase and familiar container tooling.

Many customers use multiple AWS accounts for application development but centralize Amazon Elastic Container Registry (ECR) images to a single account. Until today, a Lambda function had to reside in the same AWS account as the ECR repository that owned the container image. Cross-account ECR access with AWS Lambda functions has been one of the most requested features since launch.

From today, you can deploy Lambda functions that reference container images from an ECR repository in a different account within the same AWS Region.

Overview

The example demonstrates how to use the cross-account capability using two AWS example accounts:

  1. ECR repository owner: Account ID 111111111111
  2. Lambda function owner: Account ID 222222222222

The high-level process consists of the following steps:

  1. Create an ECR repository using Account 111111111111 that grants Account 222222222222 appropriate permissions to use the image
  2. Build a Lambda-compatible container image and push it to the ECR repository
  3. Deploy a Lambda function in account 222222222222 and reference the container image in the ECR repository from account 111111111111

This example uses the AWS Serverless Application Model (AWS SAM) to create the ECR repository and its repository permissions policy. AWS SAM provides an easier way to manage AWS resources with CloudFormation.

To build the container image and upload it to ECR, use Docker and the AWS Command Line Interface (CLI). To build and deploy a new Lambda function that references the ECR image, use AWS SAM. Find the example code for this project in the GitHub repository.

Create an ECR repository with a cross-account access policy

Using AWS SAM, I create a new ECR repository named cross-account-function in the us-east-1 Region with account 111111111111. In the template.yaml file, RepositoryPolicyText defines the permissions for the ECR Repository. This template grants account 222222222222 access so that a Lambda function in that account can reference images in the ECR repository:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: SAM Template for cross-account-function ECR Repo

Resources:
  HelloWorldRepo:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: cross-account-function
      RepositoryPolicyText:
        Version: "2012-10-17"
        Statement:
          - Sid: CrossAccountPermission
            Effect: Allow
            Action:
              - ecr:BatchGetImage
              - ecr:GetDownloadUrlForLayer
            Principal:
              AWS:
                - arn:aws:iam::222222222222:root
          - Sid: LambdaECRImageCrossAccountRetrievalPolicy
            Effect: Allow
            Action:
              - ecr:BatchGetImage
              - ecr:GetDownloadUrlForLayer
            Principal:
              Service: lambda.amazonaws.com
            Condition:
              StringLike:
                aws:sourceArn:
                  - arn:aws:lambda:us-east-1:222222222222:function:*

Outputs:
  ERCRepositoryUri:
    Description: "ECR RepositoryUri which may be referenced by Lambda functions"
    Value: !GetAtt HelloWorldRepo.RepositoryUri

The RepositoryPolicyText has two statements that are required for Lambda functions to work as expected:

  1. CrossAccountPermission – Allows account 222222222222 to create and update Lambda functions that reference this ECR repository
  2. LambdaECRImageCrossAccountRetrievalPolicy – Lambda eventually marks a function as INACTIVE when not invoked for an extended period. This statement is necessary so that Lambda service in account 222222222222 can pull the image again for optimization and caching.

To deploy this stack, run the following commands:

git clone https://github.com/aws-samples/lambda-cross-account-ecr.git
cd sam-ecr-repo
sam build
AWS SAM build results


sam deploy --guided
AWS SAM deploy results

Once AWS SAM deploys the stack, a new ECR repository named cross-account-function exists. The repository has a permissions policy that allows Lambda functions in account 222222222222 to access the container images. You can verify this in the ECR console for this repository:

Permissions displayed in the console

You can also extend this policy to enable multiple accounts by adding additional account IDs to the Principal and Condition evaluation lists in the CrossAccountPermission and LambdaECRImageCrossAccountRetrievalPolicy statements. Narrowing the ECR permissions policy is a best practice. With this launch, if you are working with multiple accounts in an AWS Organization, we recommend enumerating your account IDs in the ECR permissions policy.

Amazon ECR repository policies use a subset of IAM policies to control access to individual ECR repositories. Refer to the ECR repository policies documentation to learn more.
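
If you manage repository permissions outside of CloudFormation, the same policy document can also be attached with the AWS SDK. Here is a minimal boto3 sketch of that idea, assuming credentials for the repository owner account 111111111111 and reusing the two statements from the template above:

import json
import boto3

# Attach the cross-account repository policy with the SDK instead of AWS SAM.
# Assumes the caller has credentials for account 111111111111 (the repo owner).
ecr = boto3.client("ecr", region_name="us-east-1")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CrossAccountPermission",
            "Effect": "Allow",
            "Action": ["ecr:BatchGetImage", "ecr:GetDownloadUrlForLayer"],
            "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
        },
        {
            "Sid": "LambdaECRImageCrossAccountRetrievalPolicy",
            "Effect": "Allow",
            "Action": ["ecr:BatchGetImage", "ecr:GetDownloadUrlForLayer"],
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Condition": {
                "StringLike": {
                    "aws:sourceArn": "arn:aws:lambda:us-east-1:222222222222:function:*"
                }
            },
        },
    ],
}

# set_repository_policy replaces any existing policy on the repository
ecr.set_repository_policy(
    repositoryName="cross-account-function",
    policyText=json.dumps(policy),
)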

Build a Lambda-compatible container image

Next, you build a container image using Docker and the AWS CLI. For this step, you need Docker, a Dockerfile, and Python code that responds to Lambda invocations.

  1. Use the AWS-maintained Python 3.9 container image as the basis for the Dockerfile:
    FROM public.ecr.aws/lambda/python:3.9
    COPY app.py ${LAMBDA_TASK_ROOT}
    CMD ["app.handler"]

    The code for this example, in app.py, is a Hello World application.

    import json
    def handler(event, context):
        return {
            "statusCode": 200,
            "body": json.dumps({"message": "hello world!"}),
        }
  2. To build and tag the image and push it to ECR using the same name as the repository (cross-account-function) for the image name and 01 as the tag, run:
    $ docker build -t cross-account-function:01 .

    Docker build results

  3. Tag the image for upload to the ECR. The command parameters vary depending on the account id and Region. If you’re unfamiliar with the tagging steps for ECR, view the exact commands for your repository using the View push commands button from the ECR repository console page:
    $ docker tag cross-account-function:01 111111111111.dkr.ecr.us-east-1.amazonaws.com/cross-account-function:01
  4. Log in to ECR and push the image:
    $ aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 111111111111.dkr.ecr.us-east-1.amazonaws.com
    $ docker push 111111111111.dkr.ecr.us-east-1.amazonaws.com/cross-account-function:01

    Docker push results

Deploying a Lambda Function

The last step is to build and deploy a new Lambda function in account 222222222222. The AWS SAM template below, saved to a file named template.yaml, references the ECR image for the Lambda function’s ImageUri. This template also instructs AWS SAM to create an Amazon API Gateway REST endpoint integrating the Lambda function.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Sample SAM Template for sam-ecr-cross-account-demo

Globals:
  Function:
    Timeout: 3
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      ImageUri: 111111111111.dkr.ecr.us-east-1.amazonaws.com/cross-account-function:01
      Architectures:
        - x86_64
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get

Outputs:
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"

Use AWS SAM to deploy this template:

cd ../sam-cross-account-lambda
sam build
AWS SAM build results

sam deploy --guided
SAM deploy results

Now that the Lambda function is deployed, test using the API Gateway endpoint that AWS SAM created:

Testing the endpoint

Because it references a container image with the ImageUri parameter in the AWS SAM template, subsequent deployments must use the --resolve-image-repos parameter:

sam deploy --resolve-image-repos

Conclusion

This post demonstrates how to create a Lambda-compatible container image in one account and reference it from a Lambda function in another account. It shows an example of an ECR policy to enable cross-account functionality. It also shows how to use AWS SAM to deploy container-based functions using the ImageUri parameter.

To learn more about serverless and AWS SAM, visit the Sessions with SAM series and find more resources at Serverless Land.

#ServerlessForEveryone

Trojan Source CVE-2021-42572: No Panic Necessary

Post Syndicated from boB Rudis original https://blog.rapid7.com/2021/11/04/trojan-source-cve-2021-42572/

What is this thing?

Researchers at the University of Cambridge and the University of Edinburgh recently published a paper on an attack technique they call “Trojan Source.” The attack targets a weakness in text-encoding standard Unicode—which allows computers to handle text across many different languages—to trick compilers into emitting binaries that do not actually match the logic visible in source code. In other words, what a developer or security analyst sees in source code with their own eyes could be different from how a compiler interprets it—leading, in effect, to an attack that is not easily discernible. This weakness arises from Unicode’s bidirectional “BiDi” algorithm and affects most compilers, or perhaps more accurately, most editing and code review tooling; the idea that source code will be compiled the way it is displayed to the human eye is a fundamental assumption.

How the attack works.

It is possible, and often necessary, to have both left-to-right and right-to-left glyphs appear in the same sentence. A classic example from O’Reilly’s “Unicode Explained” book shows Arabic embedded in an English sentence and the direction readers familiar with both languages will read the section in:

The official Unicode site also has additional information and examples.

There are a few options available to creators when a document, or a section of a document, needs to support bidirectional content; one of them is to insert “invisible” control characters that dictate the directionality of the text that follows. This is how the “Trojan Source” attack works. Let’s use one of the examples from the paper to illustrate what’s going on.

The screenshot above is from the GitHub repository associated with the paper and shows C source code that looks like it should not print anything when compiled and run. (Also note that there is a very explicit safety banner, which you should absolutely take very seriously in any source code where you see it displayed.)

When we copy that code from the browser and paste it into the popular Sublime Text editor with the Gremlins package installed and enabled, we can see the attempted shenanigans pretty clearly:

The line number sidebar shows where sneaky directives have been inserted, and the usually invisible content is explicitly highlighted and not interpreted, so you can see what’s actually getting compiled. In this case, anyone who runs this program is always treated as an admin. The bottom line is that you cannot fully trust your eyes alone without some assistance.

Note that the cat Linux command (available on Windows via the Windows Subsystem for Linux and on macOS by installing the GNU version of the utility) can also be used to display these invisible gremlins:

cat -A -v commentint-out.c
#include <stdio.h>$
#include <stdbool.h>$
$
int main() {$
    bool isAdmin = false;$
    /*M-bM-^@M-. } M-bM-^AM-&if (isAdmin)M-bM-^AM-) M-bM-^AM-& begin admins only */$
        printf("You are an admin.\n");$
    /* end admins only M-bM-^@M-. { M-bM-^AM-&*/$
    return 0;$
}$
$

Unfortunately, GitHub’s safety banner and code-editor plugins do not scale very well. Thankfully, Red Hat has come to the rescue with a simple Python script which can help us identify potential issues across an entire codebase with relative ease. It should also be possible to use this script in pre-commit hooks or in CI/CD workflows to prevent malicious code from entering into production.
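
The Red Hat script is the more complete option, but the core check is simple enough to sketch. The following is a minimal, hypothetical Python scanner (not the Red Hat script itself) that flags any line containing one of the Unicode BiDi control characters the paper abuses:

import sys

# Unicode bidirectional control characters abused by the Trojan Source technique
BIDI_CONTROLS = {
    "\u202a": "LRE", "\u202b": "RLE", "\u202c": "PDF",
    "\u202d": "LRO", "\u202e": "RLO",
    "\u2066": "LRI", "\u2067": "RLI", "\u2068": "FSI", "\u2069": "PDI",
}

def scan(path):
    """Print every line in the file that contains a BiDi control character."""
    hits = 0
    with open(path, encoding="utf-8", errors="replace") as handle:
        for lineno, line in enumerate(handle, start=1):
            found = [name for char, name in BIDI_CONTROLS.items() if char in line]
            if found:
                hits += 1
                print(f"{path}:{lineno}: contains {', '.join(found)}")
    return hits

if __name__ == "__main__":
    total = sum(scan(p) for p in sys.argv[1:])
    sys.exit(1 if total else 0)  # non-zero exit lets a hook or CI job fail the build

Wired into a pre-commit hook or CI job, a non-zero exit code is enough to stop a change when a directive sneaks into the code base.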

CVSSv3 9.8?! Orly?!

While this isn’t really a “vulnerability” in the traditional sense of the word, it’s been assigned CVE-2021-42574 and given a “Critical” CVSSv3 score of 9.8. (The “PetitPotam” attack chain targeting Windows domains is another example of a technique that was recently assigned a CVE.) It’s a little puzzling why CVE-2021-42574 merited a “Critical” severity score, though. According to our calculations, this weakness should be more like a 5.6 on the CVSSv3 scale.

Should I be super scared?

It’s an interesting attack, and its universality is certainly attention-grabbing. With that said, there are some caveats to both novelty and exploitability. Attack techniques that leverage Unicode’s text expression aren’t new. The CVSS score assigned to this is overblown. To exploit this weakness, an attacker would need to have direct access to developers’ workstations, source code management system, or CI pipelines. If an attacker has direct access to your source code management system, frankly, you probably have bigger problems than this attack. Note that said “attacker” could be a legitimate, malicious insider; those types of attackers are notoriously difficult to fully defend against.

What should I do?

You should apply patches from vendors whose products you rely on just as you normally would, keeping in mind that because this flaw is present in so many tooling implementations, you could apply many patches and still be considered “vulnerable” in other implementations. The better thing to do would be to apply a fairly straightforward mitigation: Disallow BiDi directives in your code base if you’re writing in only English or only Arabic.

As noted above, you should absolutely heed the Unicode safety warnings (if available) in any source code repositories you use, and strongly consider using something like the aforementioned Red Hat Unicode directionality directive checker-script in source code control and continuous integration and deployment workflows.

We advise prioritizing truly critical patches and limiting service and system exposure before worrying about source code-level attacks that require local or physical access.

Anomaly Detection in AWS Lambda using Amazon DevOps Guru’s ML-powered insights

Post Syndicated from Harish Vaswani original https://aws.amazon.com/blogs/devops/anomaly-detection-in-aws-lambda-using-amazon-devops-gurus-ml-powered-insights/

Critical business applications are monitored in order to prevent anomalies from negatively impacting their operational performance and availability. Amazon DevOps Guru is a Machine Learning (ML) powered solution that aids operations by detecting anomalous behavior and providing insights and recommendations for how to address the root cause before it impacts the customer.

This post demonstrates how Amazon DevOps Guru can detect an anomaly following a critical AWS Lambda function deployment and its remediation recommendations to fix such behavior.

Solution Overview

Amazon DevOps Guru lets you monitor resources at the Region or AWS CloudFormation stack level. This post will demonstrate how to deploy an AWS Serverless Application Model (AWS SAM) stack, and then enable Amazon DevOps Guru to monitor the stack.

You will utilize the following services:

  • AWS Lambda
  • Amazon EventBridge
  • Amazon DevOps Guru

The architecture diagram shows an AWS SAM stack containing AWS Lambda and Amazon EventBridge resources, as well as Amazon DevOps Guru monitoring the resources in the AWS SAM stack.

Figure 1: Amazon DevOps Guru monitoring the resources in an AWS SAM stack

This post simulates a real-world scenario where an anomaly is introduced in the AWS Lambda function in the form of latency. While the AWS Lambda function execution time is within its timeout threshold, it is not at optimal performance. This anomalous execution time can result in larger compute times and costs. Furthermore, this post demonstrates how Amazon DevOps Guru identifies this anomaly and provides recommendations for remediation.

Here is an overview of the steps that we will conduct:

  1. First, we will deploy the AWS SAM stack containing a healthy AWS Lambda function with an Amazon EventBridge rule to invoke it on a regular basis.
  2. We will enable Amazon DevOps Guru to monitor the stack, which will show the AWS Lambda function as healthy.
  3. After waiting for a period of time, we will make changes to the AWS Lambda function in order to introduce an anomaly and redeploy the AWS SAM stack. This anomaly will be identified by Amazon DevOps Guru, which will mark the AWS Lambda function as unhealthy, provide insights into the anomaly, and provide remediation recommendations.
  4. After making the changes recommended by Amazon DevOps Guru, we will redeploy the stack and observe Amazon DevOps Guru marking the AWS Lambda function healthy again.

This post also explores utilizing Provisioned Concurrency for AWS Lambda functions and the best practice approach of utilizing Warm Start for variables reuse.

Pricing

Before beginning, note the costs associated with each resource. The AWS Lambda function will incur a fee based on the number of requests and duration, while Amazon EventBridge is free. With Amazon DevOps Guru, you only pay for the data analyzed. There is no upfront cost or commitment. Learn more about the pricing per resource here.

Prerequisites

To complete this post, you need the following prerequisites:

Getting Started

We will set up an application stack in our AWS account that contains an AWS Lambda function and an Amazon EventBridge event. The event will regularly trigger the AWS Lambda function, which simulates a high-traffic application. To get started, please follow the instructions below:

  1. In your local terminal, clone the amazon-devopsguru-samples repository:
git clone https://github.com/aws-samples/amazon-devopsguru-samples.git
  2. In your IDE of choice, open the amazon-devopsguru-samples repository.
  3. In your terminal, change directories into the repository’s subfolder amazon-devopsguru-samples/generate-lambda-devopsguru-insights:
cd amazon-devopsguru-samples/generate-lambda-devopsguru-insights
  4. Utilize the SAM CLI to conduct a guided deployment of lambda-template.yaml:
sam deploy --guided --template lambda-template.yaml
    Stack Name [sam-app]: DevOpsGuru-Sample-AnomalousLambda-Stack
    AWS Region [us-east-1]: us-east-1
    #Shows you resources changes to be deployed and require a 'Y' to initiate deploy
    Confirm changes before deploy [y/N]: y
    #SAM needs permission to be able to create roles to connect to the resources in your template
    Allow SAM CLI IAM role creation [Y/n]: y
    Save arguments to configuration file [Y/n]: y
    SAM configuration file [samconfig.toml]: y
    SAM configuration environment [default]: default

You should see a success message in your terminal, such as:

Successfully created/updated stack - DevOpsGuru-Sample-AnomalousLambda-Stack in us-east-1.

Enabling Amazon DevOps Guru

Now that we have deployed our application stack, we can enable Amazon DevOps Guru.

  1. Log in to your AWS Account.
  2. Navigate to the Amazon DevOps Guru service page.
  3. Click “Get started”.
  4. In the “Amazon DevOps Guru analysis coverage” section, select “Choose later”, then click “Enable”.

Amazon DevOps Guru analysis coverage menu which asks which AWS resources to analyze. The “Choose later” option is selected.

Figure 2.1: Amazon DevOps Guru analysis coverage menu

  5. On the left-hand menu, select “Settings”.
  6. In the “DevOps Guru analysis coverage” section, click on “Manage”.
  7. Select the “Analyze all AWS resources in the specified CloudFormation stacks in this Region” radio button.
  8. The stack created in the previous section should appear. Select it, click “Save”, and then “Confirm”.

Amazon DevOps Guru analysis coverage menu which asks which AWS resources to analyze. The “Analyze all AWS resources in the specified CloudFormation stacks in this Region” option is selected and CloudFormation stacks are displayed to choose from.

Figure 2.2: Amazon DevOps Guru analysis coverage resource selection

Before moving on to the next section, we must allow Amazon DevOps Guru to baseline the resources and benchmark the application’s normal behavior. For our serverless stack with two resources, we recommend waiting two hours before carrying out the next steps. When enabled in a production environment, depending upon the number of resources selected for monitoring, it can take up to 24 hours for Amazon DevOps Guru to complete baselining.

Once baselining is complete, the Amazon DevOps Guru dashboard, an overview of the health of your resources, will display the application stack, DevOpsGuru-Sample-AnomalousLambda-Stack, and mark it as healthy, shown below.

Amazon DevOps Guru Dashboard displays the system health summary and system health overview of each CloudFormation stack. The DevOpsGuru-Sample-AnomalousLambda-Stack is marked as healthy with 0 reactive insights and 0 proactive insights.

Figure 2.3: Amazon DevOps Guru Healthy Dashboard

Enabling SNS

If you would like to set up notifications upon the detection of an anomaly by Amazon DevOps Guru, then please follow these additional instructions.

Amazon DevOps Guru Specify an SNS topic menu which enables notifications for important DevOps Guru events. No SNS topics are currently configured.

Figure 3: Amazon DevOps Guru Specify an SNS topic

Invoking an Anomaly

Once Amazon DevOps Guru has identified the stack as healthy, we will update the AWS Lambda function with suboptimal code. This update simulates a change to a critical business application that causes anomalous performance.

  1. Open the amazon-devopsguru-samples repository in your IDE.
  2. Open the file generate-lambda-devopsguru-insights/lambda-code.py
  3. Uncomment lines 7-8 and save the file. These lines of code will produce an anomaly due to the function’s increased runtime.
  4. Deploy these updates to your stack by running:
cd generate-lambda-devopsguru-insights 
sam deploy --template lambda-template.yaml --stack-name DevOpsGuru-Sample-AnomalousLambda-Stack

Anomaly Overview

Shortly after, Amazon DevOps Guru will generate a reactive insight from the sample stack. This insight contains recommendations, metrics, and events related to anomalous behavior. View the unhealthy stack status in the Dashboard.

Amazon DevOps Guru Dashboard displays the system health summary and system health overview of each CloudFormation stack. The DevOpsGuru-Sample-AnomalousLambda-Stack is marked as unhealthy with 1 reactive insights and 0 proactive insights.

Figure 4.1: Amazon DevOps Guru Unhealthy Dashboard

By clicking on the “Ongoing reactive insight” within the tile, you will be brought to the Insight Details page. This page contains an array of useful information to help you understand and address anomalous behavior.

Insight overview

Utilize this section to get a high-level overview of the insight. You can see that the status of the insight is ongoing, 1 AWS CloudFormation stack is affected, the insight started on Sept-08-2021, it does not have an end time, and it was last updated on Sept-08-2021.

Amazon DevOps Guru Insight Details page has multiple information sections. The Insight overview is the first section which displays the status is ongoing, there is 1 affected stack, the start time and last updated time. The end time is empty as the insight is ongoing.

Figure 4.2: Amazon DevOps Guru Ongoing Reactive Insight Overview

Aggregated metrics

The Aggregated metrics tab displays metrics related to the insight. The table is grouped by AWS CloudFormation stacks and subsequent resources that created the metrics. In this example, the insight was a product of an anomaly in the “duration p50” metric generated by the “DevOpsGuruSample-AnomalousLambda” AWS Lambda function.

AWS Lambda duration metrics derive from a percentile statistic, which is used to exclude outlier values that would otherwise skew average and maximum statistics. The P50 statistic is typically a great middle estimate: 50% of the observed values exceed the P50 value and 50% fall below it.
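
As a quick, concrete illustration of the statistic itself (hypothetical numbers, not how CloudWatch computes its metrics), the p50 of a set of duration samples is simply their median, so a single slow outlier barely moves it, while the maximum tracks the outlier completely:

import statistics

# Hypothetical Lambda duration samples in milliseconds
durations_ms = [102, 98, 110, 3400, 101, 99, 97, 105]

print(statistics.median(durations_ms))  # p50 -> 101.5, barely affected by the 3400 ms outlier
print(max(durations_ms))                # maximum -> 3400, dominated by the outlier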

The red lines on the timeline indicate spans of time when the “duration p50” metric emitted unusual values. Click the red line in the timeline in order to view detailed information.

  • Choose View in CloudWatch to see how the metric looks in the CloudWatch console. For more information, see Statistics and Dimensions in the Amazon CloudWatch User Guide.
  • Hover over the graph in order to view details about the anomalous metric data and when it occurred.
  • Choose the box with the downward arrow to download a PNG image of the graph.

Amazon DevOps Guru Insight Details page contains aggregated metrics. The Duration p50 metric is selected and displayed in graph form.

Figure 4.3: Amazon DevOps Guru Ongoing Reactive Insight Aggregated Metrics

Graphed anomalies

The Graphed anomalies tab displays detailed graphs for each of the insight’s anomalies. Because our insight consisted of a single anomaly, there is one tile with details about the unusual behavior detected in related metrics.

  • Choose View all statistics and dimensions in order to see details about the anomaly. In the window that opens, you can:
      • Choose View in CloudWatch in order to see how the metric looks in the CloudWatch console.
      • Hover over the graph to view details about the anomalous metric data and when it occurred.
      • Choose Statistics or Dimension in order to customize the graph’s display. For more information, see Statistics and Dimensions in the Amazon CloudWatch User Guide.

Amazon DevOps Guru Insight Details page contains Graphed anomalies. The p50 metric of the AWS/Lambda duration in displayed in graph form.

Figure 4.4: Amazon DevOps Guru Ongoing Reactive Insight Graphed Anomaly

Related events

In Related events, view AWS CloudTrail events related to your insight. These events help you understand, diagnose, and address the underlying cause of the anomalous behavior. In this example, the events are:

  1. CreateFunction – when we created and deployed the AWS SAM template containing our AWS Lambda function.
  2. CreateChangeSet – when we pushed updates to our stack via the AWS SAM CLI.
  3. UpdateFunctionCode – when the AWS Lambda function code was updated.

Continuation of figure 4.4

Figure 4.5: Amazon DevOps Guru Ongoing Reactive Insight Related Events

Recommendations

The final section in the Insight Detail page is Recommendations. You can view suggestions that might help you resolve the underlying problem. When Amazon DevOps Guru detects anomalous behavior, it attempts to create recommendations. An insight might contain one, multiple, or zero recommendations.

In this example, the Amazon DevOps Guru recommendation matches the best resolution to our problem: provisioned concurrency.

Amazon DevOps Guru Insight Details page contains Recommendations. The suggested recommendation is to configure provisioned concurrency for the AWS Lambda.

Figure 4.6: Amazon DevOps Guru Ongoing Reactive Insight Recommendations

Understanding what happened

Amazon DevOps Guru recommends enabling Provisioned Concurrency for the AWS Lambda function in order to help it scale better when responding to concurrent requests. As mentioned earlier, Provisioned Concurrency keeps functions initialized by creating the requested number of execution environments so that they can respond to invocations. This is a suggested best practice when building high-traffic applications, such as the one that this sample is mimicking.

In the anomalous AWS Lambda function, we have sample code that is causing delays. This is analogous to application initialization logic within the handler function. It is a best practice for this logic to live outside of the handler function. Because we are mimicking a high-traffic application, the expectation is to receive a large number of concurrent requests. Therefore, it may be advisable to turn on Provisioned Concurrency for the AWS Lambda function. For Provisioned Concurrency pricing, refer to the AWS Lambda Pricing page.
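
That pattern is the standard Lambda warm-start idiom: run slow initialization once per execution environment, outside the handler, so every invocation served by that environment reuses it. A generic sketch of the idiom (hypothetical names, not the sample’s actual lambda-code.py):

import json
import time

def _initialize():
    # Stand-in for slow setup work: loading a model, opening connections, etc.
    time.sleep(2)
    return {"ready": True}

# Module-level code runs once per execution environment (the cold start)
# and is then reused by every invocation that environment serves.
EXPENSIVE_RESOURCE = _initialize()

def lambda_handler(event, context):
    # The handler stays fast: it only reads the already-initialized resource.
    return {
        "statusCode": 200,
        "body": json.dumps({"ready": EXPENSIVE_RESOURCE["ready"]}),
    }

Provisioned Concurrency then goes one step further by keeping a configured number of those initialized environments ready before traffic arrives.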

Resolving the Anomaly

To resolve the sample application’s anomaly, we will update the AWS Lambda function code and enable provisioned concurrency for the AWS Lambda infrastructure.

  1. Open the sample repository in your IDE.
  2. Open the file generate-lambda-devopsguru-insights/lambda-code.py.
  3. Move lines 7-8, the code forcing the AWS Lambda function to respond slowly, above the lambda_handler function definition.
  4. Save the file.
  5. Open the file generate-lambda-devopsguru-insights/lambda-template.yaml.
  6. Uncomment lines 15-17, the code enabling provisioned concurrency in the sample AWS Lambda function.
  7. Save the file.
  8. Deploy these updates to your stack.
cd generate-lambda-devopsguru-insights 
sam deploy --template lambda-template.yaml --stack-name DevOpsGuru-Sample-AnomalousLambda-Stack       

After completing these steps, the duration P50 metric will emit more typical results, thereby causing Amazon DevOps Guru to recognize the anomaly as fixed, and then close the reactive insight as shown below.

Amazon DevOps Guru Insight Summary page displays the reactive insight has been closed.

Figure 5: Amazon DevOps Guru Closed Reactive Insight

Clean Up

When you are finished walking through this post, you will have multiple test resources in your AWS account that should be cleaned up or un-provisioned in order to avoid incurring any further charges.

  1. Open the sample repository in your IDE.
  2. Run the below AWS SAM CLI command to delete the sample stack.
cd generate-lambda-devopsguru-insights 
sam delete --stack-name DevOpsGuru-Sample-AnomalousLambda-Stack 

Conclusion

As seen in the example above, Amazon DevOps Guru can detect anomalous behavior in an AWS Lambda function, tie it to relevant events that introduced that anomaly, and provide recommendations for remediation by using its pre-trained ML models. All of this was possible by simply enabling Amazon DevOps Guru to monitor the resources with minimal configuration changes and no previous ML expertise. Start using Amazon DevOps Guru today.

About the authors

Harish Vaswani

Harish Vaswani is a Senior Cloud Application Architect at Amazon Web Services. He specializes in architecting and building cloud native applications and enables customers with best practices in their cloud journey. He is a DevOps and Machine Learning enthusiast. Harish lives in New Jersey and enjoys spending time with his family, filmmaking and music production.

Caroline Gluck

Caroline Gluck is a Cloud Application Architect at Amazon Web Services based in New York City, where she helps customers design and build cloud native Data Science applications. Caroline is a builder at heart, with a passion for serverless architecture and Machine Learning. In her spare time, she enjoys traveling, cooking, and spending time with family and friends.

Hands-On IoT Hacking: Rapid7 at DefCon 29 IoT Village, Part 3

Post Syndicated from Deral Heiland original https://blog.rapid7.com/2021/11/04/hands-on-iot-hacking-rapid7-at-defcon-29-iot-village-part-3/

In our first post in this series, we covered the setup of Rapid7’s hands-on exercise at Defcon 29’s IoT Village. Last week, we discussed how to determine the UART status of the header we created and how to actually start hacking on the IoT device. The goal in this next phase of the IoT hacking exercise is to turn the console back on.

To accomplish this, we need to reenter the bootargs variable without the console setting. To change the bootargs variable, the “setenv” command should be used. In the case of this exercise, enter the following command as shown in Figure 16. You can see that the “console=off” has been removed. This will overwrite the current bootargs environment variable setting.

Figure 16: setenv command

Once you’ve run this command, we recommend verifying that you’ve correctly made the changes to the bootargs variable by running the “printenv” command again and observing that the output shows that “console=off” has been removed. It is very common to accidentally mistype an environment variable, which will cause errors on reboot or just create an entirely new variable that has no usable value. The correct bootargs variable line should read as shown in Figure 17:

Figure 17: bootargs setting

Once you’re sure the changes made to bootargs are correct, you’ll need to save the environment variable settings. To do this, you’ll use the “saveenv” command. Enter this command in the UART console, and hit enter. If you miss this step, then none of the changes made to the environment variables of U-Boot will be saved and all will be lost on reboot.

Running saveenv should cause the U-Boot environment variables to be written to flash and return a response indicating they are being saved. An example of this is shown in Figure 18:

Figure 18: saveenv command response

Reboot and capture logs for review

Once you’ve made all the needed changes to the U-Boot environment variables and saved them, you can reboot the device, observe console logs from the boot process, and save the console log data to a file for further review. The boot log data from the console will play a critical role in the next steps as you work toward gaining full root access to the device.

Next, reboot the system. You can do this in a couple of different ways. You can either type the “reset” command within the U-Boot console and hit enter, which tells the MCU to reset and causes the system to restart, or just cycle the power on the device. After entering the reset command or power cycling the device, the device should reboot. The console should now be unlocked, and you should see the kernel boot up. If you still do not have a functioning console, you either entered the wrong data for bootargs or failed to save the settings with the “saveenv” command. I must admit I am personally guilty of both, many times.

During the Defcon IoT Village exercise, we had the attendees capture console logs to a file for review using the following process in GtkTerm. If you are using a different serial console application, the process for capturing and saving logs will be different.

In GtkTerm, to capture logs for review, select “Log” on the task bar pulldown menu on GtkTerm as shown below in Figure 19:

Figure 19: Enable logging

Once “Log” is selected, a window will pop up. From here, you need to select the file to write the logs out to. In this case, we had the attendees select the defcon_log.txt file on the laptop’s desktop, as shown below in Figure 20:

Figure 20: Select defcon_log.txt file

Once you’ve selected a log file, you should now start capturing logs to that file. From here, the device can be powered back on or restarted to start capturing logs for review. Let the system boot up completely. Once it appears to be up and running, you can turn off logging by selecting “Log” and then selecting “Stop” in the dropdown, as shown in Figure 21:

Figure 21: Stop log capture

Once logging is stopped, you can open the captured log file and review the contents. During the Defcon IoT Village exercise, we had the participants search for the keyword “failsafe” in the captured logs. Searching for failsafe should take you to the log entry containing the line:

  • “Press the [f] key and hit [enter] to enter failsafe mode”

This is a prompt that allows you to hit the “f” key followed by return to boot the system into single-user mode. You won’t find this mode on all IoT devices, but you will find it on some, as is the case with the LUMA device. Single-user mode starts the system with limited functionality and is often used for conducting maintenance on an operating system. And yes, this is root-level access to the device, but with none of the critical system functions running that would enable network services, USB access, and the applications that run as part of the device’s normal operation. Our goal later is to use this access, and the data that follows, to eventually gain root access on the fully running system.

There is also another critical piece of data in the log file shortly after the failsafe mode prompt, which we need to note. Approximately 8 lines below the failsafe prompt, there is a reference to “rootfs_data”, as shown in Figure 22:

Figure 22: Log review

The piece of data we need from this line is the Unsorted Block Image File System (UBIFS) device number and the volume number. This will let us properly mount the rootfs_data partition later. With the LUMA, we found this to be one of the following two values:

  • Device 0, volume 2
  • Device 0, volume 3

Boot into single-user mode

Now that we’ve reviewed the captured logs and identified the failsafe mode prompt and the UBIFS mount data, the next step is to reboot the system into single-user mode so we can work on getting full root access to the device. To do this, you’ll need to monitor the system booting up in the UART console, watching for the failsafe mode prompt as shown below in Figure 23:

Figure 23: Failsafe mode prompt

When this prompt shows up, you will only have a couple of seconds to press the “f” key followed by the return key to get the system to launch into single-user root access mode. If you miss this, you’ll need to reboot and start over. If you’re successful, the UART console should show the following prompt (Figure 24):

Figure 24: Single-user mode

In single-user mode, you’ll have root access, although most of the partitions, applications, network services, and associated functions will not be loaded or running. Our goal will be to make changes so you can boot the device up into full operating system mode and have root access.

In our fourth and final installment of this series, we’ll go over how to configure user accounts, and finally, how to reboot the device and login. Check back with us next week!

ksmbd: a new in-kernel SMB server (SAMBA+ blog)

Post Syndicated from original https://lwn.net/Articles/875132/rss

Over at the SAMBA+ blog, the performance of the new ksmbd kernel SMB server is compared with that of Samba running in user space:

ksmbd claims performance improvements on a wide range of benchmarks: the graphs on this page show a doubling of performance on some tests. There was also the notion that an in-kernel server is likely an easier place to support SMB Direct, which uses RDMA to transfer data between systems.

Clearly, those number are impressive, but at the same time recent improvements in Samba’s IO performance put this into perspective: by leveraging the new “io_uring” Linux API Samba is able to provide roughly 10x the throughput compared to ksmbd.

Time will tell whether it’s better to reside in kernel-space like ksmbd or in user-space like Samba in order to squeeze the last bit of performance out of the available hardware.

There are two graphs that show some impressive results for Samba.

Deepen Your Knowledge of Architecting for Sustainability at re:Invent

Post Syndicated from Margaret O'Toole original https://aws.amazon.com/blogs/architecture/deepen-your-knowledge-of-architecting-for-sustainability-at-reinvent/

This year, AWS customers took on sustainability challenges, including energy efficient building management, environmental and social governance reporting, and near-real-time renewable energy plant monitoring. Underscoring these unique and compelling projects is a need to understand, at the developer level, how to architect sustainably. So, this year, we’re excited to share sustainability content at re:Invent to inspire teams, to learn from each other, to get hands on, and to see what‘s possible when we combine technology with sustainability.

I recommend that all architects, developers, and anyone involved with AWS workloads tune in to ARC325, Architecting for sustainability. In this session, we’ll explore why teams would want to architect sustainably, and then dive into practical actions teams can apply to their own workloads. (You’ll also learn a bit about what AWS does to optimize for sustainability). ARC325 will feature an AWS enterprise customer and their engineering teams’ sustainability journey. Sustainability is an important non-functional requirement for their engineering teams, so they created a process to factor sustainability into their workload planning and execution. This will be an engaging session for engineering teams seeking to include sustainability into their workflows.

I’m personally very excited about STG209, Building a sustainable infrastructure with AWS Cloud storage. It will cover different storage options at AWS and how using efficient storage options strategically can help teams lower their storage carbon footprint.

I also recommend reserving a spot in Nat Sahlstrom’s session, ARC206, Sustainability in AWS global infrastructure. Nat is a Director, AWS energy strategy, and will share updates on AWS sustainability efforts, including updates on Amazon’s path to reach 100% renewable energy by 2025. Nat will also cover some key topics, such as water stewardship and how we work with communities in which our data centers are built.

Software developers and teams will definitely want to tune in for OPN301, Using Rust to minimize environmental impact. Studies have shown that Rust consumes less energy than other programming languages such as Python (98% less), Java (50% less), and C++ (23% less). In OPN301, Chairwoman of the Rust Foundation, Shane Miller, and AWS teammate, Carl Lerche, will explain how the efficiency gains of Rust can enable more workloads per watt and how to start using Rust in your own projects.

Explore hands-on sustainability activities

Check out our AWS GameDay – Reuse, Recycle, Reduce, Rearchitect. GameDays are collaborative learning exercises that test your skills as you implement AWS solutions (or fix broken ones) to solve real-world challenges in a gamified, risk-free environment. In Reuse, Recycle, Reduce, Rearchitect, teams play the role of new hires in a fictitious company (Unicorn.Rentals). Unicorn.Rentals rents unicorns to travelers around the world and has just made a holistic commitment to transform its business for sustainability. Teams will help Unicorn.Rentals find and reduce hotspots in their AWS usage and help optimize architectures for sustainability by better matching resource consumption to need, selecting the right instance to get the job done, and improving the efficiency of their software. Then, teams will help Unicorn.Rentals predict (and avoid) wildfires and understand and rank their suppliers based on sustainability commitments. After the GameDay, teams will understand practically how to architect sustainably and see some ways IT can support the sustainability goals of the broader organization.

For those looking for a creative build outlet, we will have the Code Green! Hackathon, where entrants can build sustainability solutions using data from the Amazon Sustainability Data Initiative (ASDI) or the AWS Data Exchange. All code resulting from the hackathon will be open sourced. You can see previous hack results from 2019 here.

Celebrate AWS customers innovating for sustainability

Don’t miss our special premiere video screening of AWS customers addressing climate change and driving sustainability! Climate Next is an original documentary series about AWS customers using AWS technologies to fight climate change. The short-form documentaries feature inspiring stories and engaging cinematography. We will host a Q&A reception with featured customers and the creative producer immediately following the screening. Join us for this special premiere screening at re:Invent Tuesday, November 30 at 6:30-8:00 pm PDT.

There’s even more going on, so we’ve put together a re:Invent sustainability attendee guide to make it easy to find all sessions related to sustainability. We hope you can join us – either in person or virtually!

We look forward to seeing you there!

Combining preprocessing with storing only trend data for high-frequency monitoring

Post Syndicated from Arturs Lontons original https://blog.zabbix.com/combining-preprocessing-with-storing-only-trend-data-for-high-frequency-monitoring/16568/

There are many design choices to consider when we build our monitoring environment for high-frequency monitoring. How do we minimize the performance impact? What should our data retention policies be, with storage space in mind? What out-of-the-box features are available to solve these potential problems?
In this blog post, we will discuss when you should use preprocessing, when it is better to use the “Do not keep history” option for your metrics, and what the pros and cons of each approach are.

Throttling and other preprocessing steps

We’ve discussed throttling previously as the go-to approach for high-frequency monitoring. Indeed, with throttling you can discard repeated values, and you can do so with a heartbeat. This is extremely useful for metrics that come as discrete values – service states, network port statuses, and so on.
Example of throttling with and without heartbeat
In addition, starting from Zabbix 4.2, all preprocessing can also be performed by Zabbix proxies. This means we can discard the repeated values before they ever reach the Zabbix server, which helps both with performance (fewer metrics to insert into the Zabbix server DB) and with DB size (fewer metrics stored in the DB), improving overall Zabbix performance.
There are a few caveats with this approach: since the metrics get discarded before they reach the Zabbix server, triggers will not react to these metrics (this is where having a heartbeat is useful), and, since trends are calculated by the Zabbix server based on the received history data, trend information may be missing for these metrics. Keep in mind that this applies not only to throttling rules – any preprocessing can be done on the proxy, and any preprocessing rule can be used to transform your data.
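To make this concrete, below is a minimal Python sketch of the “discard unchanged with heartbeat” logic described above (an illustration only, not the actual Zabbix implementation): repeated values are dropped, but at least one value is always kept per heartbeat interval.

```python
from typing import Iterable, List, Tuple


def throttle(samples: Iterable[Tuple[float, str]], heartbeat: float) -> List[Tuple[float, str]]:
    """Keep a sample only if its value changed or the heartbeat interval has passed.

    A conceptual sketch of "discard unchanged with heartbeat" preprocessing,
    not the actual Zabbix implementation.
    """
    kept: List[Tuple[float, str]] = []
    last_value = None
    last_kept_ts = None
    for ts, value in samples:
        changed = value != last_value
        heartbeat_due = last_kept_ts is None or (ts - last_kept_ts) >= heartbeat
        if changed or heartbeat_due:
            kept.append((ts, value))
            last_value = value
            last_kept_ts = ts
    return kept


# A service state polled once per second stays "up" for two minutes:
states = [(float(t), "up") for t in range(120)]
print(len(throttle(states, heartbeat=60)))  # 2 samples survive: the first one plus one heartbeat
```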

Understanding “Do not keep history” option

The behavior of the “Do not keep history” option, which we can define when configuring an item, is a bit different, though. If an item is collected by a proxy and configured with “Do not keep history”, the history won’t always get discarded! There are a couple of reasons for this.
  • First off, let’s not forget that some of our values can populate host inventory! If a particular item is configured to populate an inventory field, it will still be forwarded to the Zabbix server, but it will not get stored in the history tables.
  • If the item does not populate an inventory field, text data (character, log, and text types) will indeed get discarded before reaching the Zabbix server, but numeric values (both float and integer) will still be forwarded to the server so that trend information can be derived from them. Mind that the numeric data will still not get stored in the history tables; only trends will be available for these items.

Note: This behavior has been properly implemented starting from Zabbix 5.2. See ZBX-17548

Setting the “Do not keep history” option for an item
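As a quick recap of the behavior described above, here is a small Python sketch that restates it as a decision table. It is purely illustrative (the value-type names and the function are our own shorthand, not Zabbix code) and assumes a proxy-collected item with “Do not keep history” enabled on Zabbix 5.2 or later.

```python
def forwarded_to_server(value_type: str, populates_inventory: bool) -> dict:
    """Restate the proxy behavior for items with "Do not keep history" (Zabbix >= 5.2).

    Illustrative only; value types are named float/unsigned/character/log/text here
    as shorthand for the Zabbix item value types.
    """
    numeric = value_type in ("float", "unsigned")
    if populates_inventory:
        # Forwarded so the inventory field can be filled, but never written to history.
        return {"sent_to_server": True, "stored_in_history": False, "trends_available": numeric}
    if numeric:
        # Forwarded so the server can build trends; the history itself is discarded.
        return {"sent_to_server": True, "stored_in_history": False, "trends_available": True}
    # Character, log, and text values are dropped on the proxy.
    return {"sent_to_server": False, "stored_in_history": False, "trends_available": False}


print(forwarded_to_server("float", populates_inventory=False))
print(forwarded_to_server("text", populates_inventory=False))
```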

Using trend functions with high-frequency monitoring

With the specifics of “Do not keep history” in mind, we should now recall that, starting from Zabbix 5.2, we have trend functions at our disposal!
Trend functions such as trendavg, trendcount, trendmax, trendmin, and trendsum allow us to perform different kinds of trend calculations – from counting the number of trend values to retrieving the min/max/avg trend value for a time period.
This means that if we require only the metric trend for specific time periods (hours, days, weeks, etc.), we can use these trend functions together with the “Do not keep history” option, thus discarding unnecessary data and improving our Zabbix server performance!
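As a rough mental model of what these functions return, the following Python sketch groups raw samples into hourly buckets and computes the same kinds of aggregates. It is only an illustration: in Zabbix, trends are maintained server-side and queried through the trend functions.

```python
from collections import defaultdict
from statistics import mean


def hourly_trends(samples):
    """Group (unix_timestamp, value) samples into hourly buckets and compute the
    aggregates exposed by the Zabbix trend functions. A conceptual sketch only;
    Zabbix maintains trends server-side."""
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts - ts % 3600].append(value)  # truncate to the start of the hour
    return {
        hour: {
            "trendavg": mean(values),
            "trendcount": len(values),
            "trendmax": max(values),
            "trendmin": min(values),
            "trendsum": sum(values),
        }
        for hour, values in buckets.items()
    }


# e.g. incoming-traffic samples collected once a minute for two hours
samples = [(1_600_000_000 + i * 60, 100 + i) for i in range(120)]
for hour, trend in sorted(hourly_trends(samples).items()):
    print(hour, trend)
```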
There are two approaches to using trend functions:
  • If you wish to collect and display the trend data, you need to create the item which will collect the metrics (say, a net.if.in Agent item for collecting incoming network traffic) and then create a separate calculated item that uses a trend function to calculate the avg/min/max value of the trend over a time period. The original item can then have the “Do not keep history” option selected.

trendavg item for calculating hourly trends from the net.if.in[ifHCInOctets.5] item

 

  • If you wish to simply define triggers and react to long-term trends, and you don’t need to collect the trend values, then you can skip creating the calculated item and simply use the trend function on the original item in the trigger.

This trigger fires if the hourly average trend value exceeds 100M.
Note: In this case only the original item is required.

By combining these approaches in our environment – using preprocessing when we wish to discard or transform the data, and opting out of storing history data whenever that is appropriate – we can minimize the performance impact on our Zabbix instance. Add a layer of distributed Zabbix proxies on top of this, and you can truly achieve a large, scalable Zabbix infrastructure optimized for high-frequency ingestion and processing of your data.

Horgan: Linux x86 Program Start Up

Post Syndicated from original https://lwn.net/Articles/875108/rss

Patrick Horgan explains the process of starting a program on Linux in great detail.

Well, __libc_start_main calls __libc_init_first, who immediately uses secret inside information to find the environment variables just after the terminating null of the argument vector and then sets a global variable __environ which __libc_start_main uses thereafter whenever it needs it including when it calls main. After the envp is established, then __libc_start_main uses the same trick and surprise! Just past the terminating null at the end of the envp array, there’s another vector, the ELF auxiliary vector the loader uses to pass some information to the process. An easy way to see what’s in there is to set the environment variable LD_SHOW_AUXV=1 before running the program.
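For readers who want to poke at this themselves, here is a small Python sketch (our own example, not from the article) that asks the glibc dynamic loader to dump the auxiliary vector via LD_SHOW_AUXV and then parses /proc/self/auxv directly. It assumes a glibc-based Linux system.

```python
import os
import struct
import subprocess

# 1) Ask the glibc dynamic loader to dump the ELF auxiliary vector
#    (AT_PHDR, AT_ENTRY, AT_RANDOM, ...) before the target program runs.
subprocess.run(["/bin/true"], env=dict(os.environ, LD_SHOW_AUXV="1"), check=True)

# 2) Alternatively, parse /proc/self/auxv: each entry is a pair of
#    native machine words (type, value), terminated by AT_NULL (type 0).
with open("/proc/self/auxv", "rb") as f:
    data = f.read()

word = struct.calcsize("N")          # size of one native unsigned word
for offset in range(0, len(data), 2 * word):
    a_type, a_val = struct.unpack_from("NN", data, offset)
    if a_type == 0:                  # AT_NULL marks the end of the vector
        break
    print(f"auxv type={a_type:2d} value={a_val:#x}")
```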

[$] 5.16 Merge window, part 1

Post Syndicated from original https://lwn.net/Articles/874683/rss

As of this writing, Linus Torvalds has pulled exactly 6,800 non-merge changesets into the mainline repository for the 5.16 kernel release. That is probably a little over half of what will arrive during this merge window, so this is a good time to catch up on what has been pulled so far. There are many significant changes and some large-scale restructuring of internal kernel code, but relatively few ground-breaking new features.

Security updates for Thursday

Post Syndicated from original https://lwn.net/Articles/875106/rss

Security updates have been issued by Fedora (ansible, chromium, kernel, mupdf, python-PyMuPDF, rust, and zathura-pdf-mupdf), openSUSE (qemu and webkit2gtk3), Red Hat (firefox and kpatch-patch), Scientific Linux (firefox), SUSE (qemu, tomcat, and webkit2gtk3), and Ubuntu (firefox and thunderbird).

[Security Nation] Pete Cooper and Irene Pontisso of the UK Cabinet Office on Their Cybersecurity Culture Competition

Post Syndicated from Rapid7 original https://blog.rapid7.com/2021/11/04/pete-cooper-and-irene-pontisso-of-the-uk-cabinet-office-on-their-cybersecurity-culture-competition/


In this special bonus episode of Security Nation, Jen and Tod chat with Pete Cooper and Irene Pontisso from the UK Cabinet Office about their current competition aiming to promote cybersecurity culture among small businesses. They highlight their 9 hypotheses, which touch on the role of human factors, the distinction between cyber culture and security culture, and the importance of leadership. They also chat about why they decided to get help validating these ideas through a competition format (the “Bakeoff Approach,” as Irene calls it) to promote collaborative thinking and get a sense of what organizations are doing on these issues today.

The deadline to apply for the competition is fast approaching on Monday, November 8, and winners will be awarded contracts to carry out the competition over 12 weeks, beginning in late November. Check out the Invitation to Tender to submit your entry!

Pete Cooper


Pete Cooper is Deputy Director Cyber Defence within the Government Security Group in the UK Cabinet Office, where he oversees the whole of the government sector and is responsible for the Government Cyber Security Strategy, standards, and policies, as well as responding to serious or cross-government cyber incidents. With a diverse military, private-sector, and government background, he has worked on everything from cyber operations, global cyber security strategies, and advising on the nature of state-vs.-state cyber conflict to leading cybersecurity change across industry, the public sector, and the global hacker community, including founding and leading the Aerospace Village at DEF CON. A fast jet pilot turned cyber operations advisor who, on leaving the military in 2016, founded the UK’s first multi-disciplinary cyber strategy competition, he is passionate about tackling national and international cybersecurity challenges through better collaboration, diversity, and innovative partnerships. He has a postgraduate degree in Cyberspace Operations from Cranfield University, is a Non-Resident Senior Fellow at the Cyber Statecraft Initiative of the Scowcroft Centre for Strategy and Security at the Atlantic Council, and is a Visiting Senior Research Fellow in the Department of War Studies, King’s College London.

Irene Pontisso


Irene is Assistant Head of Engagement and Information within the Government Security Group in the UK Cabinet Office. Irene is responsible for the design and strategic oversight of cross-government security education, awareness, and culture-related initiatives. She is also responsible for leading cross-government engagement and press activities for Government Security and the Government Chief Security Officer. Irene started her career in policy and international relations through her roles at the United Nations Platform for Space-Based Information for Disaster Management and Emergency Response (UN-SPIDER). Irene also has significant industry and third-sector experience, having partnered with the world’s leading law firms to provide free access to legal advice for NGOs on international development projects. She also has experience leading large-scale exhibitions and policy research in corporate environments. She holds an MSc in International Relations from the University of Bristol and a BSc from the University of Turin.

Show notes

Like the show? Want to keep Jen and Tod in the podcasting business? Feel free to rate and review with your favorite podcast purveyor, like Apple Podcasts.


