Tag Archives: displays

How to Automatically Revert and Receive Notifications About Changes to Your Amazon VPC Security Groups

Post Syndicated from Rob Barnes original https://aws.amazon.com/blogs/security/how-to-automatically-revert-and-receive-notifications-about-changes-to-your-amazon-vpc-security-groups/

In a previous AWS Security Blog post, Jeff Levine showed how you can monitor changes to your Amazon EC2 security groups. The methods he describes in that post are examples of detective controls, which can help you determine when changes are made to security controls on your AWS resources.

In this post, I take that approach a step further by introducing an example of a responsive control, which you can use to automatically respond to a detected security event by applying a chosen security mitigation. I demonstrate a solution that continuously monitors changes made to an Amazon VPC security group; if a new ingress rule (also known as an inbound rule) is added to that security group, the solution removes the rule and then sends you a notification after the change has been automatically reverted.

The scenario

Let’s say you want to reduce your infrastructure complexity by replacing your Secure Shell (SSH) bastion hosts with Amazon EC2 Systems Manager (SSM). SSM allows you to run commands on your hosts remotely, removing the need to manage bastion hosts or rely on SSH to execute commands. To support this objective, you must prevent your staff members from opening SSH access in your web server’s Amazon VPC security group. If one of your staff members does modify the VPC security group to allow SSH access, you want the change to be automatically reverted, and you want to receive a notification that the change was reverted. If you are not yet familiar with security groups, see Security Groups for Your VPC before reading the rest of this post.

Solution overview

This solution begins with a directive control to mandate that no web server should be accessible using SSH. The directive control is enforced using a preventive control, which is implemented using a security group configuration that does not allow ingress on port 22 (the port typically used for SSH). The detective control is a “listener” that identifies any changes made to your security group. Finally, the responsive control reverts changes made to the security group and then sends a notification of this security mitigation.

The detective control, in this case, is an Amazon CloudWatch event that detects changes to your security group and triggers the responsive control, which in this case is an AWS Lambda function. I use AWS CloudFormation to simplify the deployment.

The following diagram shows the architecture of this solution.

Solution architecture diagram

Here is how the process works:

  1. Someone on your staff adds a new ingress rule to your security group.
  2. A CloudWatch event that continually monitors changes to your security groups detects the new ingress rule and invokes a designated Lambda function (with Lambda, you can run code without provisioning or managing servers).
  3. The Lambda function evaluates the event to determine whether you are monitoring this security group and, if so, reverts the new security group ingress rule (a sketch of such a function follows this list).
  4. Finally, the Lambda function sends you an email to let you know what the change was, who made it, and that the change was reverted.
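
For illustration, here is a minimal sketch in Python of what such a Lambda function could look like. It is a sketch under assumptions, not the function the stack actually deploys: the field names follow the CloudTrail record format for AuthorizeSecurityGroupIngress, and the MONITORED_GROUP_ID and SNS_TOPIC_ARN environment variables are placeholder names of my own.

# Sketch only: assumes TCP rules (like the SSH example) and the CloudTrail
# record format for AuthorizeSecurityGroupIngress.
import os
import json
import boto3

ec2 = boto3.client('ec2')
sns = boto3.client('sns')

def handler(event, context):
    detail = event['detail']
    group_id = detail['requestParameters']['groupId']

    # Act only on the security group this solution monitors.
    if group_id != os.environ['MONITORED_GROUP_ID']:
        return

    # Rebuild the IpPermissions structure from the CloudTrail record and
    # revoke exactly the rules that were just added.
    items = detail['requestParameters']['ipPermissions']['items']
    permissions = [{
        'IpProtocol': item['ipProtocol'],
        'FromPort': item['fromPort'],
        'ToPort': item['toPort'],
        'IpRanges': [{'CidrIp': r['cidrIp']} for r in item['ipRanges']['items']],
    } for item in items]
    ec2.revoke_security_group_ingress(GroupId=group_id, IpPermissions=permissions)

    # Tell subscribers who made the change and what was reverted.
    sns.publish(
        TopicArn=os.environ['SNS_TOPIC_ARN'],
        Subject='Security group ingress rule automatically reverted',
        Message=json.dumps({
            'userIdentity': detail.get('userIdentity'),
            'revertedPermissions': permissions,
        }, default=str),
    )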

Deploy the solution by using CloudFormation

In this section, you will click the Launch Stack button shown below to launch the CloudFormation stack and deploy the solution.

Prerequisites

  • You must have AWS CloudTrail already enabled in the AWS Region where you will be deploying the solution. CloudTrail lets you log, continuously monitor, and retain events related to API calls across your AWS infrastructure. See Getting Started with CloudTrail for more information.
  • You must have a default VPC in the region in which you will be deploying the solution. AWS accounts have one default VPC per AWS Region. If you’ve deleted your VPC, see Creating a Default VPC to recreate it.
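
To verify the CloudTrail prerequisite, a quick check is sketched below using boto3; the Region is an example, and the output handling is illustrative only.

# Sketch: confirm at least one CloudTrail trail exists in the target Region.
import boto3

cloudtrail = boto3.client('cloudtrail', region_name='us-east-1')
trails = cloudtrail.describe_trails()['trailList']
print(trails if trails else 'No trails found; enable CloudTrail before deploying.')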

Resources that this solution creates

When you launch the CloudFormation stack, it creates the following resources:

  • A sample VPC security group in your default VPC, which is used as the target for reverting ingress rule changes.
  • A CloudWatch event rule that monitors changes to your AWS infrastructure.
  • A Lambda function that reverts changes to the security group and sends you email notifications.
  • A permission that allows CloudWatch to invoke your Lambda function.
  • An AWS Identity and Access Management (IAM) role with limited privileges that the Lambda function assumes when it is executed.
  • An Amazon SNS topic to which the Lambda function publishes notifications.

Launch the CloudFormation stack

The link in this section uses the us-east-1 Region (the US East [N. Virginia] Region). Change the region if you want to use this solution in a different region. See Selecting a Region for more information about changing the region.

To deploy the solution, click the following Launch Stack button to launch the stack. After you click the button, you must sign in to the AWS Management Console if you have not already done so.

Click this "Launch Stack" button

Then:

  1. Choose Next to proceed to the Specify Details page.
  2. On the Specify Details page, type your email address in the Send notifications to box. This is the email address to which change notifications will be sent. (After the stack is launched, you will receive a confirmation email that you must accept before you can receive notifications.)
  3. Choose Next until you get to the Review page, and then choose the I acknowledge that AWS CloudFormation might create IAM resources check box. This confirms that you are aware that the CloudFormation template includes an IAM resource.
  4. Choose Create. CloudFormation displays the stack status, CREATE_COMPLETE, when the stack has launched completely, which should take less than two minutes.
     Screenshot showing that the stack has launched completely

Testing the solution

  1. Check your email for the SNS confirmation email. You must confirm this subscription to receive future notification emails. If you don’t confirm the subscription, your security group ingress rules will still be automatically reverted, but you will not receive notification emails.
  2. Navigate to the EC2 console and choose Security Groups in the navigation pane.
  3. Choose the security group created by CloudFormation. Its name is Web Server Security Group.
  4. Choose the Inbound tab in the bottom pane of the page. Note that only one rule allows HTTPS ingress on port 443 from 0.0.0.0/0 (from anywhere).
     Screenshot showing the "Inbound" tab in the bottom pane of the page
  5. Choose Edit to display the Edit inbound rules dialog box (again, an inbound rule and an ingress rule are the same thing).
  6. Choose Add Rule.
  7. Choose SSH from the Type drop-down list.
  8. Choose My IP from the Source drop-down list. Your IP address is populated for you. By adding this rule, you are simulating one of your staff members violating your organization’s policy (in this blog post’s hypothetical example) against allowing SSH access to your EC2 servers. You are testing the solution created when you launched the CloudFormation stack in the previous section. The solution should remove this newly created SSH rule automatically.
     Screenshot of editing inbound rules
  9. Choose Save.

Adding this rule creates an EC2 AuthorizeSecurityGroupIngress service event, which triggers the Lambda function created in the CloudFormation stack. After a few moments, choose the refresh button to see that the new SSH ingress rule that you just created has been removed by the solution you deployed earlier with the CloudFormation stack. If the rule is still there, wait a few more moments and choose the refresh button again.

Screenshot of refreshing the page to see that the SSH ingress rule has been removed

You should also receive an email to notify you that the ingress rule was added and subsequently reverted.

Screenshot of the notification email

Cleaning up

If you want to remove the resources created by this CloudFormation stack, you can delete the CloudFormation stack:

  1. Navigate to the CloudFormation console.
  2. Choose the stack that you created earlier.
  3. Choose the Actions drop-down list.
  4. Choose Delete Stack, and then choose Yes, Delete.
  5. CloudFormation will display a status of DELETE_IN_PROGRESS while it deletes the resources created with the stack. After a few moments, the stack should no longer appear in the list of completed stacks.
    Screenshot of stack "DELETE_IN_PROGRESS"

Other applications of this solution

I have shown one way to use multiple AWS services to help continuously ensure that your security controls haven’t deviated from your security baseline. However, you also could use the CIS Amazon Web Services Foundations Benchmarks, for example, to establish a governance baseline across your AWS accounts and then use the principles in this blog post to automatically mitigate changes to that baseline.

To scale this solution, you can create a framework that uses resource tags to identify particular resources for monitoring. You can also use a consolidated monitoring approach through cross-account event delivery; see Sending and Receiving Events Between AWS Accounts for more information. Finally, you can extend the principle of automatic mitigation to detect and revert changes to other resources, such as IAM policies and Amazon S3 bucket policies.

Summary

In this blog post, I demonstrated how you can automatically revert changes to a VPC security group and have a notification sent about the changes. You can use this solution in your own AWS accounts to enforce your security requirements continuously.

If you have comments about this blog post or other ideas for ways to use this solution, submit a comment in the “Comments” section below. If you have implementation questions, start a new thread in the EC2 forum or contact AWS Support.

– Rob

The Pi Hut’s 3D Xmas Tree pre-order

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/pi-hut-3d-xmas-tree/

We appreciate it’s only October, but hear us out. The Pi Hut’s 3D Xmas Tree is only available for pre-order until the 15th, and we’d hate for you to find out about it too late. So please share in a few minutes of premature Christmas cheer as we introduce you to this gorgeous kit.

The Pi Hut's 3D Xmas Tree for Raspberry Pi

Oooo…aaaaahhhh…

Super early Christmas prep

Designed by Pi Towers alumna Rachel Rayns, the 3D Xmas Tree kit is a 25-LED add-on board for the Raspberry Pi, on sale in both pre-soldered and ‘solder yourself’ versions. You can control each LED independently via the GPIO pins, allowing you to create some wonderful, twinkly displays this coming holiday season.

The Pi Hut's 3D Xmas Tree for Raspberry Pi

The tree works with any 40-pin Raspberry Pi, including the Zero and Zero W.

You may remember the kit from last Christmas, when The Pi Hut teasingly hinted at its existence. We’ve been itching to get our hands on one for months now, and last week we finally received our own to build and play with.

3D Xmas Tree

So I took the time to record my entire build process for you…only to discover that I had managed to do most of the soldering out of frame. I blame Ben Nuttall for this, as we all rightly should, and offer instead this short GIF of me proudly showing off my finished piece.

The Pi Hut’s website has complete soldering instructions for the tree, as well as example code to get you started. Thus, even the most novice of Raspberry Pi enthusiasts and digital makers should be able to put this kit together and get it twinkling for Christmas.
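
To give a flavour of what that code might involve, here is a minimal twinkle sketch using the gpiozero library. The GPIO pin numbers are placeholders rather than the tree's real mapping, so check The Pi Hut's instructions for the actual pins.

# Sketch: twinkle random LEDs; replace TREE_PINS with the board's real pins.
from random import choice
from time import sleep

from gpiozero import LED

TREE_PINS = [2, 3, 4, 17, 27]  # placeholder BCM pin numbers
leds = [LED(pin) for pin in TREE_PINS]

while True:
    choice(leds).toggle()  # flip one LED at a time for a twinkle effect
    sleep(0.2)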

If you don’t own helping hands for soldering, you’re missing out on, well, a helping hand when soldering.

If you need any help with soldering, check out our video resource. And once you’ve mastered this skill, how about upgrading your tree to twinkle in time with your favourite Christmas song? Or getting two or three, and having them flash in a beautiful synchronised multi-tree display?

Get your own 3D Xmas Tree

As mentioned above, you can pre-order the kit until Sunday 15 October. Once this deadline passes, that’s it — the boat will have sailed and you’ll be left stranded at the dock, waving goodbye to the missed opportunity.

The Pi Hut's 3D Xmas Tree for Raspberry Pi

Don’t be this kid.

With 2730 trees already ordered, you know this kit is going to be in the Christmas stocking of many a maker on 25 December.

And another thing

Shhh…while you’re there, The Pi Hut still has a few Google AIY Projects voice kits available for pre-order…but you didn’t hear that from me. Quick!

The post The Pi Hut’s 3D Xmas Tree pre-order appeared first on Raspberry Pi.

Adafruit’s read-only Raspberry Pi

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/adafruits-read-only/

For passive projects such as point-of-sale displays, video loopers, and your upcoming Halloween builds, Adafruit have come up with a read-only solution for powering down your Raspberry Pi without endangering your SD card.

Adafruit read-only raspberry pi

Pulling the plug

At home, at a coding club, or at a Jam, you rarely need to pull the plug on your Raspberry Pi without going through the correct shutdown procedure. To ensure a long life for your SD card and its contents, you should always turn off your Pi by selecting the shutdown option from the menu. This way, the Pi saves any temporary files to the card before relinquishing power.

Dramatic reconstruction

By pulling the plug while your OS is still running, you might corrupt these files, which could result in the Pi failing to boot up again. The only fix? Wipe the SD card clean and start over, waving goodbye to all files you didn’t back up.

Passive projects

But what if it’s not as easy as selecting shutdown, because your Raspberry Pi is embedded deep inside the belly of a project? Maybe you’ve hot-glued your Zero W into a pumpkin which is now screwed to the roof of your porch, or your store has a bank of Pi-powered monitors playing ads and the power is set to shut off every evening. Without the ability to shut down your Pi via the menu, you risk the SD card’s contents every time you power down your project.

Read-only

Just in time for the plethora of Halloween projects we’re looking forward to this month, the clever folk at Adafruit have designed a solution for this issue. They’ve shared a script that forces the Raspberry Pi to run in read-only mode, so that powering it down via a plug pull will not corrupt the SD card.

But how?

The script makes the Pi save temporary files to RAM instead of the SD card. Of course, this means that no files or new software can be written to the card. However, if that’s not necessary for your Pi project, you might be happy to make the trade-off. Note that you can only use Adafruit’s script on Raspbian Lite.
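
As a rough idea of the mechanism, a read-only setup typically mounts the root filesystem read-only and puts the writable locations on RAM-backed tmpfs mounts, along these lines (a sketch only; the exact entries Adafruit's script writes may differ):

# /etc/fstab (sketch): root mounted read-only, scratch directories in RAM
/dev/mmcblk0p2  /         ext4   defaults,noatime,ro  0  1
tmpfs           /tmp      tmpfs  nosuid,nodev         0  0
tmpfs           /var/log  tmpfs  nosuid,nodev         0  0
tmpfs           /var/tmp  tmpfs  nosuid,nodev         0  0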

Find more about the read-only Raspberry Pi solution, including the script and optional GPIO-halt utility, on the Adafruit Learn page. And be aware that making your Pi read-only is irreversible, so be sure to back up the contents of your SD card before you implement the script.

Halloween!

It’s October, and we’re now allowed to get excited about Halloween and all of the wonderful projects you plan on making for the big night.

Adafruit read-only raspberry pi

Adafruit’s animated snake eyes

We’ll be covering some of our favourite spooky builds on social media throughout the month — make sure to share yours with us, either in the comments below or on Facebook, Twitter, Instagram, or G+.

The post Adafruit’s read-only Raspberry Pi appeared first on Raspberry Pi.

Now Use AWS IAM to Delete a Service-Linked Role When You No Longer Require an AWS Service to Perform Actions on Your Behalf

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/now-use-aws-iam-to-delete-a-service-linked-role-when-you-no-longer-require-an-aws-service-to-perform-actions-on-your-behalf/

Earlier this year, AWS Identity and Access Management (IAM) introduced service-linked roles, which provide you an easy and secure way to delegate permissions to AWS services. Each service-linked role delegates permissions to an AWS service, which is called its linked service. Service-linked roles help with monitoring and auditing requirements by providing a transparent way to understand all actions performed on your behalf because AWS CloudTrail logs all actions performed by the linked service using service-linked roles. For information about which services support service-linked roles, see AWS Services That Work with IAM. Over time, more AWS services will support service-linked roles.

Today, IAM added support for the deletion of service-linked roles through the IAM console and the IAM API/CLI. This means you now can revoke permissions from the linked service to create and manage AWS resources in your account. When you delete a service-linked role, the linked service no longer has the permissions to perform actions on your behalf. To ensure your AWS services continue to function as expected when you delete a service-linked role, IAM validates that you no longer have resources that require the service-linked role to function properly. This prevents you from inadvertently revoking permissions required by an AWS service to manage your existing AWS resources and helps you maintain your resources in a consistent state. If there are any resources in your account that require the service-linked role, you will receive an error when you attempt to delete the service-linked role, and the service-linked role will remain in your account. If you do not have any resources that require the service-linked role, you can delete the service-linked role and IAM will remove the service-linked role from your account.

In this blog post, I show how to delete a service-linked role by using the IAM console. To learn more about how to delete service-linked roles by using the IAM API/CLI, see the DeleteServiceLinkedRole API documentation.
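
As a sketch of the API/CLI route: deletion is asynchronous, so you request it, receive a deletion task ID, and poll for the result. The following Python example uses the boto3 calls for these APIs, with the Redshift role from this post.

# Sketch: delete a service-linked role with boto3 and poll the task status.
import time
import boto3

iam = boto3.client('iam')

task_id = iam.delete_service_linked_role(
    RoleName='AWSServiceRoleForRedshift'
)['DeletionTaskId']

# IAM first validates that no resources still require the role.
while True:
    status = iam.get_service_linked_role_deletion_status(DeletionTaskId=task_id)
    if status['Status'] in ('SUCCEEDED', 'FAILED'):
        break
    time.sleep(2)

if status['Status'] == 'FAILED':
    # The Reason block describes the resources that still use the role.
    print(status.get('Reason'))
else:
    print('Service-linked role deleted.')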

Note: The IAM console does not currently support service-linked role deletion for Amazon Lex, but you can delete your service-linked role by using the Amazon Lex console. To learn more, see Service Permissions.

How to delete a service-linked role by using the IAM console

If you no longer need to use an AWS service that uses a service-linked role, you can remove permissions from that service by deleting the service-linked role through the IAM console. To delete a service-linked role, you must have permissions for the iam:DeleteServiceLinkedRole action. For example, the following IAM policy grants the permission to delete service-linked roles used by Amazon Redshift. To learn more about working with IAM policies, see Working with Policies.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDeletionOfServiceLinkedRolesForRedshift",
            "Effect": "Allow",
            "Action": ["iam:DeleteServiceLinkedRole"],
            "Resource": ["arn:aws:iam::*:role/aws-service-role/redshift.amazonaws.com/AWSServiceRoleForRedshift*"]
        }
    ]
}

To delete a service-linked role by using the IAM console:

  1. Navigate to the IAM console and choose Roles from the navigation pane.

Screenshot of the Roles page in the IAM console

  2. Choose the service-linked role you want to delete and then choose Delete role. In this example, I choose the AWSServiceRoleForRedshift service-linked role.

Screenshot of the AWSServiceRoleForRedshift service-linked role

  3. A dialog box asks you to confirm that you want to delete the service-linked role you have chosen. In the Last activity column, you can see when the linked service last used the service-linked role to perform an action on your behalf. If you want to continue, choose Yes, delete.

Screenshot of the "Delete role" window

  4. IAM then checks whether you have any resources that require the service-linked role you are trying to delete. While IAM checks, you will see the status message, Deletion in progress, below the role name.
     Screenshot showing "Deletion in progress"
  5. If no resources require the service-linked role, IAM deletes the role from your account and displays a success message on the console.

Screenshot of the success message

  6. If there are AWS resources that require the service-linked role you are trying to delete, you will see the status message, Deletion failed, below the role name.

Screenshot showing the "Deletion failed" status message

  7. If you choose View details, you will see a message that explains the deletion failed because there are resources that use the service-linked role.
     Screenshot showing details about why the role deletion failed
  8. Choose View Resources to view the Amazon Resource Names (ARNs) of the first five resources that require the service-linked role. You can delete the service-linked role only after you delete all resources that require the service-linked role. In this example, only one resource requires the service-linked role.

Conclusion

Service-linked roles make it easier for you to delegate permissions to AWS services to create and manage AWS resources on your behalf and to understand all actions the service will perform on your behalf. If you no longer need to use an AWS service that uses a service-linked role, you can remove permissions from that service by deleting the service-linked role through the IAM console. However, before you delete a service-linked role, you must delete all the resources associated with that role to ensure that your resources remain in a consistent state.

If you have any questions, submit a comment in the “Comments” section below. If you need help working with service-linked roles, start a new thread on the IAM forum or contact AWS Support.

– Ujjwal

Disabling Intel Hyper-Threading Technology on Amazon EC2 Windows Instances

Post Syndicated from Brian Beach original https://aws.amazon.com/blogs/compute/disabling-intel-hyper-threading-technology-on-amazon-ec2-windows-instances/

In a prior post, Disabling Intel Hyper-Threading on Amazon Linux, I investigated how the Linux kernel enumerates CPUs. I also discussed the options to disable Intel Hyper-Threading (HT Technology) in Amazon Linux running on Amazon EC2.

In this post, I do the same for Microsoft Windows Server 2016 running on EC2 instances. I begin with a quick review of HT Technology and the reasons you might want to disable it. I also recommend that you take a moment to review the prior post for a more thorough foundation.

HT Technology

HT Technology makes a single physical processor appear as multiple logical processors. Each core in an Intel Xeon processor has two threads of execution. Most of the time, these threads can progress independently; one thread executing while the other is waiting on a relatively slow operation (for example, reading from memory) to occur. However, the two threads do share resources and occasionally one thread is forced to wait while the other is executing.

There are a few situations where disabling HT Technology can improve performance. One example is high performance computing (HPC) workloads that rely heavily on floating point operations. In these rare cases, it can be advantageous to disable HT Technology, but for the overwhelming majority of workloads you should leave it enabled. I recommend that you test with and without HT Technology enabled, and only disable threads if you are sure it will improve performance.

Exploring HT Technology on Microsoft Windows

Here’s how Microsoft Windows enumerates CPUs. As before, I am running these examples on an m4.2xlarge. I also chose to run Windows Server 2016, but you can walk through these exercises on any version of Windows. Remember that the m4.2xlarge has eight vCPUs, and each vCPU is a thread of an Intel Xeon core. Therefore, the m4.2xlarge has four cores, each of which runs two threads, resulting in eight vCPUs.

Windows does not have a built-in utility to examine CPU configuration, but you can download the Sysinternals coreinfo utility from Microsoft’s website. This utility provides useful information about the system CPU and memory topology. For this walkthrough, you enumerate the individual CPUs, which you can do by running coreinfo -c. For example:

C:\Users\Administrator >coreinfo -c

Coreinfo v3.31 - Dump information on system CPU and memory topology
Copyright (C) 2008-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

Logical to Physical Processor Map:
**------ Physical Processor 0 (Hyperthreaded)
--**---- Physical Processor 1 (Hyperthreaded)
----**-- Physical Processor 2 (Hyperthreaded)
------** Physical Processor 3 (Hyperthreaded)

As you can see from the output above, the coreinfo utility displays a table where each row is a physical core and each column is a logical CPU. In other words, the two asterisks on the first line indicate that CPU 0 and CPU 1 are the two threads in the first physical core. Therefore, my m4.2xlarge has four physical processors, and each processor has two threads, resulting in eight total CPUs, just as expected.

It is interesting to note that Windows Server 2016 enumerates CPUs in a different order than Linux. Remember from the prior post that Linux enumerated the first thread in each core, followed by the second thread in each core. You can see from the output earlier that Windows Server 2016 enumerates both threads in the first core, then both threads in the second core, and so on. The diagram below shows the relationship of CPUs to cores and threads in both operating systems.

In the Linux post, I disabled CPUs 4–7, leaving one thread per core and effectively disabling HT Technology. You can see from the diagram that you must disable the odd-numbered threads (that is, 1, 3, 5, and 7) to achieve the same result in Windows. Here’s how to do that.

Disabling HT Technology on Microsoft Windows

In Linux, you can globally disable CPUs dynamically. In Windows, there is no direct equivalent that I could find, but there are a few alternatives.

First, you can disable CPUs using the msconfig.exe tool. If you choose Boot, Advanced Options, you have the option to set the number of processors. In the example below, I limit my m4.2xlarge to four CPUs. Restart for this change to take effect.

Unfortunately, Windows does not disable hyperthreaded CPUs first and then real cores, as Linux does. As you can see in the following output, coreinfo reports that my m4.2xlarge has two real cores and four hyperthreads after rebooting. Msconfig.exe is useful for disabling cores, but it does not allow you to disable HT Technology.

Note: If you have been following along, you can re-enable all your CPUs by unselecting the Number of processors check box and rebooting your system.

C:\Users\Administrator >coreinfo -c

Coreinfo v3.31 - Dump information on system CPU and memory topology
Copyright (C) 2008-2014 Mark Russinovich
Sysinternals - www.sysinternals.com

Logical to Physical Processor Map:
**-- Physical Processor 0 (Hyperthreaded)
--** Physical Processor 1 (Hyperthreaded)

While you cannot disable HT Technology systemwide, Windows does allow you to associate a particular process with one or more CPUs. Microsoft calls this "processor affinity." To see an example, use the following steps.

  1. Launch an instance of Notepad.
  2. Open Windows Task Manager and choose Processes.
  3. Open the context (right click) menu on notepad.exe and choose Set Affinity….

This brings up the Processor Affinity dialog box.

As you can see, all the CPUs are allowed to run this instance of notepad.exe. You can uncheck a few CPUs to exclude them. Windows is smart enough to allow any scheduled operations to continue to completion on disabled CPUs. It then saves its state at the next scheduling event, and resumes those operations on another CPU. To ensure that only one thread in each core is able to run a process, you uncheck every other CPU. This effectively disables HT Technology for this process. For example:

Of course, this can be tedious when you have a large number of cores. Remember that the x1.32xlarge has 128 CPUs. Luckily, you can set the affinity of a running process from PowerShell using the Get-Process cmdlet. For example:

PS C:\> (Get-Process -Name 'notepad').ProcessorAffinity = 0x55;

The ProcessorAffinity attribute takes a bitmask in hexadecimal format. 0x55 in hex is equivalent to 01010101 in binary. Think of the binary encoding as 1=enabled and 0=disabled. This is slightly confusing, but we read right to left, so that CPU 0 is the rightmost bit and CPU 7 is the leftmost bit. Therefore, 01010101 means that the first thread in each core is enabled, just as in the diagram earlier.
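
To see where 0x55 comes from, here is a small illustrative Python helper that builds the "first thread of each core" mask for any CPU count, given Windows' consecutive enumeration (CPUs 0 and 1 are core 0, CPUs 2 and 3 are core 1, and so on):

# Build an affinity mask that enables only the even-numbered CPUs, i.e. the
# first thread of each core under Windows' enumeration order.
def first_thread_mask(num_cpus):
    mask = 0
    for cpu in range(0, num_cpus, 2):
        mask |= 1 << cpu
    return mask

print(hex(first_thread_mask(8)))   # 0x55 -> binary 01010101
print(hex(first_thread_mask(16)))  # 0x5555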

The calculator built into Windows includes a "programmer view" that helps you convert from hexadecimal to binary. In addition, the ProcessorAffinity attribute is a 64-bit number. Therefore, you can only configure the processor affinity on systems with up to 64 CPUs. At the moment, only the x1.32xlarge has more than 64 vCPUs.

In the preceding examples, you changed the processor affinity of a running process. Sometimes, you want to start a process with the affinity already configured. You can do this using the start command. The start command includes an affinity flag that takes a hexadecimal number like the PowerShell example earlier.

C:\Users\Administrator>start /affinity 55 notepad.exe

It is interesting to note that a child process inherits the affinity from its parent. For example, the following commands create a batch file that launches Notepad, and then start the batch file with the affinity set. If you examine the instance of Notepad launched by the batch file, you see that the affinity has been applied to it as well.

C:\Users\Administrator>echo notepad.exe > test.bat
C:\Users\Administrator>start /affinity 55 test.bat

This means that you can set the affinity of your task scheduler, and any tasks that the scheduler starts inherit the affinity. So, you can disable every other thread when you launch the scheduler and effectively disable HT Technology for all of the tasks as well. Be sure to test this point, however, as some schedulers override the normal inheritance behavior and explicitly set processor affinity when starting a child process.

Conclusion

While the Windows operating system does not allow you to disable logical CPUs, you can set processor affinity on individual processes. You also learned that Windows Server 2016 enumerates CPUs in a different order than Linux. Therefore, you can effectively disable HT Technology by restricting a process to every other CPU. Finally, you learned how to set affinity of both new and running processes using Task Manager, PowerShell, and the start command.

Note: this technical approach has nothing to do with control over software licensing, or licensing rights, which are sometimes linked to the number of “CPUs” or “cores.” For licensing purposes, those are legal terms, not technical terms. This post did not cover anything about software licensing or licensing rights.

If you have questions or suggestions, please comment below.

Cloud Storage Doesn’t have to be Convoluted, Complex, or Confusing

Post Syndicated from Ahin Thomas original https://www.backblaze.com/blog/cloud-storage-pricing-comparison/

business man frustrated over cloud storage pricing

So why do many vendors make it so hard to get information about how much you’re storing and how much you’re being charged?

Cloud storage is fast becoming the central repository for mission critical information, irreplaceable memories, and in some cases entire corporate and personal histories. Given this responsibility, we believe cloud storage vendors have an obligation to be as transparent as possible in how they interact with their customers.

In that light we decided to challenge four cloud storage vendors and ask two simple questions:

  1. Can a customer understand how much data is stored?
  2. Can a customer understand the bill?

The detailed results are below, but if you wish to skip the details and the screen captures (TL;DR), we’ve summarized the results in the table below.

Summary of Cloud Storage Pricing Test

Our challenge was to upload 1 terabyte of data, store it for one month, and then download it.

  • Backblaze B2 (cost: $25). Visibility to data stored: accurate, intuitive display of storage information. Bill: available on demand, and the site clearly defines what has and will be charged for.
  • Microsoft Azure (cost: $72). Visibility to data stored: storage is measured in KiB but billed by the GB; even with a calculator, it is unclear how much storage we are using. Bill: available, but difficult to find; the nearly 30-day lag in billing creates business and accounting challenges.
  • Amazon S3 (cost: $71). Visibility to data stored: incomplete; from the file browsing user interface, there is no reasonable way to understand how much data is being stored. Bill: available on demand; while there are some line items that seem unnecessary for our test, the bill is generally straightforward to understand.
  • Google Cloud Storage (cost: $100). Visibility to data stored: incomplete; from the file browsing user interface, there is no reasonable way to understand how much data is being stored. Bill: available, but provides descriptions in units that are not on the pricing table nor commonly used.

Cloud Storage Test Details

For our tests, we chose Backblaze B2, Microsoft’s Azure, Amazon’s S3, and Google Cloud Storage. Our idea was simple: upload 1 TB of data to the comparable service for each vendor, store it for 1 month, download that 1 TB, then document and share the results.

Let’s start with the most obvious observation, the cost charged by each vendor for the test:

Cost:
  • Backblaze B2: $25
  • Microsoft Azure: $72
  • Amazon S3: $71
  • Google Cloud Storage: $100

Later in this post, we’ll see if we can determine the different cost components (storage, downloading, transactions, etc.) for each vendor, but our first step is to see if we can determine how much data we stored. In some cases, the answer is not as obvious as it would seem.

Test 1: Can a Customer Understand How Much Data Is Stored?

At the core, a provider of a service ought to be able to tell a customer how much of the service he or she is using. In this case, one might assume that providers of cloud storage would be able to tell customers how much data is being stored at any given moment. It turns out, it’s not that simple.

Backblaze B2

Logging into a Backblaze B2 account, one is presented with a summary screen that displays all “buckets.” Each bucket displays key summary information, including data currently stored.

B2 Cloud Storage Buckets screenshot

Clicking into a given bucket, one can browse individual files. Each file displays its size, and multiple files can be selected to create a size summary.

B2 file tree screenshot

Summary: Accurate, intuitive display of storage information.

Microsoft Azure

Moving on to Microsoft’s Azure, things get a little more “exciting.” There was no area that we could find where one can determine the total amount of data, in GB, stored with Azure.

There’s an area entitled “usage,” but that wasn’t helpful.

Microsoft Azure cloud storage screenshot

We then moved on to "Overview," but had a couple of challenges. The first issue was that we were presented with KiB (kibibytes) as a unit of measure. One GB (the unit of measure used in Azure’s pricing table) equates to roughly 976,563 KiB. It struck us as odd that things would be summarized by a unit of measure different from the billing unit of measure.
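
For reference, the conversion between the two units is simple but awkward to do by eye:

# 1 KiB = 1,024 bytes; 1 GB (decimal, as billed) = 10^9 bytes.
def kib_to_gb(kib):
    return kib * 1024 / 1e9

print(kib_to_gb(976_562.5))  # 1.0, i.e. roughly 976,563 KiB per GB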

Microsoft Azure usage dashboard screenshot

Summary: Storage is being measured in KiB, but is billed by the GB. Even with a calculator, it is unclear how much storage we are using.

Amazon S3

Next we checked on the data we were storing in S3. We again ran into problems.

In the bucket overview, we were able to identify our buckets. However, we could not tell how much data was being stored.

Amazon S3 cloud storage buckets screenshot

Drilling into a bucket, the detail view does tell us file size. However, there was no method for summarizing the data stored within that bucket or for multiple files.

Amazon S3 cloud storage buckets usage screenshot

Summary: Incomplete. From the file browsing user interface, there is no reasonable way to understand how much data is being stored.

Google Cloud Storage (“GCS”)

GCS proved to have its own quirks, as well.

One can easily find the “bucket” summary; however, it does not provide information on data stored.

Google Cloud Storage Bucket screenshot

Clicking into the bucket, one can see files and the size of an individual file. However, there is no way to see the total amount of data stored.

Google Cloud Storage bucket files screenshot

Summary: Incomplete. From the file browsing user interface, there is no reasonable way to understand how much data is being stored.

Test 1 Conclusions

We knew how much storage we were uploading and, in many cases, the user will have some sense of the amount of data they are uploading. However, it strikes us as odd that many vendors won’t tell you how much data you have stored. Even stranger are the vendors that provide reporting in a unit of measure that is different from the units in their pricing table.

Test 2: Can a Customer Understand The Bill?

The cloud storage industry has done itself no favors with its tiered pricing that requires a calculator to figure out what’s going on. Setting that aside for a moment, one would presume that bills would be created in clear, auditable ways.

Backblaze

Inside of the Backblaze user interface, one finds a navigation link entitled “Billing.” Clicking on that, the user is presented with line items for previous bills, payments, and an estimate for the upcoming charges.

Backblaze B2 billing screenshot

One can expand any given row to see the line item transactions composing each bill.

Backblaze B2 billing details screenshot

Summary: Available on demand, and the site clearly defines what has and will be charged for.

Azure

Trying to understand the Azure billing proved to be a bit tricky.

On August 6th, we logged into the billing console and were presented with this screen.

Microsoft Azure billing screenshot

As you can see, on Aug 6th, billing for the period of May-June was not available for download. For the period ending June 26th, we were charged nearly a month later, on July 24th. Clicking into that row item does display line item information.

Microsoft Azure cloud storage billing details screenshot

Summary: Available, but difficult to find. The nearly 30 day lag in billing creates business and accounting challenges.

Amazon S3

Amazon presents a clean billing summary and enables users to “drill down” into line items.

Going to the billing area of AWS, one can survey various monthly bills and is presented with a clean summary of billing charges.

AWS billing screenshot

Expanding into the billing detail, Amazon articulates each line item charge. Within each line item, charges are broken out into sub-line items for the different tiers of pricing.

AWS billing details screenshot

Summary: Available on demand. While there are some line items that seem unnecessary for our test, the bill is generally straightforward to understand.

Google Cloud Storage (“GCS”)

This was an area where the GCS User Interface, which was otherwise relatively intuitive, became confusing.

Going to the Billing Overview page did not offer much in the way of an overview on charges.

Google Cloud Storage billing screenshot

However, moving down to the “Transactions” section did provide line item detail on all the charges incurred. Similar to Azure’s introduction of KiB, though, Google introduces the equally confusing gibibyte (GiB). While all of Google’s pricing tables are listed in terms of GB, the line items reference GiB. 1 GiB is 1.07374 GB.
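
The GiB-to-GB conversion is equally mechanical, which makes it all the stranger that the bill and the pricing table use different units:

# 1 GiB = 2^30 bytes; 1 GB = 10^9 bytes.
def gib_to_gb(gib):
    return gib * 2**30 / 1e9

print(gib_to_gb(1))  # 1.073741824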

Google Cloud Storage billing details screenshot

Summary: Available, but provides descriptions in units that are not on the pricing table nor commonly used.

Test 2 Conclusions

Clearly, some vendors do a better job than others in making their pricing available and understandable. From a transparency standpoint, it’s difficult to justify why a vendor would have their pricing table in units of X, but then put units of Y in the user interface.

Transparency: The Backblaze Way

Transparency isn’t easy. At Backblaze, we believe in investing time and energy into presenting the most intuitive user interfaces that we can create. We take pride in our heritage in the consumer backup space — servicing consumers has taught us how to make things understandable and usable. We do our best to apply those lessons to everything we do.

This philosophy reflects our desire to make our products usable, but it’s also part of a larger ethos of being transparent with our customers. We are being trusted with precious data. We want to repay that trust with, among other things, transparency.

It’s that spirit that was behind the decision to publish our hard drive performance stats, to open source the infrastructure that is behind us having the lowest cost of storage in the industry, and also to open source our erasure coding (the math that drives a significant portion of our redundancy for your data).

Why? We believe it’s not just about good user interface, it’s about the relationship we want to build with our customers.

The post Cloud Storage Doesn’t have to be Convoluted, Complex, or Confusing appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Analyzing Salesforce Data with Amazon QuickSight

Post Syndicated from David McAmis original https://aws.amazon.com/blogs/big-data/analyzing-salesforce-data-with-amazon-quicksight/

Salesforce Sales Cloud is a powerful platform for managing customer data. One of the key functions that the platform provides is the ability to track customer opportunities. Opportunities in Salesforce are used to track revenue, sales pipelines, and other activities from the very first contact with a potential customer to a closed sale.

Amazon QuickSight is a rich data visualization tool that provides the ability to connect to Salesforce data, use it as a data source for creating analyses, stories, and dashboards, and easily share them with others in the organization. This post focuses on how to connect to Salesforce as a data source and create a useful opportunity dashboard, incorporating Amazon QuickSight features like relative date filters, Key Performance Indicator (KPI) charts, and more.

Walkthrough

In this post, you walk through the following tasks:

  • Creating a new data set based on Salesforce data
  • Creating your analysis and adding visuals
  • Creating an Amazon QuickSight dashboard
  • Working with filters

Note: For this walkthrough, I am using my own Salesforce.com Developer Edition account. You can sign up for your own free developer account at https://developer.salesforce.com/.

Creating a new Amazon QuickSight data set based on Salesforce data

To start, you need to create a new Amazon QuickSight data set. Sign in to Amazon QuickSight at https://quicksight.aws using the link from the home page. Enter your Amazon QuickSight account name and choose Continue. Next, enter your Email address or user name and password, then choose Sign In.

On the Amazon QuickSight start page, choose Manage Data, which takes you to a list of your data sets. Choose New Data Set, and choose Salesforce as your data source. Enter a data source name—in this example, I called mine “SFDC Opportunity.” Choose Create Data Source to open the Salesforce authentication page, where you can enter your Salesforce user name and password.

After you are authenticated to Salesforce, you are presented with a drop-down list that lets you select data from Reports or Objects. For this tutorial, choose Object. Scroll down in the list to choose the Opportunity object, and then choose Select.

To finish creating your data set, choose Visualize to go to where you can create a new Amazon QuickSight analysis from this data.

Creating your analysis and adding visuals

Now that you have acquired your data, it’s time to start working with your analysis. In Amazon QuickSight, an analysis is a container for a set of related visual stories. When you chose Visualize, a new analysis was created for you. This is where you start to create the visuals (charts, graphs, etc.) that will be the building blocks for your dashboard.

In Amazon QuickSight, Salesforce objects look like database tables. In the analysis that you just created, you can see the columns in the Fields list for the Opportunity object.

The Opportunity object in Salesforce has a number of default fields. Salesforce administrators can extend this object by adding other custom fields as required—these custom fields are usually marked with a “__c” at the end.

In the Fields List, you can see that Amazon QuickSight has divided the fields into Dimensions and Measures. You use these to create your visualizations and dashboard. For this particular dashboard, you create five different visuals to display the data in a few different ways.

Opportunity by Stage

For the first visualization, you create a horizontal bar chart showing “Opportunity by Stage”. In the Fields List, choose the StageName dimension and the ExpectedRevenue measure. By default, this should create a horizontal bar chart for you, as shown in the following image.

Notice that this chart includes the Closed Won category, which we aren’t interested in showing. Choose the bar for Closed Won, and in the pop-up menu, choose Exclude Closed Won. This filters the chart to show only opportunities that are in progress.

It’s important to note that for this dashboard, we only want to show the opportunities that are not Closed Won. So in the menu bar on the left side, choose Filter.

By default, the filter that you just created was only applied to a single visualization. To change this, choose the filter, and then choose All Visuals from the drop-down list. This applies the filter to all visuals in the analysis.

To finish, select the chart title and rename the chart to Opportunity by Stage.

Opportunity by Month

Next, you need to create a new visual to show “Opportunity by Month.” You use a vertical bar chart to display the data. On the Amazon QuickSight toolbar, choose Add, and then choose Add visual. For this visual, choose CloseDate from the dimensions and ExpectedRevenue from the measures.

Using the Visual Types menu, change the chart type to a Vertical Bar Chart. By default, the chart displays the revenue by year, but we want to break it down a bit further. Choose Field Wells, and using the CloseDate drop-down menu, change the Aggregate to Month.

With the change to a monthly aggregate, your chart should look something like the following:

Select the chart title and rename the chart to Opportunity by Month.

Expected Revenue

When working with Salesforce opportunities, there are two measures that are important to most sales managers—the first is the total amount associated with the opportunity, and the second is what the actual expected revenue will be. For the next visual, you use the KPI chart to display these measures.

Choose Add on the Amazon QuickSight toolbar, and then choose Add visual. From the measures, choose ExpectedRevenue, and then Amount. To change your visualization, go to the Visual Types menu and choose the Key Performance Indicator (KPI). Your visualization should change and be similar to the following:

Select the chart title and rename the chart to Expected Revenue.

Opportunity by Lead Source

Next, you need to look at where the opportunity actually came from. This helps your dashboard users understand where the leads are being generated from and their value to the business. For this visual, you use a Horizontal Bar Chart.

On the Amazon QuickSight toolbar, choose Add, and then choose Add visual. From the measures, choose Amount, and for the dimensions, choose LeadSource. To change your visualization, go to the Visual Types menu and choose the Horizontal Bar Chart. Your visualization should change and be similar to the following:

Note: If you can’t read the chart labels for the bars, grab the axis line and drag to resize.

Select the chart title and rename the chart to Opportunity by Lead Source.

Expected Revenue vs. Opportunity Amount

For the last visual, you look at the individual opportunities and how they contribute to the total pipeline. A tree map is a specialized chart type that lets your dashboard users see how each opportunity amount contributes to the whole. Additionally, you can highlight if there is a difference between the Expected Revenue and the Amount by sizing the marks by the Amount and coloring them by the Expected Amount.

On the Amazon QuickSight toolbar, choose Add, and then choose Add visual. From the measures, choose ExpectedRevenue and Amount. From the dimensions, choose Name. To change your visualization, go to the Visual Types menu and choose the Tree Map. Your visualization should change and be similar to the following:

Select the chart title and rename the chart to Expected Revenue vs Opportunity Amount.

Creating an Amazon QuickSight dashboard

Now that your visuals are created, it’s time to do the fun part—actually putting your Amazon QuickSight dashboard together. To create a dashboard, resize and position your visuals on the page, using the following layout:

To resize a visual, grab the handle in the lower-right corner and drag it to the height and width that you want.

To move your visual, use the grab bar at the top of the visual, as shown here:

When you are done resizing your visuals, your canvas should look something like this:

To create a dashboard, choose Share in the Amazon QuickSight toolbar. Then choose Create Dashboard. For this dashboard, give it a name of SFDC Opportunity Dashboard, and choose Create Dashboard. You are prompted to enter the email address or user name of the users you want to share this dashboard with.

Because we are just concentrating on the design at the moment, you can choose Cancel and share your dashboard later using the Share button on the dashboard toolbar.

Working with filters

There is one more feature that you can use when viewing your dashboard to make it even more useful. Earlier, when you were working with the Analysis, you added a filter to remove any opportunities that were tagged as Closed Won. Now, as you are viewing the dashboard, you add a filter that you can use to filter on a relative date.

This feature in Amazon QuickSight allows you to choose a time period (years, quarters, months, weeks, etc.) and then select from a list of relative time periods. For example, if you choose Year, you could set the filter options to Previous Year, This Year, Year to Date, or Last N Years.

This is especially handy for a Salesforce Opportunity dashboard, as you might want to filter the data using the Close Date field to see when the opportunity is actually set to close.

To create a relative date filter, choose Filter on the toolbar. Choose the filter icon, and then choose CloseDate, as shown in the following image:

At the top of the Edit Filter pane, change the drop-down list to apply the filter to All Visuals. The default filter type is Time Range, so use the drop-down list to change the filter type to Relative Dates. For the time period, choose Quarters. To view all the current opportunities in your dashboard, choose the option for This Quarter, and choose Apply.

With the date filter in place, you have the final component for your dashboard, which should look something like the following example:

It’s important to note that at this point, you have added the filter when viewing the dashboard. If you think this is something that other users might want to do, you can go back to your Amazon QuickSight Analysis and add the filter there—that way it will be available for all dashboard users.

Summary

In this post, you learned how to connect to Salesforce data and create a basic dashboard. You can apply the same techniques to create analyses and dashboards from all different types of Salesforce data and objects. Whether you want to analyze your Salesforce account demographics or where your leads are coming from, or evaluate any other data stored in Salesforce, Amazon QuickSight helps you quickly connect to and visualize your data with only a few clicks.

Additional Reading

Learn how to visualize Amazon S3 analytics data with Amazon QuickSight!


About the Author

David McAmis is a Big Data & Analytics Consultant with Amazon Web Services. He works with customers to develop scalable platforms to gather, process and analyze data on AWS.

AWS Cost Explorer Update – Better Filtering & Grouping, Report Management, RI Reports

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-cost-explorer-update-better-filtering-grouping-report-management-ri-reports/

Our customers use Cost Explorer to better understand and manage their AWS spending, making heavy use of the reporting, analytics, and visualization tools that it provides. We launched Cost Explorer in 2014 with a focus on simplicity – single click signup, preconfigured default views, and a clean user interface (take a look back at The New AWS Cost Explorer to see where we started). The Cost Explorer has been very popular and we’ve received a lot of great feedback from our customers.

Last week we launched a major upgrade to Cost Explorer. We’ve redesigned the user interface to optimize many common workflows including filtering, report management, selection of date ranges, and grouping of data. We have also included some default reports to make it easier for you to explore the costs related to your use of Reserved Instances.

Looking at Cost Explorer
Since pictures are reportedly worth 1000 words, let’s take a closer look! Cost Explorer is part of the Billing Dashboard so I can start there:

Here’s the Billing Dashboard. I click on Cost Explorer to move ahead:

I can open up Cost Explorer or access one of three preconfigured views. I’ll go for the first option:

The default report shows my EC2 costs and usage (running hours) for the past 3 months:

I can use the Group By menu to break the costs down by EC2 instance type:

I have many other grouping options:

The filtering options are now easier to access and to edit. Here’s the full set:

I can explore my EC2 costs in any set of desired regions:

I can filter and then group by instance type to see how my spending breaks down:

I can click on Download CSV and then process the data locally:
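
As an example of local processing, here is a short pandas sketch; the file name and column names are assumptions of mine, so adjust them to match the actual CSV header:

# Sketch: summarize a Cost Explorer CSV export by instance type.
import pandas as pd

df = pd.read_csv('costs.csv')  # hypothetical export file name
# Hypothetical column names; check the CSV header for the real ones.
summary = df.groupby('InstanceType')['Cost'].sum().sort_values(ascending=False)
print(summary)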

I can also exclude certain instance types from the report. Here’s how I exclude my m4.xlarge, t2.micro, and t2.nano usage:

Report Management
Cost Explorer allows me to customize my existing reports and to create new reports from scratch. I can click on Save As to save my customized report with a new name:

I can see and manage all of my reports on the Saved Reports page (The padlock denotes a default report that cannot be edited and then overwritten):

When I click on New report I can start from a template:

After I click on Create Report, I set up my date range and filters as desired, and click on Save As. I created a report that displays my year-to-date usage of several AWS database services (Amazon Redshift, DynamoDB Accelerator (DAX), Amazon Relational Database Service (RDS), and AWS Database Migration Service):

All of my reports are accessible from the Reports menu so I can check on my costs with a click:

We also simplified the process of selecting a range of dates for a report, including options to select common date ranges:

Reserved Instance Reports
Cost Explorer also includes a pair of reports that will help you to understand and optimize your usage of Reserved Instances. I don’t own any RIs, so I used screenshots supplied by the team.

The RI Utilization report allows you to see how much of your purchased RI capacity is being put to use (the dashed red line represents a utilization target that you can specify):

The RI Coverage report tells you how much of your EC2 usage is being handled by Reserved Instances (this time, the dashed red line represents the desired amount of coverage):

I hope you have enjoyed this tour of the updated Cost Explorer. It is available now and you can start using it today!

Jeff;

Lambda@Edge – Intelligent Processing of HTTP Requests at the Edge

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/lambdaedge-intelligent-processing-of-http-requests-at-the-edge/

Late last year I announced a preview of Lambda@Edge and talked about how you could use it to intelligently process HTTP requests at locations that are close (latency-wise) to your customers. Developers who applied and gained access to the preview have been making good use of it, and have provided us with plenty of very helpful feedback. During the preview we added the ability to generate HTTP responses and support for CloudWatch Logs, and also updated our roadmap based on the feedback.

Now Generally Available
Today I am happy to announce that Lambda@Edge is now generally available! You can use it to:

  • Inspect cookies and rewrite URLs to perform A/B testing.
  • Send specific objects to your users based on the User-Agent header.
  • Implement access control by looking for specific headers before passing requests to the origin.
  • Add, drop, or modify headers to direct users to different cached objects.
  • Generate new HTTP responses.
  • Cleanly support legacy URLs.
  • Modify or condense headers or URLs to improve cache utilization.
  • Make HTTP requests to other Internet resources and use the results to customize responses.

Lambda@Edge allows you to create web-based user experiences that are rich and personal. As is rapidly becoming the norm in today’s world, you don’t need to provision or manage any servers. You simply upload your code (Lambda functions written in Node.js) and pick one of the CloudFront behaviors that you have created for the distribution, along with the desired CloudFront event:

In this case, my function (the imaginatively named EdgeFunc1) would run in response to origin requests for image/* within the indicated distribution. As you can see, you can run code in response to four different CloudFront events:

Viewer Request – This event is triggered when a request arrives from a viewer (an HTTP client, generally a web browser or a mobile app), and has access to the incoming HTTP request. As you know, each CloudFront edge location maintains a large cache of objects so that it can efficiently respond to repeated requests. This particular event is triggered regardless of whether the requested object is already cached.

Origin Request – This event is triggered when the edge location is about to make a request back to the origin, due to the fact that the requested object is not cached at the edge location. It has access to the request that will be made to the origin (often an S3 bucket or code running on an EC2 instance).

Origin Response – This event is triggered after the origin returns a response to a request. It has access to the response from the origin.

Viewer Response – This event is triggered before the edge location returns a response to the viewer. It has access to the response.
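To make the event structure concrete, here is a minimal sketch of a viewer-request handler. It is written in Python purely to illustrate the shape of the event; Node.js is the runtime that Lambda@Edge supports at launch, and the full Node.js example appears below:

def handler(event, context):
    # Each CloudFront event wraps the HTTP request under Records[0].cf.
    request = event['Records'][0]['cf']['request']

    # Headers are keyed by lower-cased name; each value is a list of
    # {'key': ..., 'value': ...} dicts, as in the Node.js example below.
    request['headers']['x-demo'] = [{'key': 'X-Demo', 'value': 'edge'}]

    # Returning the request tells CloudFront to continue processing it.
    return request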

Functions are globally replicated and requests are automatically routed to the optimal location for execution. You can write your code once and with no overt action on your part, have it be available at low latency to users all over the world.

Your code has full access to requests and responses, including headers, cookies, the HTTP method (GET, HEAD, and so forth), and the URI. Subject to a few restrictions, it can modify existing headers and insert new ones.

Lambda@Edge in Action
Let’s create a simple function that runs in response to the Viewer Request event. I open up the Lambda Console and create a new function. I choose the Node.js 6.10 runtime and search for cloudfront blueprints:

I choose cloudfront-response-generation and configure a trigger to invoke the function:

The Lambda Console provides me with some information about the operating environment for my function:

I enter a name and a description for my function, as usual:

The blueprint includes a fully operational function. It generates a “200” HTTP response and a very simple body:

I used this as the starting point for my own code, which pulls some interesting values from the request and displays them in a table:

'use strict';
exports.handler = (event, context, callback) => {

    /* Set table row style */
    const rs = '"border-bottom:1px solid black;vertical-align:top;"';
    /* Get request */
    const request = event.Records[0].cf.request;
   
    /* Get values from request */ 
    const httpVersion = request.httpVersion;
    const clientIp    = request.clientIp;
    const method      = request.method;
    const uri         = request.uri;
    const headers     = request.headers;
    const host        = headers['host'][0].value;
    const agent       = headers['user-agent'][0].value;
    
    var sreq = JSON.stringify(event.Records[0].cf.request, null, ' ');
    sreq = sreq.replace(/\n/g, '<br/>');

    /* Generate body for response */
    const body = 
     '<html>\n'
     + '<head><title>Hello From Lambda@Edge</title></head>\n'
     + '<body>\n'
     + '<table style="border:1px solid black;background-color:#e0e0e0;border-collapse:collapse;" cellpadding=4 cellspacing=4>\n'
     + '<tr style=' + rs + '><td>Host</td><td>'        + host     + '</td></tr>\n'
     + '<tr style=' + rs + '><td>Agent</td><td>'       + agent    + '</td></tr>\n'
     + '<tr style=' + rs + '><td>Client IP</td><td>'   + clientIp + '</td></tr>\n'
     + '<tr style=' + rs + '><td>Method</td><td>'      + method   + '</td></tr>\n'
     + '<tr style=' + rs + '><td>URI</td><td>'         + uri      + '</td></tr>\n'
     + '<tr style=' + rs + '><td>Raw Request</td><td>' + sreq     + '</td></tr>\n'
     + '</table>\n'
     + '</body>\n'
     + '</html>';

    /* Generate HTTP response */
    const response = {
        status: '200',
        statusDescription: 'HTTP OK',
        httpVersion: httpVersion,
        body: body,
        headers: {
            'vary':          [{key: 'Vary',          value: '*'}],
            'last-modified': [{key: 'Last-Modified', value:'2017-01-13'}]
        },
    };

    callback(null, response);
};

I configure my handler, and request the creation of a new IAM Role with Basic Edge Lambda permissions:

On the next page I confirm my settings (as I would do for a regular Lambda function), and click on Create function:

This creates the function, attaches the trigger to the distribution, and also initiates global replication of the function. The status of my distribution changes to In Progress for the duration of the replication (typically 5 to 8 minutes):

The status changes back to Deployed as soon as the replication completes:

Then I access the root of my distribution (https://dogy9dy9kvj6w.cloudfront.net/), the function runs, and this is what I see:

Feel free to click on the image (it is linked to the root of my distribution) to run my code!

As usual, this is a very simple example and I am sure that you can do a lot better. Here are a few ideas to get you started:

Site Management – You can take an entire dynamic website offline and replace critical pages with Lambda@Edge functions for maintenance or during a disaster recovery operation.

High Volume Content – You can create scoreboards, weather reports, or public safety pages and make them available at the edge, both quickly and cost-effectively.

Create something cool and share it in the comments or in a blog post, and I’ll take a look.

Things to Know
Here are a couple of things to keep in mind as you start to think about how to put Lambda@Edge to use in your application:

Timeouts – Functions that handle Origin Request and Origin Response events must complete within 3 seconds. Functions that handle Viewer Request and Viewer Response events must complete within 1 second.

Versioning – After you update your code in the Lambda Console, you must publish a new version and set up a fresh set of triggers for it, and then wait for the replication to complete. You must always refer to your code using a version number; $LATEST and aliases do not apply.
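If you automate your deployments, publishing a version looks something like the following boto3 sketch (EdgeFunc1 is the example function from above; treat the flow as illustrative):

import boto3

# Lambda@Edge functions live in US East (N. Virginia), so target that Region.
lam = boto3.client('lambda', region_name='us-east-1')

# Publish a new, immutable version; Lambda@Edge triggers must reference a
# numbered version, never $LATEST or an alias.
resp = lam.publish_version(FunctionName='EdgeFunc1', Description='edge release')

# The returned ARN includes the version suffix that CloudFront needs.
print(resp['Version'], resp['FunctionArn'])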

Headers – As you can see from my code, the HTTP request headers are accessible as an array. The headers fall into four categories:

  • Accessible – Can be read, written, deleted, or modified.
  • Restricted – Must be passed on to the origin.
  • Read-only – Can be read, but not modified in any way.
  • Blacklisted – Not seen by code, and cannot be added.

Runtime Environment – The runtime environment provides each function with 128 MB of memory, but no built-in libraries or access to /tmp.

Web Service Access – Functions that handle Origin Request and Origin Response events can access the AWS APIs and fetch content via HTTP. These requests are always made synchronously with respect to the original request or response.

Function Replication – As I mentioned earlier, your functions will be globally replicated. The replicas are visible in the “other” regions from the Lambda Console:

CloudFront – Everything that you already know about CloudFront and CloudFront behaviors is relevant to Lambda@Edge. You can use multiple behaviors in each distribution (each with up to four Lambda@Edge functions, one per event type), customize header & cookie forwarding, and so forth. You can also make the association between events and functions (via ARNs that include function versions) while you are editing a behavior:

Available Now
Lambda@Edge is available now and you can start using it today. Pricing is based on the number of times that your functions are invoked and the amount of time that they run (see the Lambda@Edge Pricing page for more info).

Jeff;

 

Handy: Google Highlights ‘Best Torrent Sites’ in Search Results

Post Syndicated from Ernesto original https://torrentfreak.com/handy-google-highlights-best-torrent-sites-in-search-results-170709/

With torrent sites dropping like flies recently, a lot of people are looking for alternatives.

For many, Google is the preferred choice to find them, and the search engine is actually quite helpful.

When you type in “best torrent sites” or just “torrent sites,” Google.com provides a fancy reel of several high traffic indexers.

The search engine displays the names of sites such as RARBG, The Pirate Bay and 1337x as well as their logo. When you click on this link, Google brings up all results for the associated term.

While it’s amusing to imagine Google employees manually curating the list, the entire process is almost certainly automated. Still, many casual torrent users might find it quite handy. Whether rightsholders will be equally excited is another question, though.

The automated nature of this type of search result display also creates another problem. While many people know that most torrent sites offer pirated content, the picture is far less clear with streaming portals.

This leads to a confusing situation where Google lists both legal and unauthorized streaming platforms when users search for “streaming sites.”

The screenshot below shows the pirate streaming site Putlocker next to Hulu and Crackle. The same lineup also rotates various other pirate sites such as Alluc and Movie4k.to.

The reels in question are most likely generated by algorithms, which don’t distinguish between authorized and unauthorized sources. Still, given Hollywood’s repeated criticism of Google for supposedly facilitating piracy, it’s a bit unfortunate, to say the least.

This isn’t the first time that Google’s “rich” search results have featured pirate sites. The same happened in the past when the search engine displayed pirate site ratings of movies, next to ratings from regular review sites such as IMDb and Rotten Tomatoes.

We can expect the MPAA and others to take note, and bring these and other issues up at their convenience.

Note: the search reel doesn’t appear on many localized Google domains. We tested and confirmed it only on Google.com.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Perform Near Real-time Analytics on Streaming Data with Amazon Kinesis and Amazon Elasticsearch Service

Post Syndicated from Tristan Li original https://aws.amazon.com/blogs/big-data/perform-near-real-time-analytics-on-streaming-data-with-amazon-kinesis-and-amazon-elasticsearch-service/

Nowadays, streaming data is seen and used everywhere—from social networks, to mobile and web applications, IoT devices, instrumentation in data centers, and many other sources. As the speed and volume of this type of data increases, the need to perform data analysis in real time with machine learning algorithms and extract a deeper understanding from the data becomes ever more important. For example, you might want a continuous monitoring system to detect sentiment changes in a social media feed so that you can react to the sentiment in near real time.

In this post, we use Amazon Kinesis Streams to collect and store streaming data. We then use Amazon Kinesis Analytics to process and analyze the streaming data continuously. Specifically, we use the Kinesis Analytics built-in RANDOM_CUT_FOREST function, a machine learning algorithm, to detect anomalies in the streaming data. Finally, we use Amazon Kinesis Firehose to export the anomalies data to Amazon Elasticsearch Service (Amazon ES). We then build a simple dashboard in the open source tool Kibana to visualize the result.

Solution overview

The following diagram depicts a high-level overview of this solution.

Amazon Kinesis Streams

You can use Amazon Kinesis Streams to build your own streaming application. This application can process and analyze streaming data by continuously capturing and storing terabytes of data per hour from hundreds of thousands of sources.

Amazon Kinesis Analytics

Kinesis Analytics provides an easy and familiar standard SQL language to analyze streaming data in real time. One of its most powerful features is that there are no new languages, processing frameworks, or complex machine learning algorithms that you need to learn.

Amazon Kinesis Firehose

Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service.

Amazon Elasticsearch Service

Amazon ES is a fully managed service that makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more.

Solution summary

The following is a quick walkthrough of the solution that’s presented in the diagram:

  1. IoT sensors send streaming data into Kinesis Streams. In this post, you use a Python script to simulate an IoT temperature sensor device that sends the streaming data.
  2. By using the built-in RANDOM_CUT_FOREST function in Kinesis Analytics, you can detect anomalies in real time with the sensor data that is stored in Kinesis Streams. RANDOM_CUT_FOREST is also an appropriate algorithm for many other kinds of anomaly-detection use cases—for example, the media sentiment example mentioned earlier in this post.
  3. The processed anomaly data is then loaded into the Kinesis Firehose delivery stream.
  4. By using the built-in integration that Kinesis Firehose has with Amazon ES, you can easily export the processed anomaly data into the service and visualize it with Kibana.

Implementation steps

The following sections walk through the implementation steps in detail.

Creating the delivery stream

  1. Open the Amazon Kinesis Streams console.
  2. Create a new Kinesis stream. Give it a name that indicates it’s for raw incoming stream data—for example, RawStreamData. For Number of shards, type 1.
  3. The Python code provided below simulates a streaming application, such as an IoT device, and writes random data and occasional anomalies into a Kinesis stream. The code generates two temperature ranges: the first is the hypothetical sensor’s normal operating temperature range (10–20), and the second is the anomaly temperature range (100–120). Make sure to change the stream name in the two put_record calls and the Region in the connect_to_region call to match your configuration. Alternatively, you can download the Amazon Kinesis Data Generator from this repository and use it to generate the data. (A Python 3/boto3 variant of this producer is sketched after this list.)
    import json
    import random
    from boto import kinesis
    
    kinesis = kinesis.connect_to_region("us-east-1")
    
    def getData(iotName, lowVal, highVal):
       data = {}
       data["iotName"] = iotName
       data["iotValue"] = random.randint(lowVal, highVal) 
       return data
    
    while 1:
       rnd = random.random()
       if (rnd < 0.01):
          data = json.dumps(getData("DemoSensor", 100, 120))  
          kinesis.put_record("RawStreamData", data, "DemoSensor")
          print '***************************** anomaly ************************* ' + data
       else:
          data = json.dumps(getData("DemoSensor", 10, 20))  
          kinesis.put_record("RawStreamData", data, "DemoSensor")
          print data

  4. Open the Amazon Elasticsearch Service console and create a new domain.
    1. Give the domain a unique name. In the Configure cluster screen, use the default settings.
    2. In the Set up access policy screen, in the Set the domain access policy list, choose Allow access to the domain from specific IP(s).
    3. Enter the public IP address of your computer.
      Note: If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this AWS Database blog post to learn how to work with a proxy. For additional information about securing access to your Amazon ES domain, see How to Control Access to Your Amazon Elasticsearch Domain in the AWS Security Blog.
  5. After the Amazon ES domain is up and running, you can set up and configure Kinesis Firehose to export results to Amazon ES:
    1. Open the Amazon Kinesis Firehose console and choose Create Delivery Stream.
    2. In the Destination dropdown list, choose Amazon Elasticsearch Service.
    3. Type a stream name, and choose the Amazon ES domain that you created in Step 4.
    4. Provide an index name and ES type. In the S3 bucket dropdown list, choose Create New S3 bucket. Choose Next.
    5. In the configuration, change the Elasticsearch Buffer size to 1 MB and the Buffer interval to 60s. Use the default settings for all other fields. This shortens the time for the data to reach the ES cluster.
    6. Under IAM Role, choose Create/Update existing IAM role.
      The best practice is to create a new role every time. Otherwise, the console keeps adding policy documents to the same role. Eventually the size of the attached policies causes IAM to reject the role, and it fails in a non-obvious way: the console simply stops functioning.
    7. Choose Next to move to the Review page.
  6. Review the configuration, and then choose Create Delivery Stream.
  7. Run the Python file for 1–2 minutes, and then press Ctrl+C to stop the execution. This loads some data into the stream for you to visualize in the next step.
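Note: the producer above is written for Python 2 and the legacy boto library. As a rough modern equivalent, here is the same loop sketched with Python 3 and boto3, assuming the same RawStreamData stream and us-east-1 Region:

import json
import random
import boto3

# Assumes the stream name and Region used earlier in this walkthrough.
kinesis = boto3.client('kinesis', region_name='us-east-1')

def get_data(iot_name, low_val, high_val):
    return {'iotName': iot_name, 'iotValue': random.randint(low_val, high_val)}

while True:
    # Roughly 1% of records fall into the anomaly range (100-120).
    anomaly = random.random() < 0.01
    data = get_data('DemoSensor', 100, 120) if anomaly else get_data('DemoSensor', 10, 20)
    kinesis.put_record(StreamName='RawStreamData',
                       Data=json.dumps(data),
                       PartitionKey='DemoSensor')
    print(('*** anomaly *** ' if anomaly else '') + json.dumps(data))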

Analyzing the data

Now it’s time to analyze the IoT streaming data using Amazon Kinesis Analytics.

  1. Open the Amazon Kinesis Analytics console and create a new application. Give the application a name, and then choose Create Application.
  2. On the next screen, choose Connect to a source. Choose the raw incoming data stream that you created earlier. (Note the stream name Source_SQL_STREAM_001 because you will need it later.)
  3. Use the default settings for everything else. When the schema discovery process is complete, it displays a success message with the formatted stream sample in a table as shown in the following screenshot. Review the data, and then choose Save and continue.
  4. Next, choose Go to SQL editor. When prompted, choose Yes, start application.
  5. Copy the following SQL code and paste it into the SQL editor window.
    CREATE OR REPLACE STREAM "TEMP_STREAM" (
       "iotName"        varchar (40),
       "iotValue"   integer,
       "ANOMALY_SCORE"  DOUBLE);
    -- Creates an output stream and defines a schema
    CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
       "iotName"       varchar(40),
       "iotValue"       integer,
       "ANOMALY_SCORE"  DOUBLE,
       "created" TimeStamp);
     
    -- Compute an anomaly score for each record in the source stream
    -- using Random Cut Forest
    CREATE OR REPLACE PUMP "STREAM_PUMP_1" AS INSERT INTO "TEMP_STREAM"
    SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE FROM
      TABLE(RANDOM_CUT_FOREST(
        CURSOR(SELECT STREAM * FROM "SOURCE_SQL_STREAM_001")
      )
    );
    
    -- Sort records by descending anomaly score, insert into output stream
    CREATE OR REPLACE PUMP "OUTPUT_PUMP" AS INSERT INTO "DESTINATION_SQL_STREAM"
    SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE, ROWTIME FROM "TEMP_STREAM"
    ORDER BY FLOOR("TEMP_STREAM".ROWTIME TO SECOND), ANOMALY_SCORE DESC;

 

  6. Choose Save and run SQL.
    As the application is running, it displays the results as stream data arrives. If you don’t see any data coming in, run the Python script again to generate some fresh data. When there is data, it appears in a grid as shown in the following screenshot.

    Note that you are selecting data from the source stream name Source_SQL_STREAM_001 that you created previously. Also note the ANOMALY_SCORE column. This is the value that the RANDOM_CUT_FOREST function calculates based on the temperature ranges provided by the Python script. Higher (anomaly) temperature ranges have a higher score.

    Looking at the SQL code, note that the first two blocks of code create two new streams to store temporary data and the final result. The third block of code analyzes the raw source data (STREAM_PUMP_1) using the RANDOM_CUT_FOREST function. It calculates an anomaly score (ANOMALY_SCORE) and inserts it into the TEMP_STREAM stream. The final code block loads the result stored in TEMP_STREAM into DESTINATION_SQL_STREAM.
  7. Choose Exit (done editing) next to the Save and run SQL button to return to the application configuration page.

Load processed data into the Kinesis Firehose delivery stream

Now, you can export the result from DESTINATION_SQL_STREAM into the Amazon Kinesis Firehose stream that you created previously.

  1. On the application configuration page, choose Connect to a destination.
  2. Choose the stream name that you created earlier, and use the default settings for everything else. Then choose Save and Continue.
  3. On the application configuration page, choose Exit to Kinesis Analytics applications to return to the Amazon Kinesis Analytics console.
  4. Run the Python script again for 4–5 minutes to generate enough data to flow through Amazon Kinesis Streams, Kinesis Analytics, Kinesis Firehose, and finally into the Amazon ES domain.
  5. Open the Kinesis Firehose console, choose the stream, and then choose the Monitoring tab.
  6. As the processed data flows into Kinesis Firehose and Amazon ES, the metrics appear on the Delivery Stream metrics page. Keep in mind that the metrics page takes a few minutes to refresh with the latest data.
  7. Open the Amazon Elasticsearch Service dashboard in the AWS Management Console. The count in the Searchable documents column increases as shown in the following screenshot. In addition, the domain shows a cluster health of Yellow. This is because, by default, it needs two instances to deploy redundant copies of the index. To fix this, you can deploy two instances instead of one.

Visualize the data using Kibana

Now it’s time to launch Kibana and visualize the data.

  1. Use the ES domain link to go to the cluster detail page, and then choose the Kibana link as shown in the following screenshot.

    If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this blog post to learn how to work with a proxy.
  2. In the Kibana dashboard, choose the Discover tab to perform a query.
  3. You can also visualize the data using the different types of charts offered by Kibana. For example, by going to the Visualize tab, you can quickly create a split bar chart that aggregates by ANOMALY_SCORE per minute.


Conclusion

In this post, you learned how to use Amazon Kinesis to collect, process, and analyze real-time streaming data, and then export the results to Amazon ES for analysis and visualization with Kibana. If you have comments about this post, add them to the “Comments” section below. If you have questions or issues with implementing this solution, please open a new thread on the Amazon Kinesis or Amazon ES discussion forums.


Next Steps

Take your skills to the next level. Learn real-time clickstream anomaly detection with Amazon Kinesis Analytics.

 


About the Author

Tristan Li is a Solutions Architect with Amazon Web Services. He works with enterprise customers in the US, helping them adopt cloud technology to build scalable and secure solutions on AWS.
VästtraPi: your personal bus stop schedule monitor

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/public-transport-vasttrapi/

I get impatient quickly when I’m looking up information on my phone. There’s just something about it that makes me jittery – especially when the information is time-sensitive, like timetables for public transport. If you’re like me, then Dimitris Platis’s newest build is for you. He has created the VästtraPi, a Pi-powered departure time screen for your home!


Never miss the bus again with VästtraPi

Let me set the scene: it’s a weekday morning, and you’ve finally woken up enough to think about taking the bus to work. How much time do you have to catch it, though? You pick up your phone, unlock it, choose the right app, wait for it to update – and realise this took so much time that you’ll probably miss the next bus! Grrrrrr!

Running after a streetcar

Never again!

Now picture this: instead of using your phone, you can glance at a personalized real-time bus schedule monitor while sipping your tea at breakfast.

Paul Rudd is fairly impressed

That would be pretty neat, wouldn’t it?

Such a device is exactly what Dimitris has created with the VästtraPi, and he has provided instructions so you can make your own. One less stress factor for your morning commute!

Stephen Colbert and Jon Stewart are very impressed.

I agree with Stephen and Jon.

Setting up the VästtraPi

The main pieces of hardware making up the VästtraPi are a Raspberry Pi Zero W, an LCD screen, and a power control board designed by Dimitris, which switches the device on and off. He explains where to buy the board’s components, as well as all the other parts of the build, and how to put them together. He’s also 3D-printed a simple case.

On the software side, a Python script accesses the API provided by Dimitris’s local public transportation company, Västtrafik, and repeatedly fetches information about his favourite bus stop. It displays the information using neat graphics, generated with the help of Tkinter, the standard GUI package for Python. The device is set up so that pressing the ‘on’ button starts up the Pi. The script then runs automatically for ten minutes before safely shutting everything down. Very economical!
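To give you a flavour of what such a script involves, here’s a minimal sketch of the fetch-and-display loop. The API URL and JSON fields are placeholders rather than Västtrafik’s real API, which requires authentication and has its own response schema:

import tkinter as tk
import requests

# Placeholder endpoint; your local provider's API will differ.
API_URL = 'https://example.com/departures?stop=my-stop'

def fetch_departures():
    # Returns a list of (line, destination, minutes) tuples.
    data = requests.get(API_URL, timeout=5).json()
    return [(d['line'], d['destination'], d['minutes']) for d in data]

def refresh():
    label.config(text='\n'.join(
        '{:>3}  {:<20} {:>3} min'.format(line, dest, mins)
        for line, dest, mins in fetch_departures()))
    # Re-fetch every 30 seconds while the window is open.
    root.after(30000, refresh)

root = tk.Tk()
root.title('Departures')
label = tk.Label(root, font=('monospace', 18), justify='left')
label.pack(padx=20, pady=20)
refresh()
root.mainloop()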

Dimitris has even foreseen what you’re likely to be thinking right now:

So, is this faster than the mobile app solution? Yes and no. The Raspberry Pi Zero W needs around 30 seconds to boot up and display the GUI. Without any optimizations it is naturally slower than my phone. VästtraPi’s biggest advantage is that it allows me to multitask while it is loading.

Build your own live bus schedule monitor

All the schematics and code are available via Dimitris’s write-up. He says that, for the moment, “the bus station, selected platform and bus line destinations that are displayed are hard-coded” in his script, but that it would be easy to amend for your own purposes. Of course, when recreating this build, you’ll want to use your own local public transport provider’s API, so some tweaking of his code will be required anyway.

What do you think – will this improve your morning routine? Are you up to the challenge of adapting it? Or do you envision modifying the build to display other live information? Let us know how you get on in the comments.

The post VästtraPi: your personal bus stop schedule monitor appeared first on Raspberry Pi.

New Information in the AWS IAM Console Helps You Follow IAM Best Practices

Post Syndicated from Rob Moncur original https://aws.amazon.com/blogs/security/newly-updated-features-in-the-aws-iam-console-help-you-adhere-to-iam-best-practices/

Today, we added new information to the Users section of the AWS Identity and Access Management (IAM) console to make it easier for you to follow IAM best practices. With this new information, you can more easily monitor users’ activity in your AWS account and identify access keys and passwords that you should rotate regularly. You can also better audit users’ MFA device usage and keep track of their group memberships. In this post, I show how you can use this new information to help you follow IAM best practices.

Monitor activity in your AWS account

The IAM best practice, monitor activity in your AWS account, encourages you to monitor user activity in your AWS account by using services such as AWS CloudTrail and AWS Config. In addition to monitoring usage in your AWS account, you should be aware of inactive users so that you can remove them from your account. By only retaining necessary users, you can help maintain the security of your AWS account.

To help you find users that are inactive, we added three new columns to the IAM user table: Last activity, Console last sign-in, and Access key last used.
Screenshot showing three new columns in the IAM user table

  1. Last activity – This column tells you how long it has been since the user has either signed in to the AWS Management Console or accessed AWS programmatically with their access keys. Use this column to find users who might be inactive, and consider removing them from your AWS account.
  2. Console last sign-in – This column displays the time since the user’s most recent console sign-in. Consider removing passwords from users who are not signing in to the console.
  3. Access key last used – This column displays the time since a user last used access keys. Use this column to find any access keys that are not being used, and deactivate or remove them.
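If you prefer to check the same signals programmatically, they are also available through the IAM credential report. Here is a minimal boto3 sketch (the report is a CSV with one row per user):

import csv
import io
import time
import boto3

iam = boto3.client('iam')

# Request the report, then poll until generation is complete.
while iam.generate_credential_report()['State'] != 'COMPLETE':
    time.sleep(2)

report = iam.get_credential_report()['Content'].decode('utf-8')
for row in csv.DictReader(io.StringIO(report)):
    # These fields show 'N/A' or similar when the credential has never been used.
    print(row['user'], row['password_last_used'], row['access_key_1_last_used_date'])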

Rotate credentials regularly

The IAM best practice, rotate credentials regularly, recommends that all users in your AWS account change passwords and access keys regularly. With this practice, if a password or access key is compromised without your knowledge, you can limit how long the credentials can be used to access your resources. To help your management efforts, we added three new columns to the IAM user table: Access key age, Password age, and Access key ID.

Screenshot showing three new columns in the IAM user table

  1. Access key age – This column shows how many days it has been since the oldest active access key was created for a user. With this information, you can audit access keys easily across all your users and identify the access keys that may need to be rotated.

Based on the number of days since the access key has been rotated, a green, yellow, or red icon is displayed. To see the corresponding time frame for each icon, pause your mouse pointer on the Access key age column heading to see the tooltip, as shown in the following screenshot.

Icons showing days since the oldest active access key was created

  2. Password age – This column shows the number of days since a user last changed their password. With this information, you can audit password rotation and identify users who have not changed their password recently. The easiest way to make sure that your users are rotating their password often is to establish an account password policy that requires users to change their password after a specified time period.
  3. Access key ID – This column displays the access key IDs for users and the current status (Active/Inactive) of those access key IDs. This column makes it easier for you to locate and see the state of access keys for each user, which is useful for auditing. To find a specific access key ID, use the search box above the table.
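Rotation itself can be scripted as well. Here is a rough sketch of the usual create-new, deactivate-old, delete-old sequence with boto3 (the user name and old access key ID are placeholders):

import boto3

iam = boto3.client('iam')
user = 'example-user'                  # placeholder user name
old_key_id = 'AKIAEXAMPLEOLDKEY'       # placeholder for the key being retired

# 1. Create a second key and distribute it to the application.
new_key = iam.create_access_key(UserName=user)['AccessKey']
print('New key:', new_key['AccessKeyId'])

# 2. Once the application has switched over, deactivate the old key...
iam.update_access_key(UserName=user, AccessKeyId=old_key_id, Status='Inactive')

# 3. ...and delete it after confirming nothing still depends on it.
iam.delete_access_key(UserName=user, AccessKeyId=old_key_id)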

Enable MFA for privileged users

Another IAM best practice is to enable multi-factor authentication (MFA) for privileged IAM users. With MFA, users have a device that generates a unique authentication code (a one-time password [OTP]). Users must provide both their normal credentials (such as their user name and password) and the OTP when signing in.

To help you see if MFA has been enabled for your users, we’ve improved the MFA column to show you if MFA is enabled and which type of MFA (hardware, virtual, or SMS) is enabled for each user, where applicable.

Screenshot showing the improved "MFA" column
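To audit MFA coverage in bulk, a short boto3 sketch like this one can flag users who have no MFA device (illustrative only):

import boto3

iam = boto3.client('iam')

# Page through all users and report any with no MFA device attached.
for page in iam.get_paginator('list_users').paginate():
    for user in page['Users']:
        if not iam.list_mfa_devices(UserName=user['UserName'])['MFADevices']:
            print('No MFA:', user['UserName'])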

Use groups to assign permissions to IAM users

Instead of defining permissions for individual IAM users, it’s usually more convenient to create groups that relate to job functions (such as administrators, developers, and accountants), define the relevant permissions for each group, and then assign IAM users to those groups. All the users in an IAM group inherit the permissions assigned to the group. This way, if you need to modify permissions, you can make the change once for everyone in a group instead of making the change one time for each user. As people move around in your company, you can change the group membership of the IAM user.

To better understand which groups your users belong to, we’ve made the following updates:

  1. Groups – This column now lists the groups of which a user is a member. This information makes it easier to understand and compare multiple users’ permissions at once.
  2. Group count – This column shows the number of groups to which each user belongs.

Screenshot showing the updated "Groups" and "Group count" columns
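The same structure is easy to set up programmatically. A sketch with placeholder names (PowerUserAccess is a real AWS managed policy, used here purely as an example):

import boto3

iam = boto3.client('iam')

# Create a job-function group, grant it permissions, and add a user to it.
iam.create_group(GroupName='developers')
iam.attach_group_policy(
    GroupName='developers',
    PolicyArn='arn:aws:iam::aws:policy/PowerUserAccess',
)
iam.add_user_to_group(GroupName='developers', UserName='example-user')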

Customize your view

Choosing which columns you see in the User table is easy to do. When you click the button with the gear icon in the upper right corner of the table, you can choose the columns you want to see, as shown in the following screenshots.

Screenshot showing gear icon  Screenshot of "Manage columns" dialog box

Conclusion

We made these improvements to the Users section of the IAM console to make it easier for you to follow IAM best practices in your AWS account. Following these best practices can help you improve the security of your AWS resources and make your account easier to manage.

If you have comments about this post, submit them in the “Comments” section below. If you have questions or suggestions, please start a new thread on the IAM forum.

– Rob

Monitoring HiveMQ with InfluxDB and Grafana

Post Syndicated from The HiveMQ Team original http://www.hivemq.com/blog/monitoring-hivemq-influxdb-grafana


You need to monitor your system

System monitoring is an essential part of any production software deployment. Some people consider it as critical as security, and it deserves the same attention. Historically, effective monitoring has been undermined by a lack of cohesive tools and by the wrong mindset, both of which can lead to a false sense of security that is important not to fall victim to. At the end of this blog post we will provide you with a standardized dashboard, including metrics we believe to be useful for live monitoring MQTT brokers. This by no means implies that these are all the metrics you need to monitor, or that we could possibly know what’s crucial to your use case and deployment.

In order to provide you with the opportunity to implement cohesive monitoring tools, the HiveMQ core distribution comes with the JVM Metrics Plugin and the JMX Plugin. The JVM Metrics Plugin adds crucial JVM metrics to the existing HiveMQ metrics, and the JMX Plugin enables JMX monitoring for any JMX monitoring tool, such as JConsole.

Real-time monitoring with tools like JConsole is certainly better than nothing, but it has its own disadvantages. HiveMQ is often deployed in a container environment, so direct access to the HiveMQ process might not be possible. Beyond that, a time series monitoring solution also doubles as a great debugging tool when you are trying to find the root cause of a system crash or similar.

The AWS Cloudwatch Plugin, Graphite Plugin and InfluxDB Plugin are free of charge and ready to use plugins provided by HiveMQ to enable time series monitoring.

Our recommendation

We routinely get asked about recommendations for monitoring tools. At the end of the day this is down to preference and ultimately your decision. In the past we have had good experiences with the combination of Telegraf, InfluxDB and a Grafana dashboard.

Telegraf can be used for gathering system metrics and writing them to the InfluxDB. HiveMQ is able to write its own metrics to the InfluxDB as well and a Grafana dashboard is a good solution for visualizing these gathered metrics.

Example Dashboard

Example Dashboard

Please note that there are countless other viable monitoring options available.

Installation and configuration

The first step to achieving our desired monitoring setup is installing and starting InfluxDB. InfluxDB works out of the box without adding additional configuration.
When InfluxDB is installed and running, use the command line tool to create a database called ‘hivemq’.

$ influx
Connected to http://localhost:8086 version 1.3.0
InfluxDB shell version: v1.2.3
> CREATE DATABASE hivemq

Attention: InfluxDB does not provide authentication by default, which could open your metrics up to a third party when running the InfluxDB on an external server. Make sure you cover this potential security issue.

InfluxDB data grows rapidly, which can and will consume large amounts of disk space after your InfluxDB has been running for some time. To deal with this challenge, InfluxDB offers so-called retention policies. In our opinion it is sufficient to retain your InfluxDB data for two weeks. The syntax for creating this retention policy looks like this:

$ influx
Connected to http://localhost:8086 version 1.3.0
InfluxDB shell version: v1.2.3
> CREATE RETENTION POLICY "two_weeks_only" ON "hivemq" DURATION 2w REPLICATION 1

Which, if any, retention policy is best for your individual use case has to be decided by you.

The second step is downloading the InfluxDB HiveMQ Plugin. For this demonstration all the services will be running locally, so we can use the influxdb.properties file that is included in the HiveMQ Plugin without any adjustments. Bear in mind that you need to change the IP address when running an external InfluxDB.

When running HiveMQ in a cluster, it is important that you use the exact same influxdb.properties on each node, with the exception of this property:

tags:host=hivemq1

This property should be set individually for each HiveMQ node in the cluster for better transparency.

This plugin will now gather all the available HiveMQ metrics (given the JMX Plugin is also running) and write them to the configured InfluxDB.

The third step is installing Telegraf on each HiveMQ cluster node.

Now a telegraf.conf needs to be configured, telling Telegraf which metrics it should gather and eventually write to an InfluxDB. The default telegraf.conf is very inflated and full of comments and options that are not needed for HiveMQ monitoring. The config we propose looks like this:

[tags]
node = "example-node"

[agent]
interval = "5s"

# OUTPUTS
[outputs]
[outputs.influxdb]
url = "http://localhost:8086"
database = "hivemq" # required.
precision = "s"

# PLUGINS
[cpu]
percpu = true
totalcpu = true

[system]

[disk]

[mem]

[diskio]

[net]

[kernel]

[processes]

This configuration provides metrics for:

  • CPU: CPU Usage divided into spaces
  • System: Load, Uptime
  • Disk: Disk Usage, Free Space, INodes used
  • DiskIO: IO Time, Operations
  • Memory: RAM Used, Buffered, Cached
  • Kernel: Linux specific information like context switching
  • Processes: Systemwide process information

Note that some modules like Kernel may not be available on non-Linux systems.

Make sure to change the url when not using a local InfluxDB.

This configuration will gather the CPU’s percentage and total usage every five seconds. See this page for other possible configurations of the system input.

At this point the terminal window you are running influxd in should be showing something like this:

[httpd] ::1 - - [20/Jun/2017:13:36:46 +0200] "POST /write?db=hivemq HTTP/1.1" 204 0 "-" "-" bdad5fd9-55ac-11e7-8550-000000000000 9743

This shows a successful write of the Telegraf metrics to the InfluxDB.
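To double-check from code rather than from the influxd log, here is a quick sketch using the influxdb Python client, assuming the local, unauthenticated setup described above:

from influxdb import InfluxDBClient  # pip install influxdb

# Connect to the local InfluxDB and the 'hivemq' database created earlier.
client = InfluxDBClient(host='localhost', port=8086, database='hivemq')

# List a few of the measurements Telegraf and HiveMQ have written so far.
for point in client.query('SHOW MEASUREMENTS LIMIT 10').get_points():
    print(point['name'])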

The next step is installing and starting Grafana.

Grafana works out of the box and can be reached via localhost:3000.

The next step is configuring our InfluxDB as Grafana’s data source.

Step 1: Add Data Source

Step 1: Add Data Source

Step 2: Configure InfluxDB

Step 2: Configure InfluxDB

Now we need a dashboard. As this question comes up quite often, we decided to provide a dashboard template that displays some useful metrics for most MQTT deployments and should give you a good starting point for building your own dashboard tailored to your use case at hand. You can download the template here.
The JSON file inside the zip can be imported to Grafana.

Step 3: Import Dashboard

Step 3: Import Dashboard

That’s it. We now have a working dashboard displaying metrics whose monitoring has proven vital in many MQTT deployments.

Disclaimer: This is one possibility and a good starting point for monitoring your MQTT use case. Naturally, the requirements of your individual case may vary. We suggest reading Grafana’s getting started guide and finding what works best for you and your deployment.

Cybercrime Officials Shutdown Large eBook Portal, Three Arrested

Post Syndicated from Andy original https://torrentfreak.com/cybercrime-officials-shutdown-large-ebook-portal-three-arrested-170626/

Back in February 2015, German anti-piracy outfit GVU filed a complaint against the operators of large eBook portal Lul.to.

Targeted mainly at the German audience, the site carried around 160,000 eBooks and 28,000 audiobooks, plus newspapers and periodicals. Its motto was “Read and Listen,” and it claimed to be both the largest German eBook portal and the largest DRM-free platform in the world.

Unlike most file-sharing sites, Lul.to charged its roughly 30,000 customers a small fee to access content, around $0.23 per download. However, all that came to an end last week when authorities moved to shut the platform down.

According to the General Prosecutor’s Office, searches in several locations led to the discovery of around 55,000 euros in bitcoin, 100,000 euros in bank deposits, 10,000 euros in cash, plus a “high-quality” motorcycle.

As is often the case following significant action, the site has been completely taken down and now displays the following seizure notice.

Lul.to seized (translated from German)

Authorities report that three people were arrested and are being detained while investigations continue.

It is not yet clear how many times the site’s books were downloaded by users but investigators believe that the retail value of the content offered on the site was around 392,000 euros. By volume, investigators seized more than 11 terabytes of data.

The German Publishers & Booksellers Association welcomed the shutdown of the platform.

“Intervening against lul.to is an important success in the fight against Internet piracy. By blocking one of the largest illegal providers for e-books and audiobooks, many publishers and retailers can breathe,” said CEO Alexander Skipis.

“Piracy is not an excusable offense, it’s the theft of intellectual property, which is the basis for the work of authors, publishers, and bookshops. Portals like lul.to harm the media market massively. The success of the investigation is another example of the fact that such illegal models ultimately can not hold up.”

Last week, in a separate case in Denmark, three men aged between 26 and 71 were handed suspended sentences for offering subscription access to around 198 pirated textbooks.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

CoderDojo Coolest Projects 2017

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/coderdojo-coolest-projects-2017/

When I heard we were merging with CoderDojo, I was delighted. CoderDojo is a wonderful organisation with a spectacular community, and it’s going to be great to join forces with the team and work towards our common goal: making a difference to the lives of young people by making technology accessible to them.

You may remember that last year Philip and I went along to Coolest Projects, CoderDojo’s annual event at which their global community showcase their best makes. It was awesome! This year a whole bunch of us from the Raspberry Pi Foundation attended Coolest Projects with our new Irish colleagues, and as expected, the projects on show were as cool as can be.

Coolest Projects 2017 attendee

Crowd at Coolest Projects 2017

This year’s coolest projects!

Young maker Benjamin demoed his brilliant RGB LED table tennis ball display for us, and showed off his project tutorial website codemakerbuddy.com, which he built with Python and Flask. [Click on any of the images to enlarge them.]

Coolest Projects 2017 LED ping-pong ball display
Coolest Projects 2017 Benjamin and Oly

Next up, Aimee showed us a recipes app she’d made with the MIT App Inventor. It was a really impressive and well thought-out project.

Coolest Projects 2017 Aimee's cook book
Coolest Projects 2017 Aimee's setup

This very successful OpenCV face detection program with hardware installed in a teddy bear was great as well:

Coolest Projects 2017 face detection bear
Coolest Projects 2017 face detection interface
Coolest Projects 2017 face detection database

Helen’s and Oly’s favourite project involved…live bees!

Coolest Projects 2017 live bees

BEEEEEEEEEEES!

Its creator, 12-year-old Amy, said she wanted to do something to help the Earth. Her project uses various sensors to record data on the bee population in the hive. An adjacent monitor displays the data in a web interface:

Coolest Projects 2017 Aimee's bees

Coolest robots

I enjoyed seeing lots of GPIO Zero projects out in the wild, including this robotic lawnmower made by Kevin and Zach:

Raspberry Pi Lawnmower

Kevin and Zach’s Raspberry Pi lawnmower project with Python and GPIO Zero, showed at CoderDojo Coolest Projects 2017

Philip’s favourite make was a Pi-powered robot you can control with your mind! According to the maker, Laura, it worked really well with Philip because he has no hair.

Philip Colligan on Twitter

This is extraordinary. Laura from @CoderDojo Romania has programmed a mind controlled robot using @Raspberry_Pi @coolestprojects

And here are some pictures of even more cool robots we saw:

Coolest Projects 2017 coolest robot no.1
Coolest Projects 2017 coolest robot no.2
Coolest Projects 2017 coolest robot no.3

Games, toys, activities

Oly and I were massively impressed with the work of Mogamad, Daniel, and Basheerah, who programmed a (borrowed) Amazon Echo to make a voice-controlled text-adventure game using Java and the Alexa API. They’ve inspired me to try something similar using the AIY projects kit and adventurelib!

Coolest Projects 2017 Mogamad, Daniel, Basheerah, Oly
Coolest Projects 2017 Alexa text-based game

Christopher Hill did a brilliant job with his Home Alone LEGO house. He used sensors to trigger lights and sounds to make it look like someone’s at home, like in the film. I should have taken a video – seeing it in action was great!

Coolest Projects 2017 Lego home alone house
Coolest Projects 2017 Lego home alone innards
Coolest Projects 2017 Lego home alone innards closeup

Meanwhile, the Northern Ireland Raspberry Jam group ran a DOTS board activity, which turned their area into a conductive paint hazard zone.

Coolest Projects 2017 NI Jam DOTS activity 1
Coolest Projects 2017 NI Jam DOTS activity 2
Coolest Projects 2017 NI Jam DOTS activity 3
Coolest Projects 2017 NI Jam DOTS activity 4
Coolest Projects 2017 NI Jam DOTS activity 5
Coolest Projects 2017 NI Jam DOTS activity 6

Creativity and ingenuity

We really enjoyed seeing so many young people collaborating, experimenting, and taking full advantage of the opportunity to make real projects. And we loved how huge the range of technologies in use was: people employed all manner of hardware and software to bring their ideas to life.

Philip Colligan on Twitter

Wow! Look at that room full of awesome young people. @coolestprojects #coolestprojects @CoderDojo

Congratulations to the Coolest Projects 2017 prize winners, and to all participants. Here are some of the teams that won in the different categories:

Coolest Projects 2017 winning team 1
Coolest Projects 2017 winning team 2
Coolest Projects 2017 winning team 3

Take a look at the gallery of all winners over on Flickr.

The wow factor

Raspberry Pi co-founder and Foundation trustee Pete Lomas came along to the event as well. Here’s what he had to say:

It’s hard to describe the scale of the event, and photos just don’t do it justice. The first thing that hit me was the sheer excitement of the CoderDojo ninjas [the children attending Dojos]. Everyone was setting up for their time with the project judges, and their pure delight at being able to show off their creations was evident in both halls. Time and time again I saw the ninjas apply their creativity to help save the planet or make someone’s life better, and it’s truly exciting that we are going to help that continue and expand.

Even after 8 hours, enthusiasm wasn’t flagging – the awards ceremony was just brilliant, with ninjas high-fiving the winners on the way to the stage. This speaks volumes about the ethos and vision of the CoderDojo founders, where everyone is a winner just by being part of a community of worldwide friends. It was a brilliant introduction, and if this weekend was anything to go by, our merger certainly is a marriage made in Heaven.

Join this awesome community!

If all this inspires you as much as it did us, consider looking for a CoderDojo near you – and sign up as a volunteer! There’s plenty of time for young people to build up skills and start working on a project for next year’s event. Check out coolestprojects.com for more information.

The post CoderDojo Coolest Projects 2017 appeared first on Raspberry Pi.