
timeShift(GrafanaBuzz, 1w) Issue 14

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/09/22/timeshiftgrafanabuzz-1w-issue-14/

Summer is officially in the rear-view mirror, but we at Grafana Labs are excited. Next week, the team will gather in Stockholm, Sweden, where we’ll be discussing Grafana 5.0 and GrafanaCon EU and setting other goals. If you’re attending Percona Live Europe 2017 in Dublin, be sure to catch Grafana developer Daniel Lee on Tuesday, September 26. He’ll be showing off the new MySQL data source and a sneak peek of Grafana 5.0.

And with that – we hope you enjoy this issue of TimeShift!


Latest Release

Grafana 4.5.2 is now available! It includes various fixes to the Graphite data source, HTTP API, and templating.

To see details on what’s been fixed in the newest version, please see the release notes.

Download Grafana 4.5.2 Now


From the Blogosphere

A Monitoring Solution for Docker Hosts, Containers and Containerized Services: Stefan was searching for an open source, self-hosted monitoring solution. With an ever-growing number of open source TSDBs, Stefan outlines why he chose Prometheus and provides a rundown of how he’s monitoring his Docker hosts, containers and services.

Real-time API Performance Monitoring with ES, Beats, Logstash and Grafana: As APIs become a centerpiece for businesses, monitoring API performance is extremely important. Hiren recently configured real time API response time monitoring for a project and shares his implementation plan and configurations.

Monitoring SSL Certificate Expiry in GCP and Kubernetes: This article discusses how to use Prometheus and Grafana to automatically monitor SSL certificates in use by load balancers across GCP projects.

Node.js Performance Monitoring with Prometheus: This is a good primer for monitoring in general. It discusses what monitoring is, important signals to know, instrumentation, and things to consider when selecting a monitoring tool.

DIY Dashboard with Grafana and MariaDB: Mark was interested in testing out the new beta MySQL support in Grafana, so he wrote a short article on how he is using Grafana with MariaDB.

Collecting Temperature Data with Raspberry Pi Computers: Many of us use monitoring for tracking mission-critical systems, but setting up environment monitoring can be a fun way to improve your programming skills as well.


GrafanaCon EU CFP is Open

Have a big idea to share? A shorter talk or a demo you’d like to show off? We’re looking for technical and non-technical talks of all sizes. The proposals are rolling in, but we are happy to save a speaking slot for you!

I’d Like to Speak at GrafanaCon


Grafana Plugins

There were a lot of plugin updates to highlight this week, many of which were due to changes in Grafana 4.5. It’s important to keep your plugins up to date, since bug fixes and new features are added frequently. We’ve made the process of installing and updating plugins simple. On an on-prem instance, use the grafana-cli tool; on Hosted Grafana, install and update with one click.

NEW PLUGIN

LinkSmart HDS Data Source – The LinkSmart Historical Datastore is a new Grafana data source plugin. LinkSmart is an open source IoT platform for developing IoT applications, which need to deal with large amounts of data produced by a growing number of sensors and other devices. The Historical Datastore is for storing, querying, and aggregating (time-series) sensor data.

Install Now

UPDATED PLUGIN

Simple JSON Data Source – This plugin received a bug fix for the query editor.

Update Now

UPDATED PLUGIN

Stagemonitor Elasticsearch App – Numerous small updates; the plugin version was bumped to match the Stagemonitor version number.

Update Now

UPDATED PLUGIN

Discrete Panel – Update to fix breaking change in Grafana 4.5.

Update Now

UPDATED PLUGIN

Status Dot Panel – Minor HTML Update in this version.

Update Now

UPDATED PLUGIN

Alarm Box Panel – This panel was updated to fix breaking changes in Grafana 4.5.

Update Now


This week’s MVC (Most Valuable Contributor)

Each week we highlight a contributor to Grafana or the surrounding ecosystem as a thank you for their participation in making open source software great.

Sven Klemm opened a PR for adding a new Postgres data source and has been very quick at implementing proposed changes. The Postgres data source is on our roadmap for Grafana 5.0 so this PR really helps. Thanks Sven!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Glad you’re finding Grafana useful! Curious about that annotation just before midnight 🙂

We Need Your Help

Last week we announced an experiment we were conducting, and need your help! Do you have a graph that you love because the data is beautiful or because the graph provides interesting information? Please get in touch. Tweet or send us an email with a screenshot, and we’ll tell you about this fun experiment.

I Want to Help


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


What do you think?

What would you like to see here? Submit a comment on this article below, or post something at our community forum. Help us make these weekly roundups better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

FRED-209 Nerf gun tank

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/nerf-gun-tank-fred-209/

David Pride, known to many of you as an active member of our maker community, has done it again! His FRED-209 build combines a Nerf gun, 3D printing, a Raspberry Pi Zero, and robotics to make one neat remotely controlled Nerf tank.

FRED-209 – 3D printed Raspberry Pi Nerf Tank

Uploaded by David Pride on 2017-09-17.

A Nerf gun for FRED-209

David says he worked on FRED-209 over the summer in order to have some fun with Nerf guns, which weren’t around when he was a kid. He purchased an Elite Stryfe model at a car boot sale, and took it apart to see what made it tick. Then he set about figuring out how to power it with motors and a servo.

Nerf Elite Stryfe components for the FRED-209 Nerf tank of David Pride

To control the motors, David used a ZeroBorg add-on board for the Pi Zero, and he set up a PlayStation 3 controller to pilot his tank. These components were also part of a robot that David entered into the Pi Wars competition, so he had already written code for them.

3D printing for FRED-209

During prototyping for his Nerf tank, which David named after ED-209 from RoboCop, he used lots of eBay loot and several 3D-printed parts. He used the free OpenSCAD software package to design the parts he wanted to print. If you’re a novice at 3D printing, you might find the printing advice he shares in the write-up on his blog very useful.

3D-printed lid of FRED-209 nerf gun tank by David Pride

David found the 3D printing of the 24cm-long lid of FRED-209 tricky

On eBay, David found some cool-looking chunky wheels, but these turned out to be too heavy for the motors. In the end, he decided to use a Rover 5 chassis, which changed the look of FRED-209 from ‘monster truck’ to ‘tank’.

FRED-209 Nerf tank by David Pride

Next step: teach it to use stairs

The final result looks awesome, and David’s video demonstrates that it shoots very accurately as well. A make like this might be a great defensive project for our new apocalypse-themed Pioneers challenge!

Taking FRED-209 further

David will be uploading code and STL files for FRED-209 soon, so keep an eye on his blog or Twitter for updates. He’s also bringing the Nerf tank to the Cotswold Raspberry Jam this weekend. If you’re attending the event, make sure you catch him and try FRED-209 out yourself.

Never one to rest on his laurels, David is already working on taking his build to the next level. He wants to include a web interface controller and a camera, and is working on implementing OpenCV to give the Nerf tank the ability to autonomously detect targets.

Pi Wars 2018

I have a feeling we might get to see an advanced version of David’s project at next year’s Pi Wars!

The 2018 Pi Wars have just been announced. They will take place on 21-22 April at the Cambridge Computer Laboratory, and you have until 3 October to apply to enter the competition. What are you waiting for? Get making! And as always, do share your robot builds with us via social media.

The post FRED-209 Nerf gun tank appeared first on Raspberry Pi.

SecureLogin For Java Web Applications

Post Syndicated from Bozho original https://techblog.bozho.net/securelogin-java-web-applications/

No, there is not a missing whitespace in the title. It’s not about any secure login, it’s about the SecureLogin protocol developed by Egor Homakov, a security consultant, who became famous for committing to master in the Rails project without having permission to do so.

The SecureLogin protocol is very interesting, as it does not rely on any central party (e.g. OAuth providers like Facebook and Twitter), thus avoiding all the pitfalls of OAuth (which Homakov has often criticized). It is not a password manager either. It is just a client-side software that performs a bit of crypto in order to prove to the server that it is indeed the right user. For that to work, two parts are key:

  • Using a master password to generate a private key. It uses a key-derivation function, which guarantees that the produced private key has sufficient entropy. That way, using the same master password and the same email, you will get the same private key every time you use the password, and therefore the same public key. And you are the only one who can prove this public key is yours, by signing a message with your private key (a short sketch follows this list).
  • Service providers (websites) identify you by your public key, storing it in the database when you register and then looking it up on each subsequent login.
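
To make the first bullet concrete, here is a minimal Python sketch of deterministic key derivation, assuming scrypt as the KDF and Ed25519 signing keys; the actual SecureLogin protocol defines its own KDF parameters and key format, so treat this purely as an illustration:

import hashlib

from nacl.signing import SigningKey  # pip install pynacl

def derive_keypair(master_password, email):
    # Illustrative KDF parameters only; SecureLogin specifies its own scheme.
    seed = hashlib.scrypt(master_password.encode(), salt=email.encode(),
                          n=2**14, r=8, p=1, dklen=32)
    signing_key = SigningKey(seed)               # deterministic private key
    return signing_key, signing_key.verify_key   # same inputs -> same key pair

The same master password and email always reproduce the same key pair, and signing a server-provided challenge with the private key proves ownership of the corresponding public key.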

The client-side part is performed ideally by a native client – a browser plugin (one is available for Chrome) or an OS-specific application (including mobile ones). That may sound tedious, but it’s actually quick and easy, and a one-time event (and is easier than password managers).

I have to admit – I like it, because I’ve been having a similar idea for a while. In my “biometric identification” presentation (where I discuss the pitfalls of using biometrics-only identification schemes), I proposed (slide 23) an identification scheme that uses biometrics (e.g. scanned with your phone) + a password to produce a private key (using a key-derivation function). And the biometric can easily be added to SecureLogin in the future.

It’s not all roses, of course, as one issue isn’t fully resolved yet – revocation. In case someone steals your master password (or you suspect it might be stolen), you may want to change it and notify all service providers of that change so that they can replace your old public key with a new one. That has two implications – first, you may not have a full list of sites that you registered on, and since you may have changed devices, or used multiple devices, there may be websites that never get to know about your password change. There are proposed solutions (points 3 and 4), but they are not intrinsic to the protocol and rely on centralized services. The second issue is – what if the attacker changes your password first? To prevent that, service providers should probably rely on email verification, which is neither part of the protocol, nor is encouraged by it. But you may have to do it anyway, as a safeguard.

Homakov has not only defined a protocol, but also provided implementations of the native clients, so that anyone can start using it. So I decided to add it to a project I’m currently working on (the login page is here). For that I needed a Java implementation of the server-side verification, and since no such implementation existed (only Ruby and Node.js are provided for now), I implemented it myself. So if you are going to use SecureLogin with a Java web application, you can use that instead of rolling your own. While implementing it, I hit a few minor issues that may lead to protocol changes, so I guess backward compatibility should also be somehow included in the protocol (through versioning).

So, what does the code look like? On the client side you have a button and a little JavaScript:

<!-- get the latest sdk.js from the GitHub repo of securelogin
   or include it from https://securelogin.pw/sdk.js -->
<script src="js/securelogin/sdk.js"></script>
....
<p class="slbutton" id="securelogin">&#9889; SecureLogin</p>
$("#securelogin").click(function() {
  SecureLogin(function(sltoken){
	// TODO: consider adding csrf protection as in the demo applications
        // Note - pass as request body, not as param, as the token relies 
        // on url-encoding which some frameworks mess with
	$.post('/app/user/securelogin', sltoken, function(result) {
            if(result == 'ok') {
		 window.location = "/app/";
            } else {
                 $.notify("Login failed, try again later", "error");
            }
	});
  });
  return false;
});

A single button can be used for both login and signup, or you can have a separate signup form, if it has to include additional details rather than just an email. Since I added SecureLogin in addition to my password-based login, I kept the two forms.

On the server, you simply do the following:

@RequestMapping(value = "/securelogin/register", method = RequestMethod.POST)
@ResponseBody
public String secureloginRegister(@RequestBody String token, HttpServletResponse response) {
    try {
        SecureLogin login = SecureLogin.verify(token, Options.create(websiteRootUrl));
        UserDetails details = userService.getUserDetailsByEmail(login.getEmail());
        if (details == null || !login.getRawPublicKey().equals(details.getSecureLoginPublicKey())) {
            return "failure";
        }
        // sets the proper cookies to the response
        TokenAuthenticationService.addAuthentication(response, login.getEmail(), secure);
        return "ok";
    } catch (SecureLoginVerificationException e) {
        return "failure";
    }
}

This is spring-mvc, but it can be any web framework. You can also incorporate that into a spring-security flow somehow. I’ve never liked spring-security’s complexity, so I did it manually. Also, instead of strings, you can return proper status codes. Note that I’m doing a lookup by email and only then checking the public key (as if it’s a password). You can do it the other way around if you have the proper index on the public key column.

I wouldn’t suggest having a SecureLogin-only system, as the project is still in an early stage and users may not be comfortable with it. But certainly adding it as an option is a good idea.

The post SecureLogin For Java Web Applications appeared first on Bozho's tech blog.

Greater Transparency into Actions AWS Services Perform on Your Behalf by Using AWS CloudTrail

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/get-greater-transparency-into-actions-aws-services-perform-on-your-behalf-by-using-aws-cloudtrail/

To make managing your AWS account easier, some AWS services perform actions on your behalf, including the creation and management of AWS resources. For example, AWS Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. To make these AWS actions more transparent, AWS adds an AWS Identity and Access Management (IAM) service-linked role to your account for each linked service you use. Service-linked roles let you view all actions an AWS service performs on your behalf by using AWS CloudTrail logs. This helps you monitor and audit the actions AWS services perform on your behalf. No additional actions are required from you, and you can continue using AWS services the way you do today.

To learn more about which AWS services use service-linked roles and log actions on your behalf to CloudTrail, see AWS Services That Work with IAM. Over time, more AWS services will support service-linked roles. For more information about service-linked roles, see Role Terms and Concepts.

In this blog post, I demonstrate how to view CloudTrail logs so that you can more easily monitor and audit AWS services performing actions on your behalf. First, I show how AWS creates a service-linked role in your account automatically when you configure an AWS service that supports service-linked roles. Next, I show how you can view the policies of a service-linked role that grants an AWS service permission to perform actions on your behalf. Finally, I use the configured AWS service to perform an action and show you how the action appears in your CloudTrail logs.

How AWS creates a service-linked role in your account automatically

I will use Amazon Lex as the AWS service that performs actions on your behalf for this post. You can use Amazon Lex to create chatbots that allow for highly engaging conversational experiences through voice and text. You also can use chatbots on mobile devices, web browsers, and popular chat platform channels such as Slack. Amazon Lex uses Amazon Polly on your behalf to synthesize speech that sounds like a human voice.

Amazon Lex uses two IAM service-linked roles:

  • AWSServiceRoleForLexBots — Amazon Lex uses this service-linked role to invoke Amazon Polly to synthesize speech responses for your chatbot.
  • AWSServiceRoleForLexChannels — Amazon Lex uses this service-linked role to post text to your chatbot when managing channels such as Slack.

You don’t need to create either of these roles manually. When you create your first chatbot using the Amazon Lex console, Amazon Lex creates the AWSServiceRoleForLexBots role for you. When you first associate a chatbot with a messaging channel, Amazon Lex creates the AWSServiceRoleForLexChannels role in your account.

1. Start configuring the AWS service that supports service-linked roles

Navigate to the Amazon Lex console, and choose Get Started to navigate to the Create your Lex bot page. For this example, I choose a sample chatbot called OrderFlowers. To learn how to create a custom chatbot, see Create a Custom Amazon Lex Bot.

Screenshot of making the choice to create an OrderFlowers chatbot

2. Complete the configuration for the AWS service

When you scroll down, you will see the settings for the OrderFlowers chatbot. Notice the field for the IAM role with the value, AWSServiceRoleForLexBots. This service-linked role is “Automatically created on your behalf.” After you have entered all details, choose Create to build your sample chatbot.

Screenshot of the automatically created service-linked role

AWS has created the AWSServiceRoleForLexBots service-linked role in your account. I will return to using the chatbot later in this post when I discuss how Amazon Lex performs actions on your behalf and how CloudTrail logs these actions. First, I will show how you can view the permissions for the AWSServiceRoleForLexBots service-linked role by using the IAM console.

How to view actions in the IAM console that AWS services perform on your behalf

When you configure an AWS service that supports service-linked roles, AWS creates a service-linked role in your account automatically. You can view the service-linked role by using the IAM console.

1. View the AWSServiceRoleForLexBots service-linked role on the IAM console

Go to the IAM console, and choose AWSServiceRoleForLexBots on the Roles page. You can confirm that this role is a service-linked role by viewing the Trusted entities column.

Screenshot of the service-linked role

2. View the trusted entities that can assume the AWSServiceRoleForLexBots service-linked role

Choose the Trust relationships tab on the AWSServiceRoleForLexBots role page. You can view the trusted entities that can assume the AWSServiceRoleForLexBots service-linked role to perform actions on your behalf. In this example, the trusted entity is lex.amazonaws.com.

Screenshot of the trusted entities that can assume the service-linked role

3. View the policy attached to the AWSServiceRoleForLexBots service-linked role

Choose AmazonLexBotPolicy on the Permissions tab to view the policy attached to the AWSServiceRoleForLexBots service-linked role. You can view the policy summary to see that AmazonLexBotPolicy grants permission to Amazon Lex to use Amazon Polly.

Screenshot showing that AmazonLexBotPolicy grants permission to Amazon Lex to use Amazon Polly

4. View the actions that the service-linked role grants permissions to use

Choose Polly to view the action, SynthesizeSpeech, that the AmazonLexBotPolicy grants permission to Amazon Lex to perform on your behalf. Amazon Lex uses this permission to synthesize speech responses for your chatbot. I show later in this post how you can monitor this SynthesizeSpeech action in your CloudTrail logs.

Screenshot showing the action, SynthesizeSpeech, that the AmazonLexBotPolicy grants permission to Amazon Lex to perform on your behalf
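
If you prefer the API over the console, the same details can be pulled with boto3; this is a minimal sketch using the role and policy names shown above:

import boto3

iam = boto3.client("iam")

# The trust policy shows which service principal can assume the role (lex.amazonaws.com).
role = iam.get_role(RoleName="AWSServiceRoleForLexBots")
print(role["Role"]["AssumeRolePolicyDocument"])

# List the managed policies attached to the service-linked role (e.g., AmazonLexBotPolicy).
for policy in iam.list_attached_role_policies(RoleName="AWSServiceRoleForLexBots")["AttachedPolicies"]:
    print(policy["PolicyName"], policy["PolicyArn"])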

Now that I know the trusted entity and the policy attached to the service-linked role, let’s go back to the chatbot I created earlier and see how CloudTrail logs the actions that Amazon Lex performs on my behalf.

How to use CloudTrail to view actions that AWS services perform on your behalf

As discussed already, I created an OrderFlowers chatbot on the Amazon Lex console. I will use the chatbot and display how the AWSServiceRoleForLexBots service-linked role helps me track actions in CloudTrail. First, though, I must have an active CloudTrail trail created that stores the logs in an Amazon S3 bucket. I will use a trail called TestTrail and an S3 bucket called account-ids-slr.

1. Use the Amazon Lex chatbot via the Amazon Lex console

In Step 2 in the first section of this post, when I chose Create, Amazon Lex built the OrderFlowers chatbot. After the chatbot was built, the right pane showed that a Test Bot was created. Now, I choose the microphone symbol in the right pane and provide voice input to test the OrderFlowers chatbot. In this example, I tell the chatbot, “I would like to order some flowers.” The bot replies to me by asking, “What type of flowers would you like to order?”

Screenshot of voice input to test the OrderFlowers chatbot

When the chatbot replies using voice, Amazon Lex uses Amazon Polly to synthesize speech from text to voice. Amazon Lex assumes the AWSServiceRoleForLexBots service-linked role to perform the SynthesizeSpeech action.

2. Check CloudTrail to view actions performed on your behalf

Now that I have created the chatbot, let’s see which actions were logged in CloudTrail. Choose CloudTrail from the Services drop-down menu to reach the CloudTrail console. Choose Trails and choose the S3 bucket in which you are storing your CloudTrail logs.

Screenshot of the TestTrail trail

In the S3 bucket, you will find log entries for the SynthesizeSpeech event. This means that CloudTrail logged the action when Amazon Lex assumed the AWSServiceRoleForLexBots service-linked role to invoke Amazon Polly to synthesize speech responses for your chatbot. You can monitor and audit this invocation, and it provides you with transparency into Amazon Polly’s SynthesizeSpeech action that Amazon Lex invoked on your behalf. The applicable CloudTrail log section follows and I have emphasized the key lines.

{  
         "eventVersion":"1.05",
         "userIdentity":{  
           "type":"AssumedRole",
            "principalId":"{principal-id}:OrderFlowers",
            "arn":"arn:aws:sts::{account-id}:assumed-role/AWSServiceRoleForLexBots/OrderFlowers",
            "accountId":"{account-id}",
            "accessKeyId":"{access-key-id}",
            "sessionContext":{  
               "attributes":{  
                  "mfaAuthenticated":"false",
                  "creationDate":"2017-09-17T17:30:05Z"
               },
               "sessionIssuer":{  
                  "type":"Role",
                  "principalId":"{principal-id}",
                  "arn":"arn:aws:iam:: {account-id}:role/aws-service-role/lex.amazonaws.com/AWSServiceRoleForLexBots",
                  "accountId":"{account-id",
                  "userName":"AWSServiceRoleForLexBots"
               }
            },
            "invokedBy":"lex.amazonaws.com"
         },
         "eventTime":"2017-09-17T17:30:05Z",
         "eventSource":"polly.amazonaws.com",
         "eventName":"SynthesizeSpeech",
         "awsRegion":"us-east-1",
         "sourceIPAddress":"lex.amazonaws.com",
         "userAgent":"lex.amazonaws.com",
         "requestParameters":{  
            "outputFormat":"mp3",
            "textType":"text",
            "voiceId":"Salli",
            "text":"**********"
         },
         "responseElements":{  
            "requestCharacters":45,
            "contentType":"audio/mpeg"
         },
         "requestID":"{request-id}",
         "eventID":"{event-id}",
         "eventType":"AwsApiCall",
         "recipientAccountId":"{account-id}"
      }
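
Besides browsing the log files in S3, recent events like this one can also be retrieved through the CloudTrail lookup API; here is a minimal boto3 sketch (the region and result limit are illustrative):

import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "SynthesizeSpeech"}],
    MaxResults=10,
)
for event in response["Events"]:
    # Each entry corresponds to a log record like the one shown above.
    print(event["EventTime"], event["EventName"], event.get("Username"))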

Conclusion

Service-linked roles make it easier for you to track and view actions that linked AWS services perform on your behalf by using CloudTrail. When an AWS service supports service-linked roles to enable this additional logging, you will see a service-linked role added to your account.

If you have comments about this post, submit a comment in the “Comments” section below. If you have questions about working with service-linked roles, start a new thread on the IAM forum or contact AWS Support.

– Ujjwal

How to Query Personally Identifiable Information with Amazon Macie

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/how-to-query-personally-identifiable-information-with-amazon-macie/

Amazon Macie logo

In August 2017 at the AWS Summit New York, AWS launched a new security and compliance service called Amazon Macie. Macie uses machine learning to automatically discover, classify, and protect sensitive data in AWS. In this blog post, I demonstrate how you can use Macie to help enable compliance with applicable regulations, starting with data retention.

How to query retained PII with Macie

Data retention and mandatory data deletion are common topics across compliance frameworks, so knowing what is stored and how long it has been or needs to be stored is of critical importance. For example, you can use Macie for Payment Card Industry Data Security Standard (PCI DSS) 3.2, requirement 3, “Protect stored cardholder data,” which mandates a “quarterly process for identifying and securely deleting stored cardholder data that exceeds defined retention.” You also can use Macie for ISO 27017 requirement 12.3.1, which calls for “retention periods for backup data.” In each of these cases, you can use Macie’s built-in queries to identify the age of data in your Amazon S3 buckets and to help meet your compliance needs.

To get started with Macie and run your first queries of personally identifiable information (PII) and sensitive data, follow the initial setup as described in the launch post on the AWS Blog. After you have set up Macie, walk through the following steps to start running queries. Start by focusing on the S3 buckets that you want to inventory and capture important compliance related activity and data.

To start running Macie queries:

  1. In the AWS Management Console, launch the Macie console (you can type macie to find the console).
  2. Click Dashboard in the navigation pane. This shows you an overview of the risk level and data classification type of all inventoried S3 buckets, categorized by date and type.
    Screenshot of "Dashboard" in the navigation pane
  3. Choose S3 objects by PII priority. This dashboard lets you sort by PII priority and PII types.
    Screenshot of "S3 objects by PII priority"
  4. In this case, I want to find information about credit card numbers. I choose the magnifying glass for the type cc_number (note that PII types can be used for custom queries). This view shows the events where PII classified data has been uploaded to S3. When I scroll down, I see the individual files that have been identified.
    Screenshot showing the events where PII classified data has been uploaded to S3
  5. Before looking at the files, I want to continue to build the query by only showing items with high priority. To do so, I choose the row called Object PII Priority and then the magnifying glass icon next to High.
    Screenshot of refining the query for high priority events
  6. To view the results matching these queries, I scroll down and choose any file listed. This shows vital information such as creation date, location, and object access control list (ACL).
  7. The piece I am most interested in, in this case, is the Object PII details line, which helps me understand more about what was found in the file. In this case, I see name and credit card information, which is what caused the high priority. Scrolling up again, I also see that the query fields have updated as I interacted with the UI.
    Screenshot showing "Object PII details"

Let’s say that I want to get an alert every time Macie finds new data matching this query. This alert can be used to automate response actions by using AWS Lambda and Amazon CloudWatch Events.

  1. I choose the left green icon called Save query as alert.
    Screenshot of "Save query as alert" button
  2. I can customize the alert and change things like category or severity to fit my needs based on the alert data.
  3. Another way to find the information I am looking for is to run custom queries. To start using custom queries, I choose Research in the navigation pane.
    1. To learn more about custom Macie queries and what you can do on the Research tab, see Using the Macie Research Tab.
  4. I change the type of query I want to run from CloudTrail data to S3 objects in the drop-down list menu.
    Screenshot of choosing "S3 objects" from the drop-down list menu
  5. Because I want PII data, I start typing in the query box, which has an autocomplete feature. I choose the pii_types: query. I can now type the data I want to look for. In this case, I want to see all files matching the credit card filter so I type cc_number and press Enter. The query box now says, pii_types:cc_number. I press Enter again to enable autocomplete, and then I type AND pii_types:email to require both a credit card number and email address in a single object.
    The query looks for all files matching the credit card filter ("cc_number")
  6. I choose the magnifying glass to search and Macie shows me all S3 objects that are tagged as PII of type Credit Cards. I can further specify that I only want to see PII of type Credit Card that are classified as High priority by appending AND pii_impact:high to the query.
    Screenshot showing narrowing the query results further

As before, I can save this new query as an alert by choosing Save query as alert; the alert will be triggered by data matching the query going forward.

Advanced tip

Try the following advanced queries using Lucene query syntax and save the queries as alerts in Macie.

  • Use a regular-expression based query to search for a minimum of 10 credit card numbers and 10 email addresses in a single object:
    • pii_explain.cc_number:/([1-9][0-9]|[0-9]{3,}) distinct Credit Card Numbers.*/ AND pii_explain.email:/([1-9][0-9]|[0-9]{3,}) distinct Email Addresses.*/
  • Search for objects containing at least one credit card, name, and email address that have an object policy enabling global access (searching for S3 AllUsers or AuthenticatedUsers permissions):
    • (object_acl.Grants.Grantee.URI:"http\://acs.amazonaws.com/groups/global/AllUsers" OR object_acl.Grants.Grantee.URI:"http\://acs.amazonaws.com/groups/global/AuthenticatedUsers") AND (pii_types.cc_number AND pii_types.email AND pii_types.name)

These are two ways to identify and be alerted about PII by using Macie. In a similar way, you can create custom alerts for various AWS CloudTrail events by choosing a different data set on which to run the queries again. In the examples in this post, I identified credit cards stored in plain text (all data in this post is example data only), determined how long they had been stored in S3 by viewing the result details, and set up alerts to notify or trigger actions on new sensitive data being stored. With queries like these, you can build a reliable data validation program.
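
If you want to double-check an object that Macie flagged for global access, the same ACL grants can be inspected with boto3; the bucket and key names below are placeholders:

import boto3

s3 = boto3.client("s3")
PUBLIC_GROUPS = (
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
)

acl = s3.get_object_acl(Bucket="example-bucket", Key="example/object.csv")
for grant in acl["Grants"]:
    if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
        print("Public grant found:", grant["Permission"])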

If you have comments about this post, submit them in the “Comments” section below. If you have questions about how to use Macie, start a new thread on the Macie forum or contact AWS Support.

-Chad

Apple’s FaceID

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/apples_faceid.html

This is a good interview with Apple’s SVP of Software Engineering about FaceID.

Honestly, I don’t know what to think. I am confident that Apple is not collecting a photo database, but not optimistic that it can’t be hacked with fake faces. I dislike the fact that the police can point the phone at someone and have it automatically unlock. So this is important:

I also quizzed Federighi about the exact way you “quick disabled” Face ID in tricky scenarios — like being stopped by police, or being asked by a thief to hand over your device.

“On older phones the sequence was to click 5 times [on the power button], but on newer phones like iPhone 8 and iPhone X, if you grip the side buttons on either side and hold them a little while — we’ll take you to the power down [screen]. But that also has the effect of disabling Face ID,” says Federighi. “So, if you were in a case where the thief was asking to hand over your phone — you can just reach into your pocket, squeeze it, and it will disable Face ID. It will do the same thing on iPhone 8 to disable Touch ID.”

That squeeze can be of either volume button plus the power button. This, in my opinion, is an even better solution than the “5 clicks” because it’s less obtrusive. When you do this, it defaults back to your passcode.

More:

It’s worth noting a few additional details here:

  • If you haven’t used Face ID in 48 hours, or if you’ve just rebooted, it will ask for a passcode.
  • If there are 5 failed attempts to Face ID, it will default back to passcode. (Federighi has confirmed that this is what happened in the demo onstage when he was asked for a passcode — it tried to read the people setting the phones up on the podium.)

  • Developers do not have access to raw sensor data from the Face ID array. Instead, they’re given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.

  • You’ll also get a passcode request if you haven’t unlocked the phone using a passcode or at all in 6.5 days and if Face ID hasn’t unlocked it in 4 hours.

Also be prepared for your phone to immediately lock every time your sleep/wake button is pressed or it goes to sleep on its own. This is just like Touch ID.

Federighi also noted on our call that Apple would be releasing a security white paper on Face ID closer to the release of the iPhone X. So if you’re a researcher or security wonk looking for more, he says it will have “extreme levels of detail” about the security of the system.

Here’s more about fooling it with fake faces:

Facial recognition has long been notoriously easy to defeat. In 2009, for instance, security researchers showed that they could fool face-based login systems for a variety of laptops with nothing more than a printed photo of the laptop’s owner held in front of its camera. In 2015, Popular Science writer Dan Moren beat an Alibaba facial recognition system just by using a video that included himself blinking.

Hacking FaceID, though, won’t be nearly that simple. The new iPhone uses an infrared system Apple calls TrueDepth to project a grid of 30,000 invisible light dots onto the user’s face. An infrared camera then captures the distortion of that grid as the user rotates his or her head to map the face’s 3-D shape — a trick similar to the kind now used to capture actors’ faces to morph them into animated and digitally enhanced characters.

It’ll be harder, but I have no doubt that it will be done.

More speculation.

I am not planning on enabling it just yet.

Using AWS CodePipeline, AWS CodeBuild, and AWS Lambda for Serverless Automated UI Testing

Post Syndicated from Prakash Palanisamy original https://aws.amazon.com/blogs/devops/using-aws-codepipeline-aws-codebuild-and-aws-lambda-for-serverless-automated-ui-testing/

Testing the user interface of a web application is an important part of the development lifecycle. In this post, I’ll explain how to automate UI testing using serverless technologies, including AWS CodePipeline, AWS CodeBuild, and AWS Lambda.

I built a website for UI testing that is hosted in S3. I used Selenium to perform cross-browser UI testing on Chrome, Firefox, and PhantomJS, a headless WebKit browser with Ghost Driver, an implementation of the WebDriver Wire Protocol. I used Python to create test cases for ChromeDriver, FirefoxDriver, or PhantomJSDriver based on the browser against which the test is being executed.

Resources referred to in this post, including the AWS CloudFormation template, test and status websites hosted in S3, AWS CodeBuild build specification files, AWS Lambda function, and the Python script that performs the test are available in the serverless-automated-ui-testing GitHub repository.
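
As a rough idea of how a test script can switch WebDriver implementations based on the BROWSER and WebURL environment variables used in the build specifications below, here is a condensed Python sketch (the real test suite lives in the GitHub repository; the final assertion is just a placeholder):

import os
from selenium import webdriver

def get_driver(browser):
    # Pick a WebDriver implementation based on the BROWSER environment variable.
    if browser == "chrome":
        return webdriver.Chrome()    # requires chromedriver on the PATH
    if browser == "firefox":
        return webdriver.Firefox()   # requires geckodriver on the PATH
    return webdriver.PhantomJS()     # headless WebKit, driven through Ghost Driver

driver = get_driver(os.environ.get("BROWSER", "phantomjs"))
driver.get(os.environ["WebURL"])
assert driver.title                  # placeholder check; real tests verify page content
driver.quit()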

S3 Hosted Test Website:

AWS CodeBuild supports custom containers, so we can use the selenium/standalone-firefox and selenium/standalone-chrome containers, which include prebuilt Firefox and Chrome browsers, respectively. Xvfb performs graphical operations in virtual memory without any display hardware. It will be installed in the CodeBuild containers during the install phase.

Build Spec for Chrome and Firefox

The build specification for Chrome and Firefox testing includes multiple phases:

  • The environment variables section contains a set of default variables that are overridden while creating the build project or triggering the build.
  • As part of the install phase, required packages like Xvfb and Selenium are installed using apt-get.
  • During the pre_build phase, the test bed is prepared for test execution.
  • During the build phase, the appropriate DISPLAY is set and the tests are executed.
version: 0.2

env:
  variables:
    BROWSER: "chrome"
    WebURL: "https://sampletestweb.s3-eu-west-1.amazonaws.com/website/index.html"
    ArtifactBucket: "codebuild-demo-artifact-repository"
    MODULES: "mod1"
    ModuleTable: "test-modules"
    StatusTable: "blog-test-status"

phases:
  install:
    commands:
      - apt-get update
      - apt-get -y upgrade
      - apt-get install xvfb python python-pip build-essential -y
      - pip install --upgrade pip
      - pip install selenium
      - pip install awscli
      - pip install requests
      - pip install boto3
      - cp xvfb.init /etc/init.d/xvfb
      - chmod +x /etc/init.d/xvfb
      - update-rc.d xvfb defaults
      - service xvfb start
      - export PATH="$PATH:`pwd`/webdrivers"
  pre_build:
    commands:
      - python prepare_test.py
  build:
    commands:
      - export DISPLAY=:5
      - cd tests
      - echo "Executing simple test..."
      - python testsuite.py

Because Ghost Driver runs headless, it can be executed on AWS Lambda. In keeping with a fire-and-forget model, I used CodeBuild to create the PhantomJS Lambda function and trigger the test invocations on Lambda in parallel. This is powerful because many tests can be executed in parallel on Lambda.
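
The fire-and-forget part comes down to asynchronous Lambda invocations; here is a minimal boto3 sketch of what the test runner could do per test case (the payload field and test case names are assumptions for illustration):

import json
import os

import boto3

lambda_client = boto3.client("lambda")

for test_case in ["test_home_page", "test_login_form"]:    # hypothetical test case names
    lambda_client.invoke(
        FunctionName=os.environ["PhantomJSFunction"],      # exported in the build spec below
        InvocationType="Event",       # asynchronous: return immediately, run tests in parallel
        Payload=json.dumps({"testCase": test_case}),
    )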

Build Spec for PhantomJS

The build specification for PhantomJS testing also includes multiple phases. It is a little different from the preceding example because we are using AWS Lambda for the test execution.

  • The environment variables section contains a set of default variables that are overridden while creating the build project or triggering the build.
  • As part of the install phase, the required packages like Selenium and the AWS CLI are installed using apt-get.
  • During the pre_build phase, the test bed is prepared for test execution.
  • During the build phase, a zip file that will be used to create the PhantomJS Lambda function is created and tests are executed on the Lambda function.
version: 0.2

env:
  variables:
    BROWSER: "phantomjs"
    WebURL: "https://sampletestweb.s3-eu-west-1.amazonaws.com/website/index.html"
    ArtifactBucket: "codebuild-demo-artifact-repository"
    MODULES: "mod1"
    ModuleTable: "test-modules"
    StatusTable: "blog-test-status"
    LambdaRole: "arn:aws:iam::account-id:role/role-name"

phases:
  install:
    commands:
      - apt-get update
      - apt-get -y upgrade
      - apt-get install python python-pip build-essential -y
      - apt-get install zip unzip -y
      - pip install --upgrade pip
      - pip install selenium
      - pip install awscli
      - pip install requests
      - pip install boto3
  pre_build:
    commands:
      - python prepare_test.py
  build:
    commands:
      - cd lambda_function
      - echo "Packaging Lambda Function..."
      - zip -r /tmp/lambda_function.zip ./*
      - func_name=`echo $CODEBUILD_BUILD_ID | awk -F ':' '{print $1}'`-phantomjs
      - echo "Creating Lambda Function..."
      - chmod 777 phantomjs
      - |
         func_list=`aws lambda list-functions | grep FunctionName | awk -F':' '{print $2}' | tr -d ', "'`
         if echo "$func_list" | grep -qw $func_name
         then
             echo "Lambda function already exists."
         else
             aws lambda create-function --function-name $func_name --runtime "python2.7" --role $LambdaRole --handler "testsuite.lambda_handler" --zip-file fileb:///tmp/lambda_function.zip --timeout 150 --memory-size 1024 --environment Variables="{WebURL=$WebURL, StatusTable=$StatusTable}" --tags Name=$func_name
         fi
      - export PhantomJSFunction=$func_name
      - cd ../tests/
      - python testsuite.py

The list of test cases and the test modules that belong to each case are stored in an Amazon DynamoDB table. Based on the list of modules passed as an argument to the CodeBuild project, CodeBuild gets the test cases from that table and executes them. The test execution status and results are stored in another Amazon DynamoDB table; the status website reads the test status from that table and displays it.
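
A rough sketch of that DynamoDB interaction in boto3, using the table names from the build specifications; the key and attribute names are assumptions, since the actual schema is defined in the GitHub repository:

import boto3

dynamodb = boto3.resource("dynamodb")
modules_table = dynamodb.Table("test-modules")     # ModuleTable in the build spec
status_table = dynamodb.Table("blog-test-status")  # StatusTable in the build spec

# Look up the test cases that belong to a module (key name "moduleName" is hypothetical).
item = modules_table.get_item(Key={"moduleName": "mod1"}).get("Item", {})
for test_case in item.get("testCases", []):
    # Record a status row per test case (attribute names are hypothetical).
    status_table.put_item(Item={"testCase": test_case, "status": "PENDING"})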

AWS CodeBuild and AWS Lambda perform the test execution as individual tasks. AWS CodePipeline plays an important role here by enabling continuous delivery and parallel execution of tests for optimized testing.

Here’s how to do it:

In AWS CodePipeline, create a pipeline with four stages:

  • Source (AWS CodeCommit)
  • UI testing (AWS Lambda and AWS CodeBuild)
  • Approval (manual approval)
  • Production (AWS Lambda)

Pipeline stages, the actions in each stage, and transitions between stages are shown in the following diagram.

This design implemented in AWS CodePipeline looks like this:

CodePipeline automatically detects a change in the source repository and triggers the execution of the pipeline.

In the UITest stage, there are two parallel actions:

  • DeployTestWebsite invokes a Lambda function to deploy the test website in S3 as an S3 website.
  • DeployStatusPage invokes another Lambda function to deploy in parallel the status website in S3 as an S3 website.

Next, there are three parallel actions that trigger the CodeBuild project:

  • TestOnChrome launches a container to perform the Selenium tests on Chrome.
  • TestOnFirefox launches another container to perform the Selenium tests on Firefox.
  • TestOnPhantomJS creates a Lambda function and invokes individual Lambda functions per test case to execute the test cases in parallel.

You can monitor the status of the test execution on the status website, as shown here:

When the UI testing is completed successfully, the pipeline continues to an Approval stage in which a notification is sent to the configured SNS topic. The designated team member reviews the test status and approves or rejects the deployment. Upon approval, the pipeline continues to the Production stage, where it invokes a Lambda function and deploys the website to a production S3 bucket.

I used a CloudFormation template to set up my continuous delivery pipeline. The automated-ui-testing.yaml template, available from GitHub, sets up a full-featured pipeline.

When I use the template to create my pipeline, I specify the following:

  • AWS CodeCommit repository.
  • SNS topic to send approval notification.
  • S3 bucket name where the artifacts will be stored.

The stack name should follow the rules for S3 bucket naming because it will be part of the S3 bucket name.

When the stack is created successfully, the URLs for the test website and status website appear in the Outputs section, as shown here:

Conclusion

In this post, I showed how you can use AWS CodePipeline, AWS CodeBuild, AWS Lambda, and a manual approval process to create a continuous delivery pipeline for serverless automated UI testing. Websites running on Amazon EC2 instances or AWS Elastic Beanstalk can also be tested using similar approach.


About the author

Prakash Palanisamy is a Solutions Architect for Amazon Web Services. When he is not working on Serverless, DevOps or Alexa, he will be solving problems in Project Euler. He also enjoys watching educational documentaries.

Automating Amazon EBS Snapshot Management with AWS Step Functions and Amazon CloudWatch Events

Post Syndicated from Andy Katz original https://aws.amazon.com/blogs/compute/automating-amazon-ebs-snapshot-management-with-aws-step-functions-and-amazon-cloudwatch-events/

Brittany Doncaster, Solutions Architect

Business continuity is important for building mission-critical workloads on AWS. As an AWS customer, you might define recovery point objectives (RPO) and recovery time objectives (RTO) for different application tiers in your business. After the RPO and RTO requirements are defined, it is up to your architects to determine how to meet those requirements.

You probably store persistent data in Amazon EBS volumes, which live within a single Availability Zone. And, following best practices, you take snapshots of your EBS volumes to back up the data on Amazon S3, which provides 11 9’s of durability. If you are following these best practices, then you’ve probably recognized the need to manage the number of snapshots you keep for a particular EBS volume and delete older, unneeded snapshots. Doing this cleanup helps save on storage costs.

Some customers also have policies stating that backups need to be stored a certain number of miles away as part of a disaster recovery (DR) plan. To meet these requirements, customers copy their EBS snapshots to the DR region. Then, the same snapshot management and cleanup has to also be done in the DR region.

All of this snapshot management logic consists of different components. You would first tag your snapshots so you could manage them. Then, determine how many snapshots you currently have for a particular EBS volume and assess that value against a retention rule. If the number of snapshots was greater than your retention value, then you would clean up old snapshots. And finally, you might copy the latest snapshot to your DR region. All these steps are just an example of a simple snapshot management workflow. But how do you automate something like this in AWS? How do you do it without servers?

One of the most powerful AWS services released in 2016 was Amazon CloudWatch Events. It enables you to build event-driven IT automation, based on events happening within your AWS infrastructure. CloudWatch Events integrates with AWS Lambda to let you execute your custom code when one of those events occurs. However, the actions to take based on those events aren’t always composed of a single Lambda function. Instead, your business logic may consist of multiple steps (like in the case of the example snapshot management flow described earlier). And you may want to run those steps in sequence or in parallel. You may also want to have retry logic or exception handling for each step.

AWS Step Functions serves just this purpose―to help you coordinate your functions and microservices. Step Functions enables you to simplify your effort and pull the error handling, retry logic, and workflow logic out of your Lambda code. Step Functions integrates with Lambda to provide a mechanism for building complex serverless applications. Now, you can kick off a Step Functions state machine based on a CloudWatch event.

In this post, I discuss how you can target Step Functions in a CloudWatch Events rule. This allows you to have event-driven snapshot management based on snapshot completion events firing in CloudWatch Events rules.

As an example of what you could do with Step Functions and CloudWatch Events, we’ve developed a reference architecture that performs management of your EBS snapshots.

Automating EBS Snapshot Management with Step Functions

This architecture assumes that you have already set up CloudWatch Events to create the snapshots on a schedule or that you are using some other means of creating snapshots according to your needs.

This architecture covers the pieces of the workflow that need to happen after a snapshot has been created.

  • It creates a CloudWatch Events rule to invoke a Step Functions state machine execution when an EBS snapshot is created.
  • The state machine then tags the snapshot, cleans up the oldest snapshots if the number of snapshots is greater than the defined number to retain, and copies the snapshot to a DR region.
  • When the DR region snapshot copy is completed, another state machine kicks off in the DR region. The new state machine has a similar flow and uses some of the same Lambda code to clean up the oldest snapshots that are greater than the defined number to retain.
  • Also, both state machines demonstrate how you can use Step Functions to handle errors within your workflow. Any errors that are caught during execution result in the execution of a Lambda function that writes a message to an SNS topic. Therefore, if any errors occur, you can subscribe to the SNS topic and get notified.

The following is an architecture diagram of the reference architecture:

Creating the Lambda functions and Step Functions state machines

First, pull the code from GitHub and use the AWS CLI to create S3 buckets for the Lambda code in the primary and DR regions. For this example, assume that the primary region is us-west-2 and the DR region is us-east-2. Run the following commands, replacing the italicized text in <> with your own unique bucket names.

git clone https://github.com/awslabs/aws-step-functions-ebs-snapshot-mgmt.git

cd aws-step-functions-ebs-snapshot-mgmt/

aws s3 mb s3://<primary region bucket name> --region us-west-2

aws s3 mb s3://<DR region bucket name> --region us-east-2

Next, use the Serverless Application Model (SAM), which uses AWS CloudFormation to deploy the Lambda functions and Step Functions state machines in the primary and DR regions. Replace the italicized text in <> with the S3 bucket names that you created earlier.

aws cloudformation package --template-file PrimaryRegionTemplate.yaml --s3-bucket <primary region bucket name>  --output-template-file tempPrimary.yaml --region us-west-2

aws cloudformation deploy --template-file tempPrimary.yaml --stack-name ebsSnapshotMgmtPrimary --capabilities CAPABILITY_IAM --region us-west-2

aws cloudformation package --template-file DR_RegionTemplate.yaml --s3-bucket <DR region bucket name> --output-template-file tempDR.yaml  --region us-east-2

aws cloudformation deploy --template-file tempDR.yaml --stack-name ebsSnapshotMgmtDR --capabilities CAPABILITY_IAM --region us-east-2

CloudWatch event rule verification

The CloudFormation templates deploy the following resources:

  • The Lambda functions that are coordinated by Step Functions
  • The Step Functions state machine
  • The SNS topic
  • The CloudWatch Events rules that trigger the state machine execution

So, all of the CloudWatch event rules have been created for you by performing the preceding commands. The next section demonstrates how you could create the CloudWatch event rule manually. To jump straight to testing the workflow, see the “Testing in your Account” section. Otherwise, you begin by setting up the CloudWatch event rule in the primary region for the createSnapshot event and also the CloudWatch event rule in the DR region for the copySnapshot command.

First, open the CloudWatch console in the primary region.

Choose Create Rule and create a rule for the createSnapshot command, with your newly created Step Function state machine as the target.

For Event Source, choose Event Pattern and specify the following values:

  • Service Name: EC2
  • Event Type: EBS Snapshot Notification
  • Specific Event: createSnapshot

For Target, choose Step Functions state machine, then choose the state machine created by the CloudFormation commands. Choose Create a new role for this specific resource. Your completed rule should look like the following:

Choose Configure Details and give the rule a name and description.

Choose Create Rule. You now have a CloudWatch Events rule that triggers a Step Functions state machine execution when the EBS snapshot creation is complete.
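
The same rule can also be created programmatically; here is a minimal boto3 sketch, where the rule name, state machine ARN, and IAM role ARN are placeholders:

import json

import boto3

events = boto3.client("events", region_name="us-west-2")

# Match the EC2 createSnapshot completion event.
events.put_rule(
    Name="ebs-snapshot-created",
    EventPattern=json.dumps({
        "source": ["aws.ec2"],
        "detail-type": ["EBS Snapshot Notification"],
        "detail": {"event": ["createSnapshot"]},
    }),
)
# Point the rule at the Step Functions state machine; the role must allow states:StartExecution.
events.put_targets(
    Rule="ebs-snapshot-created",
    Targets=[{
        "Id": "snapshot-state-machine",
        "Arn": "arn:aws:states:us-west-2:123456789012:stateMachine:SnapshotMgmt",
        "RoleArn": "arn:aws:iam::123456789012:role/cwe-start-execution-role",
    }],
)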

Now, set up the CloudWatch Events rule in the DR region as well. This looks almost same, but is based off the copySnapshot event instead of createSnapshot.

In the upper right corner in the console, switch to your DR region. Choose CloudWatch, Create Rule.

For Event Source, choose Event Pattern and specify the following values:

  • Service Name: EC2
  • Event Type: EBS Snapshot Notification
  • Specific Event: copySnapshot

For Target, choose Step Functions state machine, then select the state machine created by the CloudFormation commands. Choose Create a new role for this specific resource. Your completed rule should look like the following:

As in the primary region, choose Configure Details and then give this rule a name and description. Complete the creation of the rule.

Testing in your account

To test this setup, open the EC2 console and choose Volumes. Select a volume to snapshot. Choose Actions, Create Snapshot, and then create a snapshot.
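
If you would rather trigger the test from the SDK instead of the console, creating a snapshot with boto3 fires the same createSnapshot event; the volume ID is a placeholder:

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Test snapshot for the Step Functions workflow",
)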

This results in a new execution of your state machine in the primary and DR regions. You can view these executions by going to the Step Functions console and selecting your state machine.

From there, you can see the execution of the state machine.

Primary region state machine:

DR region state machine:

I’ve also provided CloudFormation templates that perform all the earlier setup without using git clone and running the CloudFormation commands. Choose the Launch Stack buttons below to launch the primary and DR region stacks in Dublin and Ohio, respectively. From there, you can pick up at the Testing in Your Account section above to finish the example. All of the code for this example architecture is located in the aws-step-functions-ebs-snapshot-mgmt AWSLabs repo.

Launch EBS Snapshot Management into Ireland with CloudFormation
Primary Region eu-west-1 (Ireland)

Launch EBS Snapshot Management into Ohio with CloudFormation
DR Region us-east-2 (Ohio)

Summary

This reference architecture is just an example of how you can use Step Functions and CloudWatch Events to build event-driven IT automation. The possibilities are endless:

  • Use this pattern to perform other common cleanup type jobs such as managing Amazon RDS snapshots, old versions of Lambda functions, or old Amazon ECR images—all triggered by scheduled events.
  • Use Trusted Advisor events to identify unused EC2 instances or EBS volumes, then coordinate actions on them, such as alerting owners, stopping, or snapshotting.

Happy coding and please let me know what useful state machines you build!

AWS IAM Policy Summaries Now Help You Identify Errors and Correct Permissions in Your IAM Policies

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/iam-policy-summaries-now-help-you-identify-errors-and-correct-permissions-in-your-iam-policies/

In March, we made it easier to view and understand the permissions in your AWS Identity and Access Management (IAM) policies by using IAM policy summaries. Today, we updated policy summaries to help you identify and correct errors in your IAM policies. When you set permissions using IAM policies, for each action you specify, you must match that action to supported resources or conditions. Now, you will see a warning if these policy elements (Actions, Resources, and Conditions) defined in your IAM policy do not match.

When working with policies, you may find that although the policy has valid JSON syntax, it does not grant or deny the desired permissions because the Action element does not have an applicable Resource element or Condition element defined in the policy. For example, you may want to create a policy that allows users to view a specific Amazon EC2 instance. To do this, you create a policy that specifies ec2:DescribeInstances for the Action element and the Amazon Resource Name (ARN) of the instance for the Resource element. When testing this policy, you find AWS denies this access because ec2:DescribeInstances does not support resource-level permissions and requires access to list all instances. Therefore, for the policy to function correctly, you need to specify a wildcard (*) in the Resource element of your policy for this Action element.
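You can also check this kind of action/resource mismatch programmatically with the IAM policy simulator. The following is an illustrative sketch, not part of the original post; the account ID and instance ID are placeholders.

import json

import boto3

iam = boto3.client('iam')

# A policy that tries to scope ec2:DescribeInstances to a single instance.
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ec2:DescribeInstances",
        "Resource": "arn:aws:ec2:us-east-1:123456789012:instance/i-0123456789abcdef0"
    }]
})

# The simulator evaluates against resource "*" by default.
result = iam.simulate_custom_policy(
    PolicyInputList=[policy],
    ActionNames=['ec2:DescribeInstances']
)

for evaluation in result['EvaluationResults']:
    # Expected to show implicitDeny, because ec2:DescribeInstances does not
    # support resource-level permissions; with "Resource": "*" it is allowed.
    print(evaluation['EvalActionName'] + ': ' + evaluation['EvalDecision'])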

To help you identify and correct permissions, you will now see a warning in a policy summary if the policy has either of the following:

  • An action that does not support the resource specified in a policy.
  • An action that does not support the condition specified in a policy.

In this blog post, I walk through two examples of how you can use policy summaries to help identify and correct these types of errors in your IAM policies.

How to use IAM policy summaries to debug your policies

Example 1: An action does not support the resource specified in a policy

Let’s say a human resources (HR) representative, Casey, needs access to the personnel files stored in HR’s Amazon S3 bucket. To do this, I create the following policy to grant all actions that begin with s3:List. In addition, I grant access to s3:GetObject in the Action element of the policy. To ensure that Casey has access only to a specific bucket and not others, I specify the bucket ARN in the Resource element of the policy.

Note: This policy does not grant the desired permissions.

This policy does not work. Do not copy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ThisPolicyDoesNotGrantAllListandGetActions",
            "Effect": "Allow",
            "Action": ["s3:List*",
                       "s3:GetObject"],
            "Resource": ["arn:aws:s3:::HumanResources"]
        }
    ]
}

After I create the policy, HRBucketPermissions, I select this policy from the Policies page to view the policy summary. From here, I check to see if there are any warnings or typos in the policy. I see a warning at the top of the policy detail page because the policy does not grant some permissions specified in the policy, which is caused by a mismatch among the actions, resources, or conditions.

Screenshot showing the warning at the top of the policy

To view more details about the warning, I choose Show remaining so that I can understand why the permissions do not appear in the policy summary. As shown in the following screenshot, I see that the policy grants no access to services other than the ones specified in the policy, which is expected. However, next to S3, I see a warning that one or more S3 actions do not have an applicable resource.

Screenshot showing that one or more S3 actions do not have an applicable resource

To understand why the specific actions do not have a supported resource, I choose S3 from the list of services and choose Show remaining. I type List in the filter to understand why some of the list actions are not granted by the policy. As shown in the following screenshot, I see these warnings:

  • This action does not support resource-level permissions. This means the action does not support resource-level permissions and requires a wildcard (*) in the Resource element of the policy.
  • This action does not have an applicable resource. This means the action supports resource-level permissions, but not the resource type defined in the policy. In this example, I specified an S3 bucket for an action that supports only an S3 object resource type.

From these warnings, I see that s3:ListAllMyBuckets, s3:ListMultipartUploadParts, s3:ListObjects, and s3:GetObject do not support an S3 bucket resource type, which results in Casey not having access to the S3 bucket. To correct the policy, I choose Edit policy and update the policy with three statements based on the resources that the S3 actions support. Because Casey needs access to view and read all of the objects in the HumanResources bucket, I add a wildcard (*) for the S3 object path in the Resource ARN.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TheseActionsSupportBucketResourceType",
            "Effect": "Allow",
            "Action": ["s3:ListBucket",
                       "s3:ListBucketByTags",
                       "s3:ListBucketMultipartUploads",
                       "s3:ListBucketVersions"],
            "Resource": ["arn:aws:s3:::HumanResources"]
        },{
            "Sid": "TheseActionsRequireAllResources",
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets",
                       "s3:ListMultipartUploadParts",
                       "s3:ListObjects"],
            "Resource": [ "*"]
        },{
            "Sid": "TheseActionsRequireSupportsObjectResourceType",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::HumanResources/*"]
        }
    ]
}
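If you maintain this managed policy from code rather than through the Edit policy button, the corrected document can be published as a new default policy version. A minimal sketch, assuming the corrected JSON above is saved locally and that the policy ARN placeholder is replaced with your own:

import boto3

iam = boto3.client('iam')

# Load the corrected policy document shown above.
with open('hr-bucket-permissions.json') as policy_file:
    corrected_policy = policy_file.read()

# Publish the fix as a new version and make it the default.
# Note: a managed policy can hold at most five versions, so you may need
# to delete an old version first.
iam.create_policy_version(
    PolicyArn='arn:aws:iam::123456789012:policy/HRBucketPermissions',
    PolicyDocument=corrected_policy,
    SetAsDefault=True
)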

After I make these changes, I see the updated policy summary and see that warnings are no longer displayed.

Screenshot of the updated policy summary that no longer shows warnings

In the previous example, I showed how to identify and correct permissions errors that include actions that do not support a specified resource. In the next example, I show how to use policy summaries to identify and correct a policy that includes actions that do not support a specified condition.

Example 2: An action does not support the condition specified in a policy

For this example, let’s assume Bob is a project manager who requires view and read access to all the code builds for his team. To grant him this access, I create the following JSON policy that specifies all list and read actions for AWS CodeBuild and defines a condition to limit access to resources in the us-west-2 Region, in which Bob’s team develops.

This policy does not work. Do not copy. 
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListReadAccesstoCodeServices",
            "Effect": "Allow",
            "Action": [
                "codebuild:List*",
                "codebuild:BatchGet*"
            ],
            "Resource": ["*"], 
             "Condition": {
                "StringEquals": {
                    "ec2:Region": "us-west-2"
                }
            }
        }
    ]	
}

After I create the policy, PMCodeBuildAccess, I select this policy from the Policies page to view the policy summary in the IAM console. From here, I check to see if the policy has any warnings or typos. I see an error at the top of the policy detail page because the policy does not grant any permissions.

Screenshot with an error showing the policy does not grant any permissions

To view more details about the error, I choose Show remaining to understand why no permissions result from the policy. I see this warning: One or more conditions do not have an applicable action. This means that the condition is not supported by any of the actions defined in the policy.

From the warning message (see preceding screenshot), I realize that ec2:Region is not a supported condition for any actions in CodeBuild. To correct the policy, I separate the list actions that do not support resource-level permissions into a separate Statement element and specify * as the resource. For the remaining CodeBuild actions that support resource-level permissions, I use the ARN to specify the us-west-2 Region in the project resource type.

CORRECT POLICY 
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TheseActionsSupportAllResources",
            "Effect": "Allow",
            "Action": [
                "codebuild:ListBuilds",
                "codebuild:ListProjects",
                "codebuild:ListRepositories",
                "codebuild:ListCuratedEnvironmentImages",
                "codebuild:ListConnectedOAuthAccounts"
            ],
            "Resource": ["*"] 
        }, {
            "Sid": "TheseActionsSupportAResource",
            "Effect": "Allow",
            "Action": [
                "codebuild:ListBuildsForProject",
                "codebuild:BatchGet*"
            ],
            "Resource": ["arn:aws:codebuild:us-west-2:123456789012:project/*"] 
        }

    ]	
}
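As in the first example, you can sanity-check the corrected policy with the policy simulator before attaching it. This sketch is illustrative only; the account ID and project name are placeholders, and the corrected policy JSON above is assumed to be saved locally.

import boto3

iam = boto3.client('iam')

with open('pm-codebuild-access.json') as policy_file:
    policy = policy_file.read()

result = iam.simulate_custom_policy(
    PolicyInputList=[policy],
    ActionNames=['codebuild:ListProjects', 'codebuild:BatchGetBuilds'],
    ResourceArns=['arn:aws:codebuild:us-west-2:123456789012:project/sample-project']
)

for evaluation in result['EvaluationResults']:
    # Both actions should now evaluate to "allowed" for the us-west-2 project ARN.
    print(evaluation['EvalActionName'] + ': ' + evaluation['EvalDecision'])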

After I make the changes, I view the updated policy summary and see that no warnings are displayed.

Screenshot showing the updated policy summary with no warnings

When I choose CodeBuild from the list of services, I also see that for the actions that support resource-level permissions, the access is limited to the us-west-2 Region.

Screenshot showing that for the actions that support resource-level permissions, access is limited to the us-west-2 Region.

Conclusion

Policy summaries make it easier to view and understand the permissions and resources in your IAM policies by displaying the permissions granted by the policies. As I’ve demonstrated in this post, you can also use policy summaries to help you identify and correct your IAM policies. To understand the types of warnings that policy summaries support, you can visit Troubleshoot IAM Policies. To view policy summaries in your AWS account, sign in to the IAM console and navigate to any policy on the Policies page of the IAM console or the Permissions tab on a user’s page.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum or contact AWS Support.

– Joy

timeShift(GrafanaBuzz, 1w) Issue 13

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/09/15/timeshiftgrafanabuzz-1w-issue-13/

It’s been a busy week here at Grafana Labs – Grafana 4.5 is now available! We’ve made a lot of enhancements and added new features in this release, so be sure and check out the release blog post to see the full changelog. The GrafanaCon EU CFP is officially open so please don’t forget to submit your topic. We’re looking for technical and non-technical talks of all sizes.


Latest Release

Grafana v4.5 is available for download. The new Grafana 4.5 release includes major improvements to the query editors for Prometheus, Elasticsearch and MySQL.
View the changelog.

Download Grafana 4.5 Now


From the Blogosphere

Percona Live Europe Featured Talks: Visualize Your Data with Grafana Featuring Daniel Lee: The folks from Percona sat down with Grafana Labs Software Developer Daniel Lee to discuss his upcoming talk at PerconaLive Europe 2017, Dublin, and how data can drive better decision making for your business. Get your tickets now, and use code: SeeMeSpeakPLE17 for 10% off!

Register Now

Performance monitoring with ELK / Grafana: This article walks you through setting up the ELK stack to monitor webpage load time, but switches out Kibana for Grafana so you can visualize data from other sources right next to this performance data.

ESXi Lab Series: Aaron created a video mini-series about implementing both offensive and defensive security in an ESXi Lab environment. Parts four and five focus on monitoring with Grafana, but you’ll probably want to start with part one.

Raspberry Pi Monitoring with Grafana: We’ve been excited to see more and more articles about Grafana from Raspberry Pi users. This article helps you install and configure Grafana, and also touches on what monitoring is and why it’s important.


Grafana Plugins

This week we were busy putting the finishing touches on the new release, but we do have an update to the Gnocchi data source plugin to announce, and a new annotation plugin that works with any data source. Install or update plugins on an on-prem instance using the Grafana-cli, or with one click on Hosted Grafana.

NEW PLUGIN

Simple Annotations – Frustrated with using a data source that doesn’t support annotations? This is a simple annotation plugin for Grafana that works with any data source!

Install Now

UPDATED PLUGIN

Gnocchi Data Source – The latest release adds the reaggregation feature. Gnocchi can pre-compute the aggregation of time series (for example, aggregate the mean every 10 minutes for 1 year). The plugin then allows you to (re)aggregate time series, since stored time series have already been aggregated. A big shout out to sileht for adding new features to the Gnocchi plugin.

Update Now


GrafanaCon EU Call for Papers is Open

Have a big idea to share? A shorter talk or a demo you’d like to show off? We’re looking for technical and non-technical talks of all sizes.

I’d Like to Speak at GrafanaCon


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Awesome – really looking forward to seeing updates as you get to 1.0!

We Need Your Help

We’re conducting an experiment and need your help. Do you have a graph that you love because the data is beautiful or because the graph provides interesting information? Please get in touch. Tweet or send us an email with a screenshot, and we’ll tell you about the experiment.

Be Part of the Experiment


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


What do you think?

We’re always interested in how we can improve our weekly roundups. Submit a comment on this article below, or post something at our community forum. Help us make these roundups better and better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Delivering Graphics Apps with Amazon AppStream 2.0

Post Syndicated from Deepak Suryanarayanan original https://aws.amazon.com/blogs/compute/delivering-graphics-apps-with-amazon-appstream-2-0/

Sahil Bahri, Sr. Product Manager, Amazon AppStream 2.0

Do you need to provide a workstation class experience for users who run graphics apps? With Amazon AppStream 2.0, you can stream graphics apps from AWS to a web browser running on any supported device. AppStream 2.0 offers a choice of GPU instance types. The range includes the newly launched Graphics Design instance, which allows you to offer a fast, fluid user experience at a fraction of the cost of using a graphics workstation, without upfront investments or long-term commitments.

In this post, I discuss the Graphics Design instance type in detail, and how you can use it to deliver a graphics application such as Siemens NX―a popular CAD/CAM application that we have been testing on AppStream 2.0 with engineers from Siemens PLM.

Graphics Instance Types on AppStream 2.0

First, a quick recap on the GPU instance types available with AppStream 2.0. In July, 2017, we launched graphics support for AppStream 2.0 with two new instance types that Jeff Barr discussed on the AWS Blog:

  • Graphics Desktop
  • Graphics Pro

Many customers in industries such as engineering, media, entertainment, and oil and gas are using these instances to deliver high-performance graphics applications to their users. These instance types are based on dedicated NVIDIA GPUs and can run the most demanding graphics applications, including those that rely on CUDA graphics API libraries.

Last week, we added a new lower-cost instance type: Graphics Design. This instance type is a great fit for engineers, 3D modelers, and designers who use graphics applications that rely on the hardware acceleration of DirectX, OpenGL, or OpenCL APIs, such as Siemens NX, Autodesk AutoCAD, or Adobe Photoshop. The Graphics Design instance is based on AMD’s FirePro S7150x2 Server GPUs and equipped with AMD Multiuser GPU technology. The instance type uses virtualized GPUs to achieve lower costs, and is available in four instance sizes to scale and match the requirements of your applications.

Instance                         vCPUs   Instance RAM (GiB)   GPU Memory (GiB)
stream.graphics-design.large     2       7.5                  1
stream.graphics-design.xlarge    4       15.3                 2
stream.graphics-design.2xlarge   8       30.5                 4
stream.graphics-design.4xlarge   16      61                   8

The following table compares all three graphics instance types on AppStream 2.0, along with example applications you could use with each.

                                  Graphics Design        Graphics Desktop    Graphics Pro
Number of instance sizes          4                      1                   3
GPU memory range                  1–8 GiB                4 GiB               8–32 GiB
vCPU range                        2–16                   8                   16–32
Memory range                      7.5–61 GiB             15 GiB              122–488 GiB
GPU hardware                      AMD FirePro S7150x2    NVIDIA GRID K520    NVIDIA Tesla M60
Price range (N. Virginia Region)  $0.25–$2.00/hour       $0.50/hour          $2.05–$8.20/hour
Example applications              Adobe Premiere Pro,    AVEVA E3D,          Autodesk Maya,
                                  Autodesk Revit,        SOLIDWORKS          Landmark DecisionSpace,
                                  Siemens NX                                 Schlumberger Petrel

Example graphics instance set up with Siemens NX

In this section, I walk through setting up Siemens NX with Graphics Design instances on AppStream 2.0. After setup is complete, users can access NX from within their browser and also access their design files from a file share. You can also use these steps to set up and test your own graphics applications on AppStream 2.0. Here’s the workflow:

  1. Create a file share to load and save design files.
  2. Create an AppStream 2.0 image with Siemens NX installed.
  3. Create an AppStream 2.0 fleet and stack.
  4. Invite users to access Siemens NX through a browser.
  5. Validate the setup.

To learn more about AppStream 2.0 concepts and set up, see the previous post Scaling Your Desktop Application Streams with Amazon AppStream 2.0. For a deeper review of all the setup and maintenance steps, see Amazon AppStream 2.0 Developer Guide.

Step 1: Create a file share to load and save design files

To launch and configure the file server

  1. Open the EC2 console and choose Launch Instance.
  2. Scroll to the Microsoft Windows Server 2016 Base Image and choose Select.
  3. Choose an instance type and size for your file server (I chose the general purpose m4.large instance). Choose Next: Configure Instance Details.
  4. Select a VPC and subnet. You launch AppStream 2.0 resources in the same VPC. Choose Next: Add Storage.
  5. If necessary, adjust the size of your EBS volume. Choose Review and Launch, Launch.
  6. On the Instances page, give your file server a name, such as My File Server.
  7. Ensure that the security group associated with the file server instance allows for incoming traffic from the security group that you select for your AppStream 2.0 fleets or image builders. You can use the default security group and select the same group while creating the image builder and fleet in later steps.

Log in to the file server using a remote access client such as Microsoft Remote Desktop. For more information about connecting to an EC2 Windows instance, see Connect to Your Windows Instance.

To enable file sharing

  1. Create a new folder (such as C:\My Graphics Files) and upload the shared files to make available to your users.
  2. From the Windows control panel, enable network discovery.
  3. Choose Server Manager, File and Storage Services, Volumes.
  4. Scroll to Shares and choose Start the Add Roles and Features Wizard. Go through the wizard to install the File Server and Share role.
  5. From the left navigation menu, choose Shares.
  6. Choose Start the New Share Wizard to set up your folder as a file share.
  7. Open the context (right-click) menu on the share and choose Properties, Permissions, Customize Permissions.
  8. Choose Permissions, Add. Add Read and Execute permissions for everyone on the network.

Step 2:  Create an AppStream 2.0 image with Siemens NX installed

To connect to the image builder and install applications

  1. Open the AppStream 2.0 management console and choose Images, Image Builder, Launch Image Builder.
  2. Create a graphics design image builder in the same VPC as your file server.
  3. From the Image builder tab, select your image builder and choose Connect. This opens a new browser tab and displays a desktop that you can log in to.
  4. Log in to your image builder as ImageBuilderAdmin.
  5. Launch the Image Assistant.
  6. Download and install Siemens NX and other applications on the image builder. I added Blender and Firefox, but you could replace these with your own applications.
  7. To verify the user experience, you can test the application performance on the instance.

Before you finish creating the image, you must mount the file share by enabling a few Microsoft Windows services.

To mount the file share

  1. Open services.msc and check the following services:
  • DNS Client
  • Function Discovery Resource Publication
  • SSDP Discovery
  • UPnP Device Host
  2. If any of the preceding services have Startup Type set to Manual, open the context (right-click) menu on the service and choose Start. Otherwise, open the context (right-click) menu on the service and choose Properties. For Startup Type, choose Manual, Apply. To start the service, choose Start.
  3. From the Windows control panel, enable network discovery.
  4. Create a batch script that mounts a file share from the storage server set up earlier. The file share is mounted automatically when a user connects to the AppStream 2.0 environment.

Logon Script Location: C:\Users\Public\logon.bat

Script Contents:

:loop
REM Map the file share to H: (replace the UNC path placeholder with your share)
net use H: \\path\to\network\share
REM Ping localhost 30 times (roughly 30 seconds) before checking the mapping
PING localhost -n 30 >NUL
REM Retry until the H: drive is available
IF NOT EXIST H:\ GOTO loop

  5. Open gpedit.msc and choose User Configuration, Windows Settings, Scripts. Set logon.bat as the user logon script.
  6. Next, create a batch script that makes the mounted drive visible to the user.

Logon Script Location: C:\Users\Public\startup.bat

Script Contents:
REG DELETE "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v "NoDrives" /f

  7. Open Task Scheduler and choose Create Task.
  8. Choose General, provide a task name, and then choose Change User or Group.
  9. For Enter the object name to select, enter SYSTEM and choose Check Names, OK.
  10. Choose Triggers, New. For Begin the task, choose At startup. Under Advanced Settings, change Delay task for to 5 minutes. Choose OK.
  11. Choose Actions, New. Under Settings, for Program/script, enter C:\Users\Public\startup.bat. Choose OK.
  12. Choose Conditions. Under Power, clear the Start the task only if the computer is on AC power check box. Choose OK.
  13. To view your scheduled task, choose Task Scheduler Library. Close Task Scheduler when you are done.

Step 3:  Create an AppStream 2.0 fleet and stack

To create a fleet and stack

  1. In the AppStream 2.0 management console, choose Fleets, Create Fleet.
  2. Give the fleet a name, such as Graphics-Demo-Fleet, select the newly created image, and use the same VPC as your file server.
  3. Choose Stacks, Create Stack. Give the stack a name, such as Graphics-Demo-Stack.
  4. After the stack is created, select it and choose Actions, Associate Fleet. Associate the stack with the fleet you created in step 1. (An equivalent scripted setup is sketched below.)
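If you prefer to script the fleet and stack creation, the AppStream 2.0 API supports the same steps. The following is a rough sketch only; the image name, subnet ID, and security group ID are placeholders for the resources created earlier.

import boto3

appstream = boto3.client('appstream')

appstream.create_fleet(
    Name='Graphics-Demo-Fleet',
    ImageName='my-graphics-design-image',         # the image created in step 2
    InstanceType='stream.graphics-design.xlarge',
    ComputeCapacity={'DesiredInstances': 1},
    VpcConfig={
        'SubnetIds': ['subnet-0123456789abcdef0'],     # same VPC as the file server
        'SecurityGroupIds': ['sg-0123456789abcdef0']
    }
)
appstream.start_fleet(Name='Graphics-Demo-Fleet')

appstream.create_stack(Name='Graphics-Demo-Stack')
appstream.associate_fleet(
    FleetName='Graphics-Demo-Fleet',
    StackName='Graphics-Demo-Stack'
)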

Step 4:  Invite users to access Siemens NX through a browser

To invite users

  1. Choose User Pools, Create User to create users.
  2. Enter a name and email address for each user.
  3. Select the users just created, and choose Actions, Assign Stack to provide access to the stack created in step 2. You can also provide access using SAML 2.0 and connect to your Active Directory if necessary. For more information, see the Enabling Identity Federation with AD FS 3.0 and Amazon AppStream 2.0 post.

Your user receives an email invitation to set up an account and use a web portal to access the applications that you have included in your stack.
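If you need to onboard many users, the user pool steps can also be scripted. This is a minimal sketch, assuming the stack name from step 3 and a placeholder email address:

import boto3

appstream = boto3.client('appstream')

# Create a user in the AppStream 2.0 user pool.
appstream.create_user(
    UserName='user@example.com',
    AuthenticationType='USERPOOL',
    FirstName='Test',
    LastName='User'
)

# Assign the stack; SendEmailNotification asks AppStream 2.0 to
# email the user about the newly available stack.
appstream.batch_associate_user_stack(
    UserStackAssociations=[{
        'StackName': 'Graphics-Demo-Stack',
        'UserName': 'user@example.com',
        'AuthenticationType': 'USERPOOL',
        'SendEmailNotification': True
    }]
)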

Step 5:  Validate the setup

Time for a test drive with Siemens NX on AppStream 2.0!

  1. Open the link for the AppStream 2.0 web portal shared through the email invitation. The web portal opens in your default browser. You must sign in with the temporary password and set a new password. After that, you are taken to your app catalog.
  2. Launch Siemens NX and interact with it using the demo files available in the shared storage folder – My Graphics Files. 

After I launched NX, I captured the screenshot below. The Siemens PLM team also recorded a video with NX running on AppStream 2.0.

Summary

In this post, I discussed the GPU instances available for delivering rich graphics applications to users in a web browser. While I demonstrated a simple setup, you can scale this out to launch a production environment with users signing in using Active Directory credentials,  accessing persistent storage with Amazon S3, and using other commonly requested features reviewed in the Amazon AppStream 2.0 Launch Recap – Domain Join, Simple Network Setup, and Lots More post.

To learn more about AppStream 2.0 and capabilities added this year, see Amazon AppStream 2.0 Resources.

KinoX / Movie4K Admin Detained in Kosovo After Three-Year Manhunt

Post Syndicated from Andy original https://torrentfreak.com/kinox-movie4k-admin-detained-in-kosovo-after-three-year-manhunt-170912/

In June 2011, police across Europe carried out the largest anti-piracy operation the region had ever seen. Their target was massive streaming portal Kino.to and several affiliates with links to Spain, France and the Netherlands.

With many sites demonstrating phoenix-like abilities these days, it didn’t take long for a replacement to appear.

Replacement platform KinoX soon attracted a large fanbase and with that almost immediate attention from the authorities. In October 2014, Germany-based investigators acting on behalf of the Attorney General carried out raids in several regions of the country looking for four main suspects.

One raid, focused on a village near to the northern city of Lübeck, targeted two brothers, then aged 21 and 25-years-old. The pair, who were said to have lived with their parents, were claimed to be the main operators of Kinox.to and another large streaming site, Movie4K.to. Although two other men were arrested elsewhere in Germany, the brothers couldn’t be found.

This was to be no ordinary manhunt by the police. In addition to accusing the brothers of copyright infringement and tax evasion, authorities indicated they were wanted for fraud, extortion, and arson too. The suggestion was that they’d targeted a vehicle owned by a pirate competitor, causing it to “burst into flames”.

The brothers were later named as Kastriot and Kreshnik Selimi. Kreshnik, the younger of the two, was born in Sweden in 1992, while Kastriot was born in Kosovo in 1989. Both later became German citizens.

With authorities piling on the charges, the pair were accused of being behind not only KinoX and Movie4K, but also other hosting and sharing platforms including BitShare, Stream4k.to, Shared.sx, Mygully.com and Boerse.sx.

Now, almost three years later, German police are one step closer to getting their men. According to a Handelsblatt report via Tarnkappe, Kreshnik Selimi has been detained by authorities.

The now 24-year-old suspect reportedly handed himself in to the German embassy located in Pristina, the capital of Kosovo. The location of the arrest isn’t really a surprise. Older brother Kastriot previously published a picture on Instagram which appeared to show a ticket in his name destined for Kosovo from Zurich in Switzerland.

But while Kreshnik’s arrest reportedly took place in July, there’s still no news of Kastriot. The older brother is still on the run, maybe in Kosovo, or by now, potentially anywhere else in the world.

While his whereabouts remain a mystery, the other puzzle faced by German authorities is the status of the two main sites the brothers were said to maintain.

Despite all the drama and unprecedented allegations of violence and other serious offenses, both Movie4K and KinoX remain stubbornly online, apparently oblivious to the action.

There have been consequences for people connected to the latter, however.

In December 2015, Arvit O (aka “Pedro”), who handled technical issues on KinoX, was sentenced to 40 months in prison for his involvement in the site.

Arvit O, who made a partial confession, was found guilty of copyright infringement by the District Court of Leipzig. The then 29-year-old admitted to infringing 2,889 works. The Court also found that he hacked the computers of two competitors in order to improve Kinox’s market share.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

[$] A different approach to kernel configuration

Post Syndicated from corbet original https://lwn.net/Articles/733405/rss

The kernel’s configuration system can be challenging to deal with; Linus
Torvalds recently called it “one of the worst parts of the whole project”.
Thus, anything that might help users with the process of configuring a
kernel build would be welcome. A talk by Junghwan Kang at the 2017
Open-Source Summit demonstrated an interesting approach, even if it’s not
quite ready for prime time yet.

Parallel Processing in Python with AWS Lambda

Post Syndicated from Oz Akan original https://aws.amazon.com/blogs/compute/parallel-processing-in-python-with-aws-lambda/

If you develop an AWS Lambda function with Node.js, you can call multiple web services without waiting for a response due to its asynchronous nature.  All requests are initiated almost in parallel, so you can get results much faster than a series of sequential calls to each web service. Considering the maximum execution duration for Lambda, it is beneficial for I/O bound tasks to run in parallel.

If you develop a Lambda function with Python, parallelism doesn’t come by default. Lambda supports Python 2.7 and Python 3.6, both of which have multiprocessing and threading modules. The multiprocessing module supports multiple cores, so it is a better choice, especially for CPU-intensive workloads. With the threading module, all threads run on a single core, though the performance difference is negligible for network-bound tasks.

In this post, I demonstrate how the Python multiprocessing module can be used within a Lambda function to run multiple I/O bound tasks in parallel.

Example use case

In this example, you call Amazon EC2 and Amazon EBS API operations to find the total EBS volume size for all your EC2 instances in a region.

This is a two-step process:

  • The Lambda function calls EC2 to list all EC2 instances
  • The function calls EBS for each instance to find attached EBS volumes

Sequential Execution

If you make these calls sequentially, during the second step, your code has to loop over all the instances and wait for each response before moving to the next request.

The class named VolumesSequential has the following methods:

  • __init__ creates an EC2 resource.
  • instance_volumes finds the total size of the EBS volumes attached to a single instance.
  • total_size lists all EC2 instances, passes each one to instance_volumes, and adds up the results to find the total size of the EBS volumes.

Source Code for Sequential Execution

import time
import boto3

class VolumesSequential(object):
    """Finds total volume size for all EC2 instances"""
    def __init__(self):
        self.ec2 = boto3.resource('ec2')

    def instance_volumes(self, instance):
        """
        Finds total size of the EBS volumes attached
        to an EC2 instance
        """
        instance_total = 0
        for volume in instance.volumes.all():
            instance_total += volume.size
        return instance_total

    def total_size(self):
        """
        Lists all EC2 instances in the default region
        and sums result of instance_volumes
        """
        print "Running sequentially"
        instances = self.ec2.instances.all()
        instances_total = 0
        for instance in instances:
            instances_total += self.instance_volumes(instance)
        return instances_total

def lambda_handler(event, context):
    volumes = VolumesSequential()
    _start = time.time()
    total = volumes.total_size()
    print "Total volume size: %s GB" % total
    print "Sequential execution time: %s seconds" % (time.time() - _start)

Parallel Execution

The multiprocessing module that comes with Python 2.7 lets you run multiple processes in parallel. Due to the Lambda execution environment not having /dev/shm (shared memory for processes) support, you can’t use multiprocessing.Queue or multiprocessing.Pool.

If you try to use multiprocessing.Queue, you get an error similar to the following:

[Errno 38] Function not implemented: OSError
…
    sl = self._semlock = _multiprocessing.SemLock(kind, value, maxvalue)
OSError: [Errno 38] Function not implemented

On the other hand, you can use multiprocessing.Pipe instead of multiprocessing.Queue to accomplish what you need without getting any errors during the execution of the Lambda function.

The class named VolumeParallel has the following methods:

  • __init__ creates an EC2 resource
  • instance_volumes finds the total size of EBS volumes attached to an instance
  • total_size finds all instances and runs instance_volumes for each to find the total size of all EBS volumes attached to all EC2 instances.

Source Code for Parallel Execution

import time
from multiprocessing import Process, Pipe
import boto3

class VolumesParallel(object):
    """Finds total volume size for all EC2 instances"""
    def __init__(self):
        self.ec2 = boto3.resource('ec2')

    def instance_volumes(self, instance, conn):
        """
        Finds total size of the EBS volumes attached
        to an EC2 instance
        """
        instance_total = 0
        for volume in instance.volumes.all():
            instance_total += volume.size
        conn.send([instance_total])
        conn.close()

    def total_size(self):
        """
        Lists all EC2 instances in the default region
        and sums result of instance_volumes
        """
        print "Running in parallel"

        # get all EC2 instances
        instances = self.ec2.instances.all()
        
        # create a list to keep all processes
        processes = []

        # create a list to keep connections
        parent_connections = []
        
        # create a process per instance
        for instance in instances:            
            # create a pipe for communication
            parent_conn, child_conn = Pipe()
            parent_connections.append(parent_conn)

            # create the process, pass instance and connection
            process = Process(target=self.instance_volumes, args=(instance, child_conn,))
            processes.append(process)

        # start all processes
        for process in processes:
            process.start()

        # make sure that all processes have finished
        for process in processes:
            process.join()

        instances_total = 0
        for parent_connection in parent_connections:
            instances_total += parent_connection.recv()[0]

        return instances_total


def lambda_handler(event, context):
    volumes = VolumesParallel()
    _start = time.time()
    total = volumes.total_size()
    print "Total volume size: %s GB" % total
    print "Sequential execution time: %s seconds" % (time.time() - _start)

Performance

There are a few differences between two Lambda functions when it comes to the execution environment. The parallel function requires more memory than the sequential one. You may run the parallel Lambda function with a relatively large memory setting to see how much memory it uses. The amount of memory required by the Lambda function depends on what the function does and how many processes it runs in parallel. To restrict maximum memory usage, you may want to limit the number of parallel executions.
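One simple way to cap the number of parallel executions is to fork the worker processes in fixed-size batches instead of all at once. The sketch below is an illustrative variation on the parallel class above, not code from the original post; the batch size of 10 is arbitrary.

import boto3
from multiprocessing import Pipe, Process


def instance_volumes(instance, conn):
    """Sends the total size of the EBS volumes attached to one instance."""
    total = 0
    for volume in instance.volumes.all():
        total += volume.size
    conn.send([total])
    conn.close()


def total_size_bounded(batch_size=10):
    """Sums EBS volume sizes, running at most batch_size processes at a time."""
    ec2 = boto3.resource('ec2')
    instances = list(ec2.instances.all())
    grand_total = 0

    # Work through the instances in fixed-size batches to bound memory usage.
    for start in range(0, len(instances), batch_size):
        processes = []
        parent_connections = []

        for instance in instances[start:start + batch_size]:
            parent_conn, child_conn = Pipe()
            parent_connections.append(parent_conn)
            processes.append(Process(target=instance_volumes,
                                     args=(instance, child_conn)))

        for process in processes:
            process.start()
        for process in processes:
            process.join()

        for parent_conn in parent_connections:
            grand_total += parent_conn.recv()[0]

    return grand_total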

In this case, when you give 1024 MB for both Lambda functions, the parallel function runs about two times faster than the sequential function. I have a handful of EC2 instances and EBS volumes in my account so the test ran way under the maximum execution limit for Lambda. Remember that parallel execution doesn’t guarantee that the runtime for the Lambda function will be under the maximum allowed duration but does speed up the overall execution time.

Sequential Run Time Output

START RequestId: 4c370b12-f9d3-11e6-b46b-b5d41afd648e Version: $LATEST
Running sequentially
Total volume size: 589 GB
Sequential execution time: 3.80066084862 seconds
END RequestId: 4c370b12-f9d3-11e6-b46b-b5d41afd648e
REPORT RequestId: 4c370b12-f9d3-11e6-b46b-b5d41afd648e Duration: 4091.59 ms Billed Duration: 4100 ms  Memory Size: 1024 MB Max Memory Used: 46 MB

Parallel Run Time Output

START RequestId: 4f1328ed-f9d3-11e6-8cd1-c7381c5c078d Version: $LATEST
Running in parallel
Total volume size: 589 GB
Parallel execution time: 1.89170885086 seconds
END RequestId: 4f1328ed-f9d3-11e6-8cd1-c7381c5c078d
REPORT RequestId: 4f1328ed-f9d3-11e6-8cd1-c7381c5c078d Duration: 2069.33 ms Billed Duration: 2100 ms  Memory Size: 1024 MB Max Memory Used: 181 MB 

Summary

In this post, I demonstrated how to run multiple I/O bound tasks in parallel by developing a Lambda function with the Python multiprocessing module. With the help of this module, you freed the CPU from waiting for I/O and fired up several tasks to fit more I/O bound operations into a given time frame. This might be the trick to reduce the overall runtime of a Lambda function, especially when you have many I/O bound tasks to run and don’t want to split the work into smaller chunks.

Demonoid Hopes to Return to Its Former Glory

Post Syndicated from Ernesto original https://torrentfreak.com/demonoid-hopes-to-return-to-its-former-glory-170910/

Demonoid has been around for well over a decade but the site is not really known for having a stable presence.

Quite the opposite, the torrent tracker has a ‘habit’ of going offline for weeks or even months on end, only to reappear as if nothing ever happened.

Earlier this year the site made another one of its trademark comebacks and it has been sailing relatively smoothly since then. Interestingly, the site is once again under the wings of a familiar face, its original founder Deimos.

Deimos decided to take the lead again after some internal struggles. “I gave control to the wrong guys while the problems started, but it’s time to control stuff again,” Deimos told us earlier.

Since the return a few months back, the site’s main focus has been on rebuilding the community and improving the site. Some may have already noticed the new logo, but more changes are coming, both on the front and backend.

“The backend development is going a bit slow, it’s a big change that will allow the server to run off a bunch of small servers all over the world,” Deimos informs TorrentFreak.

“For the frontend, we’re working on new features including a karma system, integrated forums, buddy list, etc. That part is faster to build once you have everything in the back working,” he adds.

Demonoid’s new logo

Deimos has been on and off the site a few times, but he and a few others most recently returned to get it back on track and increase its popularity. While the site has around eight million registered users, many of these have moved elsewhere in recent years.

“I want to see the community we had back. Don’t know if it’s possible but that’s my aim,” Deimos says, admitting that he may not stay on forever.

Many torrent sites have come and gone in recent years, but they are still here today. Looking back, Demonoid has come a long way. What many people don’t know, is that it was originally a place to share demo tapes of metal bands. Hence the name DEMOnoid.

“It originally started as a modified PHP based forum that allowed posting of .torrent files. At some point, we started using a full torrent indexing script written in PHP that included a tracker, and started building the first version of the indexing site it is today,” Deimos says.

The site required users to have an invite to sign up, making it a semi-private tracker. This wasn’t done to encourage people to maintain a certain ratio, as some other trackers do, but mostly to keep unsavory characters away.

“The invitation system was implemented to keep spammers, trolls and the like out,” Deimos says. “Originally it was due to some very problematic people who happened to have a death metal band, back in the DEMOnoid days.

“We try to keep it open as often as possible but when we start to get these kinds of issues, we close it,” he adds.

In recent years, the site has had quite a few setbacks, but Deimos doesn’t want to dwell on these in public. Instead, he prefers to focus on the future. While torrent sites are no longer at the center of media distribution, there will always be a place for dedicated sharing communities.

Whether Demonoid will ever return to its former glory is a big unknown for now, but Deimos is sure to do his best.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

‘Game of Thrones Season 7 Pirated Over a Billion Times’

Post Syndicated from Ernesto original https://torrentfreak.com/game-of-thrones-season-7-pirated-over-a-billion-times-170905/

The seventh season of Game of Thrones has brought tears and joy to HBO this summer.

It was the most-viewed season thus far, with record-breaking TV ratings. But on the other hand, HBO and Game of Thrones were plagued by hacks, leaks, and piracy, of course.

While it’s hard to measure piracy accurately (streaming in particular), piracy tracking outfit MUSO has just released some staggering numbers. According to the company, the latest season was pirated more than a billion times in total.

To put this into perspective, this means that on average each episode was pirated 140 million times, compared to 32 million views through legal channels.

The vast majority of the pirate ‘views’ came from streaming services (85%), followed by torrents (9%) and direct downloads (6%). Private torrent trackers are at the bottom with less than one percent.

Pirate sources

Andy Chatterley, MUSO’s CEO and Co-Founder, notes that the various leaks may have contributed to these high numbers. This is supported by the finding that the sixth episode, which leaked several days in advance, was pirated more than the season finale.

“It’s no secret that HBO has been plagued by security breaches throughout the latest season, which has seen some episodes leak before broadcast and added to unlicensed activity,” Chatterley says.

In addition, the data shows that despite a heavy focus on torrent traffic, unauthorized streaming is a much bigger problem for rightsholders.

“In addition to the scale of piracy when it comes to popular shows, these numbers demonstrate that unlicensed streaming can be a far more significant type of piracy than torrent downloads.”

Although the report shares precise numbers, it’s probably best to describe them as estimates.

The streaming data MUSO covers is sourced from SimilarWeb, which uses a sample of 200 million ‘devices’ to estimate website traffic. The sample data covers thousands of popular pirate sites and is extrapolated into the totals.

While more than a billion downloads are pretty significant, to say the least, MUSO is not even looking at the full pirate landscape.

For one, Muso’s streaming data doesn’t include Chinese traffic, which usually has a very active piracy community. As if that’s not enough, alternative pirate sources such as fully-loaded Kodi boxes, are not included either.

It’s clear though, which doesn’t really come as a surprise, that Game of Thrones piracy overall is still very significant. The torrent numbers may not have grown in recent years, but streaming seems to be making up for it and probably adding a few dozen million extra, give or take.

Total Global Downloads and Streams by Episode

Episode one: 187,427,575
Episode two: 123,901,209
Episode three: 116,027,851
Episode four: 121,719,868
Episode five: 151,569,560
Episode six: 184,913,279
Episode seven (as of 3rd Sept): 143,393,804
All Episode Bundles – Season 7: 834,522
TOTAL (as of 3rd September) = 1,029,787,668

Total Breakdown By Type

Streaming: 84.66%
Torrent: 9.12%
Download: 5.59%
Private Torrent: 0.63%

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.