Tag Archives: displays

Lambda@Edge – Intelligent Processing of HTTP Requests at the Edge

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/lambdaedge-intelligent-processing-of-http-requests-at-the-edge/

Late last year I announced a preview of Lambda@Edge and talked about how you could use it to intelligently process HTTP requests at locations that are close (latency-wise) to your customers. Developers who applied and gained access to the preview have been making good use of it, and have provided us with plenty of very helpful feedback. During the preview we added the ability to generate HTTP responses and support for CloudWatch Logs, and also updated our roadmap based on the feedback.

Now Generally Available
Today I am happy to announce that Lambda@Edge is now generally available! You can use it to:

  • Inspect cookies and rewrite URLs to perform A/B testing.
  • Send specific objects to your users based on the User-Agent header.
  • Implement access control by looking for specific headers before passing requests to the origin.
  • Add, drop, or modify headers to direct users to different cached objects.
  • Generate new HTTP responses.
  • Cleanly support legacy URLs.
  • Modify or condense headers or URLs to improve cache utilization.
  • Make HTTP requests to other Internet resources and use the results to customize responses.

Lambda@Edge allows you to create web-based user experiences that are rich and personal. As is rapidly becoming the norm in today’s world, you don’t need to provision or manage any servers. You simply upload your code (Lambda functions written in Node.js) and pick one of the CloudFront behaviors that you have created for the distribution, along with the desired CloudFront event:

In this case, my function (the imaginatively named EdgeFunc1) would run in response to origin requests for image/* within the indicated distribution. As you can see, you can run code in response to four different CloudFront events:

Viewer Request – This event is triggered when a request arrives from a viewer (an HTTP client, generally a web browser or a mobile app), and it has access to the incoming HTTP request. As you know, each CloudFront edge location maintains a large cache of objects so that it can efficiently respond to repeated requests. This particular event is triggered regardless of whether the requested object is already cached.

Origin Request – This event is triggered when the edge location is about to make a request back to the origin, because the requested object is not cached at the edge location. It has access to the request that will be made to the origin (often an S3 bucket or code running on an EC2 instance).

Origin Response – This event is triggered after the origin returns a response to a request. It has access to the response from the origin.

Viewer Response – This event is triggered before the edge location returns a response to the viewer. It has access to the response.

Functions are globally replicated and requests are automatically routed to the optimal location for execution. You can write your code once and, with no overt action on your part, have it available at low latency to users all over the world.

Your code has full access to requests and responses, including headers, cookies, the HTTP method (GET, HEAD, and so forth), and the URI. Subject to a few restrictions, it can modify existing headers and insert new ones.
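
To make the header-handling model concrete, here is a minimal sketch (mine, not from the post) of a Viewer Request function that adds a header derived from the User-Agent; the header name X-Device-Type is an arbitrary example. In the event object, header names appear as lowercased keys, and each value is an array of {key, value} objects.

'use strict';

/* Minimal illustrative sketch (not from the post): a Viewer Request handler
   that derives a device type from the User-Agent header and adds it as a
   custom header before the request continues through CloudFront. */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    const userAgent = headers['user-agent'] ? headers['user-agent'][0].value : '';
    const device    = /Mobile/.test(userAgent) ? 'mobile' : 'desktop';

    /* Add (or overwrite) a custom header; CloudFront can use it as part of the cache key */
    headers['x-device-type'] = [{key: 'X-Device-Type', value: device}];

    callback(null, request);
};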

Lambda@Edge in Action
Let’s create a simple function that runs in response to the Viewer Request event. I open up the Lambda Console and create a new function. I choose the Node.js 6.10 runtime and search for cloudfront blueprints:

I choose cloudfront-response-generation and configure a trigger to invoke the function:

The Lambda Console provides me with some information about the operating environment for my function:

I enter a name and a description for my function, as usual:

The blueprint includes a fully operational function. It generates a “200” HTTP response and a very simple body:

I used this as the starting point for my own code, which pulls some interesting values from the request and displays them in a table:

'use strict';
exports.handler = (event, context, callback) => {

    /* Set table row style */
    const rs = '"border-bottom:1px solid black;vertical-align:top;"';
    /* Get request */
    const request = event.Records[0].cf.request;
   
    /* Get values from request */ 
    const httpVersion = request.httpVersion;
    const clientIp    = request.clientIp;
    const method      = request.method;
    const uri         = request.uri;
    const headers     = request.headers;
    const host        = headers['host'][0].value;
    const agent       = headers['user-agent'][0].value;
    
    var sreq = JSON.stringify(event.Records[0].cf.request, null, ' ');
    sreq = sreq.replace(/\n/g, '<br/>');

    /* Generate body for response */
    const body = 
     '<html>\n'
     + '<head><title>Hello From Lambda@Edge</title></head>\n'
     + '<body>\n'
     + '<table style="border:1px solid black;background-color:#e0e0e0;border-collapse:collapse;" cellpadding=4 cellspacing=4>\n'
     + '<tr style=' + rs + '><td>Host</td><td>'        + host     + '</td></tr>\n'
     + '<tr style=' + rs + '><td>Agent</td><td>'       + agent    + '</td></tr>\n'
     + '<tr style=' + rs + '><td>Client IP</td><td>'   + clientIp + '</td></tr>\n'
     + '<tr style=' + rs + '><td>Method</td><td>'      + method   + '</td></tr>\n'
     + '<tr style=' + rs + '><td>URI</td><td>'         + uri      + '</td></tr>\n'
     + '<tr style=' + rs + '><td>Raw Request</td><td>' + sreq     + '</td></tr>\n'
     + '</table>\n'
     + '</body>\n'
     + '</html>';

    /* Generate HTTP response */
    const response = {
        status: '200',
        statusDescription: 'HTTP OK',
        httpVersion: httpVersion,
        body: body,
        headers: {
            'vary':          [{key: 'Vary',          value: '*'}],
            'last-modified': [{key: 'Last-Modified', value:'2017-01-13'}]
        },
    };

    callback(null, response);
};

I configure my handler, and request the creation of a new IAM Role with Basic Edge Lambda permissions:

On the next page I confirm my settings (as I would do for a regular Lambda function), and click on Create function:

This creates the function, attaches the trigger to the distribution, and also initiates global replication of the function. The status of my distribution changes to In Progress for the duration of the replication (typically 5 to 8 minutes):

The status changes back to Deployed as soon as the replication completes:

Then I access the root of my distribution (https://dogy9dy9kvj6w.cloudfront.net/), the function runs, and this is what I see:

Feel free to click on the image (it is linked to the root of my distribution) to run my code!

As usual, this is a very simple example and I am sure that you can do a lot better. Here are a few ideas to get you started:

Site Management – You can take an entire dynamic website offline and replace critical pages with Lambda@Edge functions for maintenance or during a disaster recovery operation.

High Volume Content – You can create scoreboards, weather reports, or public safety pages and make them available at the edge, both quickly and cost-effectively.

Create something cool and share it in the comments or in a blog post, and I’ll take a look.

Things to Know
Here are a couple of things to keep in mind as you start to think about how to put Lambda@Edge to use in your application:

Timeouts – Functions that handle Origin Request and Origin Response events must complete within 3 seconds. Functions that handle Viewer Request and Viewer Response events must complete within 1 second.

Versioning – After you update your code in the Lambda Console, you must publish a new version and set up a fresh set of triggers for it, and then wait for the replication to complete. You must always refer to your code using a version number; $LATEST and aliases do not apply.
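
For example (a hedged sketch; the function name EdgeFunc1 comes from earlier in this post), publishing a version from the CLI looks like this, and the resulting qualified ARN is what you associate with the CloudFront behavior:

# Publish a new, numbered version of the function (Lambda@Edge functions are created in us-east-1)
aws lambda publish-version --function-name EdgeFunc1 --region us-east-1

# The response includes a qualified ARN such as
#   arn:aws:lambda:us-east-1:123456789012:function:EdgeFunc1:2
# That versioned ARN (not $LATEST or an alias) is what the CloudFront trigger references.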

Headers – As you can see from my code, the HTTP request headers are accessible as an array. The headers fall into four categories:

  • Accessible – Can be read, written, deleted, or modified.
  • Restricted – Must be passed on to the origin.
  • Read-only – Can be read, but not modified in any way.
  • Blacklisted – Not seen by code, and cannot be added.

Runtime Environment – The runtime environment provides each function with 128 MB of memory, but no built-in libraries or access to /tmp.

Web Service Access – Functions that handle Origin Request and Origin Response events can access the AWS APIs and fetch content via HTTP within their 3 second limit. These requests are always made synchronously with respect to the original request or response.
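
As an illustration of that last point, here is a minimal sketch (not from the post) of an Origin Request handler that fetches a small JSON document over HTTPS and folds the result into a header before the request is forwarded to the origin; the endpoint and header name are hypothetical, and the whole round trip has to fit within the 3 second limit.

'use strict';

const https = require('https');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /* Hypothetical endpoint; the fetch must complete within the 3 second limit */
    https.get('https://config.example.com/flags.json', (res) => {
        let body = '';
        res.on('data', (chunk) => { body += chunk; });
        res.on('end', () => {
            let flag = 'off';
            try {
                flag = JSON.parse(body).betaFlag || 'off';
            } catch (e) {
                /* fall back to the default on parse errors */
            }
            request.headers['x-beta-flag'] = [{key: 'X-Beta-Flag', value: flag}];
            callback(null, request);
        });
    }).on('error', () => {
        /* fail open: forward the request unmodified */
        callback(null, request);
    });
};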

Function Replication – As I mentioned earlier, your functions will be globally replicated. The replicas are visible in the “other” regions from the Lambda Console:

CloudFront – Everything that you already know about CloudFront and CloudFront behaviors is relevant to Lambda@Edge. You can use multiple behaviors (each with up to four Lambda@Edge functions), customize header & cookie forwarding, and so forth. You can also make the association between events and functions (via ARNs that include function versions) while you are editing a behavior:

Available Now
Lambda@Edge is available now and you can start using it today. Pricing is based on the number of times that your functions are invoked and the amount of time that they run (see the Lambda@Edge Pricing page for more info).

Jeff;

 

Handy: Google Highlights ‘Best Torrent Sites’ in Search Results

Post Syndicated from Ernesto original https://torrentfreak.com/handy-google-highlights-best-torrent-sites-in-search-results-170709/

With torrent sites dropping like flies recently, a lot of people are looking for alternatives.

For many, Google is the preferred choice to find them, and the search engine is actually quite helpful.

When you type in “best torrent sites” or just “torrent sites,” Google.com provides a fancy reel of several high traffic indexers.

The search engine displays the names of sites such as RARBG, The Pirate Bay and 1337x as well as their logo. When you click on this link, Google brings up all results for the associated term.

While it’s a thought-provoking idea that Google employees might be manually curating the list, the entire process is likely automated. Still, many casual torrent users might find it quite handy. Whether rightsholders will be equally excited is another question, though.

The automated nature of this type of search result display also creates another problem. While many people know that most torrent sites offer pirated content, this is quite different with streaming portals.

This leads to a confusing situation where Google lists both legal and unauthorized streaming platforms when users search for “streaming sites.”

The screenshot below shows the pirate streaming site Putlocker next to Hulu and Crackle. The same lineup also rotates various other pirate sites such as Alluc and Movie4k.to.

The reels in question are most likely generated by algorithms, which don’t distinguish between authorized and unauthorized sources. Still, given Hollywood’s repeated criticism of Google for its supposed facilitation of piracy, it’s a bit unfortunate, to say the least.

This isn’t the first time that Google’s “rich” search results have featured pirate sites. The same happened in the past when the search engine displayed pirate site ratings of movies, next to ratings from regular review sites such as IMDb and Rotten Tomatoes.

We can expect the MPAA and others to take note, and bring these and other issues up at their convenience.

Note: the search reel doesn’t appear on many localized Google domains. We tested and confirmed it only on Google.com.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Perform Near Real-time Analytics on Streaming Data with Amazon Kinesis and Amazon Elasticsearch Service

Post Syndicated from Tristan Li original https://aws.amazon.com/blogs/big-data/perform-near-real-time-analytics-on-streaming-data-with-amazon-kinesis-and-amazon-elasticsearch-service/

Nowadays, streaming data is seen and used everywhere—from social networks, to mobile and web applications, IoT devices, instrumentation in data centers, and many other sources. As the speed and volume of this type of data increases, the need to perform data analysis in real time with machine learning algorithms and extract a deeper understanding from the data becomes ever more important. For example, you might want a continuous monitoring system to detect sentiment changes in a social media feed so that you can react to the sentiment in near real time.

In this post, we use Amazon Kinesis Streams to collect and store streaming data. We then use Amazon Kinesis Analytics to process and analyze the streaming data continuously. Specifically, we use the Kinesis Analytics built-in RANDOM_CUT_FOREST function, a machine learning algorithm, to detect anomalies in the streaming data. Finally, we use Amazon Kinesis Firehose to export the anomalies data to Amazon Elasticsearch Service (Amazon ES). We then build a simple dashboard in the open source tool Kibana to visualize the result.

Solution overview

The following diagram depicts a high-level overview of this solution.

Amazon Kinesis Streams

You can use Amazon Kinesis Streams to build your own streaming application. This application can process and analyze streaming data by continuously capturing and storing terabytes of data per hour from hundreds of thousands of sources.

Amazon Kinesis Analytics

Kinesis Analytics provides an easy and familiar standard SQL language to analyze streaming data in real time. One of its most powerful features is that there are no new languages, processing frameworks, or complex machine learning algorithms that you need to learn.

Amazon Kinesis Firehose

Kinesis Firehose is the easiest way to load streaming data into AWS. It can capture, transform, and load streaming data into Amazon S3, Amazon Redshift, and Amazon Elasticsearch Service.

Amazon Elasticsearch Service

Amazon ES is a fully managed service that makes it easy to deploy, operate, and scale Elasticsearch for log analytics, full text search, application monitoring, and more.

Solution summary

The following is a quick walkthrough of the solution that’s presented in the diagram:

  1. IoT sensors send streaming data into Kinesis Streams. In this post, you use a Python script to simulate an IoT temperature sensor device that sends the streaming data.
  2. By using the built-in RANDOM_CUT_FOREST function in Kinesis Analytics, you can detect anomalies in real time with the sensor data that is stored in Kinesis Streams. RANDOM_CUT_FOREST is also an appropriate algorithm for many other kinds of anomaly-detection use cases—for example, the media sentiment example mentioned earlier in this post.
  3. The processed anomaly data is then loaded into the Kinesis Firehose delivery stream.
  4. By using the built-in integration that Kinesis Firehose has with Amazon ES, you can easily export the processed anomaly data into the service and visualize it with Kibana.

Implementation steps

The following sections walk through the implementation steps in detail.

Creating the delivery stream

  1. Open the Amazon Kinesis Streams console.
  2. Create a new Kinesis stream. Give it a name that indicates it’s for raw incoming stream data—for example, RawStreamData. For Number of shards, type 1.
  3. The Python code provided below simulates a streaming application, such as an IoT device, and generates random data and anomalies into a Kinesis stream. The code generates two temperature ranges, where the first range is the hypothetical sensor’s normal operating temperature range (10–20), and the second is the anomaly temperature range (100–120). Make sure to change the stream name in the two put_record calls and the Region in the connect_to_region call to match your configuration. Alternatively, you can download the Amazon Kinesis Data Generator from this repository and use it to generate the data.
    import json
    import datetime
    import random
    import testdata
    from boto import kinesis
    
    kinesis = kinesis.connect_to_region("us-east-1")
    
    def getData(iotName, lowVal, highVal):
       data = {}
       data["iotName"] = iotName
       data["iotValue"] = random.randint(lowVal, highVal) 
       return data
    
    while 1:
       rnd = random.random()
       if (rnd < 0.01):
          data = json.dumps(getData("DemoSensor", 100, 120))  
          kinesis.put_record("RawStreamData", data, "DemoSensor")
          print '***************************** anomaly ************************* ' + data
       else:
          data = json.dumps(getData("DemoSensor", 10, 20))  
          kinesis.put_record("RawStreamData", data, "DemoSensor")
          print data

  4. Open the Amazon Elasticsearch Service console and create a new domain.
    1. Give the domain a unique name. In the Configure cluster screen, use the default settings.
    2. In the Set up access policy screen, in the Set the domain access policy list, choose Allow access to the domain from specific IP(s).
    3. Enter the public IP address of your computer.
      Note: If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this AWS Database blog post to learn how to work with a proxy. For additional information about securing access to your Amazon ES domain, see How to Control Access to Your Amazon Elasticsearch Domain in the AWS Security Blog.
  5. After the Amazon ES domain is up and running, you can set up and configure Kinesis Firehose to export results to Amazon ES:
    1. Open the Amazon Kinesis Firehose console and choose Create Delivery Stream.
    2. In the Destination dropdown list, choose Amazon Elasticsearch Service.
    3. Type a stream name, and choose the Amazon ES domain that you created in Step 4.
    4. Provide an index name and ES type. In the S3 bucket dropdown list, choose Create New S3 bucket. Choose Next.
    5. In the configuration, change the Elasticsearch Buffer size to 1 MB and the Buffer interval to 60s. Use the default settings for all other fields. This shortens the time for the data to reach the ES cluster.
    6. Under IAM Role, choose Create/Update existing IAM role.
      The best practice is to create a new role every time. Otherwise, the console keeps adding policy documents to the same role. Eventually the size of the attached policies causes IAM to reject the role, but it does so in a non-obvious way: the console essentially stops functioning.
    7. Choose Next to move to the Review page.
  6. Review the configuration, and then choose Create Delivery Stream.
  7. Run the Python file for 1–2 minutes, and then press Ctrl+C to stop the execution. This loads some data into the stream for you to visualize in the next step.

Analyzing the data

Now it’s time to analyze the IoT streaming data using Amazon Kinesis Analytics.

  1. Open the Amazon Kinesis Analytics console and create a new application. Give the application a name, and then choose Create Application.
  2. On the next screen, choose Connect to a source. Choose the raw incoming data stream that you created earlier. (Note the stream name Source_SQL_STREAM_001 because you will need it later.)
  3. Use the default settings for everything else. When the schema discovery process is complete, it displays a success message with the formatted stream sample in a table as shown in the following screenshot. Review the data, and then choose Save and continue.
  4. Next, choose Go to SQL editor. When prompted, choose Yes, start application.
  5. Copy the following SQL code and paste it into the SQL editor window.
    CREATE OR REPLACE STREAM "TEMP_STREAM" (
       "iotName"        varchar (40),
       "iotValue"   integer,
       "ANOMALY_SCORE"  DOUBLE);
    -- Creates an output stream and defines a schema
    CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
       "iotName"       varchar(40),
       "iotValue"       integer,
       "ANOMALY_SCORE"  DOUBLE,
       "created" TimeStamp);
     
    -- Compute an anomaly score for each record in the source stream
    -- using Random Cut Forest
    CREATE OR REPLACE PUMP "STREAM_PUMP_1" AS INSERT INTO "TEMP_STREAM"
    SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE FROM
      TABLE(RANDOM_CUT_FOREST(
        CURSOR(SELECT STREAM * FROM "SOURCE_SQL_STREAM_001")
      )
    );
    
    -- Sort records by descending anomaly score, insert into output stream
    CREATE OR REPLACE PUMP "OUTPUT_PUMP" AS INSERT INTO "DESTINATION_SQL_STREAM"
    SELECT STREAM "iotName", "iotValue", ANOMALY_SCORE, ROWTIME FROM "TEMP_STREAM"
    ORDER BY FLOOR("TEMP_STREAM".ROWTIME TO SECOND), ANOMALY_SCORE DESC;

 

  6. Choose Save and run SQL.
    As the application is running, it displays the results as stream data arrives. If you don’t see any data coming in, run the Python script again to generate some fresh data. When there is data, it appears in a grid as shown in the following screenshot. Note that you are selecting data from the source stream name Source_SQL_STREAM_001 that you created previously. Also note the ANOMALY_SCORE column. This is the value that the Random_Cut_Forest function calculates based on the temperature ranges provided by the Python script; higher (anomaly) temperature ranges produce a higher score. Looking at the SQL code, note that the first two blocks of code create two new streams to store temporary data and the final result. The third block of code (STREAM_PUMP_1) analyzes the raw source data using the Random_Cut_Forest function, calculates an anomaly score (ANOMALY_SCORE), and inserts it into the TEMP_STREAM stream. The final code block loads the result stored in the TEMP_STREAM into DESTINATION_SQL_STREAM.
  7. Choose Exit (done editing) next to the Save and run SQL button to return to the application configuration page.

Load processed data into the Kinesis Firehose delivery stream

Now, you can export the result from DESTINATION_SQL_STREAM into the Amazon Kinesis Firehose stream that you created previously.

  1. On the application configuration page, choose Connect to a destination.
  2. Choose the stream name that you created earlier, and use the default settings for everything else. Then choose Save and Continue.
  3. On the application configuration page, choose Exit to Kinesis Analytics applications to return to the Amazon Kinesis Analytics console.
  4. Run the Python script again for 4–5 minutes to generate enough data to flow through Amazon Kinesis Streams, Kinesis Analytics, Kinesis Firehose, and finally into the Amazon ES domain.
  5. Open the Kinesis Firehose console, choose the stream, and then choose the Monitoring tab.
  6. As the processed data flows into Kinesis Firehose and Amazon ES, the metrics appear on the Delivery Stream metrics page. Keep in mind that the metrics page takes a few minutes to refresh with the latest data.
  7. Open the Amazon Elasticsearch Service dashboard in the AWS Management Console. The count in the Searchable documents column increases as shown in the following screenshot. In addition, the domain shows a cluster health of Yellow. This is because, by default, it needs two instances to deploy redundant copies of the index. To fix this, you can deploy two instances instead of one.
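
If you want to confirm that documents are actually landing in the index without waiting on the console, a quick count query against the Amazon ES endpoint works; this is a hedged sketch in which both the endpoint and the index name are placeholders for whatever you chose when creating the domain and the Firehose delivery stream. Because the domain’s access policy in this walkthrough is IP-based, an unsigned request from the whitelisted IP address is sufficient.

# Placeholders: substitute your own domain endpoint and index name
curl -s "https://search-my-domain.us-east-1.es.amazonaws.com/anomaly-index/_count?pretty"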

Visualize the data using Kibana

Now it’s time to launch Kibana and visualize the data.

  1. Use the ES domain link to go to the cluster detail page, and then choose the Kibana link as shown in the following screenshot.

    If you’re working behind a proxy or firewall, see the “Use a proxy to simplify request signing” section in this blog post to learn how to work with a proxy.
  2. In the Kibana dashboard, choose the Discover tab to perform a query.
  3. You can also visualize the data using the different types of charts offered by Kibana. For example, by going to the Visualize tab, you can quickly create a split bar chart that aggregates by ANOMALY_SCORE per minute.


Conclusion

In this post, you learned how to use Amazon Kinesis to collect, process, and analyze real-time streaming data, and then export the results to Amazon ES for analysis and visualization with Kibana. If you have comments about this post, add them to the “Comments” section below. If you have questions or issues with implementing this solution, please open a new thread on the Amazon Kinesis or Amazon ES discussion forums.


Next Steps

Take your skills to the next level. Learn real-time clickstream anomaly detection with Amazon Kinesis Analytics.

 


About the Author

Tristan Li is a Solutions Architect with Amazon Web Services. He works with enterprise customers in the US, helping them adopt cloud technology to build scalable and secure solutions on AWS.

 

 

 

 

VästtraPi: your personal bus stop schedule monitor

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/public-transport-vasttrapi/

I get impatient quickly when I’m looking up information on my phone. There’s just something about it that makes me jittery – especially when the information is time-sensitive, like timetables for public transport. If you’re like me, then Dimitris Platis‘s newest build is for you. He has created the VästtraPi, a Pi-powered departure time screen for your home!

Never miss the bus again with VästtraPi

Let me set the scene: it’s a weekday morning, and you’ve finally woken up enough to think about taking the bus to work. How much time do you have to catch it, though? You pick up your phone, unlock it, choose the right app, wait for it to update – and realise this took so much time that you’ll probably miss the next bus! Grrrrrr!

Running after a streetcar

Never again!

Now picture this: instead of using your phone, you can glance at a personalized real-time bus schedule monitor while sipping your tea at breakfast.

Paul Rudd is fairly impressed

That would be pretty neat, wouldn’t it?

Such a device is exactly what Dimitris has created with the VästtraPi, and he has provided instructions so you can make your own. One less stress factor for your morning commute!

Stephen Colbert and Jon Stewart are very impressed.

I agree with Stephen and Jon.

Setting up the VästtraPi

The main pieces of hardware making up the VästtraPi are a Raspberry Pi Zero W, an LCD screen, and a power control board designed by Dimitris which switches the device on and off. He explains where to buy the board’s components, as well as all the other parts of the build, and how to put them together. He’s also 3D-printed a simple case.

On the software side, a Python script accesses the API provided by Dimitris’s local public transportation company, Västtrafik, and repeatedly fetches information about his favourite bus stop. It displays the information using neat graphics, generated with the help of Tkinter, the standard GUI package for Python. The device is set up so that pressing the ‘on’ button starts up the Pi. The script then runs automatically for ten minutes before safely shutting everything down. Very economical!
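
To give a rough idea of the overall shape of such a script (this is not Dimitris’s code; the endpoint URL and JSON field names are invented, and the real Västtrafik API requires its own authentication), a minimal polling Tkinter sketch might look like this:

import json
import urllib.request
import tkinter as tk

API_URL = "https://transit.example.com/departures?stop=my-stop"  # hypothetical endpoint

def fetch_departures():
    """Fetch upcoming departures and return them as display strings."""
    with urllib.request.urlopen(API_URL, timeout=10) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    return ["{line}  {destination}  {minutes} min".format(**d) for d in data["departures"]]

def refresh():
    try:
        label.config(text="\n".join(fetch_departures()))
    except Exception as exc:            # keep the GUI alive if the API is unreachable
        label.config(text="Could not reach the API: {}".format(exc))
    root.after(30000, refresh)          # poll again in 30 seconds

root = tk.Tk()
root.title("Departures")
label = tk.Label(root, font=("DejaVu Sans Mono", 24), justify="left")
label.pack(padx=20, pady=20)
refresh()
root.mainloop()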

Dimitris has even foreseen what you’re likely to be thinking right now:

So, is this faster than the mobile app solution? Yes and no. The Raspberry Pi Zero W needs around 30 seconds to boot up and display the GUI. Without any optimizations it is naturally slower than my phone. VästtraPi’s biggest advantage is that it allows me to multitask while it is loading.

Build your own live bus schedule monitor

All the schematics and code are available via Dimitris’s write-up. He says that, for the moment, “the bus station, selected platform and bus line destinations that are displayed are hard-coded” in his script, but that it would be easy to amend for your own purposes. Of course, when recreating this build, you’ll want to use your own local public transport provider’s API, so some tweaking of his code will be required anyway.

What do you think – will this improve your morning routine? Are you up to the challenge of adapting it? Or do you envision modifying the build to display other live information? Let us know how you get on in the comments.

The post VästtraPi: your personal bus stop schedule monitor appeared first on Raspberry Pi.

New Information in the AWS IAM Console Helps You Follow IAM Best Practices

Post Syndicated from Rob Moncur original https://aws.amazon.com/blogs/security/newly-updated-features-in-the-aws-iam-console-help-you-adhere-to-iam-best-practices/

Today, we added new information to the Users section of the AWS Identity and Access Management (IAM) console to make it easier for you to follow IAM best practices. With this new information, you can more easily monitor users’ activity in your AWS account and identify access keys and passwords that you should rotate regularly. You can also better audit users’ MFA device usage and keep track of their group memberships. In this post, I show how you can use this new information to help you follow IAM best practices.

Monitor activity in your AWS account

The IAM best practice, monitor activity in your AWS account, encourages you to monitor user activity in your AWS account by using services such as AWS CloudTrail and AWS Config. In addition to monitoring usage in your AWS account, you should be aware of inactive users so that you can remove them from your account. By only retaining necessary users, you can help maintain the security of your AWS account.

To help you find users that are inactive, we added three new columns to the IAM user table: Last activity, Console last sign-in, and Access key last used.
Screenshot showing three new columns in the IAM user table

  1. Last activity – This column tells you how long it has been since the user has either signed in to the AWS Management Console or accessed AWS programmatically with their access keys. Use this column to find users who might be inactive, and consider removing them from your AWS account.
  2. Console last sign-in – This column displays the time since the user’s most recent console sign-in. Consider removing passwords from users who are not signing in to the console.
  3. Access key last used – This column displays the time since a user last used access keys. Use this column to find any access keys that are not being used, and deactivate or remove them.
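
The same signals are available programmatically if you want to script an audit; here is a hedged AWS CLI sketch, where the user name and access key ID are placeholders:

# List a user's access keys, then check when each key was last used
aws iam list-access-keys --user-name alice
aws iam get-access-key-last-used --access-key-id AKIAIOSFODNN7EXAMPLE

# Console sign-in activity (only present if the user has a password and has used it)
aws iam get-user --user-name alice --query 'User.PasswordLastUsed'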

Rotate credentials regularly

The IAM best practice, rotate credentials regularly, recommends that all users in your AWS account change passwords and access keys regularly. With this practice, if a password or access key is compromised without your knowledge, you can limit how long the credentials can be used to access your resources. To help your management efforts, we added three new columns to the IAM user table: Access key age, Password age, and Access key ID.

Screenshot showing three new columns in the IAM user table

  1. Access key age – This column shows how many days it has been since the oldest active access key was created for a user. With this information, you can audit access keys easily across all your users and identify the access keys that may need to be rotated.

Based on the number of days since the access key has been rotated, a green, yellow, or red icon is displayed. To see the corresponding time frame for each icon, pause your mouse pointer on the Access key age column heading to see the tooltip, as shown in the following screenshot.

Icons showing days since the oldest active access key was created

  2. Password age – This column shows the number of days since a user last changed their password. With this information, you can audit password rotation and identify users who have not changed their password recently. The easiest way to make sure that your users are rotating their password often is to establish an account password policy that requires users to change their password after a specified time period.
  3. Access key ID – This column displays the access key IDs for users and the current status (Active/Inactive) of those access key IDs. This column makes it easier for you to locate and see the state of access keys for each user, which is useful for auditing. To find a specific access key ID, use the search box above the table.
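
For an account-wide view of the same rotation data, the IAM credential report covers every user in one CSV; here is a hedged sketch of pulling it with the CLI:

# Generate the report, then download and decode it; the CSV includes columns such as
# password_last_changed and access_key_1_last_rotated
aws iam generate-credential-report
aws iam get-credential-report --query Content --output text | base64 --decode > credential-report.csv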

Enable MFA for privileged users

Another IAM best practice is to enable multi-factor authentication (MFA) for privileged IAM users. With MFA, users have a device that generates a unique authentication code (a one-time password [OTP]). Users must provide both their normal credentials (such as their user name and password) and the OTP when signing in.

To help you see if MFA has been enabled for your users, we’ve improved the MFA column to show you if MFA is enabled and which type of MFA (hardware, virtual, or SMS) is enabled for each user, where applicable.

Screenshot showing the improved "MFA" column

Use groups to assign permissions to IAM users

Instead of defining permissions for individual IAM users, it’s usually more convenient to create groups that relate to job functions (such as administrators, developers, and accountants), define the relevant permissions for each group, and then assign IAM users to those groups. All the users in an IAM group inherit the permissions assigned to the group. This way, if you need to modify permissions, you can make the change once for everyone in a group instead of making the change one time for each user. As people move around in your company, you can change the group membership of the IAM user.
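
For reference, the group-based workflow is only a few CLI calls; in this hedged sketch the group name, user name, and managed policy are placeholders chosen for illustration:

# Create a job-function group, attach a managed policy to it, and add a user
aws iam create-group --group-name Developers
aws iam attach-group-policy --group-name Developers \
    --policy-arn arn:aws:iam::aws:policy/PowerUserAccess
aws iam add-user-to-group --group-name Developers --user-name alice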

To better understand which groups your users belong to, we’ve made updates:

  1. Groups – This column now lists the groups of which a user is a member. This information makes it easier to understand and compare multiple users’ permissions at once.
  2. Group count – This column shows the number of groups to which each user belongs.

Screenshot showing the updated "Groups" and "Group count" columns

Customize your view

Choosing which columns you see in the User table is easy to do. When you click the button with the gear icon in the upper right corner of the table, you can choose the columns you want to see, as shown in the following screenshots.

Screenshot showing gear icon  Screenshot of "Manage columns" dialog box

Conclusion

We made these improvements to the Users section of the IAM console to make it easier for you to follow IAM best practices in your AWS account. Following these best practices can help you improve the security of your AWS resources and make your account easier to manage.

If you have comments about this post, submit them in the “Comments” section below. If you have questions or suggestions, please start a new thread on the IAM forum.

– Rob

Monitoring HiveMQ with InfluxDB and Grafana

Post Syndicated from The HiveMQ Team original http://www.hivemq.com/blog/monitoring-hivemq-influxdb-grafana

hivemq_monitoring-influx

You need to monitor your system

System monitoring is an essential part of any production software deployment. Some people believe it to be as critical as security, and that it should be given the same attention. Historical challenges to effective monitoring are a lack of cohesive tools and the wrong mindset; these can lead to a false sense of security that it is important not to fall victim to. At the end of this blog post we will provide you with a standardized dashboard, including metrics we believe to be useful for live monitoring of MQTT brokers. This in no way means that these are all the metrics you need to monitor, or that we could possibly know what’s crucial to your use case and deployment.

In order to provide you with the opportunity to implement cohesive monitoring tools, the HiveMQ core distribution comes with the JVM Metrics Plugin and the JMX Plugin. The JVM Metrics Plugin adds crucial JVM metrics to the already available HiveMQ metrics, and the JMX Plugin enables JMX monitoring for any JMX monitoring tool, such as JConsole.

Real-time monitoring with tools like JConsole is certainly better than nothing, but it has its own disadvantages. HiveMQ is often deployed in a container environment, and therefore direct access to the HiveMQ process might not be possible. Beyond that, a time series monitoring solution also provides the added benefit of functioning as a great debugging tool when you are trying to find the root cause of a system crash or a similar problem.

The AWS Cloudwatch Plugin, Graphite Plugin and InfluxDB Plugin are free of charge and ready to use plugins provided by HiveMQ to enable time series monitoring.

Our recommendation

We routinely get asked about recommendations for monitoring tools. At the end of the day this is down to preference and ultimately your decision. In the past we have had good experiences with the combination of Telegraf, InfluxDB and a Grafana dashboard.

Telegraf can be used for gathering system metrics and writing them to the InfluxDB. HiveMQ is able to write its own metrics to the InfluxDB as well and a Grafana dashboard is a good solution for visualizing these gathered metrics.

Example Dashboard

Please note that there are countless other viable monitoring options available.

Installation and configuration

The first step to achieving our desired monitoring setup is installing and starting InfluxDB. InfluxDB works out of the box without adding additional configuration.
When InfluxDB is installed and running, use the command line tool to create a database called ‘hivemq’.

$ influx
Connected to http://localhost:8086 version 1.3.0
InfluxDB shell version: v1.2.3
> CREATE DATABASE hivemq

Attention: InfluxDB does not provide authentication by default, which could open your metrics up to a third party when running the InfluxDB on an external server. Make sure you cover this potential security issue.

InfluxDB data will grow rapidly. This can and will lead to the use of large amounts of disk space after running your InfluxDB for some time. To deal with this challenge, InfluxDB offers the possibility to create so-called retention policies. In our opinion it is sufficient to retain your InfluxDB data for two weeks. The syntax for creating this retention policy looks like this:

$ influx
Connected to http://localhost:8086 version 1.3.0
InfluxDB shell version: v1.2.3
> CREATE RETENTION POLICY "two_weeks_only" ON "hivemq" DURATION 2w REPLICATION 1

You have to decide which retention policy, if any, is best for your individual use case.

The second step is downloading the InfluxDB HiveMQ Plugin. For this demonstration all the services will be running locally, so we can use the influxdb.properties file that is included in the HiveMQ Plugin without any adjustments. Bear in mind that you need to change the IP address when running an external InfluxDB.

When running HiveMQ in a cluster it is important you use the exact same influxdb.properties on each node with the exception of this property:

tags:host=hivemq1

This property should be set individually for each HiveMQ node in the cluster for better transparency.

This plugin will now gather all the available HiveMQ metrics (given the JMX Plugin is also running) and write them to the configured InfluxDB.

The third step is installing Telegraf on each HiveMQ cluster node.

Now a telegraf.conf needs to be configured, telling Telegraf which metrics it should gather and eventually write to an InfluxDB. The default telegraf.conf is bloated with comments and options that are not needed for HiveMQ monitoring. The config we propose looks like this:

[tags]
node = "example-node"

[agent]
interval = "5s"

# OUTPUTS
[outputs]
[outputs.influxdb]
url = "http://localhost:8086"
database = "hivemq" # required.
precision = "s"

# PLUGINS
[cpu]
percpu = true
totalcpu = true

[system]

[disk]

[mem]

[diskio]

[net]

[kernel]

[processes]

This configuration provides metrics for

  • CPU: CPU Usage divided into spaces
  • System: Load, Uptime
  • Disk: Disk Usage, Free Space, INodes used
  • DiskIO: IO Time, Operations
  • Memory: RAM Used, Buffered, Cached
  • Kernel: Linux specific information like context switching
  • Processes: Systemwide process information

Note that some modules like Kernel may not be available on non-Linux systems.

Make sure to change the url, when not using a local InfluxDB.

This configuration will gather the CPU’s percentage and total usage every five seconds. See this page for other possible configurations of the system input.

At this point, the terminal window you are running influxd in should be showing something like this:

[httpd] ::1 - - [20/Jun/2017:13:36:46 +0200] "POST /write?db=hivemq HTTP/1.1" 204 0 "-" "-" bdad5fd9-55ac-11e7-8550-000000000000 9743

This shows a successful write of the Telegraf metrics to the InfluxDB.
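
You can also check from the influx shell that both the Telegraf measurements and the HiveMQ metrics are arriving; the measurement name in the SELECT below is just one example of a HiveMQ metric, and yours may differ depending on the plugin’s prefix configuration.

$ influx
> USE hivemq
> SHOW MEASUREMENTS LIMIT 5
> SELECT * FROM "com.hivemq.messages.incoming.publish.count" ORDER BY time DESC LIMIT 3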

The next step is installing and starting Grafana.

Grafana works out of the box and can be reached via localhost:3000.

The next step is configuring our InfluxDB as Grafana’s data source.

Step 1: Add Data Source

Step 2: Configure InfluxDB

Now we need a dashboard. As this question comes up quite often, we decided to provide a dashboard template that displays some useful metrics for most MQTT deployments and should give you a good starting point for building your own dashboard tailored to your use case. You can download the template here. The JSON file inside the zip can be imported into Grafana.

Step 3: Import Dashboard

That’s it. We now have a working dashboard displaying metrics whose monitoring has proven vital in many MQTT deployments.

Disclaimer: This is one possibility and a good starting point for monitoring your MQTT use case. Naturally, the requirements of your individual case may vary. We suggest reading the Grafana getting started guide and finding what works best for you and your deployment.

Cybercrime Officials Shutdown Large eBook Portal, Three Arrested

Post Syndicated from Andy original https://torrentfreak.com/cybercrime-officials-shutdown-large-ebook-portal-three-arrested-170626/

Back in February 2015, German anti-piracy outfit GVU filed a complaint against the operators of large eBook portal Lul.to.

Targeted mainly at the German audience, the site carried around 160,000 eBooks and 28,000 audiobooks, plus newspapers and periodicals. Its motto was "Read and Listen," and it claimed to be both the largest German eBook portal and the largest DRM-free platform in the world.

Unlike most file-sharing sites, Lul.to charged around 30,000 customers a small fee to access content, around $0.23 per download. However, all that came to an end last week when authorities moved to shut the platform down.

According to the General Prosecutor’s Office, searches in several locations led to the discovery of around 55,000 euros in bitcoin, 100,000 euros in bank deposits, 10,000 euros in cash, plus a “high-quality” motorcycle.

As is often the case following significant action, the site has been completely taken down and now displays the following seizure notice.

Lul.to seized (translated from German)

Authorities report that three people were arrested and are being detained while investigations continue.

It is not yet clear how many times the site’s books were downloaded by users but investigators believe that the retail value of the content offered on the site was around 392,000 euros. By volume, investigators seized more than 11 terabytes of data.

The German Publishers & Booksellers Association welcomed the shutdown of the platform.

“Intervening against lul.to is an important success in the fight against Internet piracy. By blocking one of the largest illegal providers for e-books and audiobooks, many publishers and retailers can breathe,” said CEO Alexander Skipis.

“Piracy is not an excusable offense, it’s the theft of intellectual property, which is the basis for the work of authors, publishers, and bookshops. Portals like lul.to harm the media market massively. The success of the investigation is another example of the fact that such illegal models ultimately can not hold up.”

Last week in a separate case in Denmark, three men aged between 26 and 71-years-old were handed suspended sentences for offering subscription access to around 198 pirate textbooks.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

CoderDojo Coolest Projects 2017

Post Syndicated from Ben Nuttall original https://www.raspberrypi.org/blog/coderdojo-coolest-projects-2017/

When I heard we were merging with CoderDojo, I was delighted. CoderDojo is a wonderful organisation with a spectacular community, and it’s going to be great to join forces with the team and work towards our common goal: making a difference to the lives of young people by making technology accessible to them.

You may remember that last year Philip and I went along to Coolest Projects, CoderDojo’s annual event at which their global community showcase their best makes. It was awesome! This year a whole bunch of us from the Raspberry Pi Foundation attended Coolest Projects with our new Irish colleagues, and as expected, the projects on show were as cool as can be.

Coolest Projects 2017 attendee

Crowd at Coolest Projects 2017

This year’s coolest projects!

Young maker Benjamin demoed his brilliant RGB LED table tennis ball display for us, and showed off his project tutorial website codemakerbuddy.com, which he built with Python and Flask. [Click on any of the images to enlarge them.]

Coolest Projects 2017 LED ping-pong ball display
Coolest Projects 2017 Benjamin and Oly

Next up, Aimee showed us a recipes app she’d made with the MIT App Inventor. It was a really impressive and well thought-out project.

Coolest Projects 2017 Aimee's cook book
Coolest Projects 2017 Aimee's setup

This very successful OpenCV face detection program with hardware installed in a teddy bear was great as well:

Coolest Projects 2017 face detection bear
Coolest Projects 2017 face detection interface
Coolest Projects 2017 face detection database

Helen’s and Oly’s favourite project involved…live bees!

Coolest Projects 2017 live bees

BEEEEEEEEEEES!

Its creator, 12-year-old Amy, said she wanted to do something to help the Earth. Her project uses various sensors to record data on the bee population in the hive. An adjacent monitor displays the data in a web interface:

Coolest Projects 2017 Aimee's bees

Coolest robots

I enjoyed seeing lots of GPIO Zero projects out in the wild, including this robotic lawnmower made by Kevin and Zach:

Raspberry Pi Lawnmower

Kevin and Zach’s Raspberry Pi lawnmower project with Python and GPIO Zero, showed at CoderDojo Coolest Projects 2017

Philip’s favourite make was a Pi-powered robot you can control with your mind! According to the maker, Laura, it worked really well with Philip because he has no hair.

Philip Colligan on Twitter

This is extraordinary. Laura from @CoderDojo Romania has programmed a mind controlled robot using @Raspberry_Pi @coolestprojects

And here are some pictures of even more cool robots we saw:

Coolest Projects 2017 coolest robot no.1
Coolest Projects 2017 coolest robot no.2
Coolest Projects 2017 coolest robot no.3

Games, toys, activities

Oly and I were massively impressed with the work of Mogamad, Daniel, and Basheerah, who programmed a (borrowed) Amazon Echo to make a voice-controlled text-adventure game using Java and the Alexa API. They’ve inspired me to try something similar using the AIY projects kit and adventurelib!

Coolest Projects 2017 Mogamad, Daniel, Basheerah, Oly
Coolest Projects 2017 Alexa text-based game

Christopher Hill did a brilliant job with his Home Alone LEGO house. He used sensors to trigger lights and sounds to make it look like someone’s at home, like in the film. I should have taken a video – seeing it in action was great!

Coolest Projects 2017 Lego home alone house
Coolest Projects 2017 Lego home alone innards
Coolest Projects 2017 Lego home alone innards closeup

Meanwhile, the Northern Ireland Raspberry Jam group ran a DOTS board activity, which turned their area into a conductive paint hazard zone.

Coolest Projects 2017 NI Jam DOTS activity 1
Coolest Projects 2017 NI Jam DOTS activity 2
Coolest Projects 2017 NI Jam DOTS activity 3
Coolest Projects 2017 NI Jam DOTS activity 4
Coolest Projects 2017 NI Jam DOTS activity 5
Coolest Projects 2017 NI Jam DOTS activity 6

Creativity and ingenuity

We really enjoyed seeing so many young people collaborating, experimenting, and taking full advantage of the opportunity to make real projects. And we loved how huge the range of technologies in use was: people employed all manner of hardware and software to bring their ideas to life.

Philip Colligan on Twitter

Wow! Look at that room full of awesome young people. @coolestprojects #coolestprojects @CoderDojo

Congratulations to the Coolest Projects 2017 prize winners, and to all participants. Here are some of the teams that won in the different categories:

Coolest Projects 2017 winning team 1
Coolest Projects 2017 winning team 2
Coolest Projects 2017 winning team 3

Take a look at the gallery of all winners over on Flickr.

The wow factor

Raspberry Pi co-founder and Foundation trustee Pete Lomas came along to the event as well. Here’s what he had to say:

It’s hard to describe the scale of the event, and photos just don’t do it justice. The first thing that hit me was the sheer excitement of the CoderDojo ninjas [the children attending Dojos]. Everyone was setting up for their time with the project judges, and their pure delight at being able to show off their creations was evident in both halls. Time and time again I saw the ninjas apply their creativity to help save the planet or make someone’s life better, and it’s truly exciting that we are going to help that continue and expand.

Even after 8 hours, enthusiasm wasn’t flagging – the awards ceremony was just brilliant, with ninjas high-fiving the winners on the way to the stage. This speaks volumes about the ethos and vision of the CoderDojo founders, where everyone is a winner just by being part of a community of worldwide friends. It was a brilliant introduction, and if this weekend was anything to go by, our merger certainly is a marriage made in Heaven.

Join this awesome community!

If all this inspires you as much as it did us, consider looking for a CoderDojo near you – and sign up as a volunteer! There’s plenty of time for young people to build up skills and start working on a project for next year’s event. Check out coolestprojects.com for more information.

The post CoderDojo Coolest Projects 2017 appeared first on Raspberry Pi.

Manage Instances at Scale without SSH Access Using EC2 Run Command

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/manage-instances-at-scale-without-ssh-access-using-ec2-run-command/

The guest post below, written by Ananth Vaidyanathan (Senior Product Manager for EC2 Systems Manager) and Rich Urmston (Senior Director of Cloud Architecture at Pegasystems) shows you how to use EC2 Run Command to manage a large collection of EC2 instances without having to resort to SSH.

Jeff;


Enterprises often have several managed environments and thousands of Amazon EC2 instances. It’s important to manage systems securely, without the headaches of Secure Shell (SSH). Run Command, part of Amazon EC2 Systems Manager, allows you to run remote commands on instances (or groups of instances using tags) in a controlled and auditable manner. It’s been a nice added productivity boost for Pega Cloud operations, which rely daily on Run Command services.

You can control Run Command access through standard IAM roles and policies, define documents that take input parameters, and control the S3 bucket used to return command output. You can also share your documents with other AWS accounts, or with the public. All in all, Run Command provides a nice set of remote management features.

Better than SSH
Here’s why Run Command is a better option than SSH and why Pegasystems has adopted it as their primary remote management tool:

Run Command Takes Less Time – Securely connecting to an instance requires several steps, e.g. jump boxes to connect to or IP addresses to whitelist. With Run Command, cloud ops engineers can invoke commands directly from their laptops and never have to find keys or even instance IDs. Instead, system security relies on AWS authentication, IAM roles, and policies.

Run Command Operations are Fully Audited – With SSH, there is no real control over what users can do, nor is there an audit trail. With Run Command, every invoked operation is audited in CloudTrail, including information on the invoking user, the instances on which the command was run, the parameters, and the operation status. You have full control and the ability to restrict what functions engineers can perform on a system.

Run Command has no SSH keys to Manage – Run Command leverages standard AWS credentials, API keys, and IAM policies. Through integration with a corporate auth system, engineers can interact with systems based on their corporate credentials and identity.

Run Command can Manage Multiple Systems at the Same Time – Simple tasks such as looking at the status of a Linux service or retrieving a log file across a fleet of managed instances is cumbersome using SSH. Run Command allows you to specify a list of instances by IDs or tags, and invokes your command, in parallel, across the specified fleet. This provides great leverage when troubleshooting or managing more than the smallest Pega clusters.

Run Command Makes Automating Complex Tasks Easier – Standardizing operational tasks requires detailed procedure documents or scripts describing the exact commands. Managing or deploying these scripts across the fleet is cumbersome. Run Command documents provide an easy way to encapsulate complex functions, and handle document management and access controls. When combined with AWS Lambda, documents provide a powerful automation platform to handle any complex task.

Example – Restarting a Docker Container
Here is an example of a simple document used to restart a Docker container. It takes one parameter; the name of the Docker container to restart. It uses the AWS-RunShellScript method to invoke the command. The output is collected automatically by the service and returned to the caller. For an example of the latest document schema, see Creating Systems Manager Documents.

{
  "schemaVersion":"1.2",
  "description":"Restart the specified docker container.",
  "parameters":{
    "param":{
      "type":"String",
      "description":"(Required) name of the container to restart.",
      "maxChars":1024
    }
  },
  "runtimeConfig":{
    "aws:runShellScript":{
      "properties":[
        {
          "id":"0.aws:runShellScript",
          "runCommand":[
            "docker restart {{param}}"
          ]
        }
      ]
    }
  }
}
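
Outside of the Pega tooling described below, a document like this could also be invoked directly with the AWS CLI; this is a hedged sketch in which the document name, tag filter, and S3 bucket are placeholders:

aws ssm send-command \
    --document-name "docker-restart" \
    --targets "Key=tag:Environment,Values=Dev" \
    --parameters "param=pega-web" \
    --output-s3-bucket-name "my-run-command-output"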

Putting Run Command into practice at Pegasystems
The Pegasystems provisioning system sits on AWS CloudFormation, which is used to deploy and update Pega Cloud resources. Layered on top of it is the Pega Provisioning Engine, a serverless, Lambda-based service that manages a library of CloudFormation templates and Ansible playbooks.

A Configuration Management Database (CMDB) tracks all the configurations details and history of every deployment and update, and lays out its data using a hierarchical directory naming convention. The following diagram shows how the various systems are integrated:

For cloud system management, Pega operations uses a command line version called cuttysh and a graphical version based on the Pega 7 platform, called the Pega Operations Portal. Both tools allow you to browse the CMDB of deployed environments, view configuration settings, and interact with deployed EC2 instances through Run Command.

CLI Walkthrough
Here is a CLI walkthrough for looking into a customer deployment and interacting with instances using Run Command.

Launching the cuttysh tool brings you to the root of the CMDB and a list of the provisioned customers:

% cuttysh
d CUSTA
d CUSTB
d CUSTC
d CUSTD

You interact with the CMDB using standard Linux shell commands, such as cd, ls, cat, and grep. Items prefixed with s are services that have viewable properties. Items prefixed with d are navigable subdirectories in the CMDB hierarchy.

In this example, change directories into customer CUSTB’s portion of the CMDB hierarchy, and then further into a provisioned Pega environment called env1, under the Dev network. The tool displays the artifacts that are provisioned for that environment. These entries map to provisioned CloudFormation templates.

> cd CUSTB
/ROOT/CUSTB/us-east-1 > cd DEV/env1

The ls -l command shows the version of the provisioned resources. These version numbers map back to source control–managed artifacts for the CloudFormation, Ansible, and other components that compose a version of the Pega Cloud.

/ROOT/CUSTB/us-east-1/DEV/env1 > ls -l
s 1.2.5 RDSDatabase 
s 1.2.5 PegaAppTier 
s 7.2.1 Pega7 

Now, use Run Command to interact with the deployed environments. To do this, use the attach command and specify the service with which to interact. In the following example, you attach to the Pega Web Tier. Using the information in the CMDB and instance tags, the CLI finds the corresponding EC2 instances and displays some basic information about them. This deployment has three instances.

/ROOT/CUSTB/us-east-1/DEV/env1 > attach PegaWebTier
 # ID         State  Public Ip    Private Ip  Launch Time
 0 i-0cf0e84 running 52.63.216.42 10.96.15.70 2017-01-16 
 1 i-0043c1d running 53.47.191.22 10.96.15.43 2017-01-16 
 2 i-09b879e running 55.93.118.27 10.96.15.19 2017-01-16 

From here, you can use the run command to invoke Run Command documents. In the following example, you run the docker-ps document against instance 0 (the first one on the list). The command executes on the instance and the output is returned to the CLI, which displays it.

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 docker-ps
. . 
CONTAINER ID IMAGE             CREATED      STATUS        NAMES
2f187cc38c1  pega-7.2         10 weeks ago  Up 8 weeks    pega-web

Using the same command and some of the other documents that have been defined, you can restart a Docker container or even pull back the contents of a file to your local system. When you get a file, Run Command also leaves a copy in an S3 bucket in case you want to pass the link along to a colleague.

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 docker-restart pega-web
..
pega-web

/ROOT/CUSTB/us-east-1/DEV/env1 > run 0 get-file /var/log/cfn-init-cmd.log
. . . . . 
get-file

Data has been copied locally to: /tmp/get-file/i-0563c9e/data
Data is also available in S3 at: s3://my-bucket/CUSTB/cuttysh/get-file/data
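
cuttysh is Pega’s internal tool, but this kind of output retrieval and S3 hand-off maps naturally onto the public Run Command API. The following is a rough sketch only; the bucket, key prefix, and instance ID are placeholders, not what Pega actually uses:

import time
import boto3

ssm = boto3.client('ssm')

# Ask Run Command to copy the full output to S3 as well as returning it inline.
cmd = ssm.send_command(
    InstanceIds=['i-0123456789abcdef0'],
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': ['cat /var/log/cfn-init-cmd.log']},
    OutputS3BucketName='my-bucket',
    OutputS3KeyPrefix='CUSTB/cuttysh/get-file',
)
command_id = cmd['Command']['CommandId']

# Poll for the result of this invocation on the instance.
while True:
    inv = ssm.get_command_invocation(CommandId=command_id,
                                     InstanceId='i-0123456789abcdef0')
    if inv['Status'] not in ('Pending', 'InProgress'):
        break
    time.sleep(2)
print(inv['StandardOutputContent'])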

Now, leverage Run Command’s ability to do more than one thing at a time. In the following example, you attach to a deployment with three running instances and want to see the uptime for each. Using the par (parallel) option for run, the CLI tells Run Command to execute the uptime document on all instances in parallel.

/ROOT/CUSTB/us-east-1/DEV/env1 > run par uptime
 …
Output for: i-006bdc991385c33
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.42, 0.32, 0.30

Output for: i-09390dbff062618
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.08, 0.19, 0.22

Output for: i-08367d0114c94f1
 20:39:12 up 15 days, 3:54, 0 users, load average: 0.36, 0.40, 0.40

Commands are complete.
/ROOT/PEGACLOUD/CUSTB/us-east-1/PROD/prod1 > 

Summary
Run Command improves productivity by giving you faster access to systems and the ability to run operations across a group of instances. Pega Cloud operations has integrated Run Command with other operational tools to provide a clean and secure method for managing systems. This greatly improves operational efficiency, and gives greater control over who can do what in managed deployments. The Pega continual improvement process regularly assesses why operators need access, and turns those operations into new Run Command documents to be added to the library. In fact, their long-term goal is to stop deploying cloud systems with SSH enabled.

If you have any questions or suggestions, please leave a comment for us!

— Ananth and Rich

EtherApe – Graphical Network Monitor

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/DxSK15EgI5k/

EtherApe is a graphical network monitor for Unix, modelled after etherman. Featuring link layer, IP and TCP modes, it displays network activity graphically: hosts and links change in size with traffic, and protocols are colour coded. It supports Ethernet, FDDI, Token Ring, ISDN, PPP, SLIP and WLAN devices, plus several encapsulation formats. It can…

Read the full post at darknet.org.uk

Tinkernut’s do-it-yourself Pi Zero audio HAT

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/tinkernut-diy-pi-zero-audio/

Why buy a Raspberry Pi Zero audio HAT when Tinkernut can show you how to make your own?

Adding Audio Output To The Raspberry Pi Zero – Tinkernut Workbench

The Raspberry Pi Zero W is an amazing miniature piece of computer technology. I want to turn it into an epic portable Spotify radio that displays visuals such as album art. So in this new series called “Tinkernut Workbench”, I show you step by step what it takes to build a product from the ground up.

Raspberry Pi Zero audio

Unlike their grown-up siblings, the Pi Zero and Zero W lack an onboard audio jack, but that doesn’t stop you from adding audio output to them. Various audio HATs exist on the market, from Adafruit, Pimoroni and Pi Supply to name a few, providing easy audio output for the Zero. But where would the fun be in a Tinkernut video that simply shows you how to attach a HAT?

Tinkernut Pi Zero Audio

“Take this audio HAT, press it onto the header pins and, errr, done? So … how was your day?”

DIY Audio: Tinkernut style

For the first video in his Hipster Spotify Radio using a Raspberry Pi Tinkernut Workbench series, Tinkernut – real name Daniel Davis – goes through the steps of researching, prototyping and finishing his own audio HAT for his newly acquired Raspberry Pi Zero W.

The build utilises the GPIO pins on the Zero W, specifically pins #18 and #13. FYI, this hidden gem of information comes from the Adafruit Pi Zero PWM Audio guide. Before he can use pins #18 and #13, header pins need to be soldered to the board. If the thought of soldering pins to the Pi is somewhat daunting, check out the Pimoroni Hammer Header.

Pimoroni Hammer Header for Raspberry Pi

You’re welcome.

Once complete, with Raspbian installed on the micro SD, and SSH enabled for remote access, he’s ready to start prototyping.

Ingredients

Tinkernut uses two 270 ohm resistors, two 150 ohm resistors, two 10 µF electrolytic capacitors, two 0.01 µF polyester film capacitors, an audio jack and some wire. You’ll also need a breadboard for prototyping. For the final build, you’ll need a single-row female pin header and some prototyping board, if you want to join in at home.

Tinkernut audio board Raspberry Pi Zero W

It should look like this…hopefully.

Once the prototype is running audio through to a cheap speaker (thanks to an edit of the config.txt file, sketched below), the final board can be finished.
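
For reference, the Adafruit Pi Zero PWM Audio guide that Tinkernut follows routes the Pi’s two PWM channels out to GPIO 18 and GPIO 13 with a device tree overlay. A typical config.txt addition looks something like the line below; check the guide and the video for the exact settings used in this build:

# /boot/config.txt – route PWM0 to GPIO18 and PWM1 to GPIO13 for audio output
dtoverlay=pwm-2chan,pin=18,func=2,pin2=13,func2=4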

What’s next?

The audio board is just one step in the build.

Spotify is such an awesome music service. Raspberry Pi Zero is such an awesome ultra-mini computing device. Obviously, combining the two is something I must do!!! The idea here is to make something that’s stylish, portable, can play Spotify, and hopefully also display visuals such as album art.

Subscribe to Tinkernut’s YouTube channel to keep up to date with the build, and check out some of his other Raspberry Pi builds, such as his cheap 360 video camera, security camera and digital vintage camera.

Have you made your own Raspberry Pi HAT? Show it off in the comments below!

The post Tinkernut’s do-it-yourself Pi Zero audio HAT appeared first on Raspberry Pi.

An affordable ocular fundus camera

Post Syndicated from Helen Lynn original https://www.raspberrypi.org/blog/an-affordable-ocular-fundus-camera/

The ocular fundus is the interior surface of the eye, and an ophthalmologist can learn a lot about a patient’s health by examining it. However, there’s a problem: an ocular fundus camera can’t capture a useful image unless the eye is brightly lit, but this makes the pupil constrict, obstructing the camera’s view. Ophthalmologists use pupil-dilating eye drops to block the eye’s response to light, but these are uncomfortable and can cause blurred vision for several hours. Now, researchers at the University of Illinois at Chicago have built a Raspberry Pi-based fundus camera that can take photos of the retina without the need for eye drops.

Dr Bailey Shen and co-author Dr Shizuo Mukai made their camera with a Raspberry Pi 2 and a Pi NoIR Camera Module, a version of the Camera Module that does not have an infrared filter. They used a small LCD touchscreen display and a lithium battery, holding the ensemble together with tape and rubber bands. They also connected a button and a dual infrared/white light LED to the Raspberry Pi’s GPIO pins.

When the Raspberry Pi boots, a Python script turns on the infrared illumination from the LED and displays the camera view on the screen. The iris does not respond to infrared light, so in a darkened room the operator is able to position the camera and a separate condensing lens to produce a clear image of the patient’s fundus. When they’re satisfied with the image, the operator presses the button. This turns off the infrared light, produces a flash of white light, and captures a colour image of the fundus before the iris can respond and constrict the pupil.
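
The researchers’ paper and step-by-step instructions contain their actual code; purely to illustrate the flow described above, a minimal Python sketch might look like this. The GPIO pin numbers and output file name are assumptions, not the values used in the published build:

# Illustrative sketch only: preview under IR light, then flash white and capture.
from picamera import PiCamera
import RPi.GPIO as GPIO

BUTTON_PIN = 17   # assumed pin for the capture button
IR_PIN = 23       # assumed pin driving the infrared side of the dual LED
WHITE_PIN = 24    # assumed pin driving the white side of the dual LED

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup([IR_PIN, WHITE_PIN], GPIO.OUT, initial=GPIO.LOW)

camera = PiCamera()
try:
    GPIO.output(IR_PIN, GPIO.HIGH)                # illuminate with IR; pupil stays dilated
    camera.start_preview()                        # operator aligns camera and condensing lens
    GPIO.wait_for_edge(BUTTON_PIN, GPIO.FALLING)  # wait for the capture button
    GPIO.output(IR_PIN, GPIO.LOW)
    GPIO.output(WHITE_PIN, GPIO.HIGH)             # brief white flash for a colour image
    camera.capture('/home/pi/fundus.jpg')
    GPIO.output(WHITE_PIN, GPIO.LOW)
finally:
    camera.stop_preview()
    GPIO.cleanup()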

This isn’t the first ocular fundus camera to use infrared/white light to focus and obtain images without eye drops, but it is less bulky and distinctly cheaper than existing equipment, which can cost thousands of dollars. The total cost of all the parts is $185, and all but one are available as off-the-shelf components. The exception is the dual infrared/white light LED, a prototype which the researchers explain is a critical part of the equipment. Using an infrared LED and a white LED side by side doesn’t yield consistent results. We’d be glad to see the LED available on the market, both to support this particular application, and because we bet there are plenty of other builds that could use one!

Read more in Science Daily, in an article exploring the background to the project. The article is based on the researchers’ recent paper, presenting the Raspberry Pi ocular fundus camera in the Journal of Ophthalmology. The journal is an open access publication, so if you think this build is as interesting as I do, I encourage you to read the researchers’ presentation of their work, its possibilities and its limitations. They also provide step-by-step instructions and a parts list to help others to replicate and build on their work.

It’s beyond brilliant to see researchers and engineers using the Raspberry Pi to make medical and scientific work cheaper and more accessible. Please tell us about your favourite applications, or the applications you’d develop in your fantasy lab or clinic, in the comments.

The post An affordable ocular fundus camera appeared first on Raspberry Pi.

The New AWS Organizations User Interface Makes Managing Your AWS Accounts Easier

Post Syndicated from Anders Samuelsson original https://aws.amazon.com/blogs/security/the-new-aws-organizations-user-interface-makes-managing-your-aws-accounts-easier/

With AWS Organizations—launched on February 27, 2017—you can easily organize accounts centrally and set organizational policies across a set of accounts. Starting today, the Organizations console includes a tree view that allows you to manage accounts and organizational units (OUs) easily. The new view also makes it simple to attach service control policies (SCPs) to individual accounts or a group of accounts in an OU. In this post, I demonstrate some of the benefits of the new user interface.

The new tree view

The following screenshot shows an example of how an organization is displayed in the tree view on the Organize accounts tab. I have chosen the Frontend OU, and it shows that two OUs—Application 1 and Application 2—are child OUs of the Frontend OU. In the tree view, I can choose any OU and immediately view and take action on the contents of that OU. This new view makes it easier to quickly view OUs and navigate the relationships between OUs in your organization.

Screenshot of the new tree view

If you would prefer not to use the tree view, you can hide it by choosing the Tree view toggle in the upper left corner of the main pane. The following screenshot shows the console with the tree view turned off.

Screenshot of the console with the tree view hidden

You can toggle between the old view and the new tree view at any time. For the rest of this post, though, I will show the tree view.

Additional Organizations console improvements

In addition, we made a few other console improvements. First, we added more detail to the right pane when you choose an account or an OU. In the following screenshot, I have chosen the Application 1 OU in the main pane of the console and then the new Accounts heading in the right pane. As a result, I now can view the accounts that are in the OU without having to navigate into the OU. I can also remove an account from the OU by choosing Remove next to the account I want to remove.

Screenshot showing the accounts in the OU

Second, we have made it easier for you to attach SCPs to entities such as individual accounts and OUs. For example, to attach an SCP that blocks access to Amazon Redshift to the Application 1 OU, I choose Service Control Policies in the right pane. I then see a list of SCPs from which I can choose, as shown in the following screenshot.

Screenshot showing SCPs that are attached and available

The Blacklist Redshift policy is an SCP I created previously, and by choosing Attach, I attach it to the Application 1 OU.
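
The same attachment can be scripted against the Organizations API. A hedged boto3 sketch follows; the OU ID is a placeholder:

import boto3

org = boto3.client('organizations')

# Find the SCP by name, then attach it to the OU.
policies = org.list_policies(Filter='SERVICE_CONTROL_POLICY')['Policies']
blacklist = next(p for p in policies if p['Name'] == 'Blacklist Redshift')

org.attach_policy(
    PolicyId=blacklist['Id'],
    TargetId='ou-examplerootid111-exampleouid111',  # placeholder ID for the Application 1 OU
)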

The third console enhancement is in the Accounts tab. The right pane displays additional information when you choose an account. In the following screenshot, I choose the Accounts tab and then the DB backend account. In the right pane, I now see a new option: Organizational units.

Screenshot showing the new "Organizational units" choice in the right pane

When I choose Organizational units in the right pane, I see the OUs of which the chosen account is a member—in this case, Application 1. If the account should not be in that OU, I can remove it by choosing Remove next to the OU name, as shown in the following screenshot.

Screenshot showing the OUs of which the account is a member

We have also made it possible to attach SCPs to accounts in this view. When I choose Service Control Policies in the right pane, I see a list of all SCPs in my organization. The list is organized such that all the policies that are directly attached to the account are at the top of the list. You can detach any of these policies by choosing Detach next to the policy.

At the bottom of the list, I see the SCPs that I can attach to accounts. To do this, I choose Attach next to a policy. In the following screenshot, the Blacklist Redshift SCP can be attached directly to the account. However, when I look at the policies that are indirectly attached to the account via OUs, I see that the Blacklist Redshift SCP is already attached via the Application 1 OU. This means it is not necessary for me to attach this SCP directly to the DB backend account.

Screenshot showing that the Blacklist Redshift SCP is already attached via the Application 1 OU
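
The API equivalent of this view is ListPoliciesForTarget, which returns only the SCPs attached directly to the chosen account (policies inherited through OUs are not listed). A minimal sketch with a placeholder account ID:

import boto3

org = boto3.client('organizations')

# SCPs attached directly to the account; OU-inherited policies do not appear here.
attached = org.list_policies_for_target(
    TargetId='111122223333',            # placeholder account ID
    Filter='SERVICE_CONTROL_POLICY',
)['Policies']

for policy in attached:
    print(policy['Name'], policy['Id'])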

Summary

The new Organizations user interface makes it easier for you to manage your accounts and OUs as well as attach SCPs to accounts. To get started, sign in to the Organizations console.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the Organizations forum.

– Anders

Amazon CloudWatch launches Alarms on Dashboards

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/amazon-cloudwatch-launches-alarms-on-dashboards/

Amazon CloudWatch is a service that gives customers the ability to monitor their applications, systems, and solutions running on Amazon Web Services by collecting and providing metrics, logs, and events about AWS resources in real time. CloudWatch automatically provides key resource measurements such as latency, error rates, and CPU usage, while also enabling monitoring of custom metrics via customer-supplied logs and system data.

Last November, Amazon CloudWatch added new Dashboard Widgets to provide additional data visualization options for all available metrics. In order to provide customers with even more insight into their solutions and resources running on AWS, CloudWatch has launched Alarms on Dashboards. With this enhancement, customers can view alarms and metrics in the same dashboard widget, enabling data-driven troubleshooting and analysis.

CloudWatch dashboards are designed to provide better visibility when monitoring AWS resources across regions in a consolidated view. Since CloudWatch dashboards are highly customizable, users can create their own custom dashboards to graphically represent data for varying metrics such as utilization, performance, estimated billing, and now alarm conditions. An alarm tracks a single metric over time based on the value of the metric in relation to a specified threshold. When the alarm state changes, an action is executed, such as invoking an Auto Scaling policy or sending a notification to Amazon SNS, among other options.
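
An alarm like the ones used later in this walkthrough can be created with a single PutMetricAlarm call. The boto3 sketch below is illustrative only; the account ID, threshold, and SNS topic are placeholders:

import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-2')

# Alarm when the average CPU utilization of an Elasticsearch domain
# exceeds 80% for two consecutive 5-minute periods.
cloudwatch.put_metric_alarm(
    AlarmName='ES Alarm',
    Namespace='AWS/ES',
    MetricName='CPUUtilization',
    Dimensions=[{'Name': 'DomainName', 'Value': 'taratestdomain'},
                {'Name': 'ClientId', 'Value': '123456789012'}],   # placeholder account ID
    Statistic='Average',
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-2:123456789012:ops-alerts'],  # placeholder topic
)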

With the ability to add alarms to dashboards, CloudWatch users have another mechanism to proactively monitor and receive alerts about their AWS resources and applications across multiple regions. In addition, the metric data associated with an alarm that has been added to a dashboard can be charted and reviewed. Alarms have three possible states:

  • OK: The value of the alarm metric does not meet the threshold
  • INSUFFICIENT DATA: The alarm has just been created, the metric is not available, or there is not enough data to determine whether the alarm is in the OK state or the ALARM state
  • ALARM: The value of the alarm metric meets the threshold

When added to a dashboard, alarms are displayed in red when in the ALARM state, in gray when in the INSUFFICIENT DATA state, and with no color fill when in the OK state. Alarms added to a dashboard are supported with the following widget types: Line, Number, and Stacked area.

  • Number widget: provides a quick and efficient view of the latest value of any desired metric. When used with an alarm, the background color of the widget reflects the current state of the alarm.
  • Line widget: allows the visualization of the actual values of any collection of chosen metrics. When used with an alarm, the alarm threshold and condition are displayed as a horizontal line, which acts as a good indicator of how close the metric is to the threshold.
  • Stacked area widget: allows customers to visualize the net total effect of any collection of chosen metrics. The stacked area widget layers one metric over another to illustrate the distribution and contribution of each metric, with an option to display contributions as percentages. When used with an alarm, it also displays the alarm threshold and condition as a horizontal line.

Support for adding multiple metrics to the same widget as an alarm is currently in the works, and this feature will continue to evolve based on customer feedback.

Adding Alarms on Dashboards

Let’s take a quick look at using alarms on a CloudWatch dashboard. In the AWS Management Console, I go to the CloudWatch service and select Dashboards. I click the Create dashboard button and create the CloudWatchBlog dashboard.

 

Upon creation of my CloudWatchBlog dashboard, a dialog box opens to allow me to add widgets to the dashboard. I will forgo adding widgets for now, since I want to focus on adding alarms to my dashboard. Therefore, I hit the Cancel button here and go to the Alarms section of the CloudWatch console.

Once in the Alarms section of the CloudWatch console, you will see all of your alarms for the current region, along with the state of each.

As mentioned earlier, there are three possible alarm states, and the console above shows alarms in each of them. If desired, you can adjust the filter to display only the alarms in a particular state.

As an example, I am only interested in viewing the alarms with an alarm state of ALARM, so I adjust the filter to show only the alarms in the current region that are in that state.

Now only the two alarms with a current state of ALARM are displayed. One monitors the provisioned write capacity units of an Amazon DynamoDB table, and the other monitors the CPU utilization of my active Amazon Elasticsearch instance.

Let’s examine the scenario in which I leverage my CloudWatchBlog dashboard as my troubleshooting mechanism for identifying and diagnosing issues with my Elasticsearch solution and its instances. I will first add the Amazon Elasticsearch CPU utilization alarm, ES Alarm, to my CloudWatchBlog dashboard. To add the alarm, I simply select the checkbox by the desired alarm, which in this case is ES Alarm. Then with the alarm selected, I click the Add to Dashboard button.

The Add to dashboard dialog box will open, allowing me to select my CloudWatchBlog dashboard. Additionally, I can select the widget type I would like to use for the display of my alarm. For the ES Alarm, I will choose the Line widget and complete the process of adding this alarm to my dashboard by clicking the Add to dashboard button.

Upon successfully adding ES Alarm to the CloudWatchBlog dashboard, you will see a confirmation notice displayed in the CloudWatch console.

If I then go to the Dashboard section of the console and select my CloudWatchBlog dashboard, I will see the line widget for my alarm, ES Alarm, on the dashboard. To ensure that my ES Alarm widget is a permanent part of the dashboard, I will click the Save dashboard button to preserve the addition of this widget on the dashboard.
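
The console does all of this for you, but the resulting widget can also be expressed as dashboard JSON and pushed with PutDashboard. The sketch below is a rough equivalent of the Line widget added above; the alarm ARN, account ID, and placement values are placeholders:

import json
import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-2')

# A metric widget driven by an alarm annotation: the alarm's metric, threshold,
# and state are rendered from the alarm rather than an explicit metric list.
dashboard_body = {
    "widgets": [
        {
            "type": "metric",
            "x": 0, "y": 0, "width": 12, "height": 6,
            "properties": {
                "title": "ES Alarm",
                "annotations": {
                    "alarms": [
                        "arn:aws:cloudwatch:us-east-2:123456789012:alarm:ES Alarm"
                    ]
                },
                "view": "timeSeries",
                "stacked": False
            }
        }
    ]
}

cloudwatch.put_dashboard(DashboardName='CloudWatchBlog',
                         DashboardBody=json.dumps(dashboard_body))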

As we discussed, one of the benefits of utilizing a CloudWatch dashboard is the ability to add several alarms from various regions onto a dashboard. Since my scenario is leveraging my dashboard as a troubleshooting mechanism for my Elasticsearch solution, I would like to have several alarms and metrics related to my solution displayed on the CloudWatchBlog dashboard. Given this, I will create another alarm for my Elasticsearch instance and add it to my dashboard.

I will first return to the Alarms section of the console and click the Create Alarm button.

The Create Alarm dialog box is displayed showing all of the current metrics available in this region. From the summary, I can quickly see that there are 21 metrics being tracked for Elasticsearch. I will click on the ES Metrics link to view the individual metrics that can be used to create my alarm.

I can review the individual metrics shown for my Elasticsearch instance, and choose which metric I want to base my new alarm on. In this case, I choose the WriteLatency metric by selecting the checkbox for this metric and then click the Next button.

 

The next screen is where I fill in all the details about my alarm: name, description, alarm threshold, time period, and alarm action. I will name my new alarm, ES Latency Alarm, and complete the rest of the aforementioned data fields. To complete the creation of my new alarm, I click the Create Alarm button.

I will see a confirmation message box at the top of the Alarms console upon successful completion of adding the alarm, and the status of the newly created alarm will be displayed in the alarms list.

Now I will add my ES Latency Alarm to my CloudWatchBlog dashboard. Again, I click on the checkbox by the alarm and then click the Add to Dashboard button.

This time when the Add to Dashboard dialog comes up, I will choose the Stacked area widget to display the ES Latency Alarm on my CloudWatchBlog dashboard. Clicking the Add to Dashboard button will complete the addition of my ES Latency Alarm widget to the dashboard.

Once back in the console, I again see the confirmation noting the successful addition of the widget. I go to the Dashboards section, click on the CloudWatchBlog dashboard, and can now view the two widgets in my dashboard. To include this widget in the dashboard permanently, I click the Save dashboard button.

The final thing to note about the new CloudWatch feature, Alarms on Dashboards, is that alarms and metrics from other regions can be added to the dashboard for a complete view for troubleshooting. Let’s add a metric to the dashboard with the alarms widget.

Within the console, I will move from my current region, US East (Ohio), to the US East (N. Virginia) region.

Now I will go to the Metrics section of the CloudWatch console. This section displays the metrics from services used in the US East (N. Virginia) region.

My Elasticsearch solution uses DynamoDB Streams to trigger Lambda functions that capture all CRUD (Create, Read, Update, Delete) changes to the EmployeeInfo DynamoDB table and write those changes into my Elasticsearch domain, taratestdomain. Therefore, I will add metrics from that DynamoDB table to my CloudWatchBlog dashboard.

Specifically, I am going to add the EmployeeInfo table’s ProvisionedWriteCapacityUnits metric to my CloudWatchBlog dashboard.

Back again in the Add to Dashboard dialog, I will select my CloudWatchBlog dashboard and choose to display this metric using the Number widget.

Now the ProvisionedWriteCapacityUnits metric from US East (N. Virginia) is displayed in the CloudWatchBlog dashboard as a Number widget, alongside the alarms from US East (Ohio). To make this update permanent in the dashboard, I will (you guessed it!) click the Save dashboard button.
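
Because metrics live in the region where they are emitted, reading this DynamoDB metric programmatically means calling CloudWatch in US East (N. Virginia) rather than US East (Ohio). Here is a minimal boto3 sketch; the table name comes from the walkthrough, everything else is illustrative:

from datetime import datetime, timedelta
import boto3

# The DynamoDB table's metrics are in us-east-1, while the dashboard and
# Elasticsearch alarms in this walkthrough live in us-east-2.
cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

stats = cloudwatch.get_metric_statistics(
    Namespace='AWS/DynamoDB',
    MetricName='ProvisionedWriteCapacityUnits',
    Dimensions=[{'Name': 'TableName', 'Value': 'EmployeeInfo'}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=['Average'],
)
for point in sorted(stats['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Average'])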

Summary

Getting started with alarms on dashboards is easy. You can add alarms from multiple regions to a dashboard as another means of proactive monitoring, build troubleshooting playbooks, and view the metrics you care about. You can also choose the metric first in the Metrics UI and then change the widget type to the visualization that best fits the metric.

Alarms on Dashboards are supported on Line, Stacked Area, and Number widgets. In addition, you can use Text widgets next to alarms on a dashboard to add steps or observations on how to handle changes in the alarm state. To learn more about Amazon CloudWatch widgets and about the additional dashboard widgets, visit the Amazon CloudWatch documentation and the CloudWatch Getting Started guide.

 

Tara

Build the wall: a huge wall (of LEDs)

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/giant-led-wall/

London-based Solid State Group decided to combine Slack with a giant LED wall, and created a thing of beauty for their office.

Solid State LED wall Raspberry Pi

“BUILD THE WALL we cried aloud. Well, in truth, we shouted this over Slack, until the pixel wall became a thing.” Niall Quinn, Solid State Group.

Flexing the brain

The project, named RIO (Rendered-Input-Output), took its inspiration from Google Creative Lab’s anypixel.js project. An open source software and hardware library, anypixel.js boasts the ability to ‘create big, unusual, interactive displays out of all kinds of things’, such as arcade buttons and balloons.

Every tech company has side projects and Solid State is no different. It keeps devs motivated and flexes the bits of the brain sometimes not quite reached by day-to-day coding. Sometimes these side projects become products, sometimes we crack open a beer and ask “what the hell were we thinking”, but always we learn something – about the process, and perhaps ourselves.

Niall Quinn, Solid State Group

To ‘flex the bits of the brain’, the team created their own in-house resource. Utilising Slack as their interface, they were able to direct images, GIFs and video over the internet to the Pi-powered LED wall.

Solid State LED wall Raspberry Pi

Bricks and mortar

They developed an API for ‘drawing’ the content sent to the wall, converting each pixel of the source image to match a pixel on the display.

After experimenting with the code on a small 6×5-pixel replica, the team built the final LED wall using WS2812B RGB strips. With 2,040 LEDs in total to control, the higher RAM and power requirements called for replacing the original microcontroller. Enter the Raspberry Pi.

Solid State LED wall Raspberry Pi

“With more pixels come more problems. LEDs gobble up RAM and draw a lot of power, so we switched from an Arduino to a Raspberry Pi, and got ourselves a pretty hefty power supply.”
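
The team’s own code is on their GitHub page; purely as a generic illustration of driving a WS2812B matrix from a Raspberry Pi in Python, a minimal sketch using the rpi_ws281x library might look like this. The pixel count matches the post, but the GPIO pin and wiring layout are assumptions:

# Minimal WS2812B sketch: light the whole wall a single colour.
# Assumes the data line is on GPIO 18 (PWM) and a suitably beefy power supply.
from rpi_ws281x import PixelStrip, Color

LED_COUNT = 2040      # total pixels in the wall
LED_PIN = 18          # GPIO pin driving the strip's data line (assumed)

strip = PixelStrip(LED_COUNT, LED_PIN)
strip.begin()

def fill(r, g, b):
    """Set every pixel to one colour and push the frame to the strip."""
    for i in range(strip.numPixels()):
        strip.setPixelColor(i, Color(r, g, b))
    strip.show()

fill(0, 64, 128)      # a dim teal, gentle on a 2,040-LED power budget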

Alongside the Slack-to-wall image sharing, Solid State also developed their own mobile app, which uses the HTML5 canvas element to draw data for the wall. The app enables gaming via an SNES-style controller, a live drawing application, a messaging function, and live preview capabilities.

Build your own LED wall

If you’re planning on building your own LED wall, whether for an event, a classroom, an office or a living room, the Solid State team have shared the entire project via their GitHub page. To read a full breakdown of the build, make sure you visit their blog. And if you do build your own, or have done already, make sure to share it in the comments below.

The post Build the wall: a huge wall (of LEDs) appeared first on Raspberry Pi.