
How to Create an AMI Builder with AWS CodeBuild and HashiCorp Packer – Part 2

Post Syndicated from Heitor Lessa original https://aws.amazon.com/blogs/devops/how-to-create-an-ami-builder-with-aws-codebuild-and-hashicorp-packer-part-2/

Written by AWS Solutions Architects Jason Barto and Heitor Lessa

 
In Part 1 of this post, we described how AWS CodeBuild, AWS CodeCommit, and HashiCorp Packer can be used to build an Amazon Machine Image (AMI) from the latest version of Amazon Linux. In this post, we show how to use AWS CodePipeline, AWS CloudFormation, and Amazon CloudWatch Events to continuously ship new AMIs. We use Ansible by Red Hat to harden the OS on the AMIs through a well-known set of security controls outlined by the Center for Internet Security in its CIS Amazon Linux Benchmark.

You’ll find the source code for this post in our GitHub repo.

At the end of this post, we will have the following architecture:

Requirements

 
To follow along, you will need Git and a text editor. Make sure Git is configured to work with AWS CodeCommit, as described in Part 1.

Technologies

 
In addition to the services and products used in Part 1 of this post, we also use these AWS services and third-party software:

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.

Amazon CloudWatch Events enables you to react selectively to events in the cloud and in your applications. Specifically, you can create CloudWatch Events rules that match event patterns, and take actions in response to those patterns.

AWS CodePipeline is a continuous integration and continuous delivery service for fast and reliable application and infrastructure updates. AWS CodePipeline builds, tests, and deploys your code every time there is a code change, based on release process models you define.

Amazon SNS is a fast, flexible, fully managed push notification service that lets you send individual messages or fan out messages to large numbers of recipients. Amazon SNS makes it simple and cost-effective to send push notifications to mobile device users or email recipients. The service can even send messages to other distributed services.

Ansible is a simple IT automation system that handles configuration management, application deployment, cloud provisioning, ad-hoc task-execution, and multinode orchestration.

Getting Started

 
We use CloudFormation to bootstrap the following infrastructure:

  • AWS CodeCommit repository: Git repository where the AMI builder code is stored.
  • S3 bucket: Build artifact repository used by AWS CodePipeline and AWS CodeBuild.
  • AWS CodeBuild project: Executes the AWS CodeBuild instructions contained in the build specification file.
  • AWS CodePipeline pipeline: Orchestrates the AMI build process, triggered by new changes in the AWS CodeCommit repository.
  • SNS topic: Notifies subscribed email addresses when an AMI build is complete.
  • CloudWatch Events rule: Defines how the AMI builder should send a custom event to notify an SNS topic.

The AMI Builder launch template is provided for two regions: N. Virginia (us-east-1) and Ireland (eu-west-1).

After launching the CloudFormation template linked here, we will have a pipeline in the AWS CodePipeline console. (A Failed status at this stage simply means we don't have any data in our newly created AWS CodeCommit Git repository yet.)

Next, we will clone the newly created AWS CodeCommit repository.

If this is your first time connecting to an AWS CodeCommit repository, please see instructions in our documentation on Setup steps for HTTPS Connections to AWS CodeCommit Repositories.
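With the AWS CLI installed, the documented HTTPS setup boils down to pointing Git at the AWS CLI credential helper. A sketch, assuming your IAM credentials are already configured:

# Let Git obtain CodeCommit credentials from the AWS CLI
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true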

To clone the AWS CodeCommit repository (console)

  1. From the AWS Management Console, open the AWS CloudFormation console.
  2. Choose the AMI-Builder-Blogpost stack, and then choose the Outputs tab.
  3. Make a note of the Git repository URL.
  4. Use git to clone the repository.

For example: git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/AMI-Builder_repo

To clone the AWS CodeCommit repository (CLI)

# Retrieve CodeCommit repo URL
git_repo=$(aws cloudformation describe-stacks --query 'Stacks[0].Outputs[?OutputKey==`GitRepository`].OutputValue' --output text --stack-name "AMI-Builder-Blogpost")

# Clone repository locally
git clone ${git_repo}

Bootstrap the Repo with the AMI Builder Structure

 
Now that our infrastructure is ready, download all the files and templates required to build the AMI.

Your local Git repo should have the following structure:

.
├── ami_builder_event.json
├── ansible
├── buildspec.yml
├── cloudformation
├── packer_cis.json

Next, push these changes to AWS CodeCommit, and then let AWS CodePipeline orchestrate the creation of the AMI:

git add .
git commit -m "My first AMI"
git push origin master

AWS CodeBuild Implementation Details

 
While we wait for the AMI to be created, let’s see what’s changed in our AWS CodeBuild buildspec.yml file:

...
phases:
  ...
  build:
    commands:
      ...
      - ./packer build -color=false packer_cis.json | tee build.log
  post_build:
    commands:
      - egrep "${AWS_REGION}\:\sami\-" build.log | cut -d' ' -f2 > ami_id.txt
      # Packer doesn't return non-zero status; we must do that if Packer build failed
      - test -s ami_id.txt || exit 1
      - sed -i.bak "s/<<AMI-ID>>/$(cat ami_id.txt)/g" ami_builder_event.json
      - aws events put-events --entries file://ami_builder_event.json
      ...
artifacts:
  files:
    - ami_builder_event.json
    - build.log
  discard-paths: yes

In the build phase, we capture Packer output into a file named build.log. In the post_build phase, we take the following actions:

  1. Look up the AMI ID created by Packer and save it to a temporary file (ami_id.txt).
  2. Force AWS CodeBuild to fail if the AMI ID (ami_id.txt) is not found. This is required because Packer doesn't return a non-zero exit status if something goes wrong during the AMI creation process, so we have to tell AWS CodeBuild explicitly that an error occurred.
  3. If an AMI ID is found, we update the ami_builder_event.json file and then notify CloudWatch Events that the AMI creation process is complete.
  4. CloudWatch Events publishes a message to an SNS topic. Anyone subscribed to the topic will be notified in email that an AMI has been created.

Lastly, the new artifacts sequence instructs AWS CodeBuild to upload files produced during the build (ami_builder_event.json and build.log) to the S3 bucket specified in the Outputs section of the CloudFormation template. These artifacts can then be used as input artifacts in any later stage in AWS CodePipeline.

For information about customizing the artifacts sequence of the buildspec.yml, see the Build Specification Reference for AWS CodeBuild.

CloudWatch Events Implementation Details

 
CloudWatch Events allows you to extend the AMI builder beyond sending email after the AMI has been created: you can hook up any of the supported targets to react to the AMI builder event. Publishing this event decouples the actions you might take after AMI completion from Packer itself, letting you plug in other actions as you see fit.

For more information about targets in CloudWatch Events, see the CloudWatch Events API Reference.

In this case, CloudWatch Events should receive the following event, match it with a rule we created through CloudFormation, and publish a message to SNS so that you can receive an email.

Example CloudWatch custom event

[
        {
            "Source": "com.ami.builder",
            "DetailType": "AmiBuilder",
            "Detail": "{ \"AmiStatus\": \"Created\"}",
            "Resources": [ "ami-12cd5guf" ]
        }
]

CloudWatch Events rule

{
  "detail-type": [
    "AmiBuilder"
  ],
  "source": [
    "com.ami.builder"
  ],
  "detail": {
    "AmiStatus": [
      "Created"
    ]
  }
}
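For reference, creating an equivalent rule and target by hand with the AWS CLI might look like the following sketch (the rule name and topic ARN are placeholders; in this post the rule is actually created by CloudFormation):

# Create a rule matching the custom AMI builder event
aws events put-rule --name AmiBuilderNotification \
  --event-pattern '{"detail-type":["AmiBuilder"],"source":["com.ami.builder"],"detail":{"AmiStatus":["Created"]}}'

# Send matched events to the SNS topic
aws events put-targets --rule AmiBuilderNotification \
  --targets 'Id=1,Arn=arn:aws:sns:eu-west-1:<account-id>:ami-builder-notify'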

Example SNS message sent in email

{
    "version": "0",
    "id": "f8bdede0-b9d7...",
    "detail-type": "AmiBuilder",
    "source": "com.ami.builder",
    "account": "<<aws_account_number>>",
    "time": "2017-04-28T17:56:40Z",
    "region": "eu-west-1",
    "resources": ["ami-112cd5guf "],
    "detail": {
        "AmiStatus": "Created"
    }
}

Packer Implementation Details

 
In addition to the build specification file, there are differences between the current version of the HashiCorp Packer template (packer_cis.json) and the one used in Part 1.

Variables

  "variables": {
    "vpc": "{{env `BUILD_VPC_ID`}}",
    "subnet": "{{env `BUILD_SUBNET_ID`}}",
         “ami_name”: “Prod-CIS-Latest-AMZN-{{isotime \”02-Jan-06 03_04_05\”}}”
  },
  • ami_name: Prefixes a name used by Packer to tag resources during the Builders sequence.
  • vpc and subnet: Environment variables defined by the CloudFormation stack parameters.

We no longer assume a default VPC is present and instead use the VPC and subnet specified in the CloudFormation parameters. CloudFormation configures the AWS CodeBuild project to use these values as environment variables. They are made available throughout the build process.

That allows for more flexibility should you need to change which VPC and subnet will be used by Packer to launch temporary resources.

Builders

  "builders": [{
    ...
    "ami_name": “{{user `ami_name`| clean_ami_name}}”,
    "tags": {
      "Name": “{{user `ami_name`}}”,
    },
    "run_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "run_volume_tags": {
      "Name": “{{user `ami_name`}}",
    },
    "snapshot_tags": {
      "Name": “{{user `ami_name`}}",
    },
    ...
    "vpc_id": "{{user `vpc` }}",
    "subnet_id": "{{user `subnet` }}"
  }],

We now have new tag properties (*_tags), a new function (clean_ami_name), and temporary resources launched in the VPC and subnet specified in the environment variables. AMI names can only contain a certain set of ASCII characters. If the input deviates from the expected characters (for example, it includes whitespace or slashes), Packer's clean_ami_name function will fix it.

For more information, see functions on the HashiCorp Packer website.

Provisioners

  "provisioners": [
    {
        "type": "shell",
        "inline": [
            "sudo pip install ansible"
        ]
    }, 
    {
        "type": "ansible-local",
        "playbook_file": "ansible/playbook.yaml",
        "role_paths": [
            "ansible/roles/common"
        ],
        "playbook_dir": "ansible",
        "galaxy_file": "ansible/requirements.yaml"
    },
    {
      "type": "shell",
      "inline": [
        "rm .ssh/authorized_keys ; sudo rm /root/.ssh/authorized_keys"
      ]
    }
  ]

We used a shell provisioner to apply OS patches in Part 1. Now, we use shell to install Ansible on the target machine and ansible-local to import, install, and execute Ansible roles that make our target machine conform to our standards.

Packer then uses shell to remove the temporary keys from the temporary EC2 instance before it creates an AMI from it.

Ansible Implementation Details

 
Ansible provides OS patching through a custom Common role that can be easily customized for other tasks.

The CIS Benchmark and CloudWatch Logs are implemented through two third-party Ansible roles that are defined in ansible/requirements.yaml, as seen in the Packer template.

The Ansible provisioner uses Ansible Galaxy to download these roles onto the target machine and execute them as instructed by ansible/playbook.yaml.
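Based on the role names used in the playbook below, ansible/requirements.yaml is essentially a list of the Galaxy roles to fetch; a sketch:

# ansible/requirements.yaml - third-party roles pulled from Ansible Galaxy
- src: anthcourtney.cis-amazon-linux
- src: dharrisio.aws-cloudwatch-logs-agent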

For information about how these components are organized, see the Playbook Roles and Include Statements in the Ansible documentation.

The following Ansible playbook (ansible/playbook.yaml) controls the execution order and custom properties:

---
- hosts: localhost
  connection: local
  gather_facts: true    # gather OS info that is made available for tasks/roles
  become: yes           # majority of CIS tasks require root
  vars:
    # CIS Controls whitepaper:  http://bit.ly/2mGAmUc
    # AWS CIS Whitepaper:       http://bit.ly/2m2Ovrh
    cis_level_1_exclusions:
    # 3.4.2 and 3.4.3 effectively block access to all ports on the machine
    ## This can break automation; ignoring them as there are stronger mechanisms available
      - 3.4.2 
      - 3.4.3
    # CloudWatch Logs will be used instead of Rsyslog/Syslog-ng
    ## Same would be true if any other software doesn't support Rsyslog/Syslog-ng mechanisms
      - 4.2.1.4
      - 4.2.2.4
      - 4.2.2.5
    # Autofs is not installed in newer versions, let's ignore
      - 1.1.19
    # CloudWatch Logs role configuration
    logs:
      - file: /var/log/messages
        group_name: "system_logs"
  roles:
    - common
    - anthcourtney.cis-amazon-linux
    - dharrisio.aws-cloudwatch-logs-agent

Both third-party Ansible roles can be easily configured through variables (vars). We use Ansible playbook variables to exclude CIS controls that don’t apply to our case and to instruct the CloudWatch Logs agent to stream the /var/log/messages log file to CloudWatch Logs.

If you need to add more OS or application logs, you can easily duplicate the logs entry in the playbook and adjust it. The CloudWatch Logs agent will ship any configured log files to CloudWatch Logs.
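For example, streaming one more log file is just a second item under the same variable (the /var/log/secure file and group name here are illustrative):

    logs:
      - file: /var/log/messages
        group_name: "system_logs"
      - file: /var/log/secure
        group_name: "security_logs"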

For more information about parameters you can use to further customize the third-party roles, download the Ansible roles for the CloudWatch Logs agent and CIS Amazon Linux from the Galaxy website.

Committing Changes

 
Now that Ansible and CloudWatch Events are configured as part of the build process, committing any changes to the AWS CodeCommit Git repository will trigger a new AMI build process that can be followed through the AWS CodePipeline console.

When the build is complete, an email will be sent to the email address you provided as a part of the CloudFormation stack deployment. The email serves as notification that an AMI has been built and is ready for use.

Summary

 
We used AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, Packer, and Ansible to build a pipeline that continuously builds new, hardened CIS AMIs. We used Amazon SNS so that email addresses subscribed to an SNS topic are notified upon completion of the AMI build.

By treating our AMI creation process as code, we can iterate and track changes over time. In this way, it’s no different from a software development workflow. With that in mind, software patches, OS configuration, and logs that need to be shipped to a central location are only a git commit away.

Next Steps

 
Here are some ideas to extend this AMI builder:

  • Hook up a Lambda function in CloudWatch Events to update EC2 Auto Scaling configuration upon completion of the AMI build.
  • Use AWS CodePipeline parallel steps to build multiple Packer images.
  • Add the commit ID as a tag for the AMI you created (see the sketch after this list).
  • Create a scheduled Lambda function through CloudWatch Events to clean up old AMIs based on timestamp (name or additional tag).
  • Implement Windows support for the AMI builder.
  • Create a cross-account or cross-region AMI build.
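The commit ID idea, for instance, could be a single extra post_build command in buildspec.yml, using the CODEBUILD_RESOLVED_SOURCE_VERSION environment variable that AWS CodeBuild exposes for the source commit. A sketch, not part of the current template:

# Tag the freshly built AMI with the commit that produced it
- aws ec2 create-tags --resources "$(cat ami_id.txt)" --tags Key=CommitId,Value="${CODEBUILD_RESOLVED_SOURCE_VERSION}"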

CloudWatch Events allows the AMI builder to decouple AMI configuration from AMI creation, so you can easily add your own logic using targets (AWS Lambda, Amazon SQS, Amazon SNS) to react to new AMIs or recycle EC2 instances with the new AMI.

If you have questions or other feedback, feel free to leave it in the comments or contribute to the AMI Builder repo on GitHub.

Building Loosely Coupled, Scalable, C# Applications with Amazon SQS and Amazon SNS

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/building-loosely-coupled-scalable-c-applications-with-amazon-sqs-and-amazon-sns/

 
Stephen Liedig, Solutions Architect

 

One of the many challenges professional software architects and developers face is how to make cloud-native applications scalable, fault-tolerant, and highly available.

Fundamental to your project success is understanding the importance of making systems highly cohesive and loosely coupled. That means considering the multi-dimensional facets of system coupling to support the distributed nature of the applications that you are building for the cloud.

By that, I mean addressing not only the application-level coupling (managing incoming and outgoing dependencies), but also considering the impacts of platform, spatial, and temporal coupling of your systems. Platform coupling relates to the interoperability, or lack thereof, of heterogeneous system components. Spatial coupling deals with managing components at a network topology level or protocol level. Temporal, or runtime, coupling refers to the ability of a component within your system to do any kind of meaningful work while it is performing a synchronous, blocking operation.

The AWS messaging services, Amazon SQS and Amazon SNS, help you deal with these forms of coupling by providing mechanisms for:

  • Reliable, durable, and fault-tolerant delivery of messages between application components
  • Logical decomposition of systems and increased autonomy of components
  • Creating unidirectional, non-blocking operations, temporarily decoupling system components at runtime
  • Decreasing the dependencies that components have on each other through standard communication and network channels

Following on the recent topic, Building Scalable Applications and Microservices: Adding Messaging to Your Toolbox, in this post, I look at some of the ways you can introduce SQS and SNS into your architectures to decouple your components, and show how you can implement them using C#.

Walkthrough

To illustrate some of these concepts, consider a web application that processes customer orders. As good architects and developers, you have followed best practices and made your application scalable and highly available. Your solution included implementing load balancing, dynamic scaling across multiple Availability Zones, and persisting orders in a Multi-AZ Amazon RDS database instance, as in the following diagram.


In this example, the application is responsible for handling and persisting the order data, as well as dealing with increases in traffic for popular items.

One potential point of vulnerability in the order processing workflow is in saving the order in the database. The business expects that every order has been persisted into the database. However, any potential deadlock, race condition, or network issue could cause the persistence of the order to fail. Then, the order is lost with no recourse to restore the order.

With good logging capability, you may be able to identify when an error occurred and which customer’s order failed. This wouldn’t allow you to “restore” the transaction, and by that stage, your customer is no longer your customer.

As illustrated in the following diagram, introducing an SQS queue helps improve your ordering application. Using the queue isolates the processing logic into its own component and runs it in a separate process from the web application. This, in turn, allows the system to be more resilient to spikes in traffic, while allowing work to be performed only as fast as necessary in order to manage costs.


In addition, you now have a mechanism for persisting orders as messages (with the queue acting as a temporary database), and have moved the scope of your transaction with your database further down the stack. In the event of an application exception or transaction failure, this ensures that the order processing can be retried, or the message redirected to the Amazon SQS Dead Letter Queue (DLQ) for re-processing at a later stage. (See the recent post, Using Amazon SQS Dead-Letter Queues to Control Message Failure, for more information on dead-letter queues.)
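A DLQ is attached through the queue's RedrivePolicy attribute. With the AWS CLI, the configuration might look like this sketch (queue URL, DLQ ARN, and region are placeholders):

aws sqs set-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/<account-id>/customer-orders \
  --attributes '{"RedrivePolicy":"{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:<account-id>:customer-orders-dlq\",\"maxReceiveCount\":\"5\"}"}'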

Scaling the order processing nodes

This change allows you now to scale the web application frontend independently from the processing nodes. The frontend application can continue to scale based on metrics such as CPU usage, or the number of requests hitting the load balancer. Processing nodes can scale based on the number of orders in the queue. Here is an example of scale-in and scale-out alarms that you would associate with the scaling policy.

Scale-out Alarm

aws cloudwatch put-metric-alarm --alarm-name AddCapacityToCustomerOrderQueue \
  --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
  --statistic Average --period 300 --threshold 3 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --dimensions Name=QueueName,Value=customer-orders \
  --evaluation-periods 2 --alarm-actions <arn of the scale-out autoscaling policy>

Scale-in Alarm

aws cloudwatch put-metric-alarm --alarm-name RemoveCapacityFromCustomerOrderQueue \
  --metric-name ApproximateNumberOfMessagesVisible --namespace "AWS/SQS" \
  --statistic Average --period 300 --threshold 1 \
  --comparison-operator LessThanOrEqualToThreshold \
  --dimensions Name=QueueName,Value=customer-orders \
  --evaluation-periods 2 --alarm-actions <arn of the scale-in autoscaling policy>

The above example uses the ApproximateNumberOfMessagesVisible metric to measure queue length and drive the scaling policy of the Auto Scaling group. Another useful metric is ApproximateAgeOfOldestMessage, for when applications have time-sensitive messages and developers need to ensure that messages are processed within a specific time period.

Scaling the order processing implementation

On top of scaling at an infrastructure level using Auto Scaling, make sure to take advantage of the processing power of your Amazon EC2 instances by using as many of the available threads as possible. There are several ways to implement this. In this post, we build a Windows service that uses the BackgroundWorker class to process the messages from the queue.

Here’s a closer look at the implementation. In the first section of the consuming application, use a loop to continually poll the queue for new messages, and construct a ReceiveMessageRequest variable.

public static void PollQueue()
{
    while (_running)
    {
        Task<ReceiveMessageResponse> receiveMessageResponse;

        // Pull messages off the queue
        using (var sqs = new AmazonSQSClient())
        {
            const int maxMessages = 10;  // 1-10

            //Receiving a message
            var receiveMessageRequest = new ReceiveMessageRequest
            {
                // Get URL from Configuration
                QueueUrl = _queueUrl, 
                // The maximum number of messages to return. 
                // Fewer messages might be returned. 
                MaxNumberOfMessages = maxMessages, 
                // A list of attributes that need to be returned with message.
                AttributeNames = new List<string> { "All" },
                // Enable long polling. 
                // Time to wait for message to arrive on queue.
                WaitTimeSeconds = 5 
            };

            receiveMessageResponse = sqs.ReceiveMessageAsync(receiveMessageRequest);
        }

The WaitTimeSeconds property of the ReceiveMessageRequest specifies the duration (in seconds) that the call waits for a message to arrive in the queue before returning a response to the calling application. There are a few benefits to using long polling:

  • It reduces the number of empty responses by allowing SQS to wait until a message is available in the queue before sending a response.
  • It eliminates false empty responses by querying all (rather than a limited number) of the servers.
  • It returns messages as soon as any message becomes available.

For more information, see Amazon SQS Long Polling.

After you have returned messages from the queue, you can start to process them by looping through each message in the response and invoking a new BackgroundWorker thread.

// Process messages
if (receiveMessageResponse.Result.Messages != null)
{
    foreach (var message in receiveMessageResponse.Result.Messages)
    {
        Console.WriteLine("Received SQS message, starting worker thread");

        // Create background worker to process message
        BackgroundWorker worker = new BackgroundWorker();
        worker.DoWork += (obj, e) => ProcessMessage(message);
        worker.RunWorkerAsync();
    }
}
else
{
    Console.WriteLine("No messages on queue");
}

The event handler, ProcessMessage, is where you implement business logic for processing orders. It is important to have a good understanding of how long a typical transaction takes, so you can set a message VisibilityTimeout that is long enough to complete your operation. If order processing takes longer than the specified timeout period, the message becomes visible again on the queue, and other nodes may pick it up and process the same order twice, leading to unintended consequences.
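If a particular order turns out to need more time, the consumer can also extend the timeout for just that in-flight message. A sketch using the AWS SDK for .NET; the 60-second value is an arbitrary choice:

// Extend the visibility timeout for a single in-flight message
var changeRequest = new ChangeMessageVisibilityRequest
{
    QueueUrl = _queueUrl,
    ReceiptHandle = message.ReceiptHandle,
    VisibilityTimeout = 60 // seconds from now; choose a value that covers your processing time
};
sqs.ChangeMessageVisibilityAsync(changeRequest);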

Handling Duplicate Messages

In order to manage duplicate messages, seek to make your processing application idempotent. In mathematics, idempotent describes a function that produces the same result if it is applied to itself:

f(x) = f(f(x))

No matter how many times you process the same message, the end result is the same (definition from Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, Hohpe and Wolf, 2004).

There are several strategies you could apply to achieve this:

  • Create messages that have inherent idempotent characteristics. That is, they are non-transactional in nature and are unique at a specified point in time. Rather than saying “place new order for Customer A,” which adds a duplicate order to the customer, use “place order <orderid> on <timestamp> for Customer A,” which creates a single order no matter how often it is persisted.
  • Deliver your messages via an Amazon SQS FIFO queue, which provides the benefits of message sequencing, but also mechanisms for content-based deduplication. You can deduplicate using the MessageDeduplicationId property on the SendMessage request or by enabling content-based deduplication on the queue, which generates a hash for MessageDeduplicationId, based on the content of the message, not the attributes.
var sendMessageRequest = new SendMessageRequest
{
    QueueUrl = _queueUrl,
    MessageBody = JsonConvert.SerializeObject(order),
    MessageGroupId = Guid.NewGuid().ToString("N"),
    MessageDeduplicationId = Guid.NewGuid().ToString("N")
};
  • If using SQS FIFO queues is not an option, keep a log of all messages processed for a specified period of time, as an alternative to message deduplication on the receiving end. Verifying the existence of a message in the log before processing it adds computational overhead, which can be minimized with a low-latency persistence solution such as Amazon DynamoDB (a sketch follows this list). Bear in mind that this approach depends on the successful, distributed transaction of the message and the message log.
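As a sketch of that last strategy, a conditional write to a hypothetical DynamoDB table named ProcessedMessages can atomically record a message ID and reject duplicates:

using (var dynamo = new AmazonDynamoDBClient())
{
    var putRequest = new PutItemRequest
    {
        TableName = "ProcessedMessages", // hypothetical message log table
        Item = new Dictionary<string, AttributeValue>
        {
            { "MessageId", new AttributeValue { S = message.MessageId } }
        },
        // Reject the write if this message ID was already recorded
        ConditionExpression = "attribute_not_exists(MessageId)"
    };

    try
    {
        dynamo.PutItemAsync(putRequest).Wait();
        // First delivery of this message: safe to process it
    }
    catch (AggregateException ex) when (ex.InnerException is ConditionalCheckFailedException)
    {
        // Duplicate delivery: skip processing
    }
}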

Handling exceptions

Because of the distributed nature of SQS queues, SQS does not automatically delete a message when you receive it. You must explicitly delete each message from the queue after processing it, using the message's ReceiptHandle property (see the following code example).

However, if at any stage you have an exception, avoid handling it as you normally would. The intention is to make sure that the message ends back on the queue, so that you can gracefully deal with intermittent failures. Instead, log the exception to capture diagnostic information, and swallow it.

By not explicitly deleting the message from the queue, you can take advantage of the VisibilityTimeout behavior described earlier. Gracefully handle the message processing failure and make the unprocessed message available to other nodes to process.

In the event that subsequent retries fail, SQS automatically moves the message to the configured DLQ after the configured number of receives has been reached. You can further investigate why the order process failed. Most importantly, the order has not been lost, and your customer is still your customer.

private static void ProcessMessage(Message message)
{
    using (var sqs = new AmazonSQSClient())
    {
        try
        {
            Console.WriteLine("Processing message id: {0}", message.MessageId);

            // Implement messaging processing here
            // Ensure no downstream resource contention (parallel processing)
            // <your order processing logic in here…>
            Console.WriteLine("{0} Thread {1}: {2}", DateTime.Now.ToString("s"), Thread.CurrentThread.ManagedThreadId, message.MessageId);
            
            // Delete the message off the queue. 
            // Receipt handle is the identifier you must provide 
            // when deleting the message.
            var deleteRequest = new DeleteMessageRequest(_queueUrl, message.ReceiptHandle);
            sqs.DeleteMessageAsync(deleteRequest);
            Console.WriteLine("Processed message id: {0}", message.MessageId);

        }
        catch (Exception ex)
        {
            // Do nothing.
            // Swallow exception, message will return to the queue when 
            // visibility timeout has been exceeded.
            Console.WriteLine("Could not process message due to error. Exception: {0}", ex.Message);
        }
    }
}

Using SQS to adapt to changing business requirements

One of the benefits of introducing a message queue is that you can accommodate new business requirements without dramatically affecting your application.

If, for example, the business decided that all orders placed over $5000 are to be handled as a priority, you could introduce a new “priority order” queue. The way the orders are processed does not change. The only significant change to the processing application is to ensure that messages from the “priority order” queue are processed before the “standard order” queue.

The following diagram shows how this logic could be isolated in an “order dispatcher,” whose only purpose is to route order messages to the appropriate queue based on whether the order exceeds $5000. Nothing on the web application or the processing nodes changes other than the target queue to which the order is sent. The rates at which orders are processed can be achieved by modifying the poll rates and scalability settings that I have already discussed.
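The dispatcher logic itself can stay trivial. A sketch, where the two queue URLs are hypothetical configuration values and order.Total is assumed to hold the order value:

// Route an order to the priority or standard queue based on its value
var targetQueueUrl = order.Total > 5000m
    ? _priorityOrderQueueUrl   // hypothetical "priority order" queue
    : _standardOrderQueueUrl;  // hypothetical "standard order" queue

var sendMessageRequest = new SendMessageRequest
{
    QueueUrl = targetQueueUrl,
    MessageBody = JsonConvert.SerializeObject(order)
};
sqs.SendMessageAsync(sendMessageRequest);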

Extending the design pattern with Amazon SNS

Amazon SNS supports reliable publish-subscribe (pub-sub) scenarios and push notifications to known endpoints across a wide variety of protocols. It eliminates the need to periodically check or poll for new information and updates. SNS supports:

  • Reliable storage of messages for immediate or delayed processing
  • Publish / subscribe – direct, broadcast, targeted “push” messaging
  • Multiple subscriber protocols: Amazon SQS, HTTP, HTTPS, email, SMS, mobile push, AWS Lambda

With these capabilities, you can provide parallel asynchronous processing of orders in the system and extend it to support any number of different business use cases without affecting the production environment. This is commonly referred to as a “fanout” scenario.

Rather than your web application pushing orders to a queue for processing, send a notification via SNS. The SNS messages are sent to a topic and then replicated and pushed to multiple SQS queues and Lambda functions for processing.
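Switching the web application from SQS to SNS is a small change. Publishing the order to a topic might look like this sketch (the topic ARN is a placeholder):

// Publish the order to an SNS topic; subscribed queues and functions receive copies
using (var sns = new AmazonSimpleNotificationServiceClient())
{
    var publishRequest = new PublishRequest
    {
        TopicArn = "arn:aws:sns:us-east-1:<account-id>:customer-orders", // placeholder ARN
        Message = JsonConvert.SerializeObject(order)
    };
    sns.PublishAsync(publishRequest);
}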

As the diagram above shows, you have the development team consuming “live” data as they work on the next version of the processing application, or potentially using the messages to troubleshoot issues in production.

Marketing is consuming all order information, via a Lambda function that has subscribed to the SNS topic, inserting the records into an Amazon Redshift warehouse for analysis.

All of this, of course, is happening without affecting your order processing application.

Summary

While I haven’t dived deep into the specifics of each service, I have discussed how these services can be applied at an architectural level to build loosely coupled systems that facilitate multiple business use cases. I’ve also shown you how to use infrastructure and application-level scaling techniques, so you can get the most out of your EC2 instances.

One of the many benefits of using these managed services is how quickly and easily you can implement powerful messaging capabilities in your systems, and lower the capital and operational costs of managing your own messaging middleware.

Using Amazon SQS and Amazon SNS together can provide you with a powerful mechanism for decoupling application components. This should be part of design considerations as you architect for the cloud.

For more information, see the Amazon SQS Developer Guide and Amazon SNS Developer Guide. You'll find tutorials on all the concepts covered in this post, and more. To get started, visit the AWS Management Console or the AWS SDK of your choice.

Happy messaging!

Digital painter rundown

Post Syndicated from Eevee original https://eev.ee/blog/2017/06/17/digital-painter-rundown/

Another patron post! IndustrialRobot asks:

You should totally write about drawing/image manipulation programs! (Inspired by https://eev.ee/blog/2015/05/31/text-editor-rundown/)

This is a little trickier than a text editor comparison — while most text editors are cross-platform, quite a few digital art programs are not. So I’m effectively unable to even try a decent chunk of the offerings. I’m also still a relatively new artist, and image editors are much harder to briefly compare than text editors…

Right, now that your expectations have been suitably lowered:

Krita

I do all of my digital art in Krita. It’s pretty alright.

Okay so Krita grew out of Calligra, which used to be KOffice, which was an office suite designed for KDE (a Linux desktop environment). I bring this up because KDE has a certain… reputation. With KDE, there are at least three completely different ways to do anything, each of those ways has ludicrous amounts of customization and settings, and somehow it still can’t do what you want.

Krita inherits this aesthetic by attempting to do literally everything. It has 17 different brush engines, more than 70 layer blending modes, seven color picker dockers, and an ungodly number of colorspaces. It’s clearly intended primarily for drawing, but it also supports animation and vector layers and a pretty decent spread of raster editing tools. I just right now discovered that it has Photoshop-like “layer styles” (e.g. drop shadow), after a year and a half of using it.

In fairness, Krita manages all of this stuff well enough, and (apparently!) it manages to stay out of your way if you’re not using it. In less fairness, they managed to break erasing with a Wacom tablet pen for three months?

I don’t want to rag on it too hard; it’s an impressive piece of work, and I enjoy using it! The emotion it evokes isn’t so much frustration as… mystified bewilderment.

I once filed a ticket suggesting the addition of a brush size palette — a panel showing a grid of fixed brush sizes that makes it easy to switch between known sizes with a tablet pen (and increases the chances that you’ll be able to get a brush back to the right size again). It’s a prominent feature of Paint Tool SAI and Clip Studio Paint, and while I’ve never used either of those myself, I’ve seen a good few artists swear by it.

The developer response was that I could emulate the behavior by creating brush presets. But that’s flat-out wrong: getting the same effect would require creating a ton of brush presets for every brush I have, plus giving them all distinct icons so the size is obvious at a glance. Even then, it would be much more tedious to use and fill my presets with junk.

And that sort of response is what’s so mysterious to me. I’ve never even been able to use this feature myself, but a year of amateur painting with Krita has convinced me that it would be pretty useful. But a developer didn’t see the use and suggested an incredibly tedious alternative that only half-solves the problem and creates new ones. Meanwhile, of the 28 existing dockable panels, a quarter of them are different ways to choose colors.

What is Krita trying to be, then? What does Krita think it is? Who precisely is the target audience? I have no idea.


Anyway, I enjoy drawing in Krita well enough. It ships with a respectable set of brushes, and there are plenty more floating around. It has canvas rotation, canvas mirroring, perspective guide tools, and other art goodies. It doesn’t colordrop on right click by default, which is arguably a grave sin (it shows a customizable radial menu instead), but that’s easy to rebind. It understands having a background color beneath a bottom transparent layer, which is very nice. You can also toggle any brush between painting and erasing with the press of a button, and that turns out to be very useful.

It doesn’t support infinite canvases, though it does offer a one-click button to extend the canvas in a given direction. I’ve never used it (and didn’t even know what it did until just now), but would totally use an infinite canvas.

I haven’t used the animation support too much, but it’s pretty nice to have. Granted, the only other animation software I’ve used is Aseprite, so I don’t have many points of reference here. It’s a relatively new addition, too, so I assume it’ll improve over time.

The one annoyance I remember with animation was really an interaction with a larger annoyance, which is: working with selections kind of sucks. You can’t drag a selection around with the selection tool; you have to switch to the move tool. That would be fine if you could at least drag the selection ring around with the selection tool, but you can’t do that either; dragging just creates a new selection.

If you want to copy a selection, you have to explicitly copy it to the clipboard and paste it, which creates a new layer. Ctrl-drag with the move tool doesn’t work. So then you have to merge that layer down, which I think is where the problem with animation comes in: a new layer is non-animated by default, meaning it effectively appears in every frame, so simply merging it down will merge it onto every single frame of the layer below. And you won’t even notice until you switch frames or play back the animation. Not ideal.

This is another thing that makes me wonder about Krita’s sense of identity. It has a lot of fancy general-purpose raster editing features that even GIMP is still struggling to implement, like high color depth support and non-destructive filters, yet something as basic as working with selections is clumsy. (In fairness, GIMP is a bit clumsy here too, but it has a consistent notion of “floating selection” that’s easy enough to work with.)

I don’t know how well Krita would work as a general-purpose raster editor; I’ve never tried to use it that way. I can’t think of anything obvious that’s missing. The only real gotcha is that some things you might expect to be tools, like smudge or clone, are just types of brush in Krita.

GIMP

Ah, GIMP — open source’s answer to Photoshop.

It’s very obviously intended for raster editing, and I’m pretty familiar with it after half a lifetime of only using Linux. I even wrote a little Scheme script for it ages ago to automate some simple edits to a couple hundred files, back before I was aware of ImageMagick. I don’t know what to say about it, specifically; it’s fairly powerful and does a wide variety of things.

In fact I’d say it’s almost frustratingly intended for raster editing. I used GIMP in my first attempts at digital painting, before I’d heard of Krita. It was okay, but so much of it felt clunky and awkward. Painting is split between a pencil tool, a paintbrush tool, and an airbrush tool; I don’t really know why. The default brushes are largely uninteresting. Instead of brush presets, there are tool presets that can be saved for any tool; it’s a neat idea, but doesn’t feel like a real substitute for brush presets.

Much of the same functionality as Krita is there, but it’s all somehow more clunky. I’m sure it’s possible to fiddle with the interface to get something friendlier for painting, but I never really figured out how.

And then there’s the surprising stuff that’s missing. There’s no canvas rotation, for example. There’s only one type of brush, and it just stamps the same pattern along a path. I don’t think it’s possible to smear or blend or pick up color while painting. The only way to change the brush size is via the very sensitive slider on the tool options panel, which I remember being a little annoying with a tablet pen. Also, you have to specifically enable tablet support? It’s not difficult or anything, but I have no idea why the default is to ignore tablet pressure and treat it like a regular mouse cursor.

As I mentioned above, there’s also no support for high color depth or non-destructive editing, which is honestly a little embarrassing. Those are the major things Serious Professionals™ have been asking for for ages, and GIMP has been trying to provide them, but it’s taking a very long time. The first signs of GEGL, a new library intended to provide these features, appeared in GIMP 2.6… in 2008. The last major release was in 2012. GIMP has been working on this new plumbing for almost as long as Krita’s entire development history. (To be fair, Krita has also raised almost €90,000 from three Kickstarters to fund its development; I don’t know that GIMP is funded at all.)

I don’t know what’s up with GIMP nowadays. It’s still under active development, but the exact status and roadmap are a little unclear. I still use it for some general-purpose editing, but I don’t see any reason to use it to draw.

I do know that canvas rotation will be in the next release, and there was some experimentation with embedding MyPaint’s brush engine (though when I tried it it was basically unusable), so maybe GIMP is interested in wooing artists? I guess we’ll see.

MyPaint

Ah, MyPaint. I gave it a try once. Once.

It’s a shame, really. It sounds pretty great: specifically built for drawing, has very powerful brushes, supports an infinite canvas, supports canvas rotation, has a simple UI that gets out of your way. Perfect.

Or so it seems. But in MyPaint’s eagerness to shed unnecessary raster editing tools, it forgot a few of the more useful ones. Like selections.

MyPaint has no notion of a selection, nor of copy/paste. If you want to move a head to align better to a body, for example, the sanctioned approach is to duplicate the layer, erase the head from the old layer, erase everything but the head from the new layer, then move the new layer.

I can’t find anything that resembles HSL adjustment, either. I guess the workaround for that is to create H/S/L layers and floodfill them with different colors until you get what you want.

I can’t work seriously without these basic editing tools. I could see myself doodling in MyPaint, but Krita works just as well for doodling as for serious painting, so I’ve never gone back to it.

Drawpile

Drawpile is the modern equivalent to OpenCanvas, I suppose? It lets multiple people draw on the same canvas simultaneously. (I would not recommend it as a general-purpose raster editor.)

It’s a little clunky in places — I sometimes have bugs where keyboard focus gets stuck in the chat, or my tablet cursor becomes invisible — but the collaborative part works surprisingly well. It’s not a brush powerhouse or anything, and I don’t think it allows textured brushes, but it supports tablet pressure and canvas rotation and locked alpha and selections and whatnot.

I’ve used it a couple times, and it’s worked well enough that… well, other people made pretty decent drawings with it? I’m not sure I’ve managed yet. And I wouldn’t use it single-player. Still, it’s fun.

Aseprite

Aseprite is for pixel art so it doesn’t really belong here at all. But it’s very good at that and I like it a lot.

That’s all

I can’t name any other serious contender that exists for Linux.

I’m dimly aware of a thing called “Photo Shop” that’s more intended for photos but functions as a passable painter. More artists seem to swear by Paint Tool SAI and Clip Studio Paint. Also there’s Paint.NET, but I have no idea how well it’s actually suited for painting.

And that’s it! That’s all I’ve got. Krita for drawing, GIMP for editing, Drawpile for collaborative doodling.

Estefannie’s GPS-Controlled GoPro Photo Taker

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/estefannie-gopro-selfie/

Are you tired of having to take selfies physically? Do you only use your GoPro for the occasional beach vacation? Are you maybe even wondering what to do with the load of velcro you bought on a whim? Then we have good news for you: Estefannie‘s back to help you out with her Personal Automated GPS-Controlled Portable Photo Taker…PAGCPPT for short…or pagsssspt, if you like.

RASPBERRY PI + GPS CONTROLLED PHOTO TAKER

Hey World! Do you like vacation pictures but don’t like taking them? Make your own Personal Automated GPS Controlled Portable Photo Taker! The code, components, and instructions are in my Hackster.io account: https://www.hackster.io/estefanniegg/automated-gps-controlled-photo-taker-3fc84c For this build, I decided to put together a backpack to take pictures of me when I am close to places that I like.

The Personal Automated GPS-Controlled Portable Photo Taker

Try saying that five times in a row.

Go on. I’ll wait.

Using a Raspberry Pi 3, a GPS module, a power pack, and a GoPro plus GoPro Stick, Estefannie created the PAGCPPT as a means of automatically taking selfies at pre-specified tourist attractions across London.


There’s pie in my backpack too…but it’s a bit messy

With velcro and hot glue, she secured the tech in place on (and inside) a backpack. Then it was simply a case of programming her set up to take pictures while she walked around the city.


Making the GoPro…go

Estefannie made use of a GoPro API library to connect her GoPro to the Raspberry Pi via WiFi. With the help of this library, she wrote a Python script that made the GoPro take a photograph whenever her GPS module placed her within a ten-metre radius of a pre-selected landmark such as Tower Bridge, Abbey Road, or Platform 9 3/4.
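Her exact script is on Hackster, but the core idea is simple enough to sketch in Python: compute the haversine distance to each landmark and fire the camera inside the radius. Here read_gps() and take_photo() are hypothetical stand-ins for the GPS module read and the GoPro API call, and the coordinates are merely illustrative:

import math
import time

LANDMARKS = {
    # Illustrative coordinates (latitude, longitude); not from the original script
    "Tower Bridge": (51.5055, -0.0754),
    "Abbey Road": (51.5320, -0.1780),
}

def distance_m(lat1, lon1, lat2, lon2):
    # Haversine distance between two lat/lon points, in metres
    r = 6371000  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

while True:
    lat, lon = read_gps()  # hypothetical: read a fix from the GPS module
    for name, (lm_lat, lm_lon) in LANDMARKS.items():
        if distance_m(lat, lon, lm_lat, lm_lon) <= 10:
            take_photo()   # hypothetical: trigger the GoPro over WiFi
    time.sleep(5)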


“Accio selfie.”

The full script, as well as details regarding the components she used for the project, can be found on her hackster.io page here.

Estefannie Explains it All

You’ll have noticed that we’ve covered Estefannie once or twice before on the Raspberry Pi blog. We love project videos that convey a sense of ‘Oh hey, I can totally build one of those!’, and hers always tick that box. They are imaginative, interesting, quirky, and to be totally honest with you, I’ve been waiting for this particular video since she hinted at it on her visit to Pi Towers in May. I got the inside scoop, yo!

What’s better than taking pictures? Not taking pictures. But STILL having pictures. I made a personal automated GPS controlled Portable Photo Taker ⚡ NEW VIDEO ALERT⚡ Link in bio.


Make sure to follow her on YouTube and Instagram for more maker content and random shenanigans. And if you have your own maker social media channel, YouTube account, blog, etc, this is your chance to share it for the world to see in the comments below!

The post Estefannie’s GPS-Controlled GoPro Photo Taker appeared first on Raspberry Pi.

A Podcast for Bulgarian Science Fiction

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=2062

Today my humble blog hosts Valentin D. Ivanov. He is known around the world as an astronomer and as the discoverer of the class of celestial bodies called "planemos": lone planets that have no star. It is one of the great astronomical discoveries of the past 100 years.

Here at home he is "world-unknown": most Bulgarians think "astronomy" is a mistake and that the correct word is "astrology". Fortunately, in science fiction fan circles Valyo is very well known as a writer, translator, popularizer, advocate for fandom, and much else besides. And perhaps most of all as a fan with soul, one who doesn't just make noise but does useful things.

You can read about one of those things below, at the kind suggestion of Alexander Karapanchev.

—-

Welcome to the specialized podcast "Bulgarian Science Fiction" –

You could just as well call it an audio magazine. Our goal is to produce audio versions of Bulgarian science fiction works.

A year or so ago, as part of my duties at the European Southern Observatory, I had to make short educational videos about our tools for processing observational data (https://www.youtube.com/channel/UCCq4rxr30ydNyV94OWmLrMA). Audio science fiction, on the other hand, is close to my heart, because I quite often fill my time while traveling by listening to science fiction podcasts. There are many in English (http://escapepod.org/; www.starshipsofa.com/) and in Russian (https://fantlab.ru/work203487). Quite a few magazines now also publish audio versions of their stories on their pages (http://www.newyorker.com/series/fiction-podcast, http://www.lightspeedmagazine.com/; http://strangehorizons.com/podcasts/).

It was a short step from there to trying it with Bulgarian science fiction, in particular my own, and on June 7, 2016 this appeared: https://www.youtube.com/watch?v=7Rfpa3NvR34.

Clearly, I am not a professional actor, and the result was exactly as bad as I expected. I set the undertaking aside for a while, but a few months ago I had to return to making video tutorials, and I gathered the courage to try again. The Ivan Vazov story could have come out better, but I like the last two, immodest as that may be. Health permitting, I hope the next ones will turn out even better.

I will try to prepare a new story once a month, or at most once every two months. I hasten to say that I cannot guarantee this schedule; it will depend on circumstances. The publication is the work of a collective of people that includes the Society of Bulgarian Science Fiction Writers "Tera Fantasia" and the Human Library Foundation. I will introduce each of them later on.

I invite everyone interested to send me stories and poems of up to 2,500 words at podcastbgnf@gmail.com.

We intend to alternate fiction with non-fiction, but for the latter please contact us first to check whether it would interest us. The same goes for illustrations: every story needs one. We don't know in advance what we will publish, but sufficiently generic science fiction themes are suitable. I expect that over time we will build up a reserve of illustrations to use in the future.

There is no set theme. There are no requirements either, beyond the usual ones: stories should not incite hatred and should not include gratuitous violence or sexual descriptions. I will make audio versions of whatever my colleagues and I like. It is all subjective; don't be hurt if we don't choose your story, or if we fail to prepare an audio version of it for some other reason. Think of how books were distributed in the Middle Ages: someone had to like your text so much that they would set aside several months to make a copy by hand, or pay a specialist calligrapher to produce one.

Let me stress emphatically that we hold no monopoly. You can always make your own audio version of your own work. Our authors receive no fees, but neither do they pay for publication. Before reproaching us for anything, please remember that we devote our own time to this undertaking voluntarily and without pay.

Besides authors, I also invite anyone who would like to read stories aloud to get in touch. I suspect we will need such people far more than we need authors.

At the beginning I said "the first audio magazine", but there are some predecessors that deserve mention. For example, Bogdan Dukov (https://www.youtube.com/channel/UCzD5Irz7MHwGA0_yiANA5wQ) has for quite some time been publishing wonderful audio versions of Bulgarian classics, including Svetoslav Minkov (https://www.youtube.com/watch?v=jK3jQ7TRQGQ). One of the podcasts of "Pravilniyat Med" (https://www.youtube.com/channel/UCuP9AG8V1M_LbNgL-Ku3uZw) from 2014 is a conversation about science fiction (https://www.youtube.com/watch?v=SpMKNQo1Ias). And, of course, there is Yancho Cholakov, who in 2012 read an excerpt from his book "The Story of the Lonely Private" (https://www.youtube.com/watch?v=yGt0ToQM_Sw). There may be others; if I learn of them, I will gladly add them.

Wish us luck!

Encased in amber: meet the epoxy-embedded Pi

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/epoxy-pi-resin-io/

The maker of one of our favourite projects from this year’s Maker Faire Bay Area took the idea of an ’embedded device’ and ran with it: Ronald McCollam has created a wireless, completely epoxy-encased Pi build – screen included!


*cue epic music theme* “Welcome…to resin in resin.”

Just encase…

Of course, this build is not meant to be a museum piece: Ronald embedded a Raspberry Pi 3 with built-in wireless LAN and Bluetooth to create a hands-on demonstration of the resin.io platform, for which he is a Solution Architect. Resin.io is useful for remotely controlling groups of Linux-based IoT devices. In this case, Ronald used it to connect to the encased Pi. And yes, he named his make Resin-in-resin – we salute you, sir!


“Life uh…finds a way.”

Before he started the practical part of his project, he did his research to find a suitable resin. He found that epoxy types specifically designed for encasing electronics are very expensive. In the end, Ronald tried out a cheap type, usually employed to coat furniture, by encasing an LED. It worked perfectly, and he went ahead to use this resin for embedding the Pi.

Bubbleshooting epoxy

This was the first time Ronald had worked with resin, so he learned some essential things about casting. He advises other makers to mix the epoxy very, very slowly to minimize the formation of bubbles; to try their hands on some small-scale casting attempts first; and to make sure they’re using a large enough mold for casting. Another thing to keep in mind is that some components of the make will heat up and expand while the device is running.

His first version of an encased Pi was still connected to the outside world by its USB cable:

Ronald McCollam on Twitter

Updates don’t get more “hands off” than a Raspberry Pi encased in epoxy — @resin_io inside resin! Come ask me about it at @DockerCon!

Not satisfied with this, he went on to incorporate an inductive charging coil as a power source, so that the Pi could be totally insulated in epoxy. The Raspberry Pi Foundation’s Matt Richardson got a look at the finished project at Maker Faire Bay Area:

MattRichardson🏳️‍🌈 on Twitter

If you’re at @makerfaire, you must check out what @resin_io is showing. A @Raspberry_Pi completely enclosed in resin. Completely wireless. https://t.co/djVjoLz3hI

MAGNETS!

The charging coil delivers enough power to keep the Pi running for several hours, but not enough for the Pi to boot reliably. After some head-scratching, Ronald came up with a cool solution to this problem: he added a battery and a magnetic reed switch. He explains:

[The] boot process is to use the magnetic switch to turn off the Pi, put it on the charger for a few minutes to allow the battery to charge up, then remove the magnet so the Pi boots.


“God help us, we’re in the hands of engineers.”

He talks about his build on the resin.io blog, and has provided a detailed project log on Hackaday. For those of you who want to recreate this project at home, Ronald has even put together an Adafruit wishlist of the necessary components.

Does this resin-ate with you?

What’s especially great about Ronald’s posts is that they’re full of helpful tips about getting started with using epoxy resin in your digital making projects. So whether you’re keen to build your own wireless Pi, or just generally interested in embedding electronic components in resin, you’ll find his write-ups useful.

If you have experience in working with epoxy and electronic devices and want to share what you’ve learned, please do so in the comments!

The post Encased in amber: meet the epoxy-embedded Pi appeared first on Raspberry Pi.

How to Deploy Local Administrator Password Solution with AWS Microsoft AD

Post Syndicated from Dragos Madarasan original https://aws.amazon.com/blogs/security/how-to-deploy-local-administrator-password-solution-with-aws-microsoft-ad/

Local Administrator Password Solution (LAPS) from Microsoft simplifies password management by allowing organizations to use Active Directory (AD) to store unique passwords for computers. Typically, an organization might reuse the same local administrator password across the computers in an AD domain. However, this approach represents a security risk because it can be exploited during lateral escalation attacks. LAPS solves this problem by creating unique, randomized passwords for the Administrator account on each computer and storing them encrypted in AD.

Deploying LAPS with AWS Microsoft AD requires the following steps:

  1. Install the LAPS binaries on instances joined to your AWS Microsoft AD domain. The binaries add client-side extension (CSE) functionality to the Group Policy client.
  2. Extend the AWS Microsoft AD schema. LAPS requires new AD attributes to store an encrypted password and its expiration time.
  3. Configure AD permissions and delegate the ability to retrieve the local administrator password for IT staff in your organization.
  4. Configure Group Policy on instances joined to your AWS Microsoft AD domain to enable LAPS. This configures the Group Policy client to process LAPS settings and uses the binaries installed in Step 1.

The following diagram illustrates the setup that I will be using throughout this post and the associated tasks to set up LAPS. Note that the AWS Directory Service directory is deployed across multiple Availability Zones, and monitoring automatically detects and replaces domain controllers that fail.

Diagram illustrating this blog post's solution

In this blog post, I explain the prerequisites for setting up Local Administrator Password Solution, demonstrate the steps involved in updating the AD schema on your AWS Microsoft AD domain, show how to delegate permissions to IT staff and configure LAPS via Group Policy, and demonstrate how to retrieve the password using the graphical user interface or Windows PowerShell.

This post assumes you are familiar with Lightweight Directory Access Protocol Data Interchange Format (LDIF) files and AWS Microsoft AD. If you need more of an introduction to Directory Service and AWS Microsoft AD, see How to Move More Custom Applications to the AWS Cloud with AWS Directory Service, which introduces working with schema changes in AWS Microsoft AD.

Prerequisites

In order to implement LAPS, you must use AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as AWS Microsoft AD. Any instance on which you want to configure LAPS must be joined to your AWS Microsoft AD domain. You also need a Management instance on which you install the LAPS management tools.

In this post, I use an AWS Microsoft AD domain called example.com that I have launched in the EU (London) region. To see the regions in which Directory Service is available, see AWS Regions and Endpoints.

Screenshot showing the AWS Microsoft AD domain example.com used in this blog post

In addition, you must have at least two instances launched in the same region as the AWS Microsoft AD domain. To join the instances to your AWS Microsoft AD domain, you have two options:

  1. Use the Amazon EC2 Systems Manager (SSM) domain join feature. To learn more about how to set up domain join for EC2 instances, see joining a Windows Instance to an AWS Directory Service Domain.
  2. Manually configure the DNS server addresses in the Internet Protocol version 4 (TCP/IPv4) settings of the network card to use the AWS Microsoft AD DNS addresses (172.31.9.64 and 172.31.16.191, for this blog post) and perform a manual domain join.

For the purpose of this post, my two instances are:

  1. A Management instance, tagged as Management, on which I will install the LAPS management tools.
  2. A Web Server instance on which I will be deploying the LAPS binary.

Screenshot showing the two EC2 instances used in this post

Implementing the solution

 

1. Install the LAPS binaries on instances joined to your AWS Microsoft AD domain by using EC2 Run Command

LAPS binaries come in the form of an MSI installer and can be downloaded from the Microsoft Download Center. You can install the LAPS binaries manually, with an automation service such as EC2 Run Command, or with your existing software deployment solution.

For this post, I will deploy the LAPS binaries on my Web Server instance (i-0b7563d0f89d3453a) by using EC2 Run Command:

  1. While signed in to the AWS Management Console, choose EC2. In the Systems Manager Services section of the navigation pane, choose Run Command.
  2. Choose Run a command, and from the Command document list, choose AWS-InstallApplication.
  3. From Target instances, choose the instance on which you want to deploy the LAPS binaries. In my case, I will be selecting the instance tagged as Web Server. If you do not see any instances listed, make sure you have met the prerequisites for Amazon EC2 Systems Manager (SSM) by reviewing the Systems Manager Prerequisites.
  4. For Action, choose Install, and then stipulate the following values:
    • Parameters: /quiet
    • Source: https://download.microsoft.com/download/C/7/A/C7AAD914-A8A6-4904-88A1-29E657445D03/LAPS.x64.msi
    • Source Hash: f63ebbc45e2d080630bd62a195cd225de734131a56bb7b453c84336e37abd766
    • Comment: LAPS deployment

Leave the other options with the default values and choose Run. The AWS Management Console will return a Command ID, which will initially have a status of In Progress. It should take less than 5 minutes to download and install the binaries, after which the Command ID will update its status to Success.
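If you'd rather script this step than click through the console, the same deployment can be kicked off with the AWS Tools for PowerShell. This is only a sketch: it assumes your credentials and region are configured, the SSM prerequisites are met, and the parameter names (my assumption) mirror the console fields above.

# Sketch: deploy the LAPS MSI via Run Command from PowerShell.
Send-SSMCommand -InstanceId 'i-0b7563d0f89d3453a' `
    -DocumentName 'AWS-InstallApplication' `
    -Comment 'LAPS deployment' `
    -Parameter @{
        action     = 'Install'
        parameters = '/quiet'
        source     = 'https://download.microsoft.com/download/C/7/A/C7AAD914-A8A6-4904-88A1-29E657445D03/LAPS.x64.msi'
        sourceHash = 'f63ebbc45e2d080630bd62a195cd225de734131a56bb7b453c84336e37abd766'
    }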

Status showing the binaries have been installed successfully

If the Command ID runs for more than 5 minutes or returns an error, it might indicate a problem with the installer. To troubleshoot, review the steps in Troubleshooting Systems Manager Run Command.

To verify the binaries have been installed successfully, open Control Panel and review the recently installed applications in Programs and Features.

Screenshot of Control Panel that confirms LAPS has been installed successfully

You should see an entry for Local Administrator Password Solution with a version of 6.2.0.0 or newer.
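If you'd rather verify from PowerShell than from Control Panel, one quick sketch is to query the uninstall registry keys (the display name match is an assumption based on the entry shown above):

# Sketch: list the installed LAPS entry from the machine's uninstall registry keys.
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object { $_.DisplayName -like '*Local Administrator Password Solution*' } |
    Select-Object DisplayName, DisplayVersion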

2. Extend the AWS Microsoft AD schema

In the previous section, I used EC2 Run Command to install the LAPS binaries on an EC2 instance. Now, I am ready to extend the schema in an AWS Microsoft AD domain. Extending the schema is a requirement because LAPS relies on new AD attributes to store the encrypted password and its expiration time.

In an on-premises AD environment, you would update the schema by running the Update-AdmPwdADSchema Windows PowerShell cmdlet with schema administrator credentials. Because AWS Microsoft AD is a managed service, I do not have permissions to update the schema directly. Instead, I will update the AD schema from the Directory Service console by importing an LDIF file. If you are unfamiliar with schema updates or LDIF files, see How to Move More Custom Applications to the AWS Cloud with AWS Directory Service.

To make things easier for you, I am providing you with a sample LDIF file that contains the required AD schema changes. Using Notepad or a similar text editor, open the SchemaChanges-0517.ldif file and update the values of dc=example,dc=com with your own AWS Microsoft AD domain and suffix.
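For example, here is a quick PowerShell sketch that swaps in a hypothetical suffix of dc=corp,dc=mycompany,dc=com; substitute your own:

# Sketch: replace the sample suffix in the LDIF file before importing it.
(Get-Content .\SchemaChanges-0517.ldif) -replace 'dc=example,dc=com', 'dc=corp,dc=mycompany,dc=com' |
    Set-Content .\SchemaChanges-0517.ldif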

After I update the LDIF file with my AWS Microsoft AD details, I import it by using the AWS Management Console:

  1. On the Directory Service console, select your Microsoft AD directory from the list by choosing its identifier (it will look something like d-534373570ea).
  2. On the Directory details page, choose the Schema extensions tab and choose Upload and update schema.
    Screenshot showing the "Upload and update schema" option
  3. When prompted for the LDIF file that contains the changes, choose the sample LDIF file.
  4. In the background, the LDIF file is validated for errors and a backup of the directory is created for recovery purposes. Updating the schema might take a few minutes, and the status will change to Updating Schema.

Screenshot showing the schema updates in progress
When the process has completed, the status of Completed will be displayed, as shown in the following screenshot.

Screenshot showing the process has completed

If the LDIF file contains errors or the schema extension fails, the Directory Service console will generate an error code and additional debug information. To help troubleshoot error messages, see Schema Extension Errors.

The sample LDIF file triggers AWS Microsoft AD to perform the following actions:

  1. Create the ms-Mcs-AdmPwd attribute, which stores the encrypted password.
  2. Create the ms-Mcs-AdmPwdExpirationTime attribute, which stores the time of the password’s expiration.
  3. Add both attributes to the Computer class.

3. Configure AD permissions

In the previous section, I updated the AWS Microsoft AD schema with the required attributes for LAPS. I am now ready to configure the permissions for administrators to retrieve the password and for computer accounts to update their password attribute.

As part of configuring AD permissions, I grant computers the ability to update their own password attribute and specify which security groups have permissions to retrieve the password from AD. As part of this process, I run Windows PowerShell cmdlets that are not installed by default on Windows Server.

Note: To learn more about Windows PowerShell and the concept of a cmdlet (pronounced “command-let”), go to Getting Started with Windows PowerShell.

Before getting started, I need to set up the required tools for LAPS on my Management instance, which must be joined to the AWS Microsoft AD domain. I will be using the same LAPS installer that I downloaded from the Microsoft LAPS website. In my Management instance, I have manually run the installer by clicking the LAPS.x64.msi file. On the Custom Setup page of the installer, under Management Tools, for each option I have selected Install on local hard drive.

Screenshot showing the required management tools

In the preceding screenshot, the features are:

  • The fat client UI – A simple user interface for retrieving the password (I will use it at the end of this post).
  • The Windows PowerShell module – Needed to run the commands in the next sections.
  • The GPO Editor templates – Used to configure Group Policy objects.

The next step is to grant computers in the Computers OU the permission to update their own attributes. While connected to my Management instance, I go to the Start menu and type PowerShell. In the list of results, right-click Windows PowerShell and choose Run as administrator and then Yes when prompted by User Account Control.

In the Windows PowerShell prompt, I type the following commands.

Import-Module AdmPwd.PS

Set-AdmPwdComputerSelfPermission -OrgUnit "OU=Computers,OU=MyMicrosoftAD,DC=example,DC=com"

To grant the administrator group called Admins the permission to retrieve the computer password, I run the following command in the Windows PowerShell prompt I previously started.

Import-Module AdmPwd.PS

Set-AdmPwdReadPasswordPermission -OrgUnit "OU=Computers,OU=MyMicrosoftAD,DC=example,DC=com" -AllowedPrincipals "Admins"
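To double-check who ended up with read access, the AdmPwd.PS module also includes an audit cmdlet. Here is a sketch, assuming the same OU as above:

# Sketch: list which principals can read the password attribute in the OU.
Find-AdmPwdExtendedRights -Identity "OU=Computers,OU=MyMicrosoftAD,DC=example,DC=com" |
    Select-Object ObjectDN, ExtendedRightHolders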

4. Configure Group Policy to enable LAPS

In the previous section, I deployed the LAPS management tools on my management instance, granted the computer accounts the permission to self-update their local administrator password attribute, and granted my Admins group permissions to retrieve the password.

Note: The following section addresses the Group Policy Management Console and Group Policy objects. If you are unfamiliar with or wish to learn more about these concepts, go to Get Started Using the GPMC and Group Policy for Beginners.

I am now ready to enable LAPS via Group Policy:

  1. On my Management instance (i-03b2c5d5b1113c7ac), I have installed the Group Policy Management Console (GPMC) by running the following command in Windows PowerShell.
Install-WindowsFeature -Name GPMC
  2. Next, I have opened the GPMC and created a new Group Policy object (GPO) called LAPS GPO.
  3. In the Group Policy Management Editor, I navigate to Computer Configuration > Policies > Administrative Templates > LAPS. I have configured the settings using the values in the following table.

Setting | State | Options
Password Settings | Enabled | Complexity: large letters, small letters, numbers, specials
Do not allow password expiration time longer than required by policy | Enabled | N/A
Enable local admin password management | Enabled | N/A

  4. Next, I need to link the GPO to an organizational unit (OU) in which my machine accounts sit. In your environment, I recommend testing the new settings on a test OU and then deploying the GPO to production OUs.

Note: If you choose to create a new test organizational unit, you must create it in the OU that AWS Microsoft AD delegates to you to manage. For example, if your AWS Microsoft AD directory name were example.com, the test OU path would be example.com/example/Computers/Test.

  5. To test that LAPS works, I need to make sure the computer has received the new policy by forcing a Group Policy update. While connected to the Web Server instance (i-0b7563d0f89d3453a) using Remote Desktop, I open an elevated administrative command prompt and run the following command: gpupdate /force. I can check whether the policy is applied by running the command gpresult /r | findstr /c:"LAPS GPO", where LAPS GPO is the name of the GPO created in step 2.
  6. Back on my Management instance, I can then launch the LAPS interface from the Start menu and use it to retrieve the password (as shown in the following screenshot). Alternatively, I can run the Get-ADComputer Windows PowerShell cmdlet to retrieve the password.
Get-ADComputer [YourComputerName] -Properties ms-Mcs-AdmPwd | select name, ms-Mcs-AdmPwd

Screenshot of the LAPS UI, which you can use to retrieve the password
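As an alternative to Get-ADComputer, the AdmPwd.PS module ships its own retrieval cmdlet, which returns the expiration timestamp as well. A sketch, with a hypothetical computer name:

# Sketch: retrieve the managed local administrator password for one computer.
Get-AdmPwdPassword -ComputerName 'WEBSERVER01' |
    Select-Object ComputerName, Password, ExpirationTimestamp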

Summary

In this blog post, I demonstrated how you can deploy LAPS with an AWS Microsoft AD directory. I then showed how to install the LAPS binaries by using EC2 Run Command. Using the sample LDIF file I provided, I showed you how to extend the schema, which is a requirement because LAPS relies on new AD attributes to store the encrypted password and its expiration time. Finally, I showed how to complete the LAPS setup by configuring the necessary AD permissions and creating the GPO that starts the LAPS password change.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the Directory Service forum.

– Dragos

Raspberry Pi Looper-Synth-Drum…thing

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/raspberry-pi-looper/

To replace his iPad for live performance, Colorado-based musician Toby Hendricks built a looper, complete with an impressive internal sound library, all running on a Raspberry Pi.

Raspberry Pi Looper/synth/drum thing

Check out the guts here: https://youtu.be/mCOHFyI3Eoo My first venture into raspberry pi stuff. Running a custom pure data patch I’ve been working on for a couple years on a Raspberry Pi 3. This project took a couple months and I’m still tweaking stuff here and there but it’s pretty much complete, it even survived its first live show!

Toby’s build is a pretty mean piece of kit, as this video attests. Not only does it have a multitude of uses, but the final build is beautiful. Do make sure to watch to the end of the video for a wonderful demonstration of the kit.

Inside the Raspberry Pi looper

Alongside the Raspberry Pi and Behringer U-Control sound card, Toby used Pure Data, a multimedia visual programming language, and a Teensy 3.6 processor to complete the build. Together, these allow for playback of a plethora of sounds, which can either be internally stored, or externally introduced via audio connectors along the back.

This guy is finally taking shape. DIY looper/fx box/sample player/synth. #teensy #arduino #raspberrypi #puredata

98 Likes, 6 Comments – otem rellik (@otem_rellik) on Instagram: “This guy is finally taking shape. DIY looper/fx box/sample player/synth. #teensy #arduino…”

Delay, reverb, distortion, and more are controlled by sliders along one side, while pre-installed effects are selected and played via some rather beautiful SparkFun buttons on the other. Loop buttons, volume controls, and a repurposed Nintendo DS screen complete the interface.

Raspberry Pi Looper Guts

Thought I’d do a quick overview of the guts of my pi project. Seems like many folks have been interested in seeing what the internals look like.

Code for the looper can be found on Toby’s GitHub here. Make sure to continue to follow him via YouTube and Instagram for updates on the build, including these fancy new buttons.

Casting my own urethane knobs and drum pads from 3D printed molds! #3dprinted #urethanecasting #diy

61 Likes, 4 Comments – otem rellik (@otem_rellik) on Instagram: “Casting my own urethane knobs and drum pads from 3D printed molds! #3dprinted #urethanecasting #diy”

I got the music in me

If you want to get musical with a Raspberry Pi, but the thought of recreating Toby’s build is a little daunting, never fear! Our free GPIO Music Box resource will help get you started. And projects such as Mike Horne’s fabulous Raspberry Pi music box should help inspire you to take your build further.

Raspberry Pi Looper post image of Mike Horne's music box

Mike’s music box boasts wonderful flashy buttons and turny knobs for ultimate musical satisfaction!

If you use a Raspberry Pi in any sort of musical adventure, be sure to share your project in the comments below!

 

 

The post Raspberry Pi Looper-Synth-Drum…thing appeared first on Raspberry Pi.

New server, new luck

Post Syndicated from Боян Юруков original https://yurukov.net/blog/2017/nov-sarvar/

As I already wrote on social media, over the last few days I upgraded the server that hosts this and the rest of my sites. My hosting is still Superhosting.bg and I have no intention of moving. However, I had to upgrade the operating system of the server itself, and to do that I had to migrate all of my sites, as well as the processes that extract open data from the pages of the state administration.

The only thing that has changed for the blog is that it now runs over HTTPS, which means the connection is encrypted. Although there is no sensitive information on this site, moving to HTTPS is good practice in general. Glasuvam.org, for example, has had a certificate for a long time. I will soon add one to the rest of the sites, as well as to the mail server. The collection of data such as that for GovAlert.eu and for electricity production paused for about 10 minutes; the air pollution and birth rate data were not affected.

I cleaned a lot of things off the server – old sites, tests, trial installations, "temporary" files I had shared. You may therefore notice that some links no longer work. I will restore some of the files if I decide they are needed. I would be glad to hear from you if you notice problems with this site or the other services.

How The Intercept Outed Reality Winner

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/how-intercept-outed-reality-winner.html

Today, The Intercept released documents on election tampering from an NSA leaker. Later, the arrest warrant request for an NSA contractor named “Reality Winner” was published, showing how investigators tracked her down because she had printed out the documents and sent them to The Intercept. The document posted by The Intercept isn’t the original PDF file, but a PDF containing pictures of the printed version that were later scanned back in.

As the warrant says, she confessed when interviewed by the FBI. Had she not confessed, the documents would still have contained enough evidence to convict her: the printed document was digitally watermarked.

The problem is that most new printers print nearly invisible yellow dots that record exactly when and where any document was printed. Because the NSA logs all printing jobs on its printers, it can use this to match up precisely who printed the document.

In this post, I show how.

You can download the document from the original article here. You can then open it in a PDF viewer, such as the normal “Preview” app on macOS. Zoom into some whitespace on the document, and take a screenshot of it. On macOS, hit [Command-Shift-3] to take a screenshot of the entire screen. There are yellow dots in this image, but you can barely see them, especially if your screen is dirty.

We need to highlight the yellow dots. Open the screenshot in an image editor, such as the free “Paintbrush” program for macOS. Now use the option to “Invert Colors” in the image, to get something like this. You should see a roughly rectangular checkerboard pattern in the whitespace.

It’s upside down, so we need to rotate it 180 degrees, or flip-horizontal and flip-vertical:
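If you'd rather script the inversion and rotation than click through an image editor, ImageMagick can do both in one command. A sketch, with placeholder filenames:

convert screenshot.png -negate -rotate 180 dots.png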

Now we go to the EFF page and manually click on the pattern so that their tool can decode the meaning:

This produces the following result:

The document leaked by the Intercept was from a printer with model number 54, serial number 29535218. The document was printed on May 9, 2017 at 6:20. The NSA almost certainly has a record of who used the printer at that time.

The situation is similar to how Vice outed the location of John McAfee, by publishing JPEG photographs of him with the EXIF GPS coordinates still hidden in the file. Or it’s how PDFs are often redacted by adding a black bar on top of the image, leaving the underlying contents still in the file for people to read, such as in this NYTimes accident with a Snowden document. Or how opening a Microsoft Office document, then accidentally saving it, leaves fingerprints behind that identify you, as repeatedly happened with the Wikileaks election leaks. These sorts of failures are common with leaks. To avoid this yellow-dot problem, use a black-and-white printer, a black-and-white scanner, or convert to black-and-white with an image editor.

Copiers/printers have two features put in there by the government to be evil to you. The first is that scanners/copiers (when using scanner feature) recognize a barely visible pattern on currency, so that they can’t be used to counterfeit money, as shown on this $20 below:

The second is that when they print things out, they include these invisible dots, so documents can be tracked. In other words, those dots on bills prevent them from being scanned in, and the dots produced by printers help the government track what was printed out.

Yes, this code the government forces into our printers is a violation of our 3rd Amendment rights.


While I was writing up this post, these tweets appeared, beating me to it:


Comments:
https://news.ycombinator.com/item?id=14494818

“Only a year? It’s felt like forever”: a twelve-month retrospective

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/12-months-raspberry-pi/

This weekend saw my first anniversary at Raspberry Pi, and this blog marks my 100th post written for the company. It would have been easy to let one milestone or the other slide had they not come along hand in hand, begging for some sort of acknowledgement.

Alex, Matt, and Courtney in a punt on the Cam

The day Liz decided to keep me

So here it is!

Joining the crew

Prior to my position in the Comms team as Social Media Editor, my employment history was largely made up of retail sales roles and, before that, bit parts in theatrical backstage crews. I never thought I would work for the Raspberry Pi Foundation, despite its firm position on my Top Five Awesome Places I’d Love to Work list. How could I work for a tech company when my knowledge of tech stretched as far as dismantling my Game Boy when I was a kid to see how the insides worked, or being the one friend everyone went to when their phone didn’t do what it was meant to do? I never thought about the other side of the Foundation coin, or how I could find my place within the hidden workings that turned the cogs that brought everything together.

… when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive #change #dosomething

12 Likes, 1 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “… when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive…”

A little luck, a well-written though humorous resumé, and a meeting with Liz and Helen later, I found myself the newest member of the growing team at Pi Towers.

Ticking items off the Bucket List

I thought it would be fun to point out some of the chances I’ve had over the last twelve months and explain how they fit within the world of Raspberry Pi. After all, we’re about more than just a $35 credit card-sized computer. We’re a charitable Foundation made up of some wonderful and exciting projects, people, and goals.

High altitude ballooning (HAB)

Skycademy offers educators in the UK the chance to come to Pi Towers Cambridge to learn how to plan a balloon launch, build a payload with onboard Raspberry Pi and Camera Module, and provide teachers with the skills needed to take their students on an adventure to near space, with photographic evidence to prove it.

All the screens you need to hunt balloons. . We have our landing point and are now rushing to Thetford to find the payload in a field. . #HAB #RasppberryPi

332 Likes, 5 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “All the screens you need to hunt balloons. . We have our landing point and are now rushing to…”

I was fortunate enough to join Sky Captain James, along with Dan Fisher, Dave Akerman, and Steve Randell on a test launch back in August last year. Testing out new kit that James had still been tinkering with that morning, we headed to a field in Elsworth, near Cambridge, and provided Facebook Live footage of the process from payload build to launch…to the moment when our balloon landed in an RAF shooting range some hours later.

RAF firing range sign

“Can we have our balloon back, please, mister?”

Having enjoyed watching Blue Peter presenters send up a HAB when I was a child, I marked off the event on my bucket list with a bold tick, and I continue to show off the photographs from our Raspberry Pi as it reached near space.

Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning #space #wellspacekinda #ish #photography #uk #highaltitude

13 Likes, 2 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning…”

You can find more information on Skycademy here, plus more detail about our test launch day in Dan’s blog post here.

Dear Raspberry Pi Friends…

My desk is slowly filling with stuff: notes, mementoes, and trinkets that find their way to me from members of the community, both established and new to the life of Pi. There are thank you notes, updates, and more from people I’ve chatted to online as they explore their way around the world of Pi.

Letter of thanks to Raspberry Pi from a young fan

*heart melts*

By plugging myself into social media on a daily basis, I often find hidden treasures that go unnoticed due to the high volume of tags we receive on Facebook, Twitter, Instagram, and so on. Kids jumping off chairs in delight as they complete their first Scratch project, newcomers to the Raspberry Pi shedding a tear as they make an LED blink on their kitchen table, and seasoned makers turning their hobby into something positive to aid others.

It’s wonderful to join in the excitement of people discovering a new skill and exploring the community of Raspberry Pi makers: I’ve been known to shed a tear as a result.

Meeting educators at Bett, chatting to teen makers at makerspaces, and sharing a cupcake or three at the birthday party have been incredible opportunities to get to know you all.

You’re all brilliant.

The Queens of Robots, both shoddy and otherwise

Last year we welcomed the Queen of Shoddy Robots, Simone Giertz, to Pi Towers, where we chatted about making, charity, and space while wandering the colleges of Cambridge and hanging out with flat Tim Peake.

Queen of Robots @simonegiertz came to visit #PiTowers today. We hung out with cardboard @astro_timpeake and ate chelsea buns at @fitzbillies #Cambridge. . We also had a great talk about the educational projects of the #RaspberryPi team, #AstroPi and how not enough people realise we’re a #charity. . If you’d like to learn more about the Raspberry Pi Foundation and the work we do with #teachers and #education, check out our website – www.raspberrypi.org. . How was your day? Get up to anything fun?

597 Likes, 3 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “Queen of Robots @simonegiertz came to visit #PiTowers today. We hung out with cardboard…”

And last month, the wonderful Estefannie ‘Explains it All’ de La Garza came to hang out, make things, and discuss our educational projects.

Estefannie on Twitter

Ahhhh!!! I still can’t believe I got to hang out and make stuff at the @Raspberry_Pi towers!! Thank you thank you!!

Meeting such wonderful, exciting, and innovative YouTubers was a fantastic inspiration to work on my own projects and to try to do more to help others discover ways to connect with tech through their own interests.

Those ‘wow’ moments

Every Raspberry Pi project I see on a daily basis is awesome. The moment someone takes an idea and does something with it is, in my book, always worthy of awe and appreciation. Whether it be the aforementioned flashing LED, or sending Raspberry Pis to the International Space Station, if you have turned your idea into reality, I applaud you.

Some of my favourite projects over the last twelve months have not only made me say “Wow!”, they’ve also inspired me to want to do more with myself, my time, and my growing maker skill.

Museum in a Box on Twitter

Great to meet @alexjrassic today and nerd out about @Raspberry_Pi and weather balloons and @Space_Station and all things #edtech 🎈⛅🛰📚🤖

Projects such as Museum in a Box, a wonderful hands-on learning aid that brings the world to the hands of children across the globe, honestly made me tear up as I placed a miniaturised 3D-printed Virginia Woolf onto a wooden box and gasped as she started to speak to me.

Jill Ogle’s Let’s Robot project had me in awe as Twitch-controlled Pi robots tackled mazes, attempted to cut birthday cake, or swung to slap Jill in the face over webcam.

Jillian Ogle on Twitter

@SryAbtYourCats @tekn0rebel @Beam Lol speaking of faces… https://t.co/1tqFlMNS31

Every day I discover new, wonderful builds that both make me wish I’d thought of them first, and leave me wondering how they manage to make them work in the first place.

Space

We have Raspberry Pis in space. SPACE. Actually space.

Raspberry Pi on Twitter

New post: Mission accomplished for the European @astro_pi challenge and @esa @Thom_astro is on his way home 🚀 https://t.co/ycTSDR1h1Q

Twelve months later, this still blows my mind.

And let’s not forget…

  • The chance to visit both the Houses of Parliament and St James’s Palace

Raspberry Pi team at the Houses of Parliament

  • Going to a Doctor Who pre-screening and meeting Peter Capaldi, thanks to Clare Sutcliffe

There’s no need to smile when you’re #DoctorWho.

13 Likes, 2 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “There’s no need to smile when you’re #DoctorWho.”

We’re here. Where are you? . . . . . #raspberrypi #vidconeu #vidcon #pizero #zerow #travel #explore #adventure #youtube

1,944 Likes, 30 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “We’re here. Where are you? . . . . . #raspberrypi #vidconeu #vidcon #pizero #zerow #travel #explore…”

  • Making a GIF Cam and other builds, and sharing them with you all via the blog

Made a Gif Cam using a Raspberry Pi, Pi camera, button and a couple LEDs. . When you press the button, it takes 8 images and stitches them into a gif file. The files then appear on my MacBook. . Check out our Twitter feed (Raspberry_Pi) for examples! . Next step is to fit it inside a better camera body. . #DigitalMaking #Photography #Making #Camera #Gif #MakersGonnaMake #LED #Creating #PhotosofInstagram #RaspberryPi

19 Likes, 1 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “Made a Gif Cam using a Raspberry Pi, Pi camera, button and a couple LEDs. . When you press the…”

The next twelve months

Despite Eben jokingly firing me near-weekly across Twitter, or Philip giving me the ‘Dad glare’ when I pull wires and buttons out of a box under my desk to start yet another project, I don’t plan on going anywhere. Over the next twelve months, I hope to continue discovering awesome Pi builds, expanding on my own skills, and curating some wonderful projects for you via the Raspberry Pi blog, the Raspberry Pi Weekly newsletter, my submissions to The MagPi Magazine, and the occasional video interview or two.

It’s been a pleasure. Thank you for joining me on the ride!

The post “Only a year? It’s felt like forever”: a twelve-month retrospective appeared first on Raspberry Pi.

How to track that annoying pop-up

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/06/how-to-track-that-annoying-pop-up.html

In a recent update to their Office suite on Windows, Microsoft introduced a bug: every hour, for a fraction of a second, a black window pops up on the screen. This leads many to fear their system has been infected by a virus. I thought I’d document how to track this down.

The short answer is to use Mark Russinovich’s “Sysinternals” tools. He’s the Windows internals guru at Microsoft and maintains a suite of tools that are critical for Windows system maintenance and security. Copy all the tools from “https://live.sysinternals.com”; you can also copy them over Microsoft Windows Networking (SMB) from the same host.

Of these tools, what we want is something that looks at “processes”. Several tools do this, but most focus on processes that are currently running; what we want is something that monitors process creation.

The tool for that is “sysmon.exe”. It can monitor not only process creation, but also a large number of other system events that a techie can use to see what the system has been doing, and whether it is infected with a virus.

Sysmon has a fairly complicated configuration file, and if you enabled everything, you’d soon be overwhelmed with events. @SwiftOnSecurity has published a configuration file they use in real-world environments that cuts down on the noise and focuses on the events that really matter. It enables monitoring of “process creation”, but filters out known-good processes that might fill up your logs. Grab the file from the URL below and save it to the same directory where you saved Sysmon:

https://raw.githubusercontent.com/SwiftOnSecurity/sysmon-config/master/sysmonconfig-export.xml
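If you want to script the download, here is one way (a sketch; run it from the directory where you saved sysmon.exe):

# Sketch: download @SwiftOnSecurity's Sysmon configuration next to sysmon.exe.
Invoke-WebRequest -Uri 'https://raw.githubusercontent.com/SwiftOnSecurity/sysmon-config/master/sysmonconfig-export.xml' -OutFile 'sysmonconfig-export.xml'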

Once you’ve done that, activate the Sysmon monitoring service with this configuration file by running the following command as Administrator. (Right-click on the Command Prompt icon and select More/Run as Administrator.)

sysmon.exe -accepteula -i sysmonconfig-export.xml

Now sit back and relax until that popup happens again. Right after it does, go into the “Event Viewer” application (click on the Windows menu and type “Event Viewer”, or run ‘eventvwr.exe’). Now you have to find where the Sysmon events are located, since so many things log events.

The Sysmon events are under the path:

Applications and Services Logs\Microsoft\Windows\Sysmon\operational

When you open that up, the event we are looking for should be right near the top. (The very top event is actually the launch of “eventvwr.exe” itself; the next one down is ours.) It looks like this:

Drilling down into the details, we find that the offending thing causing those annoying popups is “officebackgroundtask.exe” in Office.
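You can also pull the same events from PowerShell instead of clicking through Event Viewer. A sketch; Sysmon event ID 1 is process creation, and the process name is the one we just identified:

# Sketch: search recent Sysmon process-creation events for the offender.
Get-WinEvent -LogName 'Microsoft-Windows-Sysmon/Operational' -MaxEvents 200 |
    Where-Object { $_.Id -eq 1 -and $_.Message -match 'officebackgroundtask' } |
    Select-Object TimeCreated, Message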

We can see it’s started by the “Schedule” service. This means we can go look at it with “autoruns.exe”, another Sysinternals tool that looks at all the things configured to automatically start when you start/login to your computer.

They are pink, which [update] is how autoruns shows they are “unsigned” programs (Microsoft’s programs should, normally, always be signed, so this should be suspicious). I’m assuming the suspicious thing is that they run in the user’s context, rather than system context, creating popup screens.

Autoruns allows you to do a bunch of things. You can click on the [X] box and disable it from running in the future. You can [right-click] in order to upload to Virus Total and check if it’s a known virus.

You can also double-click, to open the Task Scheduler, and see the specific configuration. You can see here that this thing is scheduled to run every hour:

Conclusion

So the conclusions are these.
To solve this particular problem of identifying what’s causing a process to flash a screen occasionally, use Sysmon.
To solve general problems like this, use the Sysinternals suite of applications.
I haven’t been, but I am now, using @SwiftOnSecurity’s Sysmon configuration just to monitor the security of my computers. I should probably also install something to move a copy of the logs off the system.


YouTube live-streaming made easy

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/youtube-live-streaming-docker/

Looking to share your day, event, or the observations of your nature box live on the internet via a Raspberry Pi? Then look no further, for Alex Ellis has all you need to get started with YouTube live-streaming from your Pi.

YouTube live-streaming Docker Raspberry Pi

The YouTube live dashboard. Image c/o Alex Ellis

If you spend any time on social media, be it Facebook, Instagram, YouTube, or Twitter, chances are you’ve been notified of someone ‘going live’.

Live-streaming video on social platforms has become almost ubiquitous, whether it’s content by brands, celebrities, or your cousin or nan – everyone is doing it.

Even us!

Live from Pi Towers – Welcome

Carrie Anne and Alex offer up a quick tour of the Pi Towers lobby while trying to figure out how Facebook Live video works.

YouTube live-streaming with Alex Ellis and Docker

In his tutorial, Alex demonstrates an easy, straightforward approach to live-streaming via a Raspberry Pi with the help of a Docker image of FFmpeg he has built. He says that with the image, instead of “having to go through lots of manual steps, we can type in a handful of commands and get started immediately.”

Why is the Docker image so helpful?

As Alex explains on his blog, if you want to manually configure your Raspberry Pi Zero for YouTube live-streaming, you need to dedicate more than a few hours of your day.

Normally this would have involved typing in many manual CLI commands and waiting up to 9 hours for some video encoding software (ffmpeg) to compile itself.

Get anything wrong (like Alex did the first time) and you have to face another nine hours of compilation time before you’re ready to start streaming – not ideal if your project is time-sensitive.

Alex Ellis on Twitter

See you in 8-12 hours? Building ffmpeg on a my @Raspberry_Pi #pizero with @docker

Using the Docker image

In his tutorial, Alex uses a Raspberry Pi Zero and advises that the project will work with either Raspbian Jessie Lite or PIXEL. Once you’ve installed Docker, you can pull the FFmpeg image he has created directly to your Pi from the Docker Hub. (We advise that while doing so, you should feel grateful to Alex for making the image available and saving you so much time.)

It goes without saying that you’ll need a YouTube account in order to live-stream to YouTube; go to the YouTube live streaming dashboard to obtain a streaming key.
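In outline, the workflow looks something like this. It is only a sketch with placeholders; the exact image name and run flags come from Alex's guide:

# Placeholders throughout: substitute the image from Alex's guide and your own stream key.
docker pull <ffmpeg-image-from-alexs-guide>
docker run -ti <ffmpeg-image-from-alexs-guide> <your-youtube-stream-key>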

Alex Ellis on Twitter

Get live streaming to @YouTube with this new weekend project and guide using your @Raspberry_Pi and @docker. https://t.co/soqZ9D9jbS

For a comprehensive breakdown of how to stream to YouTube via a Raspberry Pi, head to Alex’s blog. You’ll also find plenty of other Raspberry Pi projects there to try out.

Why live-stream from a Raspberry Pi?

We see more and more of our community members build Raspberry Pi projects that involve video capture. The minute dimensions of the Raspberry Pi Zero and Zero W make them ideal for fitting into robots, nature boxes, dash cams, and more. What better way to get people excited about your video than to share it with them live?

If you have used a Raspberry Pi to capture or stream footage, make sure to link to your project in the comments below. And if you give Alex’s Docker image a go, do let us know how you get on.

The post YouTube live-streaming made easy appeared first on Raspberry Pi.