Tag Archives: Uncategorized

Changes to our sending review process

Post Syndicated from Dustin Taylor original https://aws.amazon.com/blogs/messaging-and-targeting/changes-to-our-sending-review-process/

We’re changing some of the language we use in our sending review process to make our communications clearer and more helpful.

If you’re not familiar with the sending review process, it refers to the actions that we take when there are issues with the email sent from an Amazon SES or Amazon Pinpoint account. Usually, these issues are a result of senders making honest mistakes. However, when email providers receive problematic email from a sender, they can’t tell if the sender made a mistake, or if they’re doing something malicious. If an email provider detects a problem that’s severe enough, they might block all incoming email from the sender’s IP address. If that happens, email sent from other senders who use the same IP address is blocked as well.

For this reason, we look for certain patterns and behaviors that could cause deliverability problems, and then work with our customers to help resolve the issues with the email sent from their accounts. We used to call this our enforcement process, but we now refer to it as our sending review process. This name is a much better description of the process (not to mention a bit friendlier).

You might notice some other changes as well. When the reputation metrics for an account (such as the account’s bounce or complaint rate) exceed certain levels, or another issue occurs that could impact the reputation of that account, we’ll monitor the email sending behaviors of that account for a certain period of time. During this time, we make a note of whether the problem gets better or worse. Previously, this period was called a probation period; we now call it a review period.

If an account is under review, but the sender isn’t able to correct the issue before the end of the review period, we’ll temporarily disable the account’s ability to send any more email. We take this action to protect the reputation of the sender, and to ensure that other customers can send email without experiencing deliverability issues. We used to call this a suspension, but that name seemed very permanent and punitive. We now refer to these events as sending pauses, because in the majority of cases, they’re temporary and reversible.

Finally, if a sender disagrees with our decision to place a review period or sending pause on their account, they can contact us to tell us why they believe we made this decision in error. This used to be known as an appeal, but we now call it a review.

If we ever change the status of your account, such as by implementing a review period or sending pause, we’ll contact you by email at the address associated with your AWS account. We recommend that you make sure that we have the right email address. For information about changing the email address associated with your AWS account, see Managing an AWS Account in the AWS Billing and Cost Management User Guide.

In addition to sending you a notification by email, we’ll also update the reputation dashboard in the Amazon SES console to show the current status of your account. To learn more about the reputation dashboard, see Using the Reputation Dashboard in the Amazon SES Developer Guide.

Learn about AWS Services and Solutions – September AWS Online Tech Talks

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/learn-about-aws-services-and-solutions-september-aws-online-tech-talks/

AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. Join us this month to learn about AWS services and solutions. We’ll have experts online to help answer any questions you may have.

Featured this month is our first ever fireside chat discussion. Join Debanjan Saha, General Manager of Amazon Aurora and Amazon RDS, to learn how customers are using our relational database services and leveraging database innovations.

Register today!

Note – All sessions are free, and all times listed are in Pacific Time.

Tech talks featured this month:

Compute

September 24, 2018 | 09:00 AM – 09:45 AM PT – Accelerating Product Development with HPC on AWS – Learn how you can accelerate product development by harnessing the power of high performance computing on AWS.

September 26, 2018 | 09:00 AM – 10:00 AM PT – Introducing New Amazon EC2 T3 Instances – General Purpose Burstable Instances – Learn about new Amazon EC2 T3 instance types and how they can be used for various use cases to lower infrastructure costs.

September 27, 2018 | 09:00 AM – 09:45 AM PT – Hybrid Cloud Customer Use Cases on AWS: Part 2 – Learn about popular hybrid cloud customer use cases on AWS.

Containers

September 19, 2018 | 11:00 AM – 11:45 AM PT – How Talroo Used AWS Fargate to Improve their Application Scaling – Learn how Talroo, a data-driven solution for talent and jobs, migrated their applications to AWS Fargate so they can run their application without worrying about managing infrastructure.

Data Lakes & Analytics

September 17, 2018 | 11:00 AM – 11:45 AM PT – Secure Your Amazon Elasticsearch Service Domain – Learn about the multi-level security controls provided by Amazon Elasticsearch Service (Amazon ES) and how to set the security for your Amazon ES domain to prevent unauthorized data access.

September 20, 2018 | 11:00 AM – 12:00 PM PT – New Innovations from Amazon Kinesis for Real-Time Analytics – Learn about the new innovations from Amazon Kinesis for real-time analytics.

Databases

September 17, 2018 | 01:00 PM – 02:00 PM PT – Applied Live Migration to DynamoDB from Cassandra – Learn how to migrate a live Cassandra-based application to DynamoDB.

September 18, 2018 | 11:00 AM – 11:45 AM PT – Scaling Your Redis Workloads with Redis Cluster – Learn how Redis cluster with Amazon ElastiCache provides scalability and availability for enterprise workloads.

Featured: September 20, 2018 | 09:00 AM – 09:45 AM PT – Fireside Chat: Relational Database Innovation at AWS – Join our fireside chat with Debanjan Saha, GM, Amazon Aurora and Amazon RDS, to learn how customers are using our relational database services and leveraging database innovations.

DevOps

September 19, 2018 | 09:00 AM – 10:00 AM PT – Serverless Application Debugging and Delivery – Learn how to bring traditional best practices to serverless application debugging and delivery.

Enterprise & Hybrid

September 26, 2018 | 11:00 AM – 12:00 PM PT – Transforming Product Development with the Cloud – Learn how to transform your development practices with the cloud.

September 27, 2018 | 11:00 AM – 12:00 PM PT – Fueling High Performance Computing (HPC) on AWS with GPUs – Learn how you can accelerate time-to-results for your HPC applications by harnessing the power of GPU-based compute instances on AWS.

IoT

September 24, 2018 | 01:00 PM – 01:45 PM PT – Manage Security of Your IoT Devices with AWS IoT Device Defender – Learn how AWS IoT Device Defender can help you manage the security of IoT devices.

September 26, 2018 | 01:00 PM – 02:00 PM PT – Over-the-Air Updates with Amazon FreeRTOS – Learn how to execute over-the-air updates on connected microcontroller-based devices with Amazon FreeRTOS.

Machine Learning

September 17, 2018 | 09:00 AM – 09:45 AM PT – Build Intelligent Applications with Machine Learning on AWS – Learn how to accelerate development of AI applications using machine learning on AWS.

September 18, 2018 | 09:00 AM – 09:45 AM PT – How to Integrate Natural Language Processing and Elasticsearch for Better Analytics – Learn how to process, analyze and visualize data by pairing Amazon Comprehend with Amazon Elasticsearch.

September 20, 2018 | 01:00 PM – 01:45 PM PT – Build, Train and Deploy Machine Learning Models on AWS with Amazon SageMaker – Dive deep into building, training, & deploying machine learning models quickly and easily using Amazon SageMaker.

Management Tools

September 19, 2018 | 01:00 PM – 02:00 PM PT – Automated Windows and Linux Patching – Learn how AWS Systems Manager can help reduce data breach risks across your environment through automated patching.

re:Invent

September 12, 2018 | 08:00 AM – 08:30 AM PT – Episode 5: Deep Dive with Our Community Heroes and Jeff Barr – Get the insider secrets with top recommendations and tips for re:Invent 2018 from AWS community experts.

Security, Identity, & Compliance

September 24, 2018 | 11:00 AM – 12:00 PM PT – Enhanced Security Analytics Using AWS WAF Full Logging – Learn how to use AWS WAF security incident logs to detect threats.

September 27, 2018 | 01:00 PM – 02:00 PM PT – Threat Response Scenarios Using Amazon GuardDuty – Discover methods for operationalizing your threat detection using Amazon GuardDuty.

Serverless

September 18, 2018 | 01:00 PM – 02:00 PM PT – Best Practices for Building Enterprise Grade APIs with Amazon API Gateway – Learn best practices for building and operating enterprise-grade APIs with Amazon API Gateway.

Storage

September 25, 2018 | 09:00 AM – 10:00 AM PT – Ditch Your NAS! Move to Amazon EFS – Learn how to move your on-premises file storage to Amazon EFS.

September 25, 2018 | 11:00 AM – 12:00 PM PT – Deep Dive on Amazon Elastic File System (EFS): Scalable, Reliable, and Elastic File Storage for the AWS Cloud – Get live demos and learn tips & tricks for optimizing your file storage on EFS.

September 25, 2018 | 01:00 PM – 01:45 PM PT – Integrating File Services to Power Your Media & Entertainment Workloads – Learn how AWS file services deliver high performance shared file storage for media & entertainment workflows.

Autonomous drones (only slightly flammable)

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/autonomous-drones-only-slightly-flammable/

I had an email a little while ago, which opened: “I don’t know if you remember me, but…”

As it happens, I remembered Andy Baker very well, in large part because an indoor autonomous drone demo he ran at a Raspberry Pi birthday party a couple of years ago ACTUALLY CAUGHT FIRE. Here’s a refresher.

Raspberry Pi Party Autonomous drone demo + fire

At the Raspberry Pi IV party there was a great demo of an autonomous drone, which is very impressive using only a Pi. However, it caught on fire. But I believe it does actually work.

We’ve been very careful since then to make sure that speakers are always accompanied by a fire extinguisher.

I love stories like Andy’s. He started working with the Raspberry Pi shortly after our first release in 2012, and had absolutely no experience with drones or programming them; there’s nothing more interesting than watching someone go from a standing start to something really impressive. It’s been a couple of years since we were last in touch, but Andy mailed me last week to let me know he’s just completed his piDrone project, after years of development. I thought you’d like to hear about it too. Over to Andy!

Building an autonomous drone from scratch

I suffer from “terminal boredom syndrome”; I always need a challenging hobby to keep me sane. In 2012, the Raspberry Pi was launched just as my previous hobby had come to an end. After six months of playing (including a Raspberry Pi version of a BBC Micro Turtle robot I did at school 30+ years ago), I was looking for something really challenging. DIY drones were emerging, so I set out making one with a Raspberry Pi and Python, from absolute ignorance but loads of motivation.  Six years later, with only one fire (at the Raspberry Pi 4th Birthday Party, no less!), the job is done.

Here’s smaller Zoë, larger Hermione and their remote-controller, Ivy:

Zoë (as in “Ball”), the smallest drone, is based on a Pi Zero W, supporting preset- and manual-flight controls. Hermione (as in “Granger”) is a Pi 3 drone, supporting the above along with GPS and obstacle-avoidance.

Penelope (as in “Pitstop”), not shown above, is a Pi 3B+ drone with a mix of the two above.

Development history

It probably took four years(!) to get the drone simply to hover stably for more than a few seconds. For example, the accelerometer (IMU) reports gravity and acceleration in 3D, and from some maths you can derive angles, speed and distance. But IMU output is very noisy. It drifts with temperature, and because gravity is huge compared to the changes produced by the propellers, it doesn’t take long before the calculated speed and distance values drift significantly. It took a lot of time, experimentation and guesswork to get the accelerometer, gyroscope, ground-facing LiDAR and a Raspberry Pi camera to work together to achieve a stable hover for minutes rather than seconds. And during that experimentation there were plenty of crashes: replacement parts were needed many, many times! However, with a sixty-second stable hover finally working, adding cool features like GPS tracking, obstacle avoidance and human control was trivial in comparison.

GNSS waypoint tracked successfully!

See http://blog.pistuffing.co.uk/whoohoo/

Obstruction avoidance test 2 – PASSED!!!!!

Details at http://pidrone.io/posts/obstruction-avoidance-test-2-passed/

Human control (iPhone)

See http://pidrone.io/posts/human-i-am-human/

In passing, I’m a co-founder and assistant at the Cotswold Raspberry Jam (cotswoldjam.org). I’m hoping to take Zoë to the next event on September 15th – tickets are free – and there’s so much more to learn, interact and play with beyond the piDrone.

Finally, a few years ago, my goal became getting the piDrone exploring a maze: all but minor tweaks are now in place. Sadly, piDrone battery power for exploring a large maze currently doesn’t exist. Perhaps my next project will be designing a nuclear-fusion battery pack? Deuterium oxide (heavy water) is surprisingly cheap, it seems…

More resources

If you want to learn more, there’s years of development on Andy’s blog at http://pidrone.io, and he’s made considerable documentation available at GitHub if you want to explore things further after this blog post. Thanks Andy!

The post Autonomous drones (only slightly flammable) appeared first on Raspberry Pi.

Learn to write games for the BBC Micro with Eben

Post Syndicated from Eben Upton original https://www.raspberrypi.org/blog/learn-to-write-games-for-the-bbc-micro-with-eben/

Long-time fans of the Raspberry Pi will know that we were inspired to make a programmable computer for kids by our own experiences with a machine called the BBC Micro, which many of us learned with in the 1980s.

This post is the first of what’s going to be an irregular series where I’ll walk you through building the sort of game we used to play when we were kids. You’ll need a copy of BeebEm (scroll down for a Linux port if you’re using a Pi – but this tutorial can be carried out on a PC or Mac as well as on an original BBC Micro if you have access to one).

I’m going to be presenting the next game in this series, tentatively titled Eben Goes Skiing, at the Centre for Computing History in Cambridge at 2pm this afternoon – head on down if you’d like to learn how to make scrolling ASCII moguls.

Helicopter tutorial

We’re going to build a simple helicopter game in BBC BASIC. This will demonstrate a number of neat features, including user-defined characters, non-blocking keyboard input using INKEY, and positioning text and graphics using PRINT TAB.

Let’s start with user-defined characters. These provide us with an easy way to create a monochrome 8×8-pixel image by typing in 8 small numbers. As an example, let’s look at our helicopter sprite:

Each column pixel position in a row is “worth” a different power of 2, from 1 for the rightmost pixel up to 128 for the leftmost. To generate our 8 numbers, we process one row at a time, adding up the value for each occupied pixel position. We can now create custom character number 226 using the VDU 23 command. To display the character, we change to a graphics mode using the MODE command and display it using the PRINT command.
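
If you’d rather not add up the pixel values by hand, a short Python helper can do the sum for you. This is just a sketch; the rows below reproduce the helicopter character 226, so you can check its output against line 70.

# Convert an 8x8 bitmap into the eight byte values used by VDU 23.
# The leftmost column is worth 128, the rightmost 1.
rows = [
    "........",
    "#####...",
    "..#.....",
    ".###.#..",
    ".######.",
    ".###.#..",
    ".###....",
    "........",
]

def row_value(row):
    # Add the power of two for every set pixel in the row.
    return sum(128 >> i for i, ch in enumerate(row) if ch == "#")

print(", ".join(str(row_value(r)) for r in rows))
# Prints: 0, 248, 32, 116, 126, 116, 112, 0 -- the values used in line 70 below.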

Type the following:

10MODE 2

70VDU 23,226,0,248,32,116,126,116,112,0

RUN

PRINT CHR$(226)

You should see the little helicopter on the screen just above your prompt. Let’s define some more characters for our game, with character numbers 224 through 229. These represent leftward and rightward flying birds, a rightward flying helicopter, the surface of the sea, and a landing pad.

Type the following:

50VDU 23,224,0,14,12,104,16,28,8,0

60VDU 23,225,0,112,48,22,8,56,16,0

80VDU 23,227,0,31,4,46,126,46,14,0

90VDU 23,228,0,102,255,255,255,255,255,255

100VDU 23,229,255,255,0,0,0,0,0,0

Try running your program and using PRINT to view the new characters!

Now we’re ready to use our sea and platform characters to build the game world. Mode 2 on the BBC Micro has 20 character positions across, and 32 down. We’ll draw 20 copies of the sea character in row 30 (remember, rows and columns are numbered from zero) using a FOR loop and the PRINT TAB command, and pick a random position for the platform using the RND() function.

Type the following:

110FOR I%=0 TO 19

120PRINT TAB(I%,30) CHR$(228);

130NEXT

140P%=RND(20)-1

150PRINT TAB(P%,30) CHR$(229);

RUN

You should see something like this:

Don’t worry about that cursor and prompt: they won’t show up in the finished game.

It’s time to add the helicopter. We’ll create variables X% and Y% to hold the position of the helicopter, and Z% to tell us if it last moved left or right. We’ll initialise X% to a random position, Y% to the top of the screen, and Z% to zero, meaning “left”. We can use PRINT TAB again to draw the helicopter (either character 226 or 227 depending on Z%) at its current position. The whole thing is wrapped up in a REPEAT loop, which keeps executing until the helicopter reaches the ground (in row 29).

Type the following:

160X%=RND(20)-1:Y%=0:Z%=0

180REPEAT

260PRINT TAB(X%,Y%) CHR$(226+Z%);

290UNTIL Y%=29

RUN

You’ll see the helicopter sitting at the top of the screen.

We’re almost there: let’s give our helicopter the ability to move left, right and down. On each trip round the loop, we move down one row, and use the INKEY() function to read the Z and X keys on the keyboard. If Z is pressed, and we’re not already at the left of the screen, we move one column left. If X is pressed, and we’re not already at the right of the screen, we move one column right.

Type the following:

210IF INKEY(-98) AND X%>0 THEN X%=X%-1:Z%=0

220IF INKEY(-67) AND X%<19 THEN X%=X%+1:Z%=1

230Y%=Y%+1

RUN

You should see something like this:

The game is much, much too fast to control, and the helicopter leaves trails: not surprising, as we didn’t do anything to erase the previous frame. Let’s use PRINT TAB to place a “space” character over the previous position of the helicopter, and add an empty FOR loop to slow things down a bit.

Type the following:

190PRINT TAB(X%,Y%) " ";

280FOR I%=1 TO 200:NEXT

RUN

Much better! This is starting to feel like a real game. Let’s finish it off by:

  • Adding a bird that flies back and forth
  • Detecting whether you hit the pad or not
  • Getting rid of the annoying cursor using a “magic” VDU 23 command
  • Putting an outer loop in to let you play again

Type the following:

20REPEAT

30CLS

40VDU 23,1,0;0;0;0;

170A%=RND(18):B%=10:C%=RND(2)-1

200PRINT TAB(A%,B%) " ";

240A%=A%+2*C%-1

250IF A%=0 OR A%=19 THEN C%=1-C%

270PRINT TAB(A%,B%) CHR$(224+C%);

300IF X%=P% PRINT TAB(6,15) "YOU WIN" ELSE PRINT TAB(6,15) "YOU LOSE"

310PRINT TAB(4,16) "PRESS SPACE"

320REPEAT UNTIL INKEY(-99)

330UNTIL FALSE

RUN

And here it is in all its glory.

You might want to try adding some features to the game: collision with the bird, things to collect, vertical scrolling. The sky’s the limit!

I created a full version of the game, using graphics from our very own Sam Alder, for the Hackaday 1K challenge; you can find it here.

Appendix

Here’s the full source for the game in one block. If you get errors when you run your code, type:

MODE 0
LIST

And compare the output very carefully with what you see here.

10MODE 2
20REPEAT
30CLS
40VDU 23,1,0;0;0;0;
50VDU 23,224,0,14,12,104,16,28,8,0   
60VDU 23,225,0,112,48,22,8,56,16,0
70VDU 23,226,0,248,32,116,126,116,112,0
80VDU 23,227,0,31,4,46,126,46,14,0
90VDU 23,228,0,102,255,255,255,255,255,255
100VDU 23,229,255,255,0,0,0,0,0,0
110FOR I%=0 TO 19
120PRINT TAB(I%,30) CHR$(228);
130NEXT
140P%=RND(20)-1
150PRINT TAB(P%,30) CHR$(229);
160X%=RND(20)-1:Y%=0:Z%=0
170A%=RND(18):B%=10:C%=RND(2)-1
180REPEAT
190PRINT TAB(X%,Y%) " ";
200PRINT TAB(A%,B%) " ";  
210IF INKEY(-98) AND X%>0 THEN X%=X%-1:Z%=0  
220IF INKEY(-67) AND X%<19 THEN X%=X%+1:Z%=1
230Y%=Y%+1
240A%=A%+2*C%-1
250IF A%=0 OR A%=19 THEN C%=1-C%
260PRINT TAB(X%,Y%) CHR$(226+Z%);
270PRINT TAB(A%,B%) CHR$(224+C%);
280FOR I%=1 TO 200:NEXT
290UNTIL Y%=29
300IF X%=P% PRINT TAB(6,15) "YOU WIN" ELSE PRINT TAB(6,15) "YOU LOSE"
310PRINT TAB(4,16) "PRESS SPACE"
320REPEAT UNTIL INKEY(-99)
330UNTIL FALSE


The post Learn to write games for the BBC Micro with Eben appeared first on Raspberry Pi.

How to use AWS Secrets Manager to rotate credentials for all Amazon RDS database types, including Oracle

Post Syndicated from Apurv Awasthi original https://aws.amazon.com/blogs/security/how-to-use-aws-secrets-manager-rotate-credentials-amazon-rds-database-types-oracle/

You can now use AWS Secrets Manager to rotate credentials for Oracle, Microsoft SQL Server, or MariaDB databases hosted on Amazon Relational Database Service (Amazon RDS) automatically. Previously, I showed how to rotate credentials for a MySQL database hosted on Amazon RDS automatically with AWS Secrets Manager. With today’s launch, you can use Secrets Manager to automatically rotate credentials for all types of databases hosted on Amazon RDS.

In this post, I review the key features of Secrets Manager. You’ll then learn:

  1. How to store the database credential for the superuser of an Oracle database hosted on Amazon RDS
  2. How to store the Oracle database credential used by an application
  3. How to configure Secrets Manager to rotate both Oracle credentials automatically on a schedule that you define

Key features of Secrets Manager

AWS Secrets Manager makes it easier to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. The key features of this service include the ability to:

  1. Secure and manage secrets centrally. You can store, view, and manage all your secrets centrally. By default, Secrets Manager encrypts these secrets with encryption keys that you own and control. You can use fine-grained IAM policies or resource-based policies to control access to your secrets. You can also tag secrets to help you discover, organize, and control access to secrets used throughout your organization.
  2. Rotate secrets safely. You can configure Secrets Manager to rotate secrets automatically without disrupting your applications. Secrets Manager offers built-in integrations for rotating credentials for all Amazon RDS databases (MySQL, PostgreSQL, Oracle, Microsoft SQL Server, MariaDB, and Amazon Aurora). You can also extend Secrets Manager to meet your custom rotation requirements by creating an AWS Lambda function to rotate other types of secrets; a sketch of such a function appears after this list.
  3. Transmit securely. Secrets are transmitted securely over Transport Layer Security (TLS) protocol 1.2. You can also use Secrets Manager with Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by AWS PrivateLink to keep this communication within the AWS network and help meet your compliance and regulatory requirements to limit public internet connectivity.
  4. Pay as you go. Pay for the secrets you store in Secrets Manager and for the use of these secrets; there are no long-term contracts, licensing fees, or infrastructure and personnel costs. For example, a typical production-scale web application will generate an estimated monthly bill of $6. If you follow along with the instructions in this blog post, your estimated monthly bill for Secrets Manager will be $1. Note: you may incur additional charges for using Amazon RDS and AWS Lambda, if you’ve already consumed the free tier for these services.
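
As an illustration of the custom rotation point above, here is a minimal, hypothetical skeleton of a rotation Lambda function written in Python. The four rotation steps (createSecret, setSecret, testSecret, finishSecret) follow the pattern Secrets Manager uses when invoking a rotation function; the bodies are placeholders you would replace with logic for your own secret type, so treat this as a sketch rather than a drop-in implementation.

import boto3

secretsmanager = boto3.client("secretsmanager")

def lambda_handler(event, context):
    # Secrets Manager invokes the function with the secret ARN, a version
    # token, and the current rotation step.
    arn = event["SecretId"]
    token = event["ClientRequestToken"]
    step = event["Step"]

    if step == "createSecret":
        # Generate a new secret value and stage it as AWSPENDING.
        pass
    elif step == "setSecret":
        # Apply the AWSPENDING value to the target service or database.
        pass
    elif step == "testSecret":
        # Verify that the AWSPENDING value works before promoting it.
        pass
    elif step == "finishSecret":
        # Move the AWSCURRENT staging label to the new version.
        pass
    else:
        raise ValueError("Unknown rotation step: " + step)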

Now that you’re familiar with Secrets Manager features, I’ll show you how to store and automatically rotate credentials for an Oracle database hosted on Amazon RDS. I divided these instructions into three phases:

  1. Phase 1: Store and configure rotation for the superuser credential
  2. Phase 2: Store and configure rotation for the application credential
  3. Phase 3: Retrieve the credential from Secrets Manager programmatically

Prerequisites

To follow along, your AWS Identity and Access Management (IAM) principal (user or role) requires the SecretsManagerReadWrite AWS managed policy to store the secrets. Your principal also requires the IAMFullAccess AWS managed policy to create and configure permissions for the IAM role used by Lambda for executing rotations. You can use IAM permissions boundaries to grant an employee the ability to configure rotation without also granting them full administrative access to your account.

Phase 1: Store and configure rotation for the superuser credential

From the Secrets Manager console, on the right side, select Store a new secret.

Since I’m storing credentials for a database hosted on Amazon RDS, I select Credentials for RDS database. Next, I input the user name and password for the superuser. I start by securing the superuser because it’s the most powerful database credential and has full access to the database.
 

Figure 1: For “Select secret type,” choose “Credentials for RDS database”

For this example, I choose to use the default encryption settings. Secrets Manager will encrypt this secret using the Secrets Manager DefaultEncryptionKey in this account. Alternatively, I can choose to encrypt using a customer master key (CMK) that I have stored in AWS Key Management Service (AWS KMS). To learn more, read the Using Your AWS KMS CMK documentation.
 

Figure 2: Choose either DefaultEncryptionKey or use a CMK

Next, I view the list of Amazon RDS instances in my account and select the database this credential accesses. For this example, I select the DB instance oracle-rds-database from the list, and then I select Next.

I then specify values for Secret name and Description. For this example, I use Database/Development/Oracle-Superuser as the name and enter a description of this secret, and then select Next.
 

Figure 3: Provide values for “Secret name” and “Description”

Since this database is not yet being used, I choose to enable rotation. To do so, I select Enable automatic rotation, and then set the rotation interval to 60 days. Remember, if this database credential is currently being used, first update the application (see phase 3) to use Secrets Manager APIs to retrieve secrets before enabling rotation.
 

Figure 4: Select “Enable automatic rotation”

Next, Secrets Manager requires permissions to rotate this secret on my behalf. Because I’m storing the credentials for the superuser, Secrets Manager can use this credential to perform rotations. Therefore, on the same screen, I select Use a secret that I have previously stored in AWS Secrets Manager, and then select Next.

Finally, I review the information on the next screen. Everything looks correct, so I select Store. I have now successfully stored a secret in Secrets Manager.

Note: Secrets Manager will now create a Lambda function in the same VPC as my Oracle database and trigger this function periodically to change the password for the superuser. I can view the name of the Lambda function on the Rotation configuration section of the Secret Details page.

The banner on the next screen confirms that I’ve successfully configured rotation and the first rotation is in progress, which enables me to verify that rotation is functioning as expected. Secrets Manager will rotate this credential automatically every 60 days.
 

Figure 5: The confirmation notification
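
If you prefer to verify the rotation configuration from code rather than the console, a quick check with the Python SDK (boto3) might look like the following sketch; the secret name matches the one used above, so adjust it for your own account.

import boto3

client = boto3.client("secretsmanager")

# Fetch the rotation configuration for the superuser secret stored above.
details = client.describe_secret(SecretId="Database/Development/Oracle-Superuser")

print("Rotation enabled:", details.get("RotationEnabled", False))
print("Rotation interval (days):",
      details.get("RotationRules", {}).get("AutomaticallyAfterDays"))
print("Rotation Lambda ARN:", details.get("RotationLambdaARN"))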

Phase 2: Store and configure rotation for the application credential

The superuser is a powerful credential that should be used only for administrative tasks. To enable your applications to access a database, create a unique database credential per application and grant these credentials limited permissions. You can use these database credentials to read or write to database tables required by the application. As a security best practice, deny the ability to perform management actions, such as creating new credentials.

In this phase, I will store the credential that my application will use to connect to the Oracle database. To get started, from the Secrets Manager console, on the right side, select Store a new secret.

Next, I select Credentials for RDS database, and input the user name and password for the application credential.

I continue to use the default encryption key. I select the DB instance oracle-rds-database, and then select Next.

I specify values for Secret Name and Description. For this example, I use Database/Development/Oracle-Application-User as the name and enter a description of this secret, and then select Next.

I now configure rotation. Once again, since my application is not using this database credential yet, I’ll configure rotation as part of storing this secret. I select Enable automatic rotation, and set the rotation interval to 60 days.

Next, Secrets Manager requires permissions to rotate this secret on behalf of my application. Earlier in the post, I mentioned that application credentials have limited permissions and are unable to change their own password. Therefore, I will use the superuser credential, Database/Development/Oracle-Superuser, that I stored in Phase 1 to rotate the application credential. With this configuration, Secrets Manager creates a clone application user.
 

Figure 6: Select the superuser credential

Note: Creating a clone application user is the preferred mechanism of rotation because the old version of the secret continues to operate and handle service requests while the new version is prepared and tested. There’s no application downtime while changing between versions.

I review the information on the next screen. Everything looks correct, so I select Store. I have now successfully stored the application credential in Secrets Manager.

As mentioned in Phase 1, AWS Secrets Manager creates a Lambda function in the same VPC as the database and then triggers this function periodically to rotate the secret. Since I chose to use the existing superuser secret to rotate the application secret, I will grant the rotation Lambda function permissions to retrieve the superuser secret. To grant this permission, I first select role from the confirmation banner.
 

Figure 7: Select the “role” link that’s in the confirmation notification

Next, in the Permissions tab, I select SecretsManagerRDSMySQLRotationMultiUserRolePolicy0. Then I select Edit policy.
 

Figure 8: Edit the policy on the “Permissions” tab

In this step, I update the policy (see below) and select Review policy. When following along, remember to replace the placeholder ARN-OF-SUPERUSER-SECRET with the ARN of the secret you stored in Phase 1.


{
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DetachNetworkInterface"
      ],
      "Resource": "*"
    },
    {
      "Sid": "GrantPermissionToUse",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue"
      ],
      "Resource": "ARN-OF-SUPERUSER-SECRET"
    }
  ]
}

Here’s what it will look like:
 

Figure 9: Edit the policy

Next, I select Save changes. I have now completed all the steps required to configure rotation for the application credential, Database/Development/Oracle-Application-User.

Phase 3: Retrieve the credential from Secrets Manager programmatically

Now that I have stored the secret in Secrets Manager, I add code to my application to retrieve the database credential from Secrets Manager. I use the sample code from Phase 2 above. This code sets up the client and retrieves and decrypts the secret Database/Development/Oracle-Application-User.
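
If you don’t have that sample code handy, a minimal retrieval sketch using the Python SDK (boto3) is shown below; it assumes the secret stores the JSON structure that Secrets Manager uses for RDS credentials, with username and password fields.

import json
import boto3

client = boto3.client("secretsmanager")

# Retrieve and decrypt the application credential stored earlier.
response = client.get_secret_value(SecretId="Database/Development/Oracle-Application-User")
secret = json.loads(response["SecretString"])

db_user = secret["username"]
db_password = secret["password"]
# Use db_user and db_password to build the Oracle connection string for the application.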

Remember, applications require permissions to retrieve the secret, Database/Development/Oracle-Application-User, from Secrets Manager. My application runs on Amazon EC2 and uses an IAM role to obtain access to AWS services. I attach the following policy to my IAM role. This policy uses the GetSecretValue action to grant my application permissions to read the secret from Secrets Manager. This policy also uses the resource element to limit my application to reading only the Database/Development/Oracle-Application-User secret from Secrets Manager. You can refer to the Secrets Manager documentation to understand the minimum IAM permissions required to retrieve a secret.


{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "RetrieveDbCredentialFromSecretsManager",
    "Effect": "Allow",
    "Action": "secretsmanager:GetSecretValue",
    "Resource": "arn:aws:secretsmanager:<AWS-REGION>:<ACCOUNT-NUMBER>:secret:Database/Development/Oracle-Application-User"
  }
}

In the above policy, remember to replace the placeholder <AWS-REGION> with the AWS region that you’re using and the placeholder <ACCOUNT-NUMBER> with the number of your AWS account.

Summary

I explained the key benefits of Secrets Manager as they relate to Amazon RDS and showed you how to help meet your compliance requirements by configuring Secrets Manager to rotate database credentials automatically on your behalf. Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and ongoing maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, visit the Secrets Manager documentation.

If you have comments about this post, submit them in the Comments section below. If you have questions about anything in this post, start a new thread on the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Apurv Awasthi

Apurv is the product manager for credentials management services at AWS, including AWS Secrets Manager and IAM Roles. He enjoys the “Day 1” culture at Amazon because it aligns with his experience building startups in the sports and recruiting industries. Outside of work, Apurv enjoys hiking. He holds an MBA from UCLA and an MS in computer science from University of Kentucky.

Helen’s hoglet: an adorable adventure

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/helens-hoglet-an-adorable-adventure/

Today is a bank holiday here in England, as well as for lucky people in Wales and Northern Ireland. Pi Towers UK is running on a skeleton crew of Babbage Bear, several automated Raspberry Pis, and Noel Fielding, who lives behind the red door we never open.

So, as a gift for you all while we’re busy doing bank holiday things, here’s a video that Helen Lynn just recorded of one of the baby hedgehogs who live in her garden.

Helen’s hoglet

Uploaded by Raspberry Pi on 2018-08-24.

You’re welcome. See you tomorrow!

The post Helen’s hoglet: an adorable adventure appeared first on Raspberry Pi.

Introducing the PoE HAT – available now!

Post Syndicated from Roger Thornton original https://www.raspberrypi.org/blog/introducing-power-over-ethernet-poe-hat/

In March 2018 we announced the launch of Raspberry Pi 3 Model B+. One of the many features added to the new board was the ability to be powered through Power over Ethernet (PoE) with a HAT. We are really pleased to announce that the PoE HAT is on sale from today.

Raspberry Pi PoE HAT Power over ethernet

The HAT connects to the Raspberry Pi 3B+’s 0.1” headers: the 40-way GPIO header and the new 4-pin header near the USB connectors. This allows you to power the system using your Ethernet cable.

Power over Ethernet

Power over Ethernet is a widely adopted standard that places power on the Ethernet cable along with the data. It has no effect on the data, so you won’t lose bandwidth by using PoE. There are various PoE standards; this HAT uses the most common, 802.3af, which allows delivery of up to 15 W. This means that the HAT is capable of providing all the power needed to run your Raspberry Pi. You will need power sourcing equipment to power your Pi: this is either provided by your network switch or by power injectors on an Ethernet cable.

Raspberry Pi PoE HAT Power over ethernet

Using the PoE HAT

The HAT is a compact, single-sided board that sits within the footprint of the Raspberry Pi. It will fit comfortably inside an official Raspberry Pi case. A small (25mm) fan is pre-installed on the board. We see the product as a useful component for people building systems that may be in tougher environments, so the addition of the fan helps with cooling. The fan is controlled over I2C via a small ATMEL processor which allows for it to be temperature-controlled: when your Raspberry Pi processor hits certain temperatures, the fan will be turned on to cool it down. To enable this you will need to get the latest firmware (sudo rpi-update).

Raspberry Pi PoE HAT Power over ethernet

Because the fan is controlled over I2C, none of the GPIO are used, so you can stack a second HAT on top of the connector. To do this you will need to buy some longer pass-through headers that expose the pins on the other side of the PoE HAT. You will need one for the 40-way and one for the 4-way connector that has the PoE splitters on it.


We’ve tested a variety of pass-through headers and can recommend the 2×20 pin header from Pimoroni and the 4-way risers from RS and element14.

Getting mains power to remote areas of buildings is often tricky. PoE support enables this with just an Ethernet cable, allowing you to provide power (and data) to your Pi wherever it is located. With the improved network booting you can now dispense with not only the power supply but also the SD Card, making deployment even cheaper for a Raspberry Pi based system in your factory or workplace.

Get ahead, get a HAT

We are very excited to see what new projects this enables for you. The Raspberry Pi Power over Ethernet HAT is available for sale now at $20, from Farnell, RS and The Approved Reseller Network.

The post Introducing the PoE HAT – available now! appeared first on Raspberry Pi.

How to Efficiently Extract and Query Tagged Resources Using the AWS Resource Tagging API and S3 Select (SQL)

Post Syndicated from Marcilio Mendonca original https://aws.amazon.com/blogs/architecture/how-to-efficiently-extract-and-query-tagged-resources-using-the-aws-resource-tagging-api-and-s3-select-sql/

AWS customers can use tags to assign metadata to their AWS resources. Each tag is a simple label consisting of a customer-defined key and an optional value that can make it easier to manage, search for, and filter resources. Although there are no inherent types of tags, they enable customers to categorize resources by multiple criteria such as purpose, owner, and environment.

Once a tagging strategy is defined and enforced, customers can use the AWS Tag Editor to view and manage tags on their AWS resources, regardless of service or region. They can use the tag editor to search for resources by resource type, region, or tag, and then manage the tags applied to those resources.

However, customers have asked for guidance on how to build custom automation mechanisms to extract and query tagged resources so that they can extend the built-in functionalities of the Tag Editor. For instance, customers can build automation to generate custom CSV files for tagged resources and perhaps use SQL to query those resources. In addition, automation allows customers to add validation checks to their CI/CD deployment pipelines, for instance, to check whether resources have been properly tagged.

In this blog post, we introduce a simple yet efficient AWS architecture for extracting and querying tagged resources based on AWS cloud-native features such as the Resource Tagging API and S3 Select. We provide sample code for the architecture discussed that can help customers to customize and/or extend the architecture for their own purpose. By relying on AWS cloud-native features, customers can save time and reduce costs while still being able to do customizations.

For customers unfamiliar with the Resource Tagging API and the S3 Select features, below is a very brief introduction.

Resource Tagging API
AWS customers can use the Resource Tagging API to programmatically access the same resource group operations that had previously been accessible only from the AWS Management Console, now using the AWS SDKs or the AWS Command Line Interface (CLI). By doing so, customers can build automation that fits their needs, e.g., code that extracts, exports, and queries tagged resources.

For further details, please read Resource Groups Tagging – Reference

S3 Select
S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. By using S3 Select to retrieve only the data needed by the application, customers can achieve drastic performance increases – in many cases you can get as much as a 400% improvement.

For further details, please read the Amazon S3 Select documentation.

The Overall Solution Architecture

The figure above depicts the overall architecture discussed in this post. It is a simple yet efficient architecture for extracting and querying tagged resources based on AWS cloud-native features. The Resource Tagging API is used to extract tagged resources from one or more AWS accounts via the Python AWS SDK, then a custom CSV file is generated and pushed to S3. Once in S3, the tagged resources file can be efficiently queried via S3 Select, also using the Python AWS SDK. By leveraging S3 Select, we can now use SQL to query tagged resources and save on S3 data transfer costs, since only the filtered results are returned directly from S3. Pretty neat, eh?

The Extract Process
The extract process was built using Python 3 and relies on the Resource Tagging API to fetch pages of tagged resources and export them to CSV using the csv Python library.

We start importing the required libraries (boto3 is the AWS SDK for Python, argparse helps managing input parameters, and csv supports building valid CSV files):

import boto3
import argparse
import csv

Then, we define the header columns to use when generating the CSV files containing all tagged resources and the writeToCsv function:

field_names = ['ResourceArn', 'TagKey', 'TagValue']

def writeToCsv(writer, args, tag_list):
    for resource in tag_list:
        print("Extracting tags for resource: " +
              resource['ResourceARN'] + "...")
        for tag in resource['Tags']:
            row = dict(
                ResourceArn=resource['ResourceARN'], TagKey=tag['Key'], TagValue=tag['Value'])
            writer.writerow(row)

We take the CSV output file path as a required parameter so that users can specify the desired output file name, using the argparse library:

def input_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--output", required=True,
                        help="Output CSV file (eg, /tmp/tagged-resources.csv)")
    return parser.parse_args()

And then, we implement the main extract logic that uses the Resource Tagging API (see boto3.client('resourcegroupstaggingapi') in the code below). Note that we fetch 50 resources at a time and write them to the CSV output file until no more resources are found.

def main():
    args = input_args()
    restag = boto3.client('resourcegroupstaggingapi')
    with open(args.output, 'w') as csvfile:
        writer = csv.DictWriter(csvfile, quoting=csv.QUOTE_ALL,
                                delimiter=',', dialect='excel', fieldnames=field_names)
        writer.writeheader()
        response = restag.get_resources(ResourcesPerPage=50)
        writeToCsv(writer, args, response['ResourceTagMappingList'])
        while 'PaginationToken' in response and response['PaginationToken']:
            token = response['PaginationToken']
            response = restag.get_resources(
                ResourcesPerPage=50, PaginationToken=token)
            writeToCsv(writer, args, response['ResourceTagMappingList'])

if __name__ == '__main__':
    main()

The extract procedure is pretty simple and illustrates well how to use the Resource Tagging API to customize the output. It will also use the default credentials in your account.

Here is how the extract process can be triggered for the QA account (assuming the python source file is named aws-tagged-resources-extractor.py and that there is a QA_AWS_ACCOUNT AWS profile defined in your ~/.aws/credentials file).

export AWS_PROFILE=QA_AWS_ACCOUNT
python aws-tagged-resources-extractor.py --output /tmp/qa-tagged-resources.csv

The extract procedure can be applied to other AWS accounts by updating the AWS_PROFILE environment variable accordingly.

The ‘Upload to S3’ Process
Once the file /tmp/qa-tagged-resources.csv is generated, it can be uploaded to an S3 bucket using the AWS CLI (or one could extend the extract sample code above to do so; see the sketch after the command below):

aws s3 cp /tmp/qa-tagged-resources.csv s3://[REPLACE-WITH-YOUR-S3-BUCKET]
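
If you’d rather extend the extract script than shell out to the AWS CLI, a minimal boto3 upload might look like the following sketch (the bucket name is a placeholder to replace with your own).

import boto3

s3 = boto3.client("s3")

# Upload the generated CSV so it can later be queried with S3 Select.
s3.upload_file("/tmp/qa-tagged-resources.csv",
               "REPLACE-WITH-YOUR-S3-BUCKET",
               "qa-tagged-resources.csv")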

The Query Process
Once the CSV files containing tagged resources for different AWS accounts are uploaded to S3, we can use S3 Select to perform familiar SQL queries against these files. Another advantage of using S3 Select is that it reduces the amount of data transferred from S3, which is especially relevant in our case when accounts have a very large number of tagged resources.

We again use the boto3 and argparse libraries (Python 3). Required input parameters include the S3 bucket (–bucket) and the S3 key (–key). The SQL query parameter (–query) is optional and will return all results if not provided.

import boto3
import argparse

def input_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--bucket", required=True, help="SQL query to filter tagged resources output")
    parser.add_argument("--key", required=True, help="SQL query to filter tagged resources output")
    parser.add_argument("--query", default="select * from s3object", help="SQL query to filter tagged resources output")
    return parser.parse_args()

The main query logic is shown below. It uses boto3.client('s3') to initialize an S3 client that is later used to query the tagged resources CSV file in S3 via the select_object_content() function. This function takes the S3 bucket name, S3 key, and query as parameters. Check the Boto3 API reference (http://boto3.readthedocs.io/en/latest/reference/services/s3.html) for details on this function and its inputs and outputs.

def main():
    args = input_args()
    s3 = boto3.client('s3')
    response = s3.select_object_content(
        Bucket=args.bucket,
        Key=args.key,
        ExpressionType='SQL',
        Expression=args.query,
        InputSerialization = {'CSV': {"FileHeaderInfo": "Use"}},
        OutputSerialization = {'CSV': {}},
    )

    for event in response['Payload']:
        if 'Records' in event:
            records = event['Records']['Payload'].decode('utf-8')
            print(records)
            
if __name__ == '__main__':
    main()

Here’s a few examples of how to trigger the query procedure against the CSV files stored in S3 (assuming the Python source file for the query procedure is called aws-tagged-resources-querier). We assume that the S3 bucket is located in a single account referenced by profile CENTRAL_AWS_ACCOUNT.

Return the resource ARNs of all route tables containing a tag named ‘aws:cloudformation:stack-name’ in the QA AWS account

export AWS_PROFILE=CENTRAL_AWS_ACCOUNT
python aws-tagged-resources-querier \
     --bucket [REPLACE-WITH-YOUR-S3-BUCKET] \
     --key qa-tagged-resources.csv \
     --query "select ResourceArn from s3object s \
              where s.ResourceArn like 'arn:aws:ec2%route-table%' \
                and s.TagKey='aws:cloudformation:stack-name'"

We invite readers to build more sophisticated SQL queries.

Summary
In this blog post, we introduced a simple yet efficient AWS architecture for extracting and querying tagged resources based on AWS cloud-native features such as the Resource Tagging API and S3 Select. We provided sample code that can help customers to customize and/or extend the architecture for their own purpose. By relying on AWS cloud-native features, customers can save time and reduce costs while still being able to do customizations.

The “extract” process discussed above is available in the AWS Serverless Application Repository under an application called aws-tag-explorer. Check it out!

Happy Resource Tagging!

About the Author

Marcilio Mendonca is a Sr. Consultant in the Global DevOps Team at AWS Professional Services. In the past years, he has been helping AWS customers to design, build and deploy best-in-class cloud-native AWS applications using VMs, containers and serverless architectures. Prior to joining AWS, Marcilio was a Software Development Engineer with Amazon. Marcilio also holds a PhD in Computer Science.

 

 

Migrating a multi-tier application from a Microsoft Hyper-V environment using AWS SMS and AWS Migration Hub

Post Syndicated from Martin Yip original https://aws.amazon.com/blogs/compute/migrating-a-multi-tier-application-from-a-microsoft-hyper-v-environment-using-aws-sms-and-aws-migration-hub/

Shane Baldacchino is a Solutions Architect at Amazon Web Services

Many customers ask for guidance to migrate end-to-end solutions running in their on-premises data center to AWS. This post provides an overview of moving a common blogging platform, WordPress, running on an on-premises virtualized Microsoft Hyper-V platform to AWS, including re-pointing the DNS records associated with the website.

AWS Server Migration Service (AWS SMS) is an agentless service that makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. In November 2017, AWS added support for Microsoft’s Hyper-V hypervisor. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations. In this post, I guide you through migrating your multi-tier workloads using both AWS SMS and AWS Migration Hub.

Migration Hub provides a single location to track the progress of application migrations across multiple AWS and partner solutions. In this post, you use AWS SMS as a mechanism to migrate the virtual machines (VMs) and track them via Migration Hub. You can also use other third-party tools in Migration Hub, and choose the migration tools that best fit your needs. Migration Hub allows you to get progress updates across all migrations, identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects.

Migration Hub and AWS SMS are both free. You pay only for the cost of the individual migration tools that you use, and any resources being consumed on AWS.

Walkthrough

For this walkthrough, the WordPress blog is currently running as a two-tier stack in a corporate data center. The example environment is multi-tier and polyglot in nature. The frontend uses Windows Server 2016 (running IIS 10 with PHP as an ISAPI extension) and the backend is supported by a MySQL server running on Ubuntu 16.04 LTS. All systems are hosted on a virtualized platform. As the environment consists of multiple servers, you can use Migration Hub to group the servers together as an application and manage the holistic process of migrating the application.

The key elements of this migration process involve the following steps:

  1. Establish your AWS environment.
  2. Replicate your database.
  3. Download the SMS Connector from the AWS Management Console.
  4. Configure AWS SMS and Hyper-V permissions.
  5. Install and configure the SMS Connector appliance.
  6. Configure Hyper-V host permissions.
  7. Import your virtual machine inventory and create a replication job.
  8. Use AWS Migration Hub to track progress.
  9. Launch your Amazon EC2 instance.
  10. Change your DNS records to resolve the WordPress blog to your EC2 instance.

Before you start, ensure that your source systems OS and hypervisor version are supported by AWS SMS. For more information, see the Server Migration Service FAQ. This post focuses on the Microsoft Hyper-V hypervisor.

Establish your AWS environment

First, establish your AWS environment. If your organization is new to AWS, this may include account or subaccount creation, a new virtual private cloud (VPC), and associated subnets, route tables, internet gateways, and so on. Think of this phase as setting up your software-defined data center. For more information, see Getting Started with Amazon EC2 Linux Instances.

The blog is a two-tier stack, so go with two private subnets. Because you want it to be highly available, use multiple Availability Zones. An Availability Zone resides within an AWS Region. Each Availability Zone is isolated, but the zones within a Region are connected through low-latency links. This allows architects and solution designers to build highly available solutions.

Replicate your database

WordPress uses a MySQL relational database. You could continue to manage MySQL and the EC2 instances associated with maintaining and scaling the database. But for this walkthrough, I am using this opportunity to migrate to an Amazon RDS instance of Amazon Aurora, as it is a MySQL-compatible database. Not only is Amazon Aurora a high-performance database engine, but it also frees you up to focus on application development by managing time-consuming database administration tasks, including backups, software patching, monitoring, scaling, and replication.

Use AWS Database Migration Service (AWS DMS) to migrate your MySQL database to Amazon Aurora easily and securely. You can send the results from AWS DMS to Migration Hub. This allows you to create a single pane view of your application migration.

After a database migration instance has been instantiated, configure the source and destination endpoints and create a replication task.
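
If you prefer to script this step rather than use the AWS DMS console, the following boto3 sketch creates a full-load-plus-CDC replication task; the endpoint ARNs, replication instance ARN, and table mappings are placeholders for your own values.

import json
import boto3

dms = boto3.client("dms")

# Replicate the existing data and ongoing changes from MySQL to Amazon Aurora.
dms.create_replication_task(
    ReplicationTaskIdentifier="wordpress-mysql-to-aurora",
    SourceEndpointArn="arn:aws:dms:REGION:ACCOUNT:endpoint:SOURCE-PLACEHOLDER",
    TargetEndpointArn="arn:aws:dms:REGION:ACCOUNT:endpoint:TARGET-PLACEHOLDER",
    ReplicationInstanceArn="arn:aws:dms:REGION:ACCOUNT:rep:INSTANCE-PLACEHOLDER",
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-wordpress-schema",
            "object-locator": {"schema-name": "wordpress", "table-name": "%"},
            "rule-action": "include",
        }]
    }),
)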

By attaching to the MySQL binlog, you can seed the current data into the target database and also capture all future state changes in near-real time. For more information, see Migrating a MySQL-Compatible Database to Amazon Aurora.

Finally, the task shows that you are replicating current data in your WordPress blog database and future changes from MySQL into Amazon Aurora.

Download the SMS Connector from the AWS Management Console

Now, use AWS SMS to migrate your IIS/PHP frontend. AWS SMS is delivered as a virtual appliance that can be deployed in your Hyper-V environment.

To download the SMS Connector, log in to the console and choose Server Migration Service, Connectors, SMS Connector setup guide. Download the VHD file for SCVMM/Hyper-V.

Configure SMS

Your hypervisor and AWS SMS need an appropriate user with sufficient privileges to perform migrations.

Launch a new VM in Hyper-V based on the SMS Connector that you downloaded. To configure the connector, connect to it via HTTPS. You can obtain the SMS Connector IP address from within Hyper-V. By default, the SMS Connector uses DHCP to obtain a valid IP address.

Connect to the SMS Connector via HTTPS. In the example above, the connector IP address is 10.0.0.88. In your browser, enter https://10.0.0.88. As the SMS Connector can only work with one hypervisor at a time, you must state the hypervisor with which to interface. For the purpose of this post, the examples use Microsoft Hyper-V.

Configure the connector with the IAM and hypervisor credentials that you created earlier.

After you have entered in both your AWS and Hyper-V credentials and the associated connectivity and authentication checks have passed, you are redirected to the home page of your SMS Connector. The home page provides you a status on connectivity and the health of the SMS Connector.

Configure Hyper-V host permissions

You also must modify your Hyper-V hosts to provide WinRM connectivity. AWS provides a downloadable PowerShell script to configure your Windows environment to support WinRM communications with the SMS Connector. The same script is used for configuring either standalone Hyper-V or SCVMM.

Execute the PowerShell script and follow the prompts. In the following example, Reconfigure Hyper-V not managed by SCVMM (Standalone Hyper-V)… was selected.

Import your virtual machine inventory and create a replication job

You have now configured the SMS Connector and your Microsoft Hyper-V hosts. Switch to the console to import your server catalog to AWS SMS. Within AWS SMS, choose Connectors, Import Server Catalog.

This process can take up to a few minutes and is dependent on the number of machines in your Hyper-V inventory.

Select the server to migrate and choose Create replication job. The console guides you through the process. The time that the initial replication task takes to complete is dependent on the available bandwidth and the size of your VM. After the initial seed replication, network bandwidth is minimized as AWS SMS replicates only incremental changes occurring on the VM.
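
The same step can be scripted. A hedged sketch with the AWS CLI, assuming the connector has already imported your catalog; the server ID, seed time, and frequency are placeholders:

# List the servers imported from your Hyper-V catalog
aws sms get-servers

# Start replicating the selected server every 12 hours
aws sms create-replication-job --server-id s-12345678 \
    --seed-replication-time 2018-09-01T00:00:00Z \
    --frequency 12 --description "WordPress IIS/PHP frontend"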

Use Migration Hub to track progress

You have now successfully started your database migration via AWS DMS, set up your SMS Connector, configured your Microsoft Hyper-V environment, and started a replication job.

You can now track the collective progress of your application migration. To track migration progress, connect AWS DMS and AWS SMS to Migration Hub.

To do this, navigate to Migration Hub in the AWS Management Console. Under Migrate and Tools, connect both services so that the migration status of these services is sent to Migration Hub.

You can then group your servers into an application in Migration Hub and collectively track the progress of your migration. In this example, I created an application, Company Blog, and added in my servers from both AWS SMS and AWS DMS.

The progress updates from linked services are automatically sent to Migration Hub so that you can track tasks in progress. The dashboard reflects any status changes that occur in the linked services. You can see from the following image that one server is complete while another is in progress.

Using Migration Hub, you can view the migration progress of all applications. This allows you to quickly get progress updates across all of your migrations, easily identify and troubleshoot any issues, and reduce the overall time and effort spent on your migration projects.

Launch your EC2 instance

When your replication task is complete, the artifact created by AWS SMS is a custom AMI that you can use to deploy an EC2 instance. Follow the usual process to launch your EC2 instance, using the custom AMI created by AWS SMS, noting that you may need to replace any host-based firewalls with security groups and NACLs.
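
A minimal AWS CLI sketch of that launch, assuming placeholder IDs for the AMI produced by AWS SMS, the subnet, the security group, and the key pair, and an instance type chosen to suit your workload:

aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type m5.large --count 1 \
    --subnet-id subnet-0abc1234 --security-group-ids sg-0abc1234 \
    --key-name my-key-pair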

When you create an EC2 instance, ensure that you pick the most suitable EC2 instance type and size to match your performance requirements while optimizing for cost.

While your new EC2 instance is a replica of your on-premises VM, you should always validate that applications are functioning. How you do this differs on an application-by-application basis. You can use a combination of approaches, such as editing a local hosts file and testing your application, or connecting over SSH, RDP, and Telnet.

From the RDS console, get your connection string details and update your WordPress configuration file to point to the Amazon Aurora database. As WordPress expects a MySQL database and Amazon Aurora is MySQL compatible, this change of database engine is transparent to WordPress.
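
As a hedged sketch (the cluster identifier is a placeholder), you can look up the Aurora cluster endpoint with the AWS CLI and then set the DB_HOST constant in wp-config.php to that value:

aws rds describe-db-clusters --db-cluster-identifier wordpress-aurora \
    --query 'DBClusters[0].Endpoint' --output text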

Change your DNS records to resolve the WordPress blog to your EC2 instance

You have validated that your WordPress application is running correctly, as you are still receiving changes from your on-premises data center via AWS DMS into your Amazon Aurora database. You can now update your DNS zone file using Amazon Route 53. Amazon Route 53 can be driven by multiple methods: console, SDK, or AWS CLI.

For this walkthrough, use Windows PowerShell for AWS to update the DNS zone file. The example shows UPSERTING the A record in the zone to resolve to the Amazon EC2 instance created with AWS SMS.
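
The original example uses the AWS Tools for Windows PowerShell; an equivalent sketch with the AWS CLI is shown below, with the hosted zone ID, record name, and IP address as placeholders:

cat > change-batch.json << 'EOF'
{
  "Comment": "Point the blog at the EC2 instance created with AWS SMS",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "blog.example.com",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "203.0.113.10" }]
    }
  }]
}
EOF

aws route53 change-resource-record-sets --hosted-zone-id Z1234567890ABC \
    --change-batch file://change-batch.json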

Based on the TTL of your DNS records, end users gradually begin resolving the WordPress blog to AWS.

Summary

You have now successfully migrated your WordPress blog to AWS using AWS migration services, specifically the AWS SMS Hyper-V/SCVMM Connector. Your blog now resolves to AWS. After validation, you are ready to decommission your on-premises resources.

Many architectures can be extended to take advantage of the inherent benefits of AWS with little effort. For example, by placing an Application Load Balancer in front of your WordPress tier and using Amazon CloudWatch metrics to drive scaling policies, you remove the single point of failure of running on a single EC2 instance.

Building Real Time AI with AWS Fargate

Post Syndicated from AWS Admin original https://aws.amazon.com/blogs/architecture/building-real-time-ai-with-aws-fargate/

This post is a contribution from AWS customer Veritone. It was originally published on the company's website.

Here at Veritone, we deal with a lot of data. Our product uses the power of cognitive computing to analyze and interpret the contents of structured and unstructured data, particularly audio and video. We use cognitive computing to provide valuable insights to our customers.

Our platform is designed to ingest audio, video and other types of data via a series of batch processes (called “engines”) that process the media and attach some sort of output to it, such as transcripts or facial recognition data.

Our goal was to design a data pipeline that could process streaming audio, video, or other content from sources, such as IP cameras, mobile devices, and structured data feeds in real-time, through an open ecosystem of cognitive engines. This enables support for customer use cases like real-time transcription for live-broadcast TV and radio, face and object detection for public safety applications, and the real-time analysis of social media for harmful content.

Why AWS Fargate?
We leverage Docker containers as the deployment artifact of both our internal services and cognitive engines. This gave us the flexibility to deploy and execute services in a reliable and portable way. Fargate on AWS turned out to be a perfect tool for orchestrating the dynamic nature of our deployments.

Fargate allows us to quickly scale Docker-based engines from zero to any desired number without having to worry about pre-provisioning capacity or bootstrapping and managing EC2 instances. We use Fargate both as a backend for quickly starting engine containers on demand and for the orchestration of services that need to always be running. It enables us to handle sudden bursts of real-time workloads with a consistent launch time. Fargate also allows our developers to get near-immediate feedback on deployments without having to manage any infrastructure or deal with downtime. The integration with Fargate makes this super simple.
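
This is not Veritone's code, but as a rough illustration of the pattern, launching an engine container on demand with Fargate comes down to a single RunTask call; the cluster, task definition, subnet, and security group names here are placeholders:

aws ecs run-task --cluster engine-cluster --launch-type FARGATE \
    --task-definition stream-engine:3 --count 1 \
    --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234],assignPublicIp=DISABLED}'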

Moving to Real Time
We designed a solution (shown below), in which media from a source, such as a mobile app, which “pushes” streams into our platform, or an IP camera feed, which is “pulled”, is streamed through a series of containerized engines, processing the data as it is ingested. Some engines, which we refer to as Stream Engines, work on raw media streams from start to finish. For all others, streams are decomposed into a series of objects, such as video frames or small audio/video chunks that can be processed in parallel by what we call Object Engines. An output stream of results from each engine in the pipeline is relayed back to our core platform or customer-facing applications via Veritone’s APIs.

Message queues placed between the components facilitate the flow of stream data, objects, and events through the data pipeline. For that, we defined a number of message formats. We decided to use Apache Kafka, a streaming message platform, as the message bus between these components.

Kafka gives us the ability to:

  • Guarantee that a consumer receives an entire stream of messages, in sequence.
  • Buffer streams and have consumers process streams at their own pace.
  • Determine “lag” of engine queues.
  • Distribute workload across engine groups, by utilizing partitions.

The flow of stream data and the lifecycle of the engines is managed and coordinated by a number of microservices written in Go. These include the Scheduler, Coordinator, and Engine Orchestrators.

Deployment and Orchestration
For processing real-time data, such as streaming video from a mobile device, we required the flexibility to deploy dynamic container configurations and often define new services (engines) on the fly. Stream Engines need to be launched on-demand to handle an incoming stream. Object Engines, on the other hand, are brought up and torn down in response to the amount of pending work in their respective queues.

EC2 instances typically require provisioning to be done in anticipation of incoming load and generally take too long to start in this case. We needed a way to quickly scale Docker containers on demand, and Fargate made this achievable with very little effort.

In Closing
Fargate helped us solve a lot of problems related to real-time processing, including the reduction of operational overhead, for this dynamic environment. We expect it to continue to grow and mature as a service. Some features we would like to see in the near future include GPU support for our GPU-based AI Engines and the ability to cache larger container images for quicker “warm” launch times.

About Veritone
Veritone created the world’s first operating system for Artificial Intelligence. Veritone’s aiWARE operating system unlocks the power of cognitive computing to transform and analyze audio, video and other data sources in an automated manner to generate actionable insights. The Veritone platform provides customers ease, speed and accuracy at low cost.

The Veritone authors are Christopher Stobie – [email protected] and Mezzi Sotoodeh – [email protected]

AWS Online Tech Talks – August 2018

Post Syndicated from Robin Park original https://aws.amazon.com/blogs/aws/aws-online-tech-talks-august-2018/

AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. Join us this month to learn about AWS services and solutions. We’ll have experts online to help answer any questions you may have. We’ve also launched our first-ever office hours style tech talk, where you have the opportunity to ask questions to our experts! This month we’ll be covering Amazon Aurora and Backup to AWS. Register today and join us! Please note – all sessions are free and in Pacific Time.

Tech talks featured this month:

Compute

August 28, 2018 | 11:00 AM – 11:45 AM PT – High Performance Computing on AWS – Learn how AWS scale and performance can deliver faster time to insights for your HPC environments.

Containers

August 22, 2018 | 11:00 AM – 11:45 AM PT – Distributed Tracing for Kubernetes Applications on AWS – Learn how to use AWS X-Ray to debug and monitor Kubernetes applications.

Data Lakes & Analytics

August 22, 2018 | 01:00 PM – 02:00 PM PT – Deep Dive on Amazon Redshift – Learn how to analyze all your data – across your data warehouse and data lake – with Amazon Redshift.

August 23, 2018 | 09:00 AM – 09:45 AM PT – Deep Dive on Amazon Athena – Dive deep on Amazon Athena and learn how to query S3 without servers to manage.

Databases

August 21, 2018 | 11:00 AM – 11:45 AM PT – Accelerate Database Development and Testing on AWS – Learn how to build database applications faster with Amazon Aurora.

Office Hours: August 30, 2018 | 11:00 AM – 12:00 PM PT – AWS Office Hours: Amazon Aurora – Opening up the Hood on AWS’ Fastest Growing Service – Ask AWS experts about anything on Amazon Aurora – From what makes Amazon Aurora different from other cloud databases to the unique ways our customers are leveraging it.

DevOps

August 22, 2018 | 09:00 AM – 10:00 AM PT – Amazon CI/CD Practices for Software Development Teams – Learn about Amazon’s CI/CD practices and how to leverage the AWS Developer Tools for similar workflows.

Enterprise & Hybrid

August 28, 2018 | 09:00 AM – 09:45 AM PT – Empower Your Organization with Alexa for Business – Discover how Amazon Alexa can act as an intelligent assistant and help you be more productive at work.

August 29, 2018 | 11:00 AM – 11:45 AM PT – Migrating Microsoft Workloads Like an Expert – Learn best practices on how to migrate Microsoft workloads to AWS like an expert.

IoT

August 27, 2018 | 01:00 PM – 02:00 PM PT – Using Predictive Analytics in Industrial IoT Applications – Learn how AWS IoT is used in industrial applications to predict equipment performance.

Machine Learning

August 20, 2018 | 09:00 AM – 10:00 AM PT – Machine Learning Models with TensorFlow Using Amazon SageMaker – Accelerate your ML solutions to production using TensorFlow on Amazon SageMaker.

August 21, 2018 | 09:00 AM – 10:00 AM PT – Automate for Efficiency with AI Language Services – Learn how organizations can benefit from intelligent automation through AI Language Services.

Mobile

August 29, 2018 | 01:00 PM – 01:45 PM PT – Building Serverless Web Applications with AWS Amplify – Learn how to build full stack serverless web applications with JavaScript & AWS.

re:Invent

August 23, 2018 | 11:00 AM – 11:30 AM PT – Episode 4: Inclusion & Diversity at re:Invent – Join Jill and Annie to learn about this year’s inclusion and diversity activities at re:Invent.

Security, Identity, & Compliance

August 27, 2018 | 11:00 AM – 12:00 PM PT – Automate Threat Mitigation Using AWS WAF and Amazon GuardDuty – Learn best practices for using AWS WAF to automatically mitigate threats found by Amazon GuardDuty.

Serverless

August 21, 2018 | 01:00 PM – 02:00 PM PT – Serverless Streams, Topics, Queues, & APIs! How to Pick the Right Serverless Application Pattern – Learn how to pick the right design pattern for your serverless application with AWS Lambda.

Storage

Office Hours: August 23, 2018 | 01:00 PM – 02:00 PM PT – AWS Office Hours: Backing Up to AWS – Increasing Storage Scalability to Meet the Challenges of Today’s Data Landscape – Ask AWS experts anything from how to choose and deploy backup solutions in the cloud, to how to work with the AWS partner ecosystem, to best practices to maximize your resources.

August 27, 2018 | 09:00 AM – 09:45 AM PT – Data Protection Best Practices with EBS Snapshots – Learn best practices on how to easily make a simple point-in-time backup for your Amazon EC2 instances using Amazon EBS snapshots.

August 29, 2018 | 09:00 AM – 09:45 AM PT – Hybrid Cloud Storage with AWS Storage Gateway & Amazon S3 – Learn how to use Amazon S3 for your on-prem. applications with AWS Storage Gateway.

August 30, 2018 | 01:00 PM – 01:45 PM PT – A Briefing on AWS Data Transfer Services – Learn about your options for moving data into AWS, processing data at the edge, and building hybrid cloud architectures with AWS.

Raspberry Pi as car computer

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/raspberry-pi-as-car-computer/

Carputers! Fabrice Aneche is documenting his ongoing build, which equips an older (2011) car with some of the features a 2018 model might have: thus far, a reversing camera (bought off the shelf, with a modified GUI to show the date and the camera’s output built with Qt and Golang), GPS and offline route guidance.

rearcam

We’re not sure how the car got through that little door there.

It was back in 2013, when the Raspberry Pi had been on the market for about a year, that we started to see carputer projects emerge. They tended to be focussed in two directions: in-car entertainment, and on-board diagnostics (OBD). We ended up hiring the wonderful Martin O’Hanlon, who wrote up the first OBD project we came across, just this year. Being featured on this blog can change your life, I tell you.

In the last five years, the Pi’s evolved: you’re now working with a lot more processing power, there’s onboard WiFi, and far more peripherals which can be useful in a…vehicular context are available. Consequently, the flavour of the car projects we’re seeing has changed somewhat, with navigation systems and cameras much more visible. Fabrice’s is one of the best examples we’ve found.

solarised map

Night-view navigation system

GPS is all very well, but you, the human person driver, will want directions at every turn. So Fabrice wrote a user interface to serve up live maps and directions, mostly in Qt5 and QML (he’s got some interesting discussion on his website about why he stopped using X11, which turned out to be too slow for his needs). All the non-QML work is done in Go. It’s all open-source, and on GitHub, if you’d like to contribute or roll your own project. He’s also worked over the Linux GPS daemons, found them lacking, and has produced his own:

…the Linux gps daemons are using obscure and over complicated protocols so I’ve decided to write my own gps daemon in Go using a gRPC stream interface. You can find it here.

I’m also not satisfied with the map matching of OSRM for real time display, I may rewrite one using mbmatch.

street map display

We’ll be keeping an eye on this project; given how much clever has gone into it already, we’re pretty sure that Fabrice will be adding new features. Thanks Fabrice!

The post Raspberry Pi as car computer appeared first on Raspberry Pi.

Improving application performance and reducing costs with Amazon EBS-Optimized Instance burst capability

Post Syndicated from Geoff Murase original https://aws.amazon.com/blogs/compute/improving-application-performance-and-reducing-costs-with-amazon-ebs-optimized-instance-burst-capability/

Contributed by Sooraj Prasannan, Senior Product Manager, Amazon Elastic Block Store

In November 2017, Amazon EC2 introduced C5 compute-intensive instances and M5 general-purpose instances. In the first half of 2018, we released EC2 C5d instances and M5d instances by adding high-speed, ultra-low latency local NVMe storage to the EC2 C5 and M5 instance families. EC2 C5/C5d and M5/M5d instances are built on the Nitro system. This collection of AWS-built hardware and software components enables high performance, high availability, high security, and bare metal capabilities to reduce virtualization overhead.

During the design of the Nitro system, we analyzed real-world workloads and recognized the need for smaller instance sizes to drive higher performance from their Amazon EBS volumes. We found that the majority of application storage needs are bursty, with short, intense periods of high I/O and plenty of idle time between bursts. To improve the experience for these workloads, we developed burst capability for smaller instance sizes. Available on EC2 C5/C5d and M5/M5d instances, this feature enables large, xlarge, and 2xlarge instance sizes to drive the same performance as the 4xlarge instance for 30 minutes each day.

For applications with spiky Amazon EBS demand, you can right-size your instances based on your CPU and memory requirements and still meet your EBS-optimized instance performance requirements. This higher performance also enables you to speed up sections of your workflow dependent on EBS-optimized instance performance. Faster workflows result in quicker job completions and improved resource utilization. The burst capability ultimately enables you to reduce costs by right-sizing your instance and improving total resource usage.

With this performance increase, you will be able to handle unplanned spikes in demand without any impact to your application performance. You can now size your instances based on historical average trends. This burst capability gives you more performance to absorb spikes without affecting your customer experience.

Using Amazon CloudWatch metrics to monitor burst usage

For better visibility into your performance, instances based on the Nitro system provide Amazon CloudWatch metrics to help profile your usage. Based on the usage profile, you can decide if smaller instances meet your requirements.

These instances give you the ability to monitor your usage via instance-level CloudWatch metrics for operations (EBSReadOps and EBSWriteOps) and bytes transferred (EBSReadBytes and EBSWriteBytes). For more information on these metrics, see List of available CloudWatch metrics for your instances. These metrics support basic monitoring (five-minute frequency) by default, but you can enable detailed monitoring (one-minute frequency) for an additional cost. For more information, see Amazon CloudWatch pricing.

For large, xlarge, and 2xlarge instances, we also provide burst balance metrics. EBSIOBalance% monitors the instance I/O burst bucket, and EBSByteBalance% monitors the instance byte burst bucket. These metrics give information about the percentage of I/O or bytes credits remaining in the respective burst buckets. The metrics are expressed as a percentage, where 100% means that the instance has accumulated the maximum number of credits. You can set up an alarm that triggers if the balance gets too low.
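
For example, the following is a minimal sketch of such an alarm with the AWS CLI; the instance ID, threshold, and SNS topic are placeholders you would adjust:

aws cloudwatch put-metric-alarm --alarm-name ebs-io-balance-low \
    --namespace AWS/EC2 --metric-name EBSIOBalance% \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Average --period 300 --evaluation-periods 1 \
    --threshold 20 --comparison-operator LessThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts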

To demonstrate these metrics, we launched an m5.large instance. We then attached a 500GB io1 Amazon EBS volume with 32,000 provisioned IOPS to the instance. Amazon EBS volumes attached to instances based on the Nitro system are exposed as NVMe devices.

First, we ran a large block (128 KiB) test using fio to /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol02f2f9a66c2ebfd66 and monitored both EBSIOBalance% and EBSByteBalance%.

$ sudo fio --filename=/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol02f2f9a66c2ebfd66 \
    --rw=randread --bs=128k --runtime=2400 --time_based=1 \
    --iodepth=32 --ioengine=libaio --direct=1 --name=large-block-test

Because this is a large block workload, it’s not driving enough IOPS to deplete EBSIOBalance%. It depletes EBSByteBalance% instead, as shown in the following image.

Then we ran a small block test to understand how it affects EBSIOBalance% and EBSByteBalance%.

$ sudo fio --filename=/dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol02f2f9a66c2ebfd66 \
    --rw=randread --bs=16k --runtime=2400 --time_based=1 \
    --iodepth=32 --ioengine=libaio --direct=1 --name=small-block-test

Because this is a small block test, it drives higher IOPS than bytes/second. Hence, EBSIOBalance% drops faster than EBSByteBalance%, as shown in the following image.

As long as EBSIOBalance% and EBSByteBalance% are above 0%, the instance can drive the burst performance. When the instance I/O activity is below the baseline rate, the burst buckets refill. After the tests finished, we paused all I/O from the instance. This period of inactivity allows the instance burst buckets to refill, as EBSIOBalance% and EBSByteBalance% show in the following image.

The refill rate for a burst bucket is the difference between the baseline rate and the instance I/O activity. For example, m5.large has a baseline throughput rate of 60 MB/s and a baseline IOPS rate of 3600 IOPS. Suppose the instance I/O activity is 10 MB/s and 1000 IOPS. The byte bucket fills at the rate of 50 MB/s (60 MB/s minus 10 MB/s). The IOPS bucket fills at the rate of 2600 IOPS (3600 IOPS minus 1000 IOPS). For the baseline rates for the different instances, see Amazon EBS–optimized instances. In addition, we top off the burst buckets every 24 hours, which means that the instance has burst performance available for 30 minutes each day.

Performance enhancements

We have continued to make enhancements to the Nitro system. With the latest set of enhancements, we have increased the maximum burst bandwidth on the large, xlarge, and 2xlarge EC2 C5/C5d and M5/M5d instances to 3.5 Gbps, up from 2.25 Gbps and 2.12 Gbps, respectively. We have also increased the maximum burst IOPS for EC2 C5/C5d to 20,000 IOPS and to 18,750 IOPS for M5/M5d, up from 16,000 IOPS for both. All new EC2 C5/C5d and M5/M5d smaller instances can take advantage of this performance increase at no additional cost.

For the latest list of instances based on the Nitro system that support this burst feature and their corresponding performance numbers, see Amazon EBS–optimized instances.

Amazon ElastiCache for Redis now PCI DSS compliant, allowing you to process sensitive payment card data in-memory for faster performance

Post Syndicated from Manan Goel original https://aws.amazon.com/blogs/security/amazon-elasticache-redis-now-pci-dss-compliant-payment-card-data-in-memory/

Amazon ElastiCache for Redis has achieved Payment Card Industry Data Security Standard (PCI DSS) compliance. This means that you can now use ElastiCache for Redis for low-latency and high-throughput in-memory processing of sensitive payment card data, such as Customer Cardholder Data (CHD). ElastiCache for Redis is a Redis-compatible, fully-managed, in-memory data store and caching service in the cloud. It delivers sub-millisecond response times with millions of requests per second.

To create a PCI-Compliant ElastiCache for Redis cluster, you must use the latest Redis engine version 4.0.10 or higher and current generation node types. The service offers various data security controls to store, process, and transmit sensitive financial data. These controls include in-transit encryption (TLS), at-rest encryption, and Redis AUTH. There’s no additional charge for PCI DSS compliant ElastiCache for Redis.
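
As a hedged sketch of such a cluster with the AWS CLI (the group ID, node type, and AUTH token are placeholders, and you should confirm node types and engine versions against the current documentation):

aws elasticache create-replication-group \
    --replication-group-id payments-cache \
    --replication-group-description "Redis cluster with encryption and AUTH" \
    --engine redis --engine-version 4.0.10 \
    --cache-node-type cache.r4.large --num-cache-clusters 2 \
    --automatic-failover-enabled \
    --transit-encryption-enabled --at-rest-encryption-enabled \
    --auth-token '<long-random-auth-token>'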

In addition to PCI, ElastiCache for Redis is a HIPAA eligible service. If you want to use your existing Redis clusters that process healthcare information to also process financial information while meeting PCI requirements, you must upgrade your Redis clusters from 3.2.6 to 4.0.10. For more details, see Upgrading Engine Versions and ElastiCache for Redis Compliance.

Meeting these high bars for security and compliance means ElastiCache for Redis can be used for secure database and application caching, session management, queues, chat/messaging, and streaming analytics in industries as diverse as financial services, gaming, retail, e-commerce, and healthcare. For example, you can use ElastiCache for Redis to build an internet-scale, ride-hailing application and add digital wallets that store customer payment card numbers, thus enabling people to perform financial transactions securely and at industry standards.

To get started, see ElastiCache for Redis Compliance Documentation.

Want more AWS Security news? Follow us on Twitter.

Deploy an 8K HEVC pipeline using Amazon EC2 P3 instances with AWS Batch

Post Syndicated from Geoff Murase original https://aws.amazon.com/blogs/compute/deploy-an-8k-hevc-pipeline-using-amazon-ec2-p3-instances-with-aws-batch/

Contributed by Amr Ragab, HPC Application Consultant, AWS Professional Services

AWS provides several managed services for file- and streaming-based media encoding options.

Currently, these services offer up to 4K encoding. Recent developments and the growing popularity of 8K content have now increased the need to distribute higher resolution content.

In this solution, you use an Amazon EC2 P3 instance to create a file-based encoding pipeline utilizing AWS Batch by first uploading a sample 8K (7680×4320) file to Amazon S3.

AWS Batch

AWS Batch enables developers, scientists, and engineers to easily and efficiently run hundreds of thousands of batch computing jobs on AWS. AWS Batch dynamically provisions the optimal quantity and type of compute resources (e.g., CPU or memory optimized instances) based on the volume and specific resource requirements of the batch jobs submitted. With AWS Batch, there is no need to install and manage batch computing software or server clusters that you use to run your jobs, allowing you to focus on analyzing results and solving problems. AWS Batch plans, schedules, and executes your batch computing workloads across the full range of AWS compute services and features, such as Amazon EC2 and Spot Instances.

P3 instances for video transcoding workloads

The P3 instance comes equipped with the NVIDIA Tesla V100 GPU. The V100 is a 16 GB, 5,120 CUDA core GPU based on the latest Volta architecture, well suited for video coding workloads. The largest instance size in that family, p3.16xlarge, has 64 vCPUs, 488 GB of RAM, 8 NVIDIA Tesla V100 GPUs, and 25 Gbps of networking bandwidth.

Besides being a mainstay for computational workloads, the V100 offers enhanced hardware-based encoding and decoding (NVENC/NVDEC). The following tables summarize the NVENC/NVDEC options available on the V100 compared to other GPUs offered on AWS.

NVENC Support Matrix

AWS GPU instance | GPU family | GPU | H.264 (AVCHD) YUV 4:2:0 | H.264 (AVCHD) YUV 4:4:4 | H.264 (AVCHD) Lossless | H.265 (HEVC) 4K YUV 4:2:0 | H.265 (HEVC) 4K YUV 4:4:4 | H.265 (HEVC) 4K Lossless | H.265 (HEVC) 8K
G2 | Kepler | GRID K520 | YES | – | – | – | – | – | –
P2 | Kepler (2nd Gen) | Tesla K80 | YES | – | – | – | – | – | –
G3 | Maxwell (2nd Gen) | Tesla M60 | YES | YES | YES | YES | – | – | –
P3 | Volta | Tesla V100 | YES | YES | YES | YES | YES | YES | YES

NVDEC Support Matrix

AWS GPU instance | GPU family | GPU | MPEG-2 | VC-1 | H.264 (AVCHD) | H.265 (HEVC) | VP8 | VP9
G2 | Kepler | GRID K520 | YES | YES | YES | – | – | –
P2 | Kepler (2nd Gen) | Tesla K80 | YES | YES | YES | – | – | –
G3 | Maxwell (2nd Gen) | Tesla M60 | YES | YES | YES | YES | – | –
P3 | Volta | Tesla V100 | YES | YES | YES | YES | YES | YES

Cinematic 8K encoding is supported using the Tesla V100 (P3 instance family) either in landscape or portrait orientations using the HEVC codec. 

GPU | H264 | H264_444 | H264_ME | H264_WxH | HEVC | HEVC_Main10 | HEVC_Lossless | HEVC_SAO | HEVC_444 | HEVC_ME | HEVC_WxH
Tesla M60 | + | + | + | 4096x4096 | + | – | – | – | – | – | 4096x4096
Tesla V100 | + | + | + | 4096x4096 | + | + | + | + | + | + | 8192x8192

Prerequisites

To follow along with these procedures, ensure that you have the following:

  • An AWS account with permissions to create IAM roles and policies, as well as read and write access to S3
  • Registration with the NVIDIA Developer Network
  • Familiarity with Docker

Deployment

For deployment, you containerize the encoding pipeline. After building the underlying P3 container instance, you then use nvidia-docker2 to build the video-encoding Docker image, which is registered with Amazon Elastic Container Registry (Amazon ECR).

As shown in the following diagram, the pipeline reads an input raw YUV file from S3, then pulls the containerized encoding application to execute at scale on the P3 container instance. The encoded video file is then transferred back to S3.

The nvidia-docker2 image video encoding stack contains the following components:

  • NVIDIA CUDA 9.2
  • FFMPEG 4.0
  • NVIDIA Video Codec SDK 8.1

This is a relatively lengthy procedure. However, after it’s built, the underlying instance and Docker image are reusable and can be quickly deployed as part of a high performance computing (HPC) pipeline.

Creating the ECS container instance

The underlying instance can be built by selecting the Amazon Linux AMI with the p3.2xlarge instance type in a public subnet. Additionally, add an EBS volume (150 GB), which is used for the 8K input, raw YUV, and output files. Scale the storage amount for larger input files. Persist the mount in /etc/fstab. Connect to the instance over SSH and install OS updates, the EPEL release package, supporting packages, and the base docker-ce.

sudo yum update -y
sudo yum install -y yum-utils \
                 device-mapper-persistent-data \
                 lvm2

sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install epel-release-latest-7.noarch.rpm
sudo yum update
sudo yum install docker-ce -y

The NVIDIA/CUDA stack can be installed using the cuda-repo-rhel7.rpm file. The CUDA framework installs the NVIDIA driver dependencies.

sudo yum install cuda -y

Next, install nvidia-docker2 as provided in the NVIDIA GitHub repo.

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.repo | \
  sudo tee /etc/yum.repos.d/nvidia-docker.repo
sudo yum install -y nvidia-docker2
sudo pkill -SIGHUP dockerd

sudo tee /etc/docker/daemon.json <<EOF
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
EOF

sudo systemctl restart docker

With the base components in place, make this instance compatible with the ECS service:

sudo yum install ecs-init -y

Create the /etc/ecs/ecs.config file with the following template:

sudo tee /etc/ecs/ecs.config << EOF
ECS_DATADIR=/data
ECS_ENABLE_TASK_IAM_ROLE=true
ECS_ENABLE_TASK_IAM_ROLE_NETWORK_HOST=true
ECS_LOGFILE=/log/ecs-agent.log
ECS_AVAILABLE_LOGGING_DRIVERS=["json-file","awslogs"]
ECS_LOGLEVEL=info
ECS_CLUSTER=default
EOF

Iptables and packet forwarding rules need to be created to pass IAM roles into task operations:

sudo sh -c "echo 'net.ipv4.conf.all.route_localnet = 1' >> /etc/sysctl.conf"
sudo sysctl -p /etc/sysctl.conf
sudo iptables -t nat -A PREROUTING -p tcp -d 169.254.170.2 --dport 80 -j DNAT --to-destination 127.0.0.1:51679
sudo iptables -t nat -A OUTPUT -d 169.254.170.2 -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 51679
sudo sh -c 'iptables-save > /etc/sysconfig/iptables'

Finally, a systemd unit file needs to be created:

sudo tee /etc/systemd/system/docker-container@ecs-agent.service << EOF
[Unit]
Description=Docker Container %I
Requires=docker.service
After=docker.service

[Service]
Restart=always
ExecStartPre=-/usr/bin/docker rm -f %i
ExecStart=/usr/bin/docker run --name %i \
--privileged \
--restart=on-failure:10 \
--volume=/var/run:/var/run \
--volume=/var/log/ecs/:/log:Z \
--volume=/var/lib/ecs/data:/data:Z \
--volume=/etc/ecs:/etc/ecs \
--net=host \
--env-file=/etc/ecs/ecs.config \
amazon/amazon-ecs-agent:latest
ExecStop=/usr/bin/docker stop %i

[Install]
WantedBy=default.target
EOF

sudo systemctl enable docker-container@ecs-agent.service
sudo systemctl start docker-container@ecs-agent.service
sudo systemctl status docker-container@ecs-agent.service

Ensure that the docker-container@ecs-agent service starts successfully.

Creating the NVIDIA-Docker image

With Docker installed, pull the latest nvidia/cuda:latest image from DockerHub.

docker pull nvidia/cuda:latest

It is best at this point to run the Docker container in interactive mode; a Dockerfile can be created afterwards. At the time of publication, the base image only includes CUDA 9.0, but NVIDIA has already provided the necessary repositories. Install CUDA 9.2 and supporting packages inside the Docker container (commands run inside the container are prefixed with the (docker) label):

docker run -it --runtime=nvidia --rm nvidia/cuda
(docker) apt update
(docker) apt install pkg-config build-essential wget curl nasm unzip \
                     git libglew-dev cuda-toolkit-9-2 python3-pip -y
(docker) pip3 install awscli

Next, download the FFMPEG 4.0, nv-codec-headers, and the Video Codec SDK 8.1 from the NVIDIA Developer platform.
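
The Video Codec SDK itself must be downloaded from the NVIDIA Developer site (a login is required) and copied into the container, for example with docker cp; the other sources can be fetched directly. The commands below are a sketch, and the exact file names and URLs may differ from what you download:

(docker) cd /root
(docker) git clone https://git.videolan.org/git/ffmpeg/nv-codec-headers.git
(docker) wget https://ffmpeg.org/releases/ffmpeg-4.0.tar.bz2
(docker) tar xjf ffmpeg-4.0.tar.bz2
(docker) unzip Video_Codec_SDK_8.1.24.zip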

First, change into the extracted nv-codec-headers directory and build and install the headers:

(docker) make
(docker) make install

Extract the ffmpeg-4.0 directory and compile and install FFmpeg:

(docker) ./configure --enable-cuda --enable-cuvid --enable-nvenc --enable-nonfree --enable-libnpp --extra-cflags=-I/usr/local/cuda/include --extra-ldflags=-L/usr/local/cuda/lib64
(docker) make -j 4
(docker) make install

Download and extract the NVIDIA Video Codec SDK 8.1. The “Samples” directory has a preconfigured Makefile that compiles the binaries in the SDK. After the build succeeds, confirm that the binaries are correctly set up.

(docker): ~/Video_Codec_SDK_8.1.24/Samples/AppEncode/AppEncCuda$ ./AppEncCuda -h
Options:
-i Input file path
-o Output file path
-s Input resolution in this form: WxH
-if Input format: iyuv nv12 yuv444 p010 yuv444p16 bgra bgra10 ayuv abgr abgr10
-gpu Ordinal of GPU to use
-codec Codec: h264 hevc
-preset Preset: default hp hq bd ll ll_hp ll_hq lossless lossless_hp
-profile H264: baseline main high high444; HEVC: main main10 frext
-444 (Only for RGB input) YUV444 encode
-rc Rate control mode: constqp vbr cbr cbr_ll_hq cbr_hq vbr_hq
-fps Frame rate
-gop Length of GOP (Group of Pictures)
-bf Number of consecutive B-frames
-bitrate Average bit rate, can be in unit of 1, K, M
-maxbitrate Max bit rate, can be in unit of 1, K, M
-vbvbufsize VBV buffer size in bits, can be in unit of 1, K, M
-vbvinit VBV initial delay in bits, can be in unit of 1, K, M
-aq Enable spatial AQ and set its stength (range 1-15, 0-auto)
-temporalaq (No value) Enable temporal AQ
-lookahead Maximum depth of lookahead (range 0-32)
-cq Target constant quality level for VBR mode (range 1-51, 0-auto)
-qmin Min QP value
-qmax Max QP value
-initqp Initial QP value
-constqp QP value for constqp rate control mode
Note: QP value can be in the form of qp_of_P_B_I or qp_P,qp_B,qp_I (no space)

Encoder Capability
# GPU H264 H264_444 H264_ME H264_WxH HEVC HEVC_Main10 HEVC_Lossless HEVC_SAO HEVC_444 HEVC_ME HEVC_WxH
0 Tesla V100-SXM2-16GB + + + 4096x4096 + + + + + + 8192x8192

Create a small script to be used for the 8K-encoding test inside the Docker container. Save the file as /root/nvenc-processor.sh. In the basic form, this script encodes using a single thread. For comparison, the same file is encoded using four threads.

(docker)
#!/bin/bash -xe
time aws s3 cp $S3_INPUT /mnt/8k.webm

time /usr/local/bin/ffmpeg -y -hwaccel cuda -i /mnt/8k.webm -c:v rawvideo -pix_fmt yuv420p /mnt/8k.yuv
time /root/Video_Codec_SDK/Samples/AppEncode/AppEncCuda/AppEncCuda -i /mnt/8k.yuv -o /mnt/8k.hevc -s 7680x4320 -codec hevc
time /root/Video_Codec_SDK/Samples/AppEncode/AppEncPerf/AppEncPerf -i /mnt/8k.yuv -s 7680x4320 -thread 4 -codec hevc

time aws s3 cp /mnt/8k.hevc $S3_OUTPUT

This script downloads a file from S3 and converts it to raw YUV with FFmpeg. It then uses the AppEncCuda and AppEncPerf samples to create the 8K-encoded file, which is uploaded back to S3. Commit your Docker container into a new Docker image:

docker commit -m "creating hvec-processor image" <containerid> nvidia-hvec:latest

Ensure that a Docker repository has been created in Amazon ECR. Choose Repositories, Create repository. After you open the repository, choose View Push Commands. Push the newly created image to your ECR repo.
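
The push commands shown in the console look roughly like the following sketch (the repository name, region, and account number are placeholders; the get-login syntax matches AWS CLI v1, which was current at the time of writing):

aws ecr create-repository --repository-name nvidia/nvidia-hvec --region us-east-1
$(aws ecr get-login --no-include-email --region us-east-1)
docker tag nvidia-hvec:latest <accountnumber>.dkr.ecr.us-east-1.amazonaws.com/nvidia/nvidia-hvec:latest
docker push <accountnumber>.dkr.ecr.us-east-1.amazonaws.com/nvidia/nvidia-hvec:latest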

After confirming that your image is in your ECR repo, delete all images locally in the instance:

docker rmi -f $(docker images -a -q)

Before stopping the instance, remove the ECS agent checkpoint file:

sudo rm -rf /var/lib/ecs/data/ecs_agent_data.json

Create an AMI from the instance, maintaining the attached EBS volume. Note the AMI ID.
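
A minimal sketch with the AWS CLI (the instance ID and names are placeholders); attached EBS volumes are included in the image by default:

aws ec2 create-image --instance-id i-0123456789abcdef0 \
    --name p3-hevc-ecs-worker \
    --description "P3 container instance with nvidia-docker2 and the ECS agent"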

Creating IAM role permissions

To ensure that access to ECS is controlled and to allow AWS Batch to be called, create two IAM roles:

  • BatchServiceRole allows AWS Batch to call services on your behalf.
  • ecsInstanceRole is specific to this workflow and adds permissions for S3FullAccess. This allows the container to read from and write to your S3 bucket. The following screenshot shows the example policy stack.

In AWS Batch, select the compute environment and create a managed compute environment. Assign a cluster name and min and max vCPUs values. Use the AMI ID, and IAM roles created earlier. Use the Spot pricing model with a consideration of running at 60% of the On-Demand price. Look at the current Spot price to see if more aggressive discounts are possible.

Note the cluster name. In Amazon ECS, you should see the cluster created. Next, create a job queue and associate this job queue with the compute environment created earlier. Note the job queue name.
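
If you prefer to script these two steps, the following AWS CLI sketch creates a managed Spot compute environment and an associated job queue; the environment and queue names, AMI ID, subnet, security group, and role ARNs are placeholders:

aws batch create-compute-environment \
    --compute-environment-name p3-hevc-ce --type MANAGED --state ENABLED \
    --service-role arn:aws:iam::<accountnumber>:role/BatchServiceRole \
    --compute-resources type=SPOT,bidPercentage=60,minvCpus=0,maxvCpus=64,desiredvCpus=0,instanceTypes=p3.2xlarge,imageId=ami-0123456789abcdef0,subnets=subnet-0abc1234,securityGroupIds=sg-0abc1234,instanceRole=ecsInstanceRole,spotIamFleetRole=arn:aws:iam::<accountnumber>:role/AmazonEC2SpotFleetRole

aws batch create-job-queue --job-queue-name p3-hevc-queue --state ENABLED \
    --priority 1 --compute-environment-order order=1,computeEnvironment=p3-hevc-ce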

Next, create a job definition file. This provides the job parameters to be used including mounting paths, CPU, and memory requirements.

{
    "containerProperties": {
        "mountPoints": [
            {
                "sourceVolume": "codec-data",
                "readOnly": false,
                "containerPath": "/mnt"
            }],
        "image": "<accountnumber>.dkr.ecr.us-east-1.amazonaws.com/nvidia/nvidia-hvec:latest",
        "command": ["/root/nvenc-processor.sh"],
        "volumes": [
            {
                "host": {"sourcePath": "/mnt"},
                "name": "codec-data"
            }],
        "memory": 32768,
        "vcpus": 8,
        "privileged": true,
        "environment": [
            {
                "name": "S3_INPUT",
                "value": "s3://<bucket>/<key_name>"
            },
            {
                "name": "S3_OUTPUT",
                "value": "s3://<bucket>"
            }
        ],
        "ulimits": []
    },
    "type": "container",
    "jobDefinitionName": "nvenc-test"
}

Save the file as nvenc-test.json and register the job in AWS Batch.

aws batch register-job-definition --cli-input-json file://nvenc-test.json

In the AWS Batch console, verify that the job queue created earlier has a priority of 1 and is associated with your compute environment. Then create a job, assigning a job name, the job definition, and the job queue. Add the environment variables for the S3 buckets, and ensure that these buckets and the input file exist.

S3_INPUT = s3://<bucket>/<key_name> 
S3_OUTPUT = s3://<bucket> 

Execute the job. In a few moments, the job should be in the Running state. Check the CloudWatch logs for an updated status of the job progression. Open the job record information and scroll down to CloudWatch metrics. The events are logged in a new AWS Batch log stream.
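
The job submission can also be scripted. A hedged sketch, reusing the nvenc-test job definition registered above and the placeholder queue name from the earlier sketch:

aws batch submit-job --job-name nvenc-8k-test \
    --job-queue p3-hevc-queue --job-definition nvenc-test \
    --container-overrides '{"environment":[{"name":"S3_INPUT","value":"s3://<bucket>/<key_name>"},{"name":"S3_OUTPUT","value":"s3://<bucket>"}]}'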

A 1-minute 8K YUV 4:2:0 file took approximately 10 minutes single-threaded (top panel), and 58 seconds using four threads (bottom panel). The nvenc-processor.sh script serves as a basic implementation of 8K encoding. Explore the options provided by the NVIDIA Video Codec SDK for additional encoding/decoding and transcoding options.

Conclusion

With AWS Batch, a customized container instance, and a dockerized NVIDIA video encoding platform, AWS can provide your HD, 4K, and now 8K media distribution. I invite you to incorporate this into your automated pipeline.

With some minor modification, it's possible to trigger this pipeline after a new file is uploaded into S3, and then execute it through AWS Lambda or as part of an AWS Step Functions workflow.

How to migrate your on-premises domain to AWS Managed Microsoft AD using ADMT

Post Syndicated from Danny Jenkins original https://aws.amazon.com/blogs/security/how-to-migrate-your-on-premises-domain-to-aws-managed-microsoft-ad-using-admt/

Customers often ask me how to migrate their on-premises Active Directory (AD) domain to AWS so they can be free of the operational management of their AD infrastructure. Frequently they are unsure how to make the migration easy. A common approach using the CSVDE utility doesn’t migrate attributes such as user passwords. This makes migration difficult and necessitates manual effort for a large part of the migration that can cause operational and security challenges when migrating to a new directory. So what’s changed?

You can now use the Active Directory Migration Toolkit (ADMT) along with the Password Export Service (PES) to migrate your self-managed AD to AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD. This enables you to migrate AD objects and encrypted passwords for your users more easily.

AWS Managed Microsoft AD is a managed service built on actual Microsoft Active Directory. AWS provides operational management of the domain controllers, and you use standard AD tools to administer users, groups, and computers. AWS Managed Microsoft AD enables you to take advantage of built-in Active Directory features, such as Group Policy, trusts, and single sign-on, and helps make it easy to migrate AD-dependent workloads into the AWS Cloud. With AWS Managed Microsoft AD, you can join Amazon EC2 and Amazon RDS for SQL Server instances to a domain, and use AWS Enterprise IT applications, such as Amazon WorkSpaces, and AWS SSO with Active Directory users and groups.

In this blog, I will show you how to migrate your existing AD objects to AWS Managed Microsoft AD. The source of the objects can be your self-managed AD running on EC2, on-premises, co-located, or even another cloud provider. I will show how to use ADMT and PES to migrate objects including users (and their passwords), groups, and computers.

The blog post assumes you are familiar with AD and with using the Remote Desktop Protocol client to sign in to and use EC2 Windows instances.

Background

In this post I will migrate user and computer objects, as well as passwords, to a new AWS Managed Microsoft AD directory. The source will be an on-premises domain.

This example migration will be for a fairly simple use case. Large customers with complex source domains or forests may have more complex processes involved to map users, groups, and computers to the single OU structure of AWS Managed Microsoft AD. For example, you may want to migrate an OU at a time. Customers with single domain forests may be able to migrate in fewer steps. Similarly, the options you might select in ADMT will vary based on what you are trying to accomplish.

To perform the migration, I will use the admin user account from my AWS Managed Microsoft AD. AWS creates the admin user account and delegates administrative permissions to the account for an organizational unit (OU) in the AWS Managed Microsoft AD domain. This account has most of the permissions required to manage your domain, and all the permissions required to complete this migration.

In this example, I have a Source domain called source.local that’s running in a 10.0.0.0/16 network range, and I want to migrate my users, groups, and computers to a destination domain in AWS Managed Microsoft AD called destination.local that’s running in a network range of 192.168.0.0/16.

To migrate users from source.local to destination.local, I need a migration computer that I join to the destination.local domain on which I will run ADMT. I also use this machine to perform administrative tasks on my AWS Managed Microsoft AD. As a prerequisite for ADMT, I must install Microsoft SQL Express 2016 on the migration computer. I also need an administrative account that has permissions in both the source and destination AD domains. To do this, I will use an AD trust and will add my AWS Managed Microsoft AD admin account from destination.local to my source.local domain. Next I will install ADMT on the migration computer, and will run the PES on one of the source.local domain controllers. Finally, I will migrate the users and computers.

For this example, I have a handful of users, groups, and computers, shown in the source domain in these screen shots, that I will migrate:
 

Figure 1: Example source users

 

Figure 2: Example client computers

In the remainder of this blog, I will show you how to do the migration in 5 main steps:

  1. Prepare the forests, migration computer, and administrative account.
  2. Install SQL Express and ADMT on the migration computer.
  3. Configure ADMT and PES.
  4. Migrate users and groups.
  5. Migrate computers.

Step 1: Prepare the forests, migration computer, and administrative account

To migrate users and passwords from the source domain to AWS Managed Microsoft AD, you must have a 2-way forest trust. The trust from the source domain to AWS Managed Microsoft AD enables you to add the admin account from the AWS Managed Microsoft AD to the source domain. This is necessary so you can grant the AWS Managed Microsoft AD admin account permissions in your source AD directory so it can read the attributes to migrate. I’ve already created a two-way forest trust between these domains. You should do the same by following this guide. Once your trust has been created, it should show up in the AWS console as Verified.
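
If you prefer the AWS CLI over the console for the AWS side of the trust, the call looks roughly like the sketch below; the directory ID, trust password, and DNS forwarder IP addresses are placeholders for your environment:

aws ds create-trust --directory-id d-1234567890 \
    --remote-domain-name source.local \
    --trust-password '<trust-password>' \
    --trust-direction 'Two-Way' --trust-type Forest \
    --conditional-forwarder-ip-addrs 10.0.0.10 10.0.0.11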

The ADMT tool should be installed on a computer that isn’t the domain controller in the destination domain destination.local. For this, I will launch an EC2 instance in the same VPC as my domain controller and I will add it to the destination.local domain using the EC2 seamless domain join feature. This will act as my ADMT transfer machine.

  1. Launch a Microsoft Windows 2012 R2 instance.
  2. Complete a domain join to the target domain destination.local. You can complete this manually, or alternatively you can use AWS Systems Manager to complete a seamless domain join as covered here.
  3. Sign into the instance using RDP and use Active Directory Users and Computers (ADUC) to add the AWS Managed Microsoft AD admin user from the destination.local domain to the source.local domain’s built-in administrators group (you will not be able to add the admin user as a domain admin). For information on how to set up this instance to use ADUC, please see this documentation.
     
Figure 3: The "Administrators Properties" dialog box

Step 2: Install SQL Express and ADMT on the migration computer

Next, I need to install SQL Express and ADMT on the migration computer by following these steps.

  1. Install Microsoft SQL Express 2016 on this computer with a default install.
  2. Download ADMT version 3.2 from the Microsoft website.
  3. Once it has downloaded, run the installer and, when setting the tool up, on the Database Selection page of the wizard, for Database (Server\Instance), type the local instance of Microsoft SQL Express we previously installed to work with ADMT.
     
Figure 4: Specify the "Database (Server\Instance)"

  4. On the Database Import page of the wizard, select No, do not import data from an existing database (Default).
     
Figure 5: The "Database Import" dialog box

  5. Complete the rest of the installation using all of the default options.

Step 3: Configure ADMT and PES

I’ll use PES to take care of encrypted password synchronization, but before I configure that, I need to create an encryption key that will be used during this process to encrypt the password migration.

  1. On the ADMT transfer machine, open an elevated Command Prompt and use the following format to create the encryption key.
     
    admt key /option:create /sourcedomain:<SourceDomain> /keyfile:<KeyFilePath> /keypassword:{<password>|*}
     
    Here’s an example:
     
    admt key /option:create /sourcedomain:source.local /keyfile:c:\ /keypassword:password123

    Note: If you get an error stating that the command is not found, close and reopen Command Prompt to refresh the path locations to the ADMT executable, and then try again.

  2. Now, I can download and start the install for the Password Export Server.
  3. Start the install and, in the ADMT Password Migration DLL Setup window, browse to the encryption file you created in the previous step.
  4. When prompted, enter the password used in the ADMT encryption command.
  5. Run PES using the local system account. Note that this will prompt a restart of the domain controller you’re installing PES on.
  6. Once the domain controller has rebooted, open services.msc and start the Password Export Server Service, which is currently set to Manual. You might choose to set this to automatic if it’s likely your DC will be rebooted again before the end of your migration.
     
Figure 6: Start the Password Export Server Service

  7. You can now open the Active Directory Migration Tool: Control Panel > System and Security > Administrative Tools > Active Directory Migration Tool.
  8. Right-click Active Directory Migration Tool to see the migration options:
     
Figure 7: List of migration options

Step 4: Migrate users and groups

  1. In the Domain Selection page, select or type the Source and Target domains, and then select Next.
  2. On the User Selection page, select the users to migrate. You can use an include file if you have a large domain. Select Next.
  3. On the Organizational Unit Selection page, select the destination OU that you want to migrate your users across to, and then select Next. AWS Managed Microsoft AD gives you a managed OU where you can create your OU tree structure.

    In this example, I will place them in Users OU:
     
    LDAP://destination.local/OU=Users,OU=destination,DC=destination,DC=local
     

  4. On the Password Options page, select Migrate passwords, and then select Next. This will contact PES running on the source domain controller.
  5. On the Account Transitions Page, decide how to handle the migration of user objects. In this example, I’m going to replicate the state from the source domain. Migrating SID history is beneficial when you’re doing long, staged migrations where users may need to access resources in the source and destination domain before migration is complete. At this time, AWS Managed Microsoft AD doesn’t support migrating user SIDs. I select Target same as source, and then select Next. Again, what you choose to do might be different.
     
Figure 8: The "Account Transition Options" dialog

  6. Now, let’s customize the transfer. The following screen shot shows the commonly selected options on the User Options page of the User Account Migration Wizard:
     
Figure 9: Common user options

It’s likely you’ll have more than one migration pass, so choosing how you handle existing objects is important. This will be a single run for us, but the default behavior is to not migrate if the object already exists (see the image of the Conflict Management page below). If you’re running multiple passes, you’ll will want to look at options that involve merging conflicting objects. The method you select will depend on your use case. If you don’t know where to start, read this article.
 

Figure 10: The "Conflict Management" dialog box

In my example, you can see my 3 users, and any groups they were members of, have been migrated.
 

Figure 11: The "Migration Progress" window

We can verify this by checking the users exist in our destination.local domain:
 

Figure 12: Checking the users exist in the destination.local domain

Step 5: Migrate computers

Now, I’ll move on to computer objects.

  1. Open the Active Directory Migration Tool: Control Panel > System and Security > Administrative Tools > Active Directory Migration Tool.
  2. Right-click Active Directory Migration Tool and select Computer Migration Wizard.
  3. Select the computers you want to migrate to the new domain. I’ll select four computers for migration.
     
Figure 13: Four computers that will be migrated

  4. On the Translate Objects page, select which access controls you want to reapply during the migration, and then select Next.
     
Figure 14: The "Translate Objects" dialog box

    The migration process will quickly show completed, but we need to make sure the entire process worked.

  5. To verify the migration was successful, select Close, and the migration tool will open a new window that has a link to the migration log. Check the log file to see that it has started the process of migrating these four computers:


    2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-56SQFFFJCR1.source.local

    2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-IG2V2NAN1MU.source.local

    2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-QKQEJHUEV27.source.local

    2017-08-11 04:09:01 The Active Directory Migration Tool Agent will be installed on WIN-SE98KE4Q9CR.source.local

    If the admin user doesn’t have access to the C$ or admin$ share on the computer in the source domain share, then then installation of the agent will fail as shown here:


    2017-08-11 04:09:29 ERR2:7006 Failed to install agent on \\WIN-IG2V2NAN1MU.source.local, rc=5 Access is denied.

    Once the agent is installed, it will perform a domain disjoin from source.local and a join to destination.local. The log file will update when this has been successful:


    2017-08-11 04:13:29 Post-check passed on the computer ‘WIN-SE98KE4Q9CR.source.local’. The new computer name is ‘WIN-SE98KE4Q9CR.destination.local’.

    2017-08-11 04:13:29 Post-check passed on the computer ‘WIN-QKQEJHUEV27.source.local’. The new computer name is ‘WIN-QKQEJHUEV27.destination.local’.

    2017-08-11 04:13:29 Post-check passed on the computer ‘WIN-56SQFFFJCR1.source.local’. The new computer name is ‘WIN-56SQFFFJCR1.destination.local’.

    You can then view the new computer objects in the destination domain.

  6. Log in to one of the old source.local computers and, by looking at the computer’s System Properties, confirm that the computer is now a member of the new destination.local domain.

    Figure 15: Confirming that the computer is a member of the destination.local domain
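
If you’re migrating more than a handful of computers, scanning the migration log programmatically saves time. The following is a minimal Python sketch that counts the "Post-check passed" confirmations and surfaces any agent-installation errors such as the ERR2:7006 line above; the log file name is a placeholder, so point it at wherever you saved a copy of the ADMT log.

import re
from pathlib import Path

log_path = Path("Migration.log")  # hypothetical path to a saved copy of the ADMT log

errors, post_checks = [], []
for line in log_path.read_text(encoding="utf-8", errors="ignore").splitlines():
    if re.search(r"\bERR\d+:\d+\b", line):      # e.g. "ERR2:7006 Failed to install agent..."
        errors.append(line.strip())
    elif "Post-check passed" in line:           # successful disjoin/rejoin confirmations
        post_checks.append(line.strip())

print(f"{len(post_checks)} computers passed post-check, {len(errors)} errors")
for line in errors:
    print("   " + line)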

Summary

In this simple example, I showed how to migrate users and their passwords, groups, and computer objects from an on-premises deployment of Active Directory to AWS Managed Microsoft AD. I created a management instance running SQL Express and ADMT, created a forest trust and granted permissions for an account to use ADMT to move users, configured ADMT and the PES tool, and then stepped through the migration using ADMT.

ADMT gives us a great way to migrate to the managed Microsoft AD service, with powerful customization of the migration, and it does so more securely through encrypted password synchronization. You may need to do additional investigation and planning if the complexity of your environment requires a different approach to some of these steps.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Directory Service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Combating online disinformation: a draft code of practice

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/07/21/disinfo-2/

The European Commission’s working group on online disinformation has presented a draft code of practice for combating online disinformation.

The Code of Practice and the Annex set out the Commission’s objectives, as outlined in its communication on online disinformation, and should lead to a measurable reduction of disinformation in the online environment.

The final text is expected in September.

More information is available on the European Commission’s website.

Preparing for the withdrawal of the United Kingdom from the EU

Post Syndicated from nellyo original https://nellyo.wordpress.com/2018/07/19/uk-2/

From the European Commission’s press release of 19 July 2018:

Today the European Commission adopted a communication presenting the ongoing work on preparing for all consequences of the United Kingdom’s withdrawal from the European Union.

On 30 March 2019, the United Kingdom will leave the EU and become a third country. This will have negative consequences for citizens, businesses, and administrations in both the United Kingdom and the EU, including the introduction of new checks at the EU’s external border with the United Kingdom, changes in the validity of licences, certificates, and authorisations issued by the United Kingdom, and different rules for data transfers.

The text adopted today calls on Member States and private parties to step up their preparations. It was drawn up in response to last month’s request by the European Council (Article 50) to intensify preparedness at all levels and for all outcomes.

Text of the communication

List of pending legislative initiatives on “preparedness”

European Commission website on preparing for Brexit (including the Brexit preparedness notices)

European Council (Article 50) conclusions of 29 June 2018

European Council (Article 50) guidelines on the framework for the future EU-UK relationship (23 March 2018)

 

How to connect to AWS Secrets Manager service within a Virtual Private Cloud

Post Syndicated from Divya Sridhar original https://aws.amazon.com/blogs/security/how-to-connect-to-aws-secrets-manager-service-within-a-virtual-private-cloud/

You can now use AWS Secrets Manager with Amazon Virtual Private Cloud (Amazon VPC) endpoints powered by AWS PrivateLink and keep traffic between your VPC and Secrets Manager within the AWS network.

AWS Secrets Manager is a secrets management service that helps you protect access to your applications, services, and IT resources. This service enables you to rotate, manage, and retrieve database credentials, API keys, and other secrets throughout their lifecycle. When your application running within an Amazon VPC communicates with Secrets Manager, this communication traverses the public internet. By using Secrets Manager with Amazon VPC endpoints, you can now keep this communication within the AWS network and help meet your compliance and regulatory requirements to limit public internet connectivity. You can start using Secrets Manager with Amazon VPC endpoints by creating an Amazon VPC endpoint for Secrets Manager with a few clicks in the VPC console or via the AWS CLI. Once you create the VPC endpoint, you can start using it without making any code or configuration changes in your application.

The diagram demonstrates how Secrets Manager works with Amazon VPC endpoints. It shows how I retrieve a secret stored in Secrets Manager from an Amazon EC2 instance. When the request is sent to Secrets Manager, the entire data flow is contained within the VPC and the AWS network.

Figure 1: How Secrets Manager works with Amazon VPC endpoints

Solution overview

In this post, I show you how to use Secrets Manager with an Amazon VPC endpoint. In this example, we have an application running on an EC2 instance in a VPC named vpc-5ad42b3c. This application requires the database password for an RDS instance running in the same VPC. I have stored the database password in Secrets Manager. I will now show how to:

  1. Create an Amazon VPC endpoint for Secrets Manager using the VPC console.
  2. Use the Amazon VPC endpoint via AWS CLI to retrieve the RDS database secret stored in Secrets Manager from an application running on an EC2 instance.

Step 1: Create an Amazon VPC endpoint for Secrets Manager

  1. Open the Amazon VPC console, select Endpoints, and then select Create Endpoint.
  2. Select AWS Services as the Service category, and then, in the Service Name list, select the Secrets Manager endpoint service named com.amazonaws.us-west-2.secretsmanager.

    Figure 2: Options to select when creating an endpoint

  3. Specify the VPC you want to create the endpoint in. For this post, I chose the VPC named vpc-5ad42b3c where my RDS instance and application are running.
  4. To create a VPC endpoint, you need to specify the private IP address range in which the endpoint will be accessible. To do this, select the subnet for each Availability Zone (AZ). This restricts the VPC endpoint to the private IP address range specific to each AZ and also creates an AZ-specific VPC endpoint. Specifying more than one subnet-AZ combination improves fault tolerance and keeps the endpoint accessible from another AZ if one AZ fails. Here, I specify subnet IDs for Availability Zones us-west-2a, us-west-2b, and us-west-2c:

    Figure 3: Specifying subnet IDs

  5. Select the Enable Private DNS Name checkbox for the VPC endpoint. Private DNS resolves the standard Secrets Manager DNS hostname, secretsmanager.<region>.amazonaws.com, to the private IP addresses associated with the VPC endpoint-specific DNS hostname. As a result, you can access the Secrets Manager VPC endpoint via the AWS Command Line Interface (AWS CLI) or AWS SDKs without making any code or configuration changes to update the Secrets Manager endpoint URL.

    Figure 4: The “Enable Private DNS Name” checkbox

  6. Associate a security group with this endpoint. The security group enables you to control the traffic to the endpoint from resources in your VPC. For this post, I chose to associate the security group named sg-07e4197d that I created earlier. This security group has been set up to allow all instances running within VPC vpc-5ad42b3c to access the Secrets Manager VPC endpoint. Select Create endpoint to finish creating the endpoint.

    Figure 5: Associate a security group and create the endpoint

  7. To view the details of the endpoint you created, select the link on the console.

    Figure 6: Viewing the endpoint details

  8. The Details tab shows all the DNS hostnames generated while creating the Amazon VPC endpoint that can be used to connect to Secrets Manager. I can now use the standard endpoint secretsmanager.us-west-2.amazonaws.com or one of the VPC-specific endpoints to connect to Secrets Manager within vpc-5ad42b3c, where my RDS instance and application also reside. (A scripted version of these console steps is sketched after this list.)

    Figure 7: The “Details” tab
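
If you’d rather script the endpoint creation than click through the console, the same configuration can be expressed with the AWS SDK. The following is a minimal sketch using boto3 (the AWS SDK for Python); the subnet IDs are placeholders, while the VPC ID, security group, and service name are the example values used in this post.

import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-5ad42b3c",                                  # example VPC from this post
    ServiceName="com.amazonaws.us-west-2.secretsmanager",  # Secrets Manager endpoint service
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],  # placeholder subnet IDs
    SecurityGroupIds=["sg-07e4197d"],                      # example security group from this post
    PrivateDnsEnabled=True,                                # keeps the standard endpoint URL working
)

print(response["VpcEndpoint"]["VpcEndpointId"])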

Step 2: Access Secrets Manager through the VPC endpoint

Now that I have created the VPC endpoint, all traffic between my application running on an EC2 instance within the VPC named vpc-5ad42b3c and Secrets Manager stays within the AWS network. This connection uses the VPC endpoint, and I can retrieve my RDS database secret over it with the AWS SDK or CLI. As an example, I can use the CLI command shown below to retrieve the current version of my RDS database secret:

$ aws secretsmanager get-secret-value --secret-id MyDatabaseSecret --version-stage AWSCURRENT

Because my AWS CLI is configured for the us-west-2 region, it uses the standard Secrets Manager endpoint URL https://secretsmanager.us-west-2.amazonaws.com. This standard endpoint automatically routes to the VPC endpoint, because I enabled Private DNS hostname support while creating the VPC endpoint. The command above results in the following output:


{
  "ARN": "arn:aws:secretsmanager:us-west-2:123456789012:secret:MyDatabaseSecret-a1b2c3",
  "Name": "MyDatabaseSecret",
  "VersionId": "EXAMPLE1-90ab-cdef-fedc-ba987EXAMPLE",
  "SecretString": "{\n  \"username\":\"david\",\n  \"password\":\"BnQw&XDWgaEeT9XGTT29\"\n}\n",
  "VersionStages": [
    "AWSCURRENT"
  ],
  "CreatedDate": 1523477145.713
} 
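
The same retrieval works from application code without any endpoint-specific configuration, because Private DNS routes the standard endpoint to the VPC endpoint. The following is a minimal boto3 sketch; it assumes the instance profile or configured credentials are allowed to call secretsmanager:GetSecretValue on this secret.

import json

import boto3

secrets = boto3.client("secretsmanager", region_name="us-west-2")

# Uses the standard endpoint URL; Private DNS resolves it to the VPC endpoint.
response = secrets.get_secret_value(
    SecretId="MyDatabaseSecret",
    VersionStage="AWSCURRENT",
)

credentials = json.loads(response["SecretString"])
print(credentials["username"])  # the password is available as credentials["password"]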

Summary

I’ve shown you how to create a VPC endpoint for AWS Secrets Manager and retrieve an RDS database secret using the VPC endpoint. Secrets Manager VPC endpoints help you meet compliance and regulatory requirements to limit public internet connectivity from your VPC. They enable your applications running within a VPC to use Secrets Manager while keeping traffic between the VPC and Secrets Manager within the AWS network. You can start using Amazon VPC endpoints for Secrets Manager by creating endpoints in the VPC console or via the AWS CLI. Once an endpoint is created, your applications that interact with Secrets Manager do not require any code or configuration changes.

To learn more about connecting to Secrets Manager through a VPC endpoint, read the Secrets Manager documentation. For guidance about your overall VPC network structure, see Practical VPC Design.

If you have questions about this feature or anything else related to Secrets Manager, start a new thread in the Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Zelda casemod with levitating Triforce

Post Syndicated from Liz Upton original https://www.raspberrypi.org/blog/zelda-casemod-with-levitating-triforce/

I know: you’ve seen a bajillion RetroPie implementations before, and a bajillion casemods to go with them. But this one’s so hopelessly, magnificently splendid that we felt we had to share. Magnetic levitation. It’s not just for trains and frogs.

This Zelda casemod, covered with engraved pine from the forests of Hyrule and shiny brass mouldings hammered by…dwarves or something, would be gorgeous as-is. The levitating, mirrored Triforce twizzling away on top is the icing on the cake; and a very lovely cake it is too. Here’s some video (in Spanish, with English subtitles) from Tuberviejuner in Spain, walking you through the build.

Raspberry pi Zelda mod: MagicBerry WindWaker by Makomod & Tuberviejuner.


This magical piece of work is by MakoMod, a case modder who splits his time between Barcelona and Texas. There’s a Pi inside running RetroPie, and a separate electromagnetic device levitating the Triforce up top. If you’re interested in incorporating something like this into one of your own builds, there are two ways to go: make your own from scratch, as DrewPaul Designs has done here, or buy a pre-built kit.

If you get in there quickly, you’ve a chance to own this one-off case: MakoMod is auctioning it on eBay. You’ve got until July 14 2018 to bid – good luck!

The post Zelda casemod with levitating Triforce appeared first on Raspberry Pi.