Additional Failover Control for Amazon Aurora

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/additional-failover-control-for-amazon-aurora/

Amazon Aurora is a fully managed, MySQL-compatible relational database engine that combines the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases (read my post, Amazon Aurora – New Cost-Effective MySQL-Compatible Database Engine for Amazon RDS, to learn more).
Aurora allows you to create up to 15 read replicas to increase read throughput and for use as failover targets. The replicas share storage with the primary instance and provide lightweight, fine-grained replication that is almost synchronous, with a replication delay on the order of 10 to 20 milliseconds.
Additional Failover Control

Today we are making Aurora even more flexible by giving you control over the failover priority of each read replica. Each read replica is now associated with a priority tier (0-15). In the event of a failover, Amazon RDS will promote the read replica that has the highest priority (the lowest numbered tier). If two or more replicas have the same priority, RDS will promote the one that is the same size as the previous primary instance.
You can set the priority when you create the Aurora DB instance, in the console or from the command line.
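For example, here is a sketch using the AWS CLI (the identifiers and instance class are placeholders); the --promotion-tier parameter sets the priority tier when you add an instance to an Aurora cluster:

aws rds create-db-instance \
    --db-instance-identifier my-aurora-replica \
    --db-cluster-identifier my-aurora-cluster \
    --db-instance-class db.r3.large \
    --engine aurora \
    --promotion-tier 1

You can change the tier of an existing instance the same way with modify-db-instance.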

This feature is available now and you can start using it today. To learn more, read about Fault Tolerance for an Aurora DB Cluster.

— Jeff;

Free qwikLABS Online Labs Through the End of March

Post Syndicated from Craig Liebendorfer original https://blogs.aws.amazon.com/security/post/Tx1X7XGKCJDDGGV/Free-qwikLABS-Online-Labs-Through-the-End-of-March

To celebrate 10 years of AWS, qwikLABS is offering 95 free online labs through the end of March 2016. Here are some of the labs related to security and compliance that you can take for free while the offer is live:

Introduction to AWS Identity and Access Management (IAM)

Introduction to AWS Key Management Service

Performing a Basic Audit of Your AWS Environment

Auditing Changes to Amazon EC2 Security Groups

Auditing Your Security with AWS Trusted Advisor

Microsoft ADFS and AWS IAM

These self-paced labs from this AWS training partner let you learn on your own schedule. Start today!

– Craig

How to Set Up Uninterrupted, Federated User Access to AWS Using AD FS

Post Syndicated from Tracy Pierce original https://blogs.aws.amazon.com/security/post/Tx1MHOLKFJESWBS/How-to-Set-Up-Uninterrupted-Federated-User-Access-to-AWS-Using-AD-FS

Microsoft Active Directory Federation Services (AD FS) is a common identity provider that many AWS customers use to give federated users access to the AWS Management Console. AD FS uses multiple certificates to ensure secure communication between servers and to act as authentication mechanisms. One such mechanism is called the token-signing certificate.

When the token-signing certificate expires or is changed, the trust relationship between the claims provider (AD FS) and the relying party (AWS Security Token Service, or AWS STS) is broken. Without a valid certificate to prove the calling server’s identity, the receiving party cannot verify the signature on the request, so the request is rejected and federated users can no longer access the AWS Management Console. Luckily, this can be avoided!

In this blog post, I explain how you can use the AutoCertificateRollover feature in AD FS to enable uninterrupted connections between your claims provider and your relying party trust. I also show how to set up a secondary certificate manually in AD FS to avoid service interruption when a server certificate expires.

This post assumes that you have a working AD FS configuration. If you do not, you can follow the steps in the following blog posts to get you up and running:

Enabling Federation to AWS Using Windows Active Directory, AD FS, and SAML 2.0

How to Set Up SSO to the AWS Management Console for Multiple Accounts by Using AD FS and SAML 2.0

Let’s start by taking a quick look at how AD FS uses the token-signing certificate.

Background

The token-signing certificate is used by AD FS to sign the Security Assertion Markup Language (SAML) assertion—also known as an AuthN response—that AD FS sends to a relying party, vouching for information from Active Directory (AD) such as Role, RoleSessionName, and X509 certificates. For this post’s use case, the relying party is AWS STS, which AD FS uses to provide federated users access to the AWS Management Console and AWS APIs. The following diagram illustrates the authentication process.

  1. The flow is initiated when a user—let’s call him Bob—browses to the AD FS sample site (https://Fully.Qualified.Domain.Name.Here/adfs/ls/IdpInitiatedSignOn.aspx) inside his domain. When Bob installed AD FS, he was given a new virtual directory named adfs for his default website, which includes this sample site.
  2. The sign-on page authenticates Bob against Active Directory. Depending on the browser Bob is using, he might be prompted for his Active Directory user name and password.
  3. Bob’s browser receives an AuthN (authentication) response from AD FS.
  4. Bob’s browser posts the AuthN response to the AWS sign-in endpoint for SAML (https://signin.aws.amazon.com/saml). Behind the scenes, sign-in uses the AssumeRoleWithSAML API to request temporary security credentials and then constructs a sign-in URL for the AWS Management Console.
  5. Bob’s browser receives the sign-in URL and is redirected to the AWS Management Console.

For a more complete description of the certificates that AD FS uses, see Understanding Certificates Used by AD FS.

Key AD FS settings and how to change them

AD FS has default settings to help ensure that your certificates never expire. You can see these default settings by opening a Windows PowerShell window with administrative rights on your AD FS server, and then running Get-ADFSProperties.

If you are using AD FS 2.0, you first need to import the AD FS cmdlets for Windows by running the following commands. If you are using AD FS 3.0, these cmdlets will already be installed for you.

Add-PSSnapin Microsoft.Adfs.PowerShell
Get-ADFSProperties

The output should resemble what the following screenshot shows.

The output shows the settings for your AD FS properties: whether AutoCertificateRollover is enabled, the host name of your server, timeouts, and so on. These settings are handy for AD FS server administration tasks, such as checking certificate settings, timeouts, domains, and containers. However, for the purposes of this post, the settings and values I will focus on are the following:

AutoCertificateRollover         : True
CertificateGenerationThreshold  : 20
CertificatePromotionThreshold   : 5
CertificateRolloverInterval     : 720

To explain these settings:

  • When set to True, AutoCertificateRollover enables the automatic generation of a new self-signed certificate to sign your AuthN response.
  • There is also a default CertificateGenerationThreshold of 20 days, which creates the secondary certificate 20 days before the original certificate expires. This certificate will not be used until the CertificatePromotionThreshold is reached.
  • The default setting for CertificatePromotionThreshold is 5 days. Five days after the secondary certificate is created, it is automatically promoted to primary. This gives your relying parties a 5-day window to update the federation metadata so that they experience zero downtime. After this 5-day window, the old primary certificate is set to secondary and expires on its original expiration date. You must download the metadata document again so that it contains the newly created certificate.

Because the relying parties count on having a valid certificate to verify identity, they need a copy of this new metadata document to validate requests. This minimizes the risk of the certificate expiring and causing downtime. Each relying party trust still needs the new metadata document so that a copy of the new certificate is readily available for authentication. If a relying party trust does not update the metadata document, connections to that service could experience downtime after the original primary certificate expires.

  • AD FS checks the certificate status every CertificateRolloverInterval, which is in minutes. The default value is 720 minutes (12 hours).

You can change the aforementioned AD FS properties by using the Set-ADFSProperties cmdlet. For example, you can change the CertificatePromotionThreshold from 5 to 10 days by running the following command.

Set-ADFSProperties -CertificatePromotionThreshold 10

This change gives you more time to deal with the relying party update.

If certificate rotation is something you would like to handle manually because you want to have full control, you can disable the rollover feature by running the following command.

Set-ADFSProperties -AutoCertificateRollover $false

To avoid service interruption, you must download the new metadata document and upload it to the relying parties that use it, before the primary certificate expires.
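If you want to script that download, here is a minimal PowerShell sketch (the server name and output path are placeholders for your own values):

# Download the current federation metadata document from the AD FS server.
Invoke-WebRequest -Uri "https://sts.example.com/FederationMetadata/2007-06/FederationMetadata.xml" -OutFile "C:\Temp\FederationMetadata.xml"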

Next, you will change permissions on a certificate so that AD FS has access to it.

Change permissions on the private key of the certificate

A prerequisite for completing this section is that you acquire a new, unexpired certificate. You can do this by creating a self-signed certificate as explained in Enabling Federation to AWS Using Windows Active Directory, ADFS, and SAML 2.0, or you can purchase a Secure Sockets Layer (SSL) certificate from the third-party provider of your choice. Manual rotation is required if you purchase an SSL certificate from a trusted third party because AD FS does not have a rollover feature for these certificates.

After you have installed a new, unexpired certificate on your AD FS server, you must ensure that the AD FS service account has access to the private key for this new server certificate.

To change the permissions on the private key of the certificate:

  1. On your AD FS server, open the MMC Console. Click Start, type MMC, and then press Enter.
  2. In the MMC Console, click File and then click Add/Remove Snap-in. Then click Certificates, select Computer account and Local computer, and then click OK.
  3. Under Certificates (Local Computer), expand Personal, and then click Certificates (as shown in the following screenshot).

  4. Right-click the new certificate you just installed, and then click All Tasks > Manage Private Keys.
  5. On the Permissions tab, click Add and grant the AD FS service account full control. Do this by searching for the service account name you selected when setting up your AD FS server, clicking OK, and then selecting the check box for Full Control under Allow (as shown in the following screenshot). Then click Apply.

  6. If you are not sure which account name the service runs as, you can retrieve this information by clicking Start and Run, and then typing services.msc. As shown in the following screenshot, check the Log On As column for Active Directory Federation Service. In my case, the account is ADFSSVC$. Windows adds a $ to the end of managed service accounts, such as accounts that are used to run services like AD FS. User accounts do not have such a suffix.

Now that AD FS has access to the private key of the new certificate, you can open the AD FS console and configure the server to add this new certificate as a secondary certificate when signing the SAML AuthN response. This is to ensure that a secondary certificate is available for identity authentication when the primary certificate expires.

Manually add the new certificate as the secondary certificate

As explained previously in this post, the only certificate that AWS needs from AD FS is the token-signing certificate.
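You can check which token-signing certificates AD FS currently holds, and which one is primary, with a quick PowerShell sketch:

# List token-signing certificates; the IsPrimary property shows which
# one currently signs SAML assertions.
Get-ADFSCertificate -CertificateType Token-Signing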

To add the new certificate manually for AD FS server authentication:

  1. Open the AD FS console, click the Service folder, and then click the Certificates folder, as shown in the following screenshot.

  2. Click Add Token-Signing Certificate and select the certificate you wish to use as the secondary certificate. After you upload it, this certificate is listed as Secondary.

Note: In order to complete this process, you must disable any self-signed, autorotating certificates you may have configured.

  3. Open Windows PowerShell as an Administrator and run the following commands. If you are using AD FS 3.0, you can skip the first command; the cmdlets are already installed as a module.
Add-PSSnapin Microsoft.ADFS.PowerShell
Set-ADFSProperties -AutoCertificateRollover $false
  4. Download the new copy of the metadata document from the following link (placed in a browser on your AD FS server): https://<yourservername>/FederationMetadata/2007-06/FederationMetadata.xml.

You are still using the soon-to-expire certificate (the original primary certificate), but the new certificate is set as a secondary certificate. If you look at your FederationMetadata.xml file now, you will see that both certificates are included. The trust with your relying party is based on the information shared in the FederationMetadata.xml file, so the final step is to update your relying parties with this new metadata document.
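One way to verify that from PowerShell (a sketch; the file path is a placeholder, and the XPath deliberately ignores namespaces):

# Count the X509Certificate entries in the downloaded metadata document;
# during a rollover you should see both the old and the new certificate.
(Select-Xml -Path "C:\Temp\FederationMetadata.xml" -XPath "//*[local-name()='X509Certificate']").Count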

To update your relying parties with the new metadata document:

  1. Sign in to the AWS Management Console.
  2. Click Services and then click IAM to go to the IAM console.
  3. Click Identity Providers in the left pane.
  4. Select the name of the identity provider (IdP) you created for your SAML SSO.
  5. Click Upload Metadata and select the file you downloaded from your AD FS server via the federation metadata link just provided in the previous section.
  6. Click Upload. The IdP now has an updated FederationMetadata.xml document to validate authentication requests from your claims provider.

Updating the metadata document in AWS after creating and adding the secondary certificate ensures there is no downtime later when transitioning from the primary certificate to the secondary certificate.
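If you manage AWS from PowerShell, the same update can be scripted with the AWS Tools for PowerShell; a sketch (the provider ARN and file path are placeholders):

# Read the downloaded metadata document and push it to the existing
# IAM SAML provider so that both certificates are trusted.
$metadata = Get-Content -Path "C:\Temp\FederationMetadata.xml" -Raw
Update-IAMSAMLProvider -SAMLProviderArn "arn:aws:iam::123456789012:saml-provider/ADFS" -SAMLMetadataDocument $metadata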

Update the AWS configuration before the primary certificate expires

When you have the new federation metadata with the soon-to-expire (primary) certificate and the new (secondary) certificate that was either automatically generated by AD FS or one that you installed, you must update the relying party configuration before the primary certificate expires.

To update the AWS configuration:

  1. Sign in to the AWS Management Console as an IAM user that has access to update IdPs.
  2. In the IAM console in the Identity Providers section, select the IdP you want to update.
  3. Click Upload Metadata and then click Choose File. Navigate to the directory into which you downloaded the new FederationMetadata.xml file and choose the file. Click Upload.

Test it!

Now that you have uploaded the new metadata document with both certificates listed, you can test a sign-in to the AWS Management Console through your normal means of federation. This sign-in should complete without issue. However, if you experience any errors, you can check a few things:

  1. Is the primary certificate still listed in the AD FS console?
  2. Did the relying party upload the certificate correctly?
  3. Check the FederationMetadata.xml file to ensure all security information is still being passed as before (Role, RoleSessionName, X509 certificate, and so on).

If you are using autorollover, the process is complete. No further action is required on your part to ensure a valid certificate is used for identity validation between your claims provider and relying trust. If you manually set the certificate, to ensure zero downtime you must rotate the secondary certificate to become the primary certificate before it expires.

To rotate the secondary certificate to be the primary certificate:

  1. Open the AD FS console and click Certificates.
  2. Right-click the new certificate you uploaded, and then click Set as Primary.

To help keep your setup “clean,” follow these steps to remove the expired certificates from your server:

  1. Open the AD FS console and click Certificates.
  2. Select the old certificate under Token-Signing Certificate, and then click Delete.

Going forward, server certificate expiration should not affect your ability to connect with AWS via your SAML setup.

If you have comments about this blog post, please add them to the “Comments” section below. If you have questions about this blog post, start a new thread on the IAM forum.

– Tracy

Using NSURLProtocol for Testing

Post Syndicated from staticpulse original https://yahooeng.tumblr.com/post/141143817861

By Roberto Osorio-Goenaga, iOS Developer

Unit testing networking code can be problematic, due mostly to its asynchronous nature. Using a staging server introduces lag and external factors that can cause your tests to run slowly, or not at all. Frameworks like OCMock address this by letting you specify how an object responds to a specific query, but a mock object must still be set up for each type of behavior being mocked.


Using Apple’s NSURLProtocol, we can create a test suite that eschews these problems by mocking the response to our network calls centrally, essentially letting your tests focus only on business logic. This protocol can be used not only with the built-in NSURLSession class but also to test classes and structs written with modern third-party networking libraries, such as the popular Alamofire. In this article, we will look at mocking network responses in Swift for requests made using Alamofire. The sample project can be found on GitHub.

NSURLProtocol’s main purpose is to extend URL loading to incorporate custom schemes or enhance existing ones. A secondary, yet extremely powerful, use of NSURLProtocol is to mock a server by sending canned responses back to callbacks and delegates. Say we have a very simple struct that uses Alamofire to make an HTTP GET request.

Fig 1 – A simple struct that serves as a REST client
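The figure is an image in the original post; here is a minimal sketch of what such a struct might look like, in Swift 2-era syntax with Alamofire 3 (the names MyRESTClient, MyItem, and getAvailableItems() come from the article; the exact details are assumptions):

import Alamofire

struct MyItem {
    let name: String
}

struct MyRESTClient {
    let URL: NSURL

    init(URL: NSURL) {
        self.URL = URL
    }

    // Fetches items from the endpoint; passes nil to the completion block
    // when the response does not have the expected shape.
    func getAvailableItems(completion: ([MyItem]?) -> Void) {
        Alamofire.request(.GET, URL).responseJSON { response in
            if let json = response.result.value as? [String: AnyObject],
                   items = json["items"] as? [String] {
                completion(items.map { MyItem(name: $0) })
            } else {
                completion(nil)
            }
        }
    }
}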

The sample in Figure 1 creates a struct with an NSURL as an init parameter, and a sole method, getAvailableItems(), which takes a completion block as an argument, makes a REST call to the NSURL, and populates an array of MyItem in the block. From a testing perspective, we’d like to have a JSON response that matches the expected response, containing an object called items whose value is an array of strings. In order to make our tests as thorough and robust as possible, we’d also include at least two other mock responses: a JSON response that does not match this expectation, to test the else clause, and a garbage or erroneous response to check our error handling.

Fig 2 – A valid response

Fig 3 – A non-valid response

Fig 4 – A throw-away garbage response
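The figures are images in the original post. Plausible stand-ins for the three payloads (the "items" and "concepts" keys and the "GARBAGE" string come from the article’s text; the values are placeholders):

{"items": ["one", "two", "three"]}

{"concepts": ["one", "two", "three"]}

GARBAGE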

Figures 2, 3, and 4 show a valid response for our purposes, a non-valid yet well-formed JSON response, and a throw-away string that isn’t even valid JSON, respectively. Without having to stand up a full-blown staging server, let’s see how we could go about testing these using NSURLProtocol.

To understand where NSURLProtocol fits into this problem, it’s important to look at a bit of the architecture Alamofire employs. Alamofire works as a singleton, as one can see from the above example. There is no instantiation required. Just feed a URL in, and make a request. Under the hood, the entity making the request is called the Manager. Manager is the entity that actually stores the URL and parameters, and is responsible for firing off an NSURLSession request abstracted from the caller class.

The manager for Alamofire can be initialized with a custom configuration of type NSURLSessionConfiguration, which has a property called protocolClasses, an array of NSURLProtocol members. By creating a new protocol that defines what happens when NSURLSession tries to reach a certain type of endpoint, loading it into the protocol array of a new configuration at index 0 (so it takes precedence over the default protocols), and initializing a new Manager object with this configuration, we can inject Alamofire with a simple, local mock server that will return whatever we want, given any request. Let’s start setting up a test class for our REST client by extending NSURLProtocol to respond to GET requests, and creating an Alamofire.Manager object with a custom NSURLSessionConfiguration that employs our protocol.

Fig 5 – Setting up a testing class for our client
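Figure 5 is an image in the original post; here is a minimal sketch of what it might contain, in Swift 2-era syntax with Alamofire 3 (the "http://notNil" URL and "GARBAGE" string come from the article; the "wrongKeys" host for the third, wrong-shape response is a hypothetical name of mine):

import XCTest
import Alamofire

// Intercepts GET requests and returns canned responses based on the URL.
class MockGETProtocol: NSURLProtocol {

    override class func canInitWithRequest(request: NSURLRequest) -> Bool {
        return request.HTTPMethod == "GET" // only intercept GET requests
    }

    override class func canonicalRequestForRequest(request: NSURLRequest) -> NSURLRequest {
        return request
    }

    override func startLoading() {
        let body: String
        switch request.URL?.host {
        case "notNil"?:
            body = "{\"items\": [\"one\", \"two\", \"three\"]}"    // valid response
        case "wrongKeys"?:
            body = "{\"concepts\": [\"one\", \"two\", \"three\"]}" // valid JSON, wrong keys
        default:
            body = "GARBAGE"                                       // not JSON at all
        }
        let response = NSHTTPURLResponse(URL: request.URL!, statusCode: 200,
                                         HTTPVersion: "HTTP/1.1", headerFields: nil)!
        client?.URLProtocol(self, didReceiveResponse: response, cacheStoragePolicy: .NotAllowed)
        client?.URLProtocol(self, didLoadData: body.dataUsingEncoding(NSUTF8StringEncoding)!)
        client?.URLProtocolDidFinishLoading(self)
    }

    override func stopLoading() {}
}

class MyRESTClientTests: XCTestCase {
    var client: MyRESTClient!
    var manager: Manager!

    override func setUp() {
        super.setUp()
        // Insert the mock protocol at index 0 so it wins over the defaults.
        let configuration = NSURLSessionConfiguration.defaultSessionConfiguration()
        configuration.protocolClasses?.insert(MockGETProtocol.self, atIndex: 0)
        manager = Manager(configuration: configuration)
    }
}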

Great, now we have an NSURLProtocol class that takes a GET request, checks the URL, and returns a canned response: valid JSON, well-formed JSON with the wrong keys, or a simple “GARBAGE” string. This should allow us to test how our client responds. We still haven’t written any test cases. We have a MyRESTClient property, as well as a Manager property. We also have a setUp method that instantiates the manager and loads our custom protocol into it. We now need a way to inject this manager instance into our Alamofire singleton. Let’s extend our client to the following.

Fig 6 – The REST client with an injectable “manager” parameter
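Figure 6 is an image in the original post; a sketch of the injectable version described in the text (the default-parameter approach is an assumption):

struct MyRESTClient {
    let URL: NSURL
    let manager: Manager

    // Pass nil (or nothing) to use Alamofire's shared Manager, as in
    // production; tests inject a Manager backed by the mock protocol.
    init(URL: NSURL, manager: Manager? = nil) {
        self.URL = URL
        self.manager = manager ?? Manager.sharedInstance
    }

    func getAvailableItems(completion: ([MyItem]?) -> Void) {
        manager.request(.GET, URL).responseJSON { response in
            if let json = response.result.value as? [String: AnyObject],
                   items = json["items"] as? [String] {
                completion(items.map { MyItem(name: $0) })
            } else {
                completion(nil)
            }
        }
    }
}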

We’ve added an initializer to our struct that allows us to send either a custom manager or nil into Alamofire. When the parameter is nil, the manager will load with its standard configuration. We also edited the request execution to be called via the manager we selected instead of directly through Alamofire. We can now add the following test case to our test class.

Fig 7 – Our first test case
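Figure 7 is an image in the original post; a sketch of the test it describes, continuing the test class from the Figure 5 sketch (the method name is mine):

func testGetAvailableItemsReturnsItems() {
    client = MyRESTClient(URL: NSURL(string: "http://notNil")!, manager: manager)
    let expectation = expectationWithDescription("completion block is called")

    client.getAvailableItems { items in
        XCTAssertEqual(items?.count ?? 0, 3) // the mock returned three item names
        expectation.fulfill()
    }

    waitForExpectationsWithTimeout(1.0, handler: nil)
}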

In this test case, we create a new client and give it our custom manager through the new initializer. We set a testing expectation, since the result comes back in a closure, and, after loading our itemsArray inside it, fulfill the expectation. We tell the test case to wait for that expectation to be fulfilled, and, once it is, we make sure the itemsArray contains three items. If so, our test is successful, and our business logic is tested for getAvailableItems. Notice that we have used a bogus URL of “http://notNil”, which our protocol recognizes and answers with the valid items response. To test the “garbage” case, we could write a test like the following.

Fig 8 – A test case for verifying a garbage response
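Figure 8 is an image in the original post; a sketch of the garbage-response test (again, the method name is mine):

func testGarbageResponseYieldsNilArray() {
    client = MyRESTClient(URL: NSURL(string: "http://nil")!, manager: manager)
    let expectation = expectationWithDescription("completion block is called")

    client.getAvailableItems { items in
        XCTAssertNil(items) // "GARBAGE" is not valid JSON, so no items
        expectation.fulfill()
    }

    waitForExpectationsWithTimeout(1.0, handler: nil)
}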

In this second test case, the mocked URL of “http://nil” is not recognized, and the protocol responds by returning “GARBAGE”, thus not populating the response array. If our method is written correctly, it will call the closure with a nil array.

Fig 9 – A test case for verifying an incorrect response
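Figure 9 is an image in the original post; a sketch of the wrong-shape test, using the hypothetical URL our mock protocol maps to the "concepts" payload:

func testWrongKeysResponseYieldsNilArray() {
    // "http://wrongKeys" is answered with well-formed JSON that lacks
    // the "items" key, so the client should hand back nil.
    client = MyRESTClient(URL: NSURL(string: "http://wrongKeys")!, manager: manager)
    let expectation = expectationWithDescription("completion block is called")

    client.getAvailableItems { items in
        XCTAssertNil(items)
        expectation.fulfill()
    }

    waitForExpectationsWithTimeout(1.0, handler: nil)
}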

In the third and final test case, our protocol class will return a “concepts” array instead of an “items” array, so the end result should still be a nil array in the closure.

As you can see, using NSURLProtocol we have created what amounts to a tiny server that responds to our requests and replies as specified, perfect for testing our asynchronous net calls. Now, go forth and test!

What’s the Diff: RAM vs Storage

Post Syndicated from Peter Cohen original https://www.backblaze.com/blog/whats-diff-ram-vs-storage/


Perhaps the most common challenge computer users encounter involves memory, or the lack thereof.

Computer support technicians will tell you that computer users are often unclear on the different types of memory in their computer. Users often confuse memory with storage. Statements like “I have eight gigabytes of disk” or “I have one terabyte of memory” tell computer support people that they’re dealing with a novice when it comes to computer terminology.

We don’t want you to appear as a novice, so let’s break the concepts down and examine these two parts of your computer, how they work together, and how they affect your computer’s performance.

The Difference Between Memory and Storage

Your computer’s main memory is called RAM. You can think of it as a workspace the computer uses to get work done. When you double-click on an app, or open a document, or, well, do much of anything, RAM gets used to store that data while the computer is working on it. Modern computers often come equipped with 8, 16 or more gigabytes of RAM pre-installed.

There’s also storage: a hard disk drive or solid state drive where data is recorded and can stay indefinitely, to be recalled as necessary. That might be a tax return, a poem in a word processor, or an email. By comparison, RAM is volatile — the information that’s put in there disappears when the power is turned off or when the computer is reset. Stuff written to disk stays there permanently until it’s erased, or until the storage medium fails (more on that later).

What is RAM?

RAM takes the form of computer chips — integrated circuits — that are either soldered directly onto the main logic board of your computer or installed in memory modules that go in sockets on your computer’s logic board.

RAM stands for Random Access Memory. The data stored in RAM can be accessed almost instantly regardless of where in memory it is stored, so it’s very fast — access times are measured in nanoseconds. RAM has a very fast path to the computer’s CPU, or central processing unit, the brain of the computer that does most of the work.

RAM is random access as opposed to sequential access. Data that’s accessed sequentially includes stuff that’s written to your hard disk drive, for example. It’s commonly written in files, with a specific start location and end location. We’ll get to your hard drive storage in a moment.

If you have general purpose needs for your computer, you probably don’t need to tweak its RAM very much. In fact, depending on what computer you buy, you may very well not be able to change the RAM. (Apple and others have removed RAM upgradability from some of their lower-end or portable computers, for example).

How much RAM on Mac OS (Apple Menu > About This Mac)

How much RAM on Windows 10 (Control Panel > System and Security > System)

If your computer is older and upgradable, putting in more RAM helps it load and use more apps, more documents, and larger files without slowing down and having to swap that data to disk, which we’ll cover below.

If you work with very large files — big databases, for example, or big image files or video, or if the apps you work with require a large amount of memory to process their data, having more RAM in your computer can help performance significantly.

What is Computer Storage?

Computers need some form of non-volatile storage. That’s a place data can stay even when the computer isn’t being used and is turned off, so you don’t have to reload and re-enter everything each time you use the computer. That’s the point of having storage in addition to RAM.

Storage for the vast majority of computers in use today consists of a drive, either a hard drive or a solid state drive. Drives can provide a lot of space that can be used to store applications, documents, data and all the other stuff you need to get your work done (and your computer needs to operate).

Disk Space on Mac OS (Apple Menu > About This Mac > Storage)

Disk Space on Windows 10 (This PC > Computer)

No matter what type of drive you have, storage is almost always slower than RAM. Hard disk drives are mechanical devices, so they can’t access information nearly as quickly as memory does. And storage devices in most personal computers use an interface called Serial ATA (SATA), which affects the speed at which data can move between the drive and the CPU.

So why use hard drives at all? Well, they’re cheap and available.

In recent years, more computer makers have begun to offer Solid State Drives (SSDs) as a storage option, in place of or in addition to a conventional hard disk drive.

SSDs are much faster than hard drives because they use integrated circuits. SSDs store data in non-volatile flash memory, so everything stays in place even when the computer is turned off.

Even though SSDs use memory chips instead of spinning mechanical platters, they’re still slower than the computer’s RAM. That’s partly because of the performance of the memory chips being used, and partly because of the bottleneck created by the interface that connects the storage device to the computer – it’s not nearly as fast as the interface RAM uses.

How RAM and Storage Affect Your Computer’s Performance

RAM

For most of us using computers for general purpose work — checking email, surfing the web, paying the bills, playing a game or two and watching Netflix — the RAM our computer comes with is as much as we’ll need. Further down the road, we might need to add a bit more to keep up with new operating system improvements, updated apps, and new apps that have a heftier memory requirement.

If you’re planning to use your computer for more specialized work, more RAM may benefit you greatly. Examples of those sort of tasks include editing video, editing high-resolution images, recording multi-track audio, 3D rendering, and large scale computations for science and engineering.

Again, depending on what computer you buy, you may not be able to upgrade your RAM. So consider this carefully the next time you buy a new computer, and make sure it’s either upgradeable or comes equipped with as much RAM as you think you’ll need.

Your computer’s RAM can fill up: Load up a bunch of applications, open a bunch of documents, get a bunch of activities going, and RAM will be used up by each of the individual processes, or programs, that are running.

When that happens, your computer will temporarily write information it needs to keep track of to a predefined portion of your hard drive or SSD. This area is called swap space (or a page file), and swapping data from RAM to disk is a standard part of virtual memory in modern operating systems.

The faster your disk is, the less time it takes for the computer to read and write virtual memory. So a computer with an SSD, for example, will seem faster under load than a computer with a regular hard drive.

SSDs also take less time to load apps and documents than hard drives. Really, if your computer is using a hard drive, one of the best things you can do to extend its life and improve performance is replace it with an SSD.

Storage

Besides RAM, the most serious bottleneck to improving performance in your computer can be your storage. Even with plenty of RAM installed, computers need to write information and read it from the storage system — the hard drive or the SSD.

Hard drives come in different speeds and sizes. Many operate at 5400 RPM (their platters spin at 5400 revolutions per minute). You’ll see snappier performance if you can get a 7200 RPM drive, and some specialized operating environments even call for 10,000 RPM drives. Faster drives cost more, are louder, and use more power, but they exist as options.

New disk technologies enable hard drives to be bigger and faster. These technologies include filling the drive with helium instead of air to reduce disk platter friction, and using heat or microwaves to improve disk density, such as with HAMR (Heat-Assisted Magnetic Recording) and MAMR (Microwave-Assisted Magnetic Recording).

Because they use computer chips instead of spinning disks, SSDs are faster still, and they consume less power, produce less heat, and can take up less space. They’re also less susceptible to magnetic fields and physical jolts, which makes them great for portable use. They cost more per gigabyte (though the price has dropped quite dramatically in recent months), so do what you will based on your budget and your needs.

For more about the difference between hard drives and SSDs, please check out Hard Disk Drive Versus Solid State Drive: What’s the Diff?

Adding More Disk Storage

As a user’s disk storage needs increase, typically they will look to larger drives to store more data. The first step might be to replace an existing drive with a larger, faster drive, or, if space permits, to add a second drive. A common strategy to improve performance is to use an SSD for the operating system and applications, and a larger HDD for data if the SSD can’t hold both.

If more storage space is needed, an external drive can be added, most often using USB or Thunderbolt to connect to the computer. This can be a single or multiple drive and might use a data storage virtualization technology such as RAID to protect the data.

If you have really large amounts of data, or simply wish to make it easy to share data with others in your location or elsewhere, you likely will turn to network-attached storage (NAS). A NAS device holds multiple drives, typically uses a data virtualization technology such as RAID, and is accessible to anyone on your local network, and, if you wish, on the internet, as well. NAS devices can offer a great deal of storage and other services that typically have been offered only by dedicated network servers in the past.

Back Up Early and Often

No matter how you configure your computer’s RAM and hard drive, remember to back up your device. Whether you have an SSD or a hard drive, and regardless of how much RAM is installed, things will eventually slow down and stop working altogether.

You don’t want to be caught without any sort of ability to recover. That’s why it’s vital to have a backup strategy in place. A good backup strategy shouldn’t be dependent on any single device, either, so even if you’re backing up to a local hard disk, a network attached storage system, a Time Capsule or some other device on your computer or local network, you’re not doing enough. Having offsite backup like Backblaze can help.

For more on best backup practices, make sure to check out Backblaze’s Backup Guide.

Have a question? Let us know in the comments. And if you have ideas for things you’d like to see featured in future installments of What’s the Diff?, please let us know!


Note: This post was updated from March 15, 2016. — Editor


Email Scams – A Few Ways to Recognize and Respond to Them

Post Syndicated from Григор original http://www.gatchev.info/blog/?p=1921

Recently, an acquaintance of mine received an email “from Bulbank” with instructions on how to log in to her online banking there in order to confirm the receipt of a large sum of money. Fortunately, the antivirus on her computer was active and up to date.

It seems like a good idea, though, to write down a few of the most basic ways to recognize such a scam. The list is by no means exhaustive, but at least it is a start. Most of the advice works well against more than just “bank” scams.

To begin with, the message may claim to be sent by anyone at all: Bulbank, Barack Obama, Jesus Christ, a Nigerian prince, your late aunt, and so on. That does not mean it has anything to do with the “sender”; it only means that this is what is written in the “sender” field. If you think otherwise, write “100 leva” on a piece of paper and go buy something with it at the store. Why not? The level of naivety is the same. And the fact that someone forged the sender should already tell you whether it is a good idea to follow their instructions.

Do you have an account at the bank in question? If you don’t, or “don’t know that you do,” this is a (misaddressed) scam. Yes, in 100% of cases. Yes, no matter how large a sum they promise you. Yes, no matter how badly you need the money. And yes, in such cases I always charge for removing the resulting virus, even for friends from whom I otherwise take no money. Anyone inclined to take this bait needs to learn that taking it costs them. Otherwise there is no incentive to stop taking it.

If you do have an account there, do you use it to receive transfers, and large ones at that? Is anyone expected to send you something unexpected, other than your salary? If not, that is another sign that this is a scam. Yes, in 100% of cases. Yes, no matter how large a sum they promise you, and so on.

(If you do have an account, receive unexpected transfers on it, and so on, but other signs point to a scam, that is very unpleasant. The scammers may simply have hit on you by chance. They may, however, also have information about you from somewhere: from a “photocopy” at the bank, from a careless mention of yours, or... there are many possibilities. In that case you are likely to keep receiving scam attempts in the future, each more skillful than the last. Be especially careful!)

Is the message in Bulgarian? A Bulgarian bank has no reason whatsoever to write to its customers in English, Chinese, or Russian. If the message is not in Bulgarian, it is a scam. Yes, in 100% of cases, and so on. Exactly the same applies if its Bulgarian is slightly odd: Bulgarian banks do not translate their messages from English with Google Translate. Scammers do.

How long is the message? If it is a dozen short lines, it is very likely a scam. Bulgarian banks usually send long messages, embellished with the bank’s logos and symbols and full of tangled legal and financial terminology meant to impress the customer. Short, businesslike, to-the-point messages, especially ones that ask you to do something online or to run an attachment, are for now a sure mark of a scam. In time, scammers will learn to imitate a real bank message better, but for now this sign works. (And who knows, maybe someday even the banks will learn to be concrete and businesslike...)

In fact, an attachment that a message asks you to “open” is, as a rule, a virus or a Trojan horse. The only cases in which I have seen a bank send attachments are receipts for banking transactions. Does your bank send you receipts at all? If not, this is a virus or a Trojan. If it does, compare carefully whether this message is absolutely identical in content to previous ones from the bank, whether the name of the receipt file is similar, and so on. If even the faintest doubt arises while comparing, clarify it with the bank in person at one of their offices, or, as a last resort, call them on their official phone number. (Do not take the number from the message itself: if the message is a scam, the scammer will answer that phone and pose as the bank.)

This applies with even greater force if the message contains a link and urges you to click it, for example to log in to your online banking. (If you don’t have online banking, do I need to tell you what that message is, or will you figure it out yourself?) Even if the link looks indistinguishable from the bank’s real one, it is fake. It will open a site that looks more or less like the bank’s. The site’s certificate may not be quite right, but you may or may not notice that. Enter your username and password, and expect your account to be emptied to the maximum possible credit within minutes. And the bank’s terms state that it bears no responsibility if you give your password to third parties. Congratulations!

Real banks never send you links or attachments you are supposed to click. Likewise, they never ask you to provide your username, password, and/or certificate online. Nor do they ask for them when they call you on the phone; they may only help you with instructions on how to solve a problem you called them about, which is fine. Provide such data to bank employees only in person, when you are at one of their offices. (Not to an “employee” who caught up with you 5 meters from the office after you left it: go back into the office and talk there! Even if the employee is genuine and you remember them from the office, outside there are plenty of ill-intentioned ears.) If it really is the bank and you must provide data, they will suggest that you visit one of their branches. Do it.

If they tell you, by email, phone, or otherwise, that you must provide such data, it is a 100% scam. If they urge you to log in over the Internet by clicking a link they sent, it is a 100% scam, unless they warned you they would send it during a call that you placed to a verified number of theirs. If they explain that it is important and/or urgent that you do it, it is a 110% scam. If there is supposedly no way for you to get to a bank office in time and handle the matter from there, it is a 200% scam... Sometimes even these security rules may not be enough, but follow at least them. Not following them means parting with your money and never getting it back.

If the situation looks legitimate but they want you to use their link to their online banking site, try a simple trick. Enter a wrong password, completely different from the real one. If the site complains that it is wrong, enter it again. If it complains again, there is a chance the site is genuine; only then enter the correct password. If it calmly swallows the wrong password the first or second time, you are dealing with scammers. Notify the police: not that they will lift a finger, but at least your conscience will be clear. If it rejects even the third, correct entry, immediately call the bank on a verified number of theirs and clarify the situation. If it turns out it was not them, immediately demand that the account be blocked and the access credentials changed. If you are lucky, you may manage to save some of your money.

All of this applies with particular force when you are asked to help catch scammers. Lately this has been a favorite trick of phone scammers: “We are calling from the police. We have found out that someone is preparing to defraud you. When they call, throw the money off the balcony so we can arrest them on the spot with the cash.” If you think the request is genuine and really want to cooperate, do it, but only with strips of toilet paper instead of money, or with a fake username and password. If it is a real sting, they will catch them anyway.

If, despite everything, you are not inclined to accept advice, or are too lazy to follow it, remember: a scam is a kind of trade. It is a deal between two people, one of whom has money and the other experience. After the deal, the one with the experience walks away with the money, and the one with the money walks away with experience. Get scammed enough times and you will either be left with no money at all, or gather enough experience to start listening to advice on how to protect yourself.

Good luck!

AWS Lambda and Amazon API Gateway launch in Frankfurt region

Post Syndicated from Vyom Nagrani original https://aws.amazon.com/blogs/compute/aws-lambda-and-amazon-api-gateway-launch-in-frankfurt-region/

Vyom Nagrani, Sr. Product Manager, AWS Lambda
We’re happy to announce that you can now build and deploy serverless applications using AWS Lambda and Amazon API Gateway in the Frankfurt region.
Amazon S3, Amazon Kinesis, Amazon SNS, Amazon DynamoDB Streams, Amazon CloudWatch Events, Amazon CloudWatch Logs, and Amazon API Gateway are available as event sources in the Frankfurt region. You can now trigger a Lambda function to process your data stored in Germany using any of these AWS services.
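To try the new region right away, creating a function there looks the same as anywhere else, just with the Frankfurt region name. Here is a minimal sketch with boto3; the function name, role ARN, and zip file are placeholders rather than values from this announcement:

    import boto3

    # Lambda client pointed at the Frankfurt region
    lam = boto3.client('lambda', region_name='eu-central-1')

    # Create a function from a local deployment package; all names
    # and ARNs below are hypothetical.
    with open('lambda_function.zip', 'rb') as f:
        lam.create_function(
            FunctionName='ProcessGermanData',
            Runtime='python2.7',
            Role='arn:aws:iam::123456789012:role/lambda-execution-role',
            Handler='lambda_function.lambda_handler',
            Code={'ZipFile': f.read()},
        )

The only region-specific part is the region_name; wiring up event sources such as S3 or Kinesis works the same way it does in the other regions.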

BetterCap – Modular, Portable MiTM Framework

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/vnYoRZjizk0/

BetterCAP is a powerful, modular, portable MiTM framework that allows you to perform various types of Man-In-The-Middle attacks against the network. It can also help to manipulate HTTP and HTTPS traffic in real-time and much more. BetterCap has some pretty impressive spoofing abilities with multiple host discovery (just launch the tool and it will…

Read the full post at darknet.org.uk
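As a rough idea of how a session with the tool starts; this assumes the 2016-era Ruby gem release, and the interface and target address are made up, so check the project documentation for the exact flags:

    # install the Ruby gem release of the tool
    sudo gem install bettercap

    # ARP-spoof a single target on eth0 and enable the built-in sniffer
    sudo bettercap -I eth0 -T 192.168.1.10 -X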

New Customer Support Champion – Troy

Post Syndicated from Yev original https://www.backblaze.com/blog/support-tech-troy/

As Backblaze continues to grow, we need seasoned support veterans to help our customers in times of need. That's why we hired Troy! He's been around the block in a few support and tech consulting gigs and is going to be a great addition to the Backblaze team! He's also musically inclined, which is pretty awesome. Let's learn a bit more about Troy, shall we?
What is your Backblaze Title?
Technical Support Technician
Where are you originally from?
Castro Valley, CA
What attracted you to Backblaze?
Backblaze is a company I’ve been familiar with for a long time. In a previous life, I worked as an IT consultant and whenever I was asked for a backup solution, I always recommended Backblaze. They always had a reputation for being honest, affordable, and reliable. I’m excited to be joining such an incredible team.
What do you expect to learn while being at Backblaze?
I expect to learn how to work as part of a closely-knit team. I’ve mostly worked for larger companies so the change of pace will be refreshing.
Where else have you worked?
I worked at Apple as a Lead Genius, at Elgato in Technical Support, and at Sweet Memory as an IT Consultant. I also opened up a cafe in Berkeley, CA with my uncle, which I managed for 2 years, and most recently I was a Content Specialist at Lyft.
Where did you go to school?
I’m actually still in school! I originally attended Chico State University out of high school. I’m currently attending Chabot College in Hayward, CA and will be transferring to San Francisco State University to complete my Bachelor’s in Business Administration.
What’s your dream job?
My dream job is not so much a specific title, but I’d like to have a leadership position at a company with a great culture that cares about both its customers and employees.
Favorite place you’ve traveled?
Vienna – There’s so much culture and amazing architecture. Also, there are street vendors all over the place that sell these amazing cheese-filled sausages.
Favorite hobby?
I love craft beer and have been homebrewing for several years.
Of what achievement are you most proud?
When I was 20 I dropped out of college to pursue my dream of being a professional musician. I was the lead singer in a band that toured around the country for several years and even got a record deal. I’m incredibly proud that I was able to follow my dream at the time and make it happen.
Star Trek or Star Wars?
Well, my last name looks like “Little Jedi” and I once spent a month choreographing a lightsaber duel with a friend so…
Coke or Pepsi?
Coca-Cola
Favorite food?
Burritos. All day, every day, burritos.
Why do you like certain things?
I like things that challenge me. I also like things that are musical.
Anything else you’d like you’d like to tell us?
I’m a big sports fan (A’s, Raiders, Warriors, Sharks) and a huge Disney nerd.
We keep on hiring Disney fans. Who knew there were so many of them out there? Welcome aboard, Troy! We'll try not to force you to sing Disney jingles too much.
The post New Customer Support Champion – Troy appeared first on Backblaze Blog | The Life of a Cloud Backup Company.

Amazon SES Now Supports Custom MAIL FROM Domains

Post Syndicated from Cristian Smochina original https://aws.amazon.com/blogs/ses/amazon-ses-now-supports-custom-mail-from-domains/

The Amazon SES team is pleased to announce that, to increase your email authentication options, you can now use your own MAIL FROM domain when you send emails with SES.

First, a quick refresher on the different “source” addresses associated with an email: an email has a “From” address and a MAIL FROM address. The “From” address is the address that you pass to SES in the header of your email. This is the address that recipients see when they view your email in their inbox (RFC 5322). The MAIL FROM address (a.k.a. “envelope MAIL FROM”), on the other hand, is the address that the sending mail server (SES) transmits to the receiving mail server to indicate the source of the mail (RFC 5321). The MAIL FROM address is used by the receiving mail server to return bounce messages and other error notifications, and is only viewable by recipients if they inspect the email’s headers in the raw message source. By default, SES uses its own MAIL FROM domain (amazonses.com or a subdomain of that) when it sends your emails.
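To make the distinction concrete, here is roughly how the two addresses show up in a delivered message's raw source (the addresses are illustrative only); receiving mail servers typically record the envelope MAIL FROM in the Return-Path header:

    Return-Path: <0100bounce-token@amazonses.com>   (envelope MAIL FROM, RFC 5321)
    From: Jane Doe <jane@example.com>               (header "From", RFC 5322)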

Why use my own MAIL FROM domain?

You might choose to use your own MAIL FROM domain to give you more flexibility in complying with Domain-based Message Authentication, Reporting and Conformance (DMARC). DMARC is an email authentication protocol that relies on two other authentication protocols (Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM)) to enable receiving mail servers to validate that an incoming email is authorized by the owner of the sending domain and has not been modified during transit.

An email can comply with DMARC in two ways: by satisfying the DKIM requirements and/or by satisfying the SPF requirements. You can use either method, but some senders prefer to use both DKIM and SPF for maximum deliverability. As established by DMARC, the requirements for each validation are as follows:

  • DKIM. The requirements to pass DKIM validation for DMARC are: 1) the message must have a valid DKIM signature, and 2) the domain in the DKIM signature must align with the domain in the “From” address in the header of the email. You can easily achieve DKIM validation with SES, which provides a tool (EasyDKIM) to DKIM-sign your messages automatically.
  • SPF. The requirements to pass SPF validation for DMARC are: 1) The domain in the MAIL FROM address of the email must authorize the sending mail server to send for it via a DNS record, and 2) the domain in the email’s “From” address must match the MAIL FROM domain. When SES uses its default MAIL FROM domain, the first SPF requirement is satisfied (because the MAIL FROM domain is amazonses.com, and the mail server is SES), but the second requirement is not satisfied. This is where the benefit of using your own MAIL FROM domain comes in – it enables you to meet that second SPF requirement.

Can I use any domain as my MAIL FROM domain?

The MAIL FROM domain you use with SES must be a subdomain of the verified identity you want to use it with. For example, a MAIL FROM domain of bounce.example.com would be a legitimate MAIL FROM domain for verified domain example.com or verified email address [email protected]. An additional requirement is that the MAIL FROM domain you use with SES must not be a domain that you use in a “From” address if the MAIL FROM domain is the destination of email feedback forwarding.

How do I set it up?

You configure an identity to use a specific MAIL FROM domain within the Identity Management part of the SES console, or by using the SES API. You also must publish MX and SPF records to your domain’s DNS server. When SES successfully detects the MX record, emails you send from the identity will use the specified MAIL FROM domain. For a full description of the set-up procedure, see the developer guide.
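As a sketch of the API route with boto3 (the identity and MAIL FROM domain are placeholders; BehaviorOnMXFailure tells SES what to do when the required MX record cannot be found):

    import boto3

    ses = boto3.client('ses', region_name='us-east-1')

    # Use bounce.example.com as the MAIL FROM domain for the
    # verified identity example.com (both names are illustrative).
    ses.set_identity_mail_from_domain(
        Identity='example.com',
        MailFromDomain='bounce.example.com',
        BehaviorOnMXFailure='UseDefaultValue',  # fall back to amazonses.com if the MX record is missing
    )

The DNS records then look along these lines; the MX target is region-specific, so take the authoritative values from the developer guide rather than from this sketch:

    bounce.example.com. MX  10 feedback-smtp.us-east-1.amazonses.com
    bounce.example.com. TXT "v=spf1 include:amazonses.com ~all"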

Will my sending process change?

No. After you configure a verified identity to use a specified MAIL FROM domain and SES successfully detects the required MX record, you simply continue to send emails in the usual way.

We hope you find this feature useful! If you have any questions or comments, let us know in the SES Forum or here in the comment section of the blog.

Case 226: Spring, Fall

Post Syndicated from The Codeless Code original http://thecodelesscode.com/case/226

On the Monday after the first budding of spring, the
entire Temple was called to assembly in the Great Hall by
old Madame Jinyu, the Abbess Over All Clans And Concerns.
Not a single person was excused—indeed, two desperately ill
monks were carried in on stretchers and hoisted upright
against the back wall, next to the propped-up corpse of a
senior nun who had died the previous Thursday without giving
the mandatory two weeks’ notice.

Directly in front of old Jinyu’s podium sat the diligent
monks of the Elephant’s Footprint Clan, who together had
mastered the arcane arts of database design and a hundred
persistence libraries. The monks had arrayed themselves in
perfect rows and columns atop low ceremonial look-up tables
that had been joined together for the occasion.

Behind the Elephant’s Footprint sat the knowledgeable
monks of the Laughing Monkey Clan, who implemented the
business logic of the Temple’s many customers. So frightfully
intelligent was the behavior of their rule engines that
their codebase was rumored to be possessed by the spirits of
long-dead business analysts.

Behind the Laughing Monkey sat the prolific monks of the
Spider Clan, who built the web interfaces and services
of every Temple application. Because web technology stacks
came and went so frequently, their novices were trained to
instinctively forget everything that was no longer relevant,
lest they go mad. Curiously, though, when asked how this
Art of Forgetting worked, the monks invariably laughed and
said that there was no such Art; for if there were, they
would surely have remembered learning it.

Proud were these, the Three Great Clans of the Temple. So
it was with great dismay that they learned of Jinyu’s plans
for their future.

- - -

“In autumn, the abbot Ruh Cheen convinced us to taste
the nectar of the Agile methodology,” said old
Jinyu to her audience. “Through the winter we nibbled its
fruit and found it sweet. Now spring has arrived, and we
wish to plant the seeds of a great harvest.

“No longer will we haphazardly select monks from the Three
Clans to work on tasks as they arise. Instead, each product
will have a Tiny Clan of its own, whose members will not
change.

“Some of you will belong to a single Tiny Clan; some to two
or three. Each Tiny Clan will have its own rules, set its own
standards, establish its own traditions. The monks of your
Tiny Clans will be your new brethren. You will work with
them, eat with them, do chores with them, and share a
hall with them.

“Tonight I will post your new assignments. Tomorrow the
Three Clans will be no more. Now, go: prepare yourselves.”

Thus did old Jinyu depart the Great Hall, to a chorus of
worried murmuring. Even the dead senior nun seemed
a trifle unhappier.

- - -

Young master Zjing turned to old Banzen
with a look that was equal parts dread and disbelief.

Said Zjing, “When the Spider learns her craft from the
Monkey and the Elephant, what manner of webs shall we see in
the trees?”

“Creative ones,” replied Banzen.

“And how shall we manage such ‘creativity’?” continued the
nun. “How shall we review code? How shall we mentor? How
shall we plan?”

“Differently,” said Banzen.

“You are infuriatingly calm!” scowled Zjing. “I thought
that Banzen of all people would share my concerns.”

Banzen chuckled. “When Ruh Cheen was brought into the
temple by Jinyu, you told your fellows that
the abbess is no fool. And though you were lying,
you spoke true. Jinyu sees that the new Way of the World
is not the Temple’s Way. She has chosen to follow the World.”

“She is following it over the edge of a cliff,” grumbled
the nun.

“Indeed!” said Banzen with a smile. “Yet what is the
Temple: a stone, or a bird?” The old master took Zjing’s
arm in his own and started for the doors, nodding his head
respectfully as they passed the dead senior nun. “I have
lived through such times before. The initial plunge is
always unsettling to the stomach, but we have yet to crash
into the rocks below.”

“So, how long must I wait before I see the Temple
sprout feathers?” asked Zjing.

“My dear young master,” said Banzen. “Did you not understand
the terms of your own promotion? We are the feathers.”

2016-03-13 The first BGP workshop is behind us

Post Syndicated from Vasil Kolev original https://vasil.ludost.net/blog/?p=3293

And so we held the first BGP workshop. It was a pretty tight fit, but on the whole we all squeezed in.
Eighteen people had signed up and two or three did not show (Мариян filled in for one of them). Everyone managed to bring up most of their sessions, and some of the people got IPv6 working as well. It was a bit chaotic from an organizational standpoint (people arrived at different times, and some left early, taking their sessions down with them), but I think everyone had fun 🙂
The goal of the exercise was for people to get BGP running in the three standard scenarios: transit (i.e. a link to an Internet provider that supplies a full routing table), a second transit, two peers (i.e. parties with whom you exchange traffic only for your own networks), and an Internet exchange (one VLAN, two route servers and a free exchange of prefixes). Connectivity was provided through AS200533 (initlab) and, via a separate twist, through AS57344 (telehouse); there is also a diagram of the setup.
There were all sorts of entertaining problems. The main one was that by default people do not think to set up filters, and a full table was pushed through the exchange several times. Another interesting moment: because the exchange announced its own network as well, OpenBGPD for some reason installed the route to it via the route server itself, which cut off the ability to talk to the other one. There were also amusing collisions between attempts to configure the network and NetworkManager, which kept deciding to delete things.
We had all kinds of interesting devices. Most were Linux boxes running quagga or bird (many of them booted from a flash drive with Ubuntu prepared by Мариян), a few were Raspberry Pi or similarly feeble machines, and there were one small Juniper router and one JunOS in a VM.
There was serious interest (18 people signed up, plus another 6-7 whom I turned away for lack of space), so we will probably repeat the exercise in a few weeks; it will be announced here and in the usual places (and I will write to everyone I turned away this time).
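Since the recurring mistake was sessions without filters, here is the kind of minimal guard that would have prevented the full-table leaks, sketched in BIRD 1.x syntax (the AS numbers and prefixes are illustrative, not the ones used at the workshop):

    # Accept only the peer's own prefixes; drop everything else.
    filter peer_in {
        if net ~ [ 203.0.113.0/24+ ] then accept;
        reject;
    }

    protocol bgp peer1 {
        local as 64512;
        neighbor 192.0.2.1 as 64513;
        import filter peer_in;
        # Announce only our own static routes, never a full table.
        export where source = RTS_STATIC;
    }

With an import filter like this, a neighbor that accidentally leaks a full table has the extra prefixes rejected instead of re-announced to everyone else.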
