Tag Archives: domain names

ISP Telenor Will Block The Pirate Bay in Sweden Without a Shot Fired

Post Syndicated from Andy original https://torrentfreak.com/isp-telenor-will-block-the-pirate-bay-in-sweden-without-a-shot-fired-180520/

Back in 2014, Universal Music, Sony Music, Warner Music, Nordisk Film and the Swedish Film Industry filed a lawsuit against Bredbandsbolaget, one of Sweden’s largest ISPs.

The copyright holders asked the Stockholm District Court to order the ISP to block The Pirate Bay and streaming site Swefilmer, claiming that the provider knowingly facilitated access to the pirate platforms and assisted their pirating users.

Soon after, the ISP fought back, refusing to block the sites in a determined response to the Court.

“Bredbandsbolaget’s role is to provide its subscribers with access to the Internet, thereby contributing to the free flow of information and the ability for people to reach each other and communicate,” the company said in a statement.

“Bredbandsbolaget does not block content or services based on individual organizations’ requests. There is no legal obligation for operators to block either The Pirate Bay or Swefilmer.”

In February 2015 the parties met in court, with Bredbandsbolaget arguing in favor of the “important principle” that ISPs should not be held responsible for content exchanged over the Internet, in the same way the postal service isn’t responsible for the contents of an envelope.

But with TV companies SVT, TV4 Group, MTG TV, SBS Discovery and C More teaming up with the IFPI alongside Paramount, Disney, Warner and Sony in the case, Bredbandsbolaget would need to pull out all the stops to obtain victory. The company worked hard and initially the news was good.

In November 2015, the Stockholm District Court decided that the copyright holders could not force Bredbandsbolaget to block the pirate sites, ruling that the ISP’s operations did not amount to participation in the copyright infringement offenses carried out by some of its ‘pirate’ subscribers.

However, the case subsequently went to appeal, with the brand new Patent and Market Court of Appeal hearing arguments. In February 2017 it handed down its decision, which overruled the earlier ruling of the District Court and ordered Bredbandsbolaget to implement “technical measures” to prevent its customers accessing the ‘pirate’ sites through a number of domain names and URLs.

With nowhere left to go, Bredbandsbolaget and owner Telenor were left hanging onto their original statement which vehemently opposed site-blocking.

“It is a dangerous path to go down, which forces Internet providers to monitor and evaluate content on the Internet and block websites with illegal content in order to avoid becoming accomplices,” they said.

In March 2017, Bredbandsbolaget blocked The Pirate Bay but said it would not give up the fight.

“We are now forced to contest any future blocking demands. It is the only way for us and other Internet operators to ensure that private players should not have the last word regarding the content that should be accessible on the Internet,” Bredbandsbolaget said.

While it’s not clear whether any additional blocking demands have been filed with the ISP, this week an announcement by Bredbandsbolaget parent company Telenor revealed an unexpected knock-on effect. Seemingly without a single shot being fired, The Pirate Bay will now be blocked by Telenor too.

The background lies in Telenor’s acquisition of Bredbandsbolaget back in 2005. Until this week the companies operated under separate brands but will now merge into one entity.

“Telenor Sweden and Bredbandsbolaget today take the final step on their joint trip and become the same company with the same name. As a result, Telenor becomes a comprehensive provider of broadband, TV and mobile communications,” the company said in a statement this week.

“Telenor Sweden and Bredbandsbolaget have shared both logo and organization for the last 13 years. Today, we take the last step in the relationship and consolidate the companies under the same name.”

Up until this final merger, 600,000 Bredbandsbolaget broadband customers were denied access to The Pirate Bay. Now it appears that Telenor’s 700,000 fiber and broadband customers will be affected too. The new single-brand company says it has decided to block the notorious torrent site across its entire network.

“We have not discontinued Bredbandsbolaget, but we have merged Telenor and Bredbandsbolaget and become one,” the company said.

“When we share the same network, The Pirate Bay is blocked by both Telenor and Bredbandsbolaget and there is nothing we plan to change in the future.”

TorrentFreak contacted the PR departments of both Telenor and Bredbandsbolaget requesting information on why a court order aimed at only the latter’s customers would now affect those of the former too, more than doubling the blockade’s reach. Neither company responded which leaves only speculation as to its motives.

On the one hand, the decision to voluntarily implement an expanded blockade could perhaps be viewed as a little unusual given how much time, effort and money has been invested in fighting web-blockades in Sweden.

On the other, the merger of the companies may present legal difficulties as far as the court order goes and it could certainly cause friction among the customer base of Telenor if some customers could access TPB, and others could not.

In any event, the legal basis for web-blocking on copyright infringement grounds was firmly established last year at the EU level, which means that Telenor would lose any future legal battle, should it decide to dig in its heels. On that basis alone, the decision to block all customers probably makes perfect commercial sense.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Aussie Federal Court Orders ISPs to Block Pirate IPTV Service

Post Syndicated from Andy original https://torrentfreak.com/aussie-federal-court-orders-isps-to-block-pirate-iptv-service-180427/

After successfully applying for ISP blocks against dozens of traditional torrent and streaming portals, Village Roadshow and a coalition of movie studios switched tack last year.

With the threat of pirate subscription IPTV services looming large, Roadshow, Disney, Universal, Warner Bros, Twentieth Century Fox, and Paramount targeted HDSubs+ (also known as PressPlayPlus), a fairly well-known service that provides hundreds of otherwise premium live channels, movies, and sports for a relatively small monthly fee.

The injunction, which was filed last October, targets Australia’s largest ISPs including Telstra, Optus, TPG, and Vocus, plus subsidiaries.

Unlike blocking injunctions targeting regular sites, the studios sought to have several elements of HD Subs+ infrastructure rendered inaccessible, so that its sales platform, EPG (electronic program guide), software (such as an Android and set-top box app), updates, and sundry other services would fail to operate in Australia.

After a six-month wait, the Federal Court granted the application earlier today, compelling Australia’s ISPs to block “16 online locations” associated with the HD Subs+ service, rendering its TV services inaccessible Down Under.

“Each respondent must, within 15 business days of service of these orders, take reasonable steps to disable access to the target online locations,” said Justice Nicholas, as quoted by ZDNet.

A small selection of channels in the HDSubs+ package

The ISPs were given flexibility in how to implement the ban, with the Judge noting that DNS blocking, IP address blocking or rerouting, URL blocking, or “any alternative technical means for disabling access”, would be acceptable.

The rightsholders are required to pay a fee of AU$50 for each domain they want to block, but Village Roadshow says it doesn’t mind doing so, since blocking is in the “public interest”. Continuing a pattern established last year, none of the ISPs showed up to the judgment.

A similar IPTV blocking application was filed by Hong Kong-based broadcaster Television Broadcasts Limited (TVB) last year.

TVB wants ISPs including Telstra, Optus, Vocus, and TPG plus their subsidiaries to block access to seven Android-based services named as A1, BlueTV, EVPAD, FunTV, MoonBox, Unblock, and hTV5.

The application was previously heard alongside the HD Subs+ case but will now be handled separately following complications. In April it was revealed that TVB not only wants to block Internet locations related to the technical operation of the service, but also hosting sites that fulfill a role similar to that of Google Play or Apple’s App Store.

TVB wants to have these app marketplaces blocked by Australian ISPs, which would render not only the illicit apps inaccessible to the public but all of the non-infringing ones too.

Justice Nicholas will now have to decide whether the “primary purpose” of these marketplaces is to infringe or facilitate the infringement of TVB’s copyrights. However, there is also a question of whether China-focused live programming has copyright status in Australia. An additional hearing is scheduled for May 2 for these matters to be addressed.

Also on Friday, Foxtel filed yet another blocking application targeting “15 online locations” involving 27 domain names connected to traditional BitTorrent and streaming services.

According to ComputerWorld the injunction targets the same set of ISPs but this time around, Foxtel is trying to save on costs.

The company doesn’t want to have expert witnesses present in court, doesn’t want to stage live demos of websites, and would like to rely on videos and screenshots instead. Foxtel also says that if the ISPs agree, it won’t serve its evidence on them as it has done previously.

The company asked Justice Nicholas to deal with the injunction application “on paper” but he declined, setting a hearing for June 18 but accepting screenshots and videos as evidence.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

How to centralize DNS management in a multi-account environment

Post Syndicated from Mahmoud Matouk original https://aws.amazon.com/blogs/security/how-to-centralize-dns-management-in-a-multi-account-environment/

In a multi-account environment where you require connectivity between accounts, and perhaps connectivity between cloud and on-premises workloads, the demand for a robust Domain Name Service (DNS) that’s capable of name resolution across all connected environments will be high.

The most common solution is to implement local DNS in each account and use conditional forwarders for DNS resolution outside of that account. While this solution might work for a small number of accounts, it becomes complex to manage as the number of accounts grows.

In this post, I will provide a solution to implement central DNS for multiple accounts. This solution reduces the number of DNS servers and forwarders needed to implement cross-account domain resolution. I will show you how to configure this solution in four steps:

  1. Set up your Central DNS account.
  2. Set up each participating account.
  3. Create Route53 associations.
  4. Configure on-premises DNS (if applicable).

Solution overview

In this solution, you use AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) as a DNS service in a dedicated account in a Virtual Private Cloud (DNS-VPC).

The DNS service included in AWS Managed Microsoft AD uses conditional forwarders to forward domain resolution to either Amazon Route 53 (for domains in the awscloud.com zone) or to on-premises DNS servers (for domains in the example.com zone). You’ll use AWS Managed Microsoft AD as the primary DNS server for other application accounts in the multi-account environment (participating accounts).

A participating account is any application account that hosts a VPC and uses the centralized AWS Managed Microsoft AD as the primary DNS server for that VPC. Each participating account has a private hosted zone with a unique zone name to represent this account (for example, business_unit.awscloud.com).

You associate the DNS-VPC with the unique hosted zone in each of the participating accounts. This allows AWS Managed Microsoft AD to use Route 53 to resolve all registered domains in private hosted zones in participating accounts.

The following diagram shows how the various services work together:
 


Figure 1: Diagram showing the relationship between all the various services

 

In this diagram, all VPCs in participating accounts use Dynamic Host Configuration Protocol (DHCP) option sets. The option sets configure EC2 instances to use the centralized AWS Managed Microsoft AD in DNS-VPC as their default DNS Server. You also configure AWS Managed Microsoft AD to use conditional forwarders to send domain queries to Route53 or on-premises DNS servers based on query zone. For domain resolution across accounts to work, we associate DNS-VPC with each hosted zone in participating accounts.

If, for example, server.pa1.awscloud.com needs to resolve addresses in the pa3.awscloud.com domain, the sequence shown in the following diagram happens:
 


Figure 2: How domain resolution across accounts works

 

  • 1.1: server.pa1.awscloud.com sends a domain name lookup for server.pa3.awscloud.com to its default DNS server. The request is forwarded to the DNS server defined in the DHCP options set (AWS Managed Microsoft AD in DNS-VPC).
  • 1.2: AWS Managed Microsoft AD forwards the name resolution to Route 53 because the name is in the awscloud.com zone.
  • 1.3: Route 53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Similarly, if server.example.com needs to resolve server.pa3.awscloud.com, the following happens:

  • 2.1: server.example.com sends a domain name lookup for server.pa3.awscloud.com to the on-premises DNS server.
  • 2.2: The on-premises DNS server uses a conditional forwarder to forward the lookup to AWS Managed Microsoft AD in DNS-VPC.
  • 1.2: AWS Managed Microsoft AD forwards the name resolution to Route 53 because the name is in the awscloud.com zone.
  • 1.3: Route 53 resolves the name to the IP address of server.pa3.awscloud.com because DNS-VPC is associated with the private hosted zone pa3.awscloud.com.

Step 1: Set up a centralized DNS account

In previous AWS Security Blog posts, Drew Dennis covered a couple of options for establishing DNS resolution between on-premises networks and Amazon VPC. In those posts, he showed how you can use AWS Managed Microsoft AD (provisioned with AWS Directory Service) to provide DNS resolution with forwarding capabilities.

To set up a centralized DNS account, you can follow the same steps in Drew’s post to create AWS Managed Microsoft AD and configure the forwarders to send DNS queries for awscloud.com to the default, VPC-provided DNS and to forward example.com queries to the on-premises DNS server.

Here are a few considerations while setting up central DNS:

  • The VPC that hosts AWS Managed Microsoft AD (DNS-VPC) will be associated with all private hosted zones in participating accounts.
  • To be able to resolve domain names across AWS and on-premises, connectivity through Direct Connect or VPN must be in place.

Step 2: Set up participating accounts

The steps I suggest in this section should be applied individually in each application account that’s participating in central DNS resolution.

  1. Create the VPC(s) that will host your resources in each participating account.
  2. Create VPC peering between the local VPC(s) in each participating account and DNS-VPC.
  3. Create a private hosted zone in Route 53 (a CLI sketch follows this list). Hosted zone domain names must be unique across all accounts. In the diagram above, we used pa1.awscloud.com / pa2.awscloud.com / pa3.awscloud.com. You could also use a combination of environment and business unit: for example, you could use pa1.dev.awscloud.com to achieve uniqueness.
  4. Associate the VPC(s) in each participating account with the local private hosted zone.
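
If you prefer to script steps 3 and 4, here is a minimal AWS CLI sketch. It assumes the pa1.awscloud.com zone name from the diagram and placeholder resource IDs; replace these with your own values.

# Create the private hosted zone for this participating account; passing --vpc makes the zone private
# and associates it with the account's local VPC.
aws route53 create-hosted-zone --name pa1.awscloud.com --vpc VPCRegion=us-east-1,VPCId=<local-vpc-id> --caller-reference pa1-zone-001

# If the account has additional local VPCs, associate them with the same hosted zone.
aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <local-hosted-zone-id> --vpc VPCRegion=us-east-1,VPCId=<additional-local-vpc-id>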

The next step is to change the default DNS servers on each VPC using DHCP option set:

  1. Follow these steps to create a new DHCP options set. Make sure to enter the private IP addresses of the two AWS Managed Microsoft AD DNS servers that were created in DNS-VPC in the DNS Servers field:
     
    The "Create DHCP options set" dialog box

    Figure 3: The “Create DHCP options set” dialog box

     

  2. Follow these steps to assign the DHCP options set to your VPC(s) in each participating account (a CLI sketch of both steps follows this list).
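
If you prefer the CLI for these two steps, here is a hedged sketch. The IP addresses stand in for the two AWS Managed Microsoft AD DNS addresses in DNS-VPC, and the resource IDs are placeholders.

# Create a DHCP options set that points instances at the Managed Microsoft AD DNS servers.
aws ec2 create-dhcp-options --dhcp-configurations "Key=domain-name-servers,Values=10.0.0.10,10.0.0.11"

# Associate the returned options set with the participating account's VPC.
aws ec2 associate-dhcp-options --dhcp-options-id <dhcp-options-id> --vpc-id <vpc-id>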

Step 3: Associate DNS-VPC with private hosted zones in each participating account

The next steps will associate DNS-VPC with the private hosted zone in each participating account. This allows instances in DNS-VPC to resolve domain records created in these hosted zones. If you need them, here are more details on associating a private hosted zone with a VPC in a different account.

  1. In each participating account, create the authorization using the private hosted zone ID from the previous step, the region, and the VPC ID that you want to associate (DNS-VPC).
     
    aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     
  2. In the centralized DNS account, associate DNS-VPC with the hosted zone in each participating account.
     
    aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
     

After completing these steps, AWS Managed Microsoft AD in the centralized DNS account should be able to resolve domain records in the private hosted zone in each participating account.
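
As a quick sanity check (a suggested step, not part of the original walkthrough), you can log in to an instance whose VPC uses the new DHCP options set and resolve a record registered in another participating account's zone, for example:

# server.pa3.awscloud.com is the example record name used earlier in this post.
nslookup server.pa3.awscloud.com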

Step 4: Setting up on-premises DNS servers

This step is necessary only if you want to resolve AWS private domains from on-premises servers. The task comes down to configuring on-premises forwarders to send DNS queries for all domains in the awscloud.com zone to AWS Managed Microsoft AD in DNS-VPC.

The steps to implement conditional forwarders vary by DNS product. Follow your product’s documentation to complete this configuration.

Summary

I introduced a simplified solution to implement central DNS resolution in a multi-account environment that can also be extended to support DNS resolution between on-premises resources and AWS. This can help reduce operational effort and the number of resources needed to implement cross-account domain resolution.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, start a new thread on the AWS Directory Service forum or contact AWS Support.

Want more AWS Security news? Follow us on Twitter.

Registrars Suspend 11 Pirate Site Domains, 89 More in the Crosshairs

Post Syndicated from Andy original https://torrentfreak.com/registrars-suspend-11-pirate-site-domains-89-more-in-the-crosshairs-180423/

In addition to website blocking which is running rampant across dozens of countries right now, targeting the domains of pirate sites is considered to be a somewhat effective anti-piracy tool.

The vast majority of websites are found using a recognizable name so when they become inaccessible, site operators have to work quickly to get the message out to fans. That can mean losing visitors, at least in the short term, and also contributes to the rise of copy-cat sites that may not have users’ best interests at heart.

Nevertheless, crime-fighting has always been about disrupting the ability of the enemy to do business so with this in mind, authorities in India began taking advice from the UK’s Police Intellectual Property Crime Unit (PIPCU) a couple of years ago.

After studying the model developed by PIPCU, India formed its Digital Crime Unit (DCU), which follows a multi-stage plan.

Initially, pirate sites and their partners are told to cease-and-desist. Next, complaints are filed with advertisers, who are asked to stop funding site activities. Service providers and domain registrars also receive a written complaint from the DCU, asking them to suspend services to the sites in question.

Last July, the DCU earmarked around 9,000 sites where pirated content was being made available. From there, 1,300 were placed on a shortlist for targeted action. Precisely how many have been contacted thus far is unclear but authorities are now reporting success.

According to local reports, the Maharashtra government’s Digital Crime Unit has managed to have 11 pirate site domains suspended following complaints from players in the entertainment industry.

As is often the case (and to avoid them receiving even more attention) the sites in question aren’t being named but according to Brijesh Singh, special Inspector General of Police in Maharashtra, the sites had a significant number of visitors.

Their domain registrars were sent a notice under Section 149 of the Code Of Criminal Procedure, which grants police the power to take preventative action when a crime is suspected. It’s yet to be confirmed officially but it seems likely that pirate sites utilizing local registrars were targeted by the authorities.

“Responding to our notice, the domain names of all these websites, that had a collective viewership of over 80 million, were suspended,” Singh said.

Laxman Kamble, a police inspector attached to the state government’s Cyber Cell, said the pilot project was launched after the government received complaints from Viacom and Star but back in January there were reports that the MPAA had also become involved.

Using the model pioneered by London’s PIPCU, 19 parameters were applied to a list of pirate sites in order to place them on the shortlist. These are reported to include the type of content being uploaded and downloaded, and the overall number of downloads.

Kamble reports that a further 89 websites, which have domains registered abroad but are very popular in India, are now being targeted. Whether overseas registrars will prove as compliant remains to be seen. After booking initial success, even PIPCU itself experienced problems keeping up the momentum with registrars.

In 2014, information obtained by TorrentFreak following a Freedom of Information request revealed that only five out of 70 domain registrars had complied with police requests to suspend domains.

A year later, PIPCU confirmed that suspending pirate domain names was no longer a priority for them after ICANN ruled that registrars don’t have to suspend domain names without a valid court order.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Building a Multi-region Serverless Application with Amazon API Gateway and AWS Lambda

Post Syndicated from Stefano Buliani original https://aws.amazon.com/blogs/compute/building-a-multi-region-serverless-application-with-amazon-api-gateway-and-aws-lambda/

This post was written by Magnus Bjorkman, Solutions Architect.

Many customers are looking to run their services at global scale, deploying their backend to multiple regions. In this post, we describe how to deploy a Serverless API into multiple regions and how to leverage Amazon Route 53 to route the traffic between regions. We use latency-based routing and health checks to achieve an active-active setup that can fail over between regions in case of an issue. We leverage the new regional API endpoint feature in Amazon API Gateway to make this a seamless process for the API client making the requests. This post does not cover the replication of your data, which is another aspect to consider when deploying applications across regions.

Solution overview

Currently, the default API endpoint type in API Gateway is the edge-optimized API endpoint, which enables clients to access an API through an Amazon CloudFront distribution. This typically improves connection time for geographically diverse clients. By default, a custom domain name is globally unique and the edge-optimized API endpoint would invoke a Lambda function in a single region in the case of Lambda integration. You can’t use this type of endpoint with a Route 53 active-active setup and fail-over.

The new regional API endpoint in API Gateway moves the API endpoint into the region and the custom domain name is unique per region. This makes it possible to run a full copy of an API in each region and then use Route 53 to implement an active-active setup with failover. The following diagram shows how you do this:

Active/active multi region architecture

  • Deploy your Rest API stack, consisting of API Gateway and Lambda, in two regions, such as us-east-1 and us-west-2.
  • Choose the regional API endpoint type for your API.
  • Create a custom domain name and choose the regional API endpoint type for that one as well. In both regions, you are configuring the custom domain name to be the same, for example, helloworldapi.replacewithyourcompanyname.com
  • Use the host name of the custom domain names from each region, for example, xxxxxx.execute-api.us-east-1.amazonaws.com and xxxxxx.execute-api.us-west-2.amazonaws.com, to configure record sets in Route 53 for your client-facing domain name, for example, helloworldapi.replacewithyourcompanyname.com

The above solution provides an active-active setup for your API across the two regions, but you are not doing failover yet. For that to work, set up a health check in Route 53:

Route 53 Health Check

A Route 53 health check must have an endpoint to call to check the health of a service. You could do a simple ping of your actual Rest API methods, but instead provide a specific method on your Rest API that does a deep ping. That is, a Lambda function that checks the status of all the dependencies.

In the case of the Hello World API, you don’t have any other dependencies. In a real-world scenario, you could check on dependencies such as databases, other APIs, and external services. Route 53 health checks themselves cannot use your custom domain name endpoint’s DNS address, so you are going to call the API endpoints directly via their region-unique endpoint’s DNS address.

Walkthrough

The following sections describe how to set up this solution. You can find the complete solution at the blog-multi-region-serverless-service GitHub repo. Clone or download the repository locally to be able to do the setup as described.

Prerequisites

You need the following resources to set up the solution described in this post:

  • AWS CLI
  • An S3 bucket in each region in which to deploy the solution, which can be used by the AWS Serverless Application Model (SAM). You can use the following CloudFormation templates to create buckets in us-east-1 and us-west-2:
    • us-east-1:
    • us-west-2:
  • A hosted zone registered in Amazon Route 53. This is used for defining the domain name of your API endpoint, for example, helloworldapi.replacewithyourcompanyname.com. You can use a third-party domain name registrar and then configure the DNS in Amazon Route 53, or you can purchase a domain directly from Amazon Route 53.

Deploy API with health checks in two regions

Start by creating a small “Hello World” Lambda function that sends back a message in the region in which it has been deployed.


"""Return message."""
import logging

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the hello world message."""

    region = context.invoked_function_arn.split(':')[3]

    logger.info("message: " + "Hello from " + region)
    
    return {
        "message": "Hello from " + region
    }

Also create a Lambda function for doing a health check that returns a value based on another environment variable (either “ok” or “fail”) to allow for ease of testing:


"""Return health."""
import logging
import os

logging.basicConfig()
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    """Lambda handler for getting the health."""

    logger.info("status: " + os.environ['STATUS'])
    
    return {
        "status": os.environ['STATUS']
    }

Deploy both of these using an AWS Serverless Application Model (SAM) template. SAM is a CloudFormation extension that is optimized for serverless, and provides a standard way to create a complete serverless application. You can find the full helloworld-sam.yaml template in the blog-multi-region-serverless-service GitHub repo.

A few things to highlight:

  • You are using inline Swagger to define your API so you can substitute the current region in the x-amazon-apigateway-integration section.
  • Most of the Swagger template covers CORS to allow you to test this from a browser.
  • You are also using substitution to populate the environment variable used by the “Hello World” method with the region into which it is being deployed.

The Swagger allows you to use the same SAM template in both regions.

You can only use SAM from the AWS CLI, so do the following from the command prompt. First, deploy the SAM template in us-east-1 with the following commands, replacing “<your bucket in us-east-1>” with a bucket in your account:


> cd helloworld-api
> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-east-1> --region us-east-1
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-east-1

Second, do the same in us-west-2:


> aws cloudformation package --template-file helloworld-sam.yaml --output-template-file /tmp/cf-helloworld-sam.yaml --s3-bucket <your bucket in us-west-2> --region us-west-2
> aws cloudformation deploy --template-file /tmp/cf-helloworld-sam.yaml --stack-name multiregionhelloworld --capabilities CAPABILITY_IAM --region us-west-2

The API was created with the default endpoint type of Edge Optimized. Switch it to Regional. In the Amazon API Gateway console, select the API that you just created and choose the wheel-icon to edit it.

API Gateway edit API settings

In the edit screen, select the Regional endpoint type and save the API. Do the same in both regions.
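
If you prefer to script the switch, the endpoint type can also be changed with a patch operation through the AWS CLI. This is a sketch with the REST API ID as a placeholder; run it in each region:

> aws apigateway update-rest-api --rest-api-id <rest-api-id> --region us-east-1 --patch-operations op=replace,path=/endpointConfiguration/types/EDGE,value=REGIONAL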

Grab the URL for the API in the console by navigating to the method in the prod stage.

API Gateway endpoint link

You can now test this with curl:


> curl https://2wkt1cxxxx.execute-api.us-west-2.amazonaws.com/prod/helloworld
{"message": "Hello from us-west-2"}

Write down the domain name for the URL in each region (for example, 2wkt1cxxxx.execute-api.us-west-2.amazonaws.com), as you need that later when you deploy the Route 53 setup.

Create the custom domain name

Next, create an Amazon API Gateway custom domain name endpoint. As part of using this feature, you must have a hosted zone and domain available to use in Route 53 as well as an SSL certificate that you use with your specific domain name.

You can create the SSL certificate by using AWS Certificate Manager. In the ACM console, choose Get started (if you have no existing certificates) or Request a certificate. Fill out the form with the domain name to use for the custom domain name endpoint, which is the same across the two regions:

Amazon Certificate Manager request new certificate

Go through the remaining steps and validate the certificate for each region before moving on.

You are now ready to create the endpoints. In the Amazon API Gateway console, choose Custom Domain Names, Create Custom Domain Name.

API Gateway create custom domain name

A few things to highlight:

  • The domain name is the same as what you requested earlier through ACM.
  • The endpoint configuration should be regional.
  • Select the ACM Certificate that you created earlier.
  • You need to create a base path mapping that connects back to your earlier API Gateway endpoint. Set the base path to v1 so you can version your API, and then select the API and the prod stage.

Choose Save. You should see your newly created custom domain name:

API Gateway custom domain setup

Note the value for Target Domain Name as you need that for the next step. Do this for both regions.
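
If you would rather script the custom domain name setup than use the console, a rough CLI equivalent is sketched below. The certificate ARN and REST API ID are placeholders, and the Target Domain Name noted above should appear in the output of the first command:

> aws apigateway create-domain-name --domain-name helloworldapi.replacewithyourcompanyname.com --regional-certificate-arn <acm-certificate-arn> --endpoint-configuration types=REGIONAL --region us-east-1
> aws apigateway create-base-path-mapping --domain-name helloworldapi.replacewithyourcompanyname.com --base-path v1 --rest-api-id <rest-api-id> --stage prod --region us-east-1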

Deploy Route 53 setup

Use the global Route 53 service to provide DNS lookup for the Rest API, distributing the traffic in an active-active setup based on latency. You can find the full CloudFormation template in the blog-multi-region-serverless-service GitHub repo.

The template sets up health checks, for example, for us-east-1:


HealthcheckRegion1:
  Type: "AWS::Route53::HealthCheck"
  Properties:
    HealthCheckConfig:
      Port: "443"
      Type: "HTTPS_STR_MATCH"
      SearchString: "ok"
      ResourcePath: "/prod/healthcheck"
      FullyQualifiedDomainName: !Ref Region1HealthEndpoint
      RequestInterval: "30"
      FailureThreshold: "2"

Use the health check when you set up the record set and the latency routing, for example, for us-east-1:


Region1EndpointRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    Region: us-east-1
    HealthCheckId: !Ref HealthcheckRegion1
    SetIdentifier: "endpoint-region1"
    HostedZoneId: !Ref HostedZoneId
    Name: !Ref MultiregionEndpoint
    Type: CNAME
    TTL: 60
    ResourceRecords:
      - !Ref Region1Endpoint

You can create the stack by using the following link, copying in the domain names from the previous section, your existing hosted zone name, and the main domain name that is created (for example, hellowordapi.replacewithyourcompanyname.com):

The following screenshot shows what the parameters might look like:
Serverless multi region Route 53 health check

Specifically, the domain names that you collected earlier map as follows (a CLI sketch follows the list):

  • The domain names from the API Gateway “prod”-stage go into Region1HealthEndpoint and Region2HealthEndpoint.
  • The target domain names from the custom domain names go into Region1Endpoint and Region2Endpoint.
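
As an alternative to the launch link, you could deploy the same template from the CLI. This is a sketch only: the template file name is hypothetical, and the parameter names are assumed to match the ones referenced above and in the repository’s template.

> aws cloudformation deploy --template-file route53setup.yaml --stack-name multiregionroute53 --parameter-overrides HostedZoneId=<hosted-zone-id> MultiregionEndpoint=hellowordapi.replacewithyourcompanyname.com Region1HealthEndpoint=<region1-prod-stage-domain> Region2HealthEndpoint=<region2-prod-stage-domain> Region1Endpoint=<region1-target-domain-name> Region2Endpoint=<region2-target-domain-name>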

Using the Rest API from server-side applications

You are now ready to use your setup. First, demonstrate the use of the API from server-side clients by using curl from the command line:


> curl https://hellowordapi.replacewithyourcompanyname.com/v1/helloworld/
{"message": "Hello from us-east-1"}

Testing failover of Rest API in browser

Here’s how you can use this from the browser and test the failover. Find all of the files for this test in the browser-client folder of the blog-multi-region-serverless-service GitHub repo.

Use this html file:


<!DOCTYPE HTML>
<html>
<head>
    <meta charset="utf-8"/>
    <meta http-equiv="X-UA-Compatible" content="IE=edge"/>
    <meta name="viewport" content="width=device-width, initial-scale=1"/>
    <title>Multi-Region Client</title>
</head>
<body>
<div>
   <h1>Test Client</h1>

    <p id="client_result">

    </p>

    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.3/jquery.min.js"></script>
    <script src="settings.js"></script>
    <script src="client.js"></script>
</body>
</html>

The html file uses this JavaScript file to repeatedly call the API and print the history of messages:


var messageHistory = "";

(function call_service() {

   $.ajax({
      url: helloworldMultiregionendpoint+'v1/helloworld/',
      dataType: "json",
      cache: false,
      success: function(data) {
         messageHistory+="<p>"+data['message']+"</p>";
         $('#client_result').html(messageHistory);
      },
      complete: function() {
         // Schedule the next request when the current one's complete
         setTimeout(call_service, 10000);
      },
      error: function(xhr, status, error) {
         $('#client_result').html('ERROR: '+status);
      }
   });

})();

Also, make sure to update the settings in settings.js to match with the API Gateway endpoints for the DNS-proxy and the multi-regional endpoint for the Hello World API: var helloworldMultiregionendpoint = "https://hellowordapi.replacewithyourcompanyname.com/";

You can now open the HTML file in the browser (you can do this directly from the file system) and you should see something like the following screenshot:

Serverless multi region browser test

You can test failover by changing the environment variable in your health check Lambda function. In the Lambda console, select your health check function and scroll down to the Environment variables section. For the STATUS key, modify the value to fail.

Lambda update environment variable
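
If you would rather flip the flag from the command line, the same change can be made with the AWS CLI; the function name is a placeholder for your health check function:

> aws lambda update-function-configuration --function-name <health-check-function-name> --environment "Variables={STATUS=fail}" --region us-east-1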

You should see the region switch in the test client:

Serverless multi region broker test switchover

During an emulated failure like this, the browser might take some additional time to switch over due to connection keep-alive functionality. If you are using a browser like Chrome, you can kill all the connections to see a more immediate fail-over: chrome://net-internals/#sockets

Summary

You have implemented a simple way to do multi-regional serverless applications that fail over seamlessly between regions, either being accessed from the browser or from other applications/services. You achieved this by using the capabilities of Amazon Route 53 to do latency based routing and health checks for fail-over. You unlocked the use of these features in a serverless application by leveraging the new regional endpoint feature of Amazon API Gateway.

The setup was fully scripted using CloudFormation, the AWS Serverless Application Model (SAM), and the AWS CLI, and it can be integrated into deployment tools to push the code across the regions to make sure it is available in all the needed regions. For more information about cross-region deployments, see Building a Cross-Region/Cross-Account Code Deployment Solution on AWS on the AWS DevOps blog.

Application Load Balancers Now Support Multiple TLS Certificates With Smart Selection Using SNI

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/

Today we’re launching support for multiple TLS/SSL certificates on Application Load Balancers (ALB) using Server Name Indication (SNI). You can now host multiple TLS secured applications, each with its own TLS certificate, behind a single load balancer. In order to use SNI, all you need to do is bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client. These new features are provided at no additional charge.

If you’re looking for a TL;DR on how to use this new feature just click here. If you’re like me and you’re a little rusty on the specifics of Transport Layer Security (TLS) then keep reading.

TLS? SSL? SNI?

People tend to use the terms SSL and TLS interchangeably even though the two are technically different. SSL technically refers to a predecessor of the TLS protocol. To keep things simple I’ll be using the term TLS for the rest of this post.

TLS is a protocol for securely transmitting data like passwords, cookies, and credit card numbers. It enables privacy, authentication, and integrity of the data being transmitted. TLS uses certificate based authentication where certificates are like ID cards for your websites. You trust the person that signed and issued the certificate, the certificate authority (CA), so you trust that the data in the certificate is correct. When a browser connects to your TLS-enabled ALB, ALB presents a certificate that contains your site’s public key, which has been cryptographically signed by a CA. This way the client can be sure it’s getting the ‘real you’ and that it’s safe to use your site’s public key to establish a secure connection.

With SNI support we’re making it easy to use more than one certificate with the same ALB. The most common reason you might want to use multiple certificates is to handle different domains with the same load balancer. It’s always been possible to use wildcard and subject-alternate-name (SAN) certificates with ALB, but these come with limitations. Wildcard certificates only work for related subdomains that match a simple pattern and while SAN certificates can support many different domains, the same certificate authority has to authenticate each one. That means you have to reauthenticate and reprovision your certificate every time you add a new domain.

One of our most frequent requests on forums, reddit, and in my e-mail inbox has been to use the Server Name Indication (SNI) extension of TLS to choose a certificate for a client. Since TLS operates at the transport layer, below HTTP, it doesn’t see the hostname requested by a client. SNI works by having the client tell the server “This is the domain I expect to get a certificate for” when it first connects. The server can then choose the correct certificate to respond to the client. All modern web browsers and a large majority of other clients support SNI. In fact, today we see SNI supported by over 99.5% of clients connecting to CloudFront.

Smart Certificate Selection on ALB

ALB’s smart certificate selection goes beyond SNI. In addition to containing a list of valid domain names, certificates also describe the type of key exchange and cryptography that the server supports, as well as the signature algorithm (SHA2, SHA1, MD5) used to sign the certificate. To establish a TLS connection, a client starts a TLS handshake by sending a “ClientHello” message that outlines the capabilities of the client: the protocol versions, extensions, cipher suites, and compression methods. Based on what an individual client supports, ALB’s smart selection algorithm chooses a certificate for the connection and sends it to the client. ALB supports both the classic RSA algorithm and the newer, hipper, and faster Elliptic-curve based ECDSA algorithm. ECDSA support among clients isn’t as prevalent as SNI, but it is supported by all modern web browsers. Since it’s faster and requires less CPU, it can be particularly useful for ultra-low latency applications and for conserving the amount of battery used by mobile applications. Since ALB can see what each client supports from the TLS handshake, you can upload both RSA and ECDSA certificates for the same domains and ALB will automatically choose the best one for each client.

Using SNI with ALB

I’ll use a few example websites like VimIsBetterThanEmacs.com and VimIsTheBest.com. I’ve purchased and hosted these domains on Amazon Route 53, and provisioned two separate certificates for them in AWS Certificate Manager (ACM). If I want to securely serve both of these sites through a single ALB, I can quickly add both certificates in the console.

First, I’ll select my load balancer in the console, go to the listeners tab, and select “view/edit certificates”.

Next, I’ll use the “+” button in the top left corner to select some certificates then I’ll click the “Add” button.

There are no more steps. If you’re not really a GUI kind of person you’ll be pleased to know that it’s also simple to add new certificates via the AWS Command Line Interface (CLI) (or SDKs).

aws elbv2 add-listener-certificates --listener-arn <listener-arn> --certificates CertificateArn=<cert-arn>
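
To spot-check which certificate the load balancer presents for a given hostname, one option is openssl’s s_client, which sets the SNI extension via -servername. The domain below is one of the example sites from this post; substitute your own:

openssl s_client -connect vimisthebest.com:443 -servername vimisthebest.com </dev/null 2>/dev/null | openssl x509 -noout -subject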

Things to know

  • ALB Access Logs now include the client’s requested hostname and the certificate ARN used. If the “hostname” field is empty (represented by a “-“) the client did not use the SNI extension in their request.
  • You can use any of your certificates in ACM or IAM.
  • You can bind multiple certificates for the same domain(s) to a secure listener. Your ALB will choose the optimal certificate based on multiple factors including the capabilities of the client.
  • If the client does not support SNI your ALB will use the default certificate (the one you specified when you created the listener).
  • There are three new ELB API calls: AddListenerCertificates, RemoveListenerCertificates, and DescribeListenerCertificates (a CLI sketch follows this list).
  • You can bind up to 25 certificates per load balancer (not counting the default certificate).
  • These new features are supported by AWS CloudFormation at launch.
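
For completeness, here is a sketch of the new calls from the CLI, with the listener and certificate ARNs as placeholders:

aws elbv2 describe-listener-certificates --listener-arn <listener-arn>

aws elbv2 remove-listener-certificates --listener-arn <listener-arn> --certificates CertificateArn=<cert-arn>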

You can see an example of these new features in action with a set of websites created by my colleague Jon Zobrist: https://www.exampleloadbalancer.com/.

Overall, I will personally use this feature and I’m sure a ton of AWS users will benefit from it as well. I want to thank the Elastic Load Balancing team for all their hard work in getting this into the hands of our users.

Randall

dcrawl – Web Crawler For Unique Domains

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/09/dcrawl-web-crawler-unique-domains/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

dcrawl – Web Crawler For Unique Domains

dcrawl is a simple, but smart, multithreaded web crawler for randomly gathering huge lists of unique domain names.

How does dcrawl work?

dcrawl takes one site URL as input and detects all a href= links in the site’s body. Each found link is put into the queue. Successively, each queued link is crawled in the same way, branching out to more URLs found in links on each site’s body.

dcrawl Web Crawler Features

  • Branching out only to a predefined number of links found per hostname.

Read the rest of dcrawl – Web Crawler For Unique Domains now! Only available at Darknet.

Top 10 Most Obvious Hacks of All Time (v0.9)

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/07/top-10-most-obvious-hacks-of-all-time.html

For teaching hacking/cybersecurity, I thought I’d create a list of the most obvious hacks of all time. Not the best hacks, the most sophisticated hacks, or the hacks with the biggest impact, but the most obvious hacks — ones that even the least knowledgeable among us should be able to understand. Below I propose some hacks that fit this bill, though in no particular order.

The reason I’m writing this is that my niece wants me to teach her some hacking. I thought I’d start with the obvious stuff first.

Shared Passwords

If you use the same password for every website, and one of those websites gets hacked, then the hacker has your password for all your websites. The reason your Facebook account got hacked wasn’t because of anything Facebook did, but because you used the same email-address and password when creating an account on “beagleforums.com”, which got hacked last year.

I’ve heard people say “I’m sure, because I choose a complex password and use it everywhere”. No, this is the very worst thing you can do. Sure, you can use the same password on all sites you don’t care much about, but for Facebook, your email account, and your bank, you should have a unique password, so that when other sites get hacked, your important sites are secure.

And yes, it’s okay to write down your passwords on paper.

Tools: HaveIBeenPwned.com

PIN encrypted PDFs

My accountant emails PDF statements encrypted with the last 4 digits of my Social Security Number. This is not encryption — a 4-digit number has only 10,000 combinations, and a hacker can guess all of them in seconds.
PIN numbers for ATM cards work because ATM machines are online, and the machine can reject your card after four guesses. PIN numbers don’t work for documents, because they are offline — the hacker has a copy of the document on their own machine, disconnected from the Internet, and can continue making bad guesses with no restrictions.
Passwords protecting documents must be long enough that even trillions upon trillions of guesses are insufficient to find them.

Tools: Hashcat, John the Ripper

SQL and other injection

The lazy way of combining websites with databases is to combine user input with an SQL statement. This combines code with data, so the obvious consequence is that hackers can craft data to mess with the code.
No, this isn’t obvious to the general public, but it should be obvious to programmers. The moment you write code that adds unfiltered user-input to an SQL statement, the consequence should be obvious. Yet, “SQL injection” has remained one of the most effective hacks for the last 15 years because somehow programmers don’t understand the consequence.
CGI shell injection is a similar issue. Back in the early days, when “CGI scripts” were a thing, it was really important, but these days, not so much, so I just included it with SQL. The consequence of executing shell code should’ve been obvious, but weirdly, it wasn’t. The IT guy at the company I worked for back in the late 1990s came to me and asked “this guy says we have a vulnerability, is he full of shit?”, and I had to answer “no, he’s right — obviously so”.

XSS (“Cross Site Scripting”) [*] is another injection issue, but this time at somebody’s web browser rather than a server. It works because websites will echo back what is sent to them. For example, if you search for Cross Site Scripting with the URL https://www.google.com/search?q=cross+site+scripting, then you’ll get a page back from the server that contains that string. If the string is JavaScript code rather than text, then some servers (though not Google) send back the code in the page in a way that it’ll be executed. This is most often used to hack somebody’s account: you send them an email or tweet a link, and when they click on it, the JavaScript gives control of the account to the hacker.

Cross site injection issues like this should probably be their own category, but I’m including it here for now.

More: Wikipedia on SQL injection, Wikipedia on cross site scripting.
Tools: Burpsuite, SQLmap

Buffer overflows

In the C programming language, programmers first create a buffer, then read input into it. If the input is longer than the buffer, then it overflows. The extra bytes overwrite other parts of the program, letting the hacker run code.
Again, it’s not a thing the general public is expected to know about, but is instead something C programmers should be expected to understand. They should know that it’s up to them to check the length and stop reading input before it overflows the buffer, that there’s no language feature that takes care of this for them.
We are three decades after the first major buffer overflow exploits, so there is no excuse for C programmers not to understand this issue.

What makes buffer overflows particularly obvious is the way they are wrapped in exploits, like in Metasploit. While the bug itself is obviously a bug, actually exploiting it can take some very non-obvious skill. However, once that exploit is written, any trained monkey can press a button and run it. That’s where we get the insult “script kiddie” from — referring to wannabe-hackers who never learn enough to write their own exploits, but who spend a lot of time running the exploit scripts written by better hackers than they are.

More: Wikipedia on buffer overflow, Wikipedia on script kiddie,  “Smashing The Stack For Fun And Profit” — Phrack (1996)
Tools: bash, Metasploit

SendMail DEBUG command (historical)

The first popular email server in the 1980s was called “SendMail”. It had a feature whereby if you send a “DEBUG” command to it, it would execute any code following the command. The consequence of this was obvious — hackers could (and did) upload code to take control of the server. This was used in the Morris Worm of 1988. Most Internet machines of the day ran SendMail, so the worm spread fast infecting most machines.
This bug was mostly ignored at the time. It was thought of as a theoretical problem that might only rarely be used to hack a system. Part of the motivation of the Morris Worm was to demonstrate the consequences of such problems — consequences that should’ve been obvious but somehow were rejected by everyone.

More: Wikipedia on Morris Worm

Email Attachments/Links

I’m conflicted whether I should add this or not, because here’s the deal: you are supposed to click on attachments and links within emails. That’s what they are there for. The difference between good and bad attachments/links is not obvious. Indeed, easy-to-use email systems make detecting the difference harder.
On the other hand, the consequences of bad attachments/links are obvious. That worms like ILOVEYOU spread so easily is because people trusted attachments coming from their friends, and ran them.
We have no solution to the problem of bad email attachments and links. Viruses and phishing are pervasive problems. Yet, we know why they exist.

Default and backdoor passwords

The Mirai botnet was caused by surveillance-cameras having default and backdoor passwords, and being exposed to the Internet without a firewall. The consequence should be obvious: people will discover the passwords and use them to take control of the bots.
Surveillance-cameras have the problem that they are usually exposed to the public, and can’t be reached without a ladder — often a really tall ladder. Therefore, you don’t want a button consumers can press to reset to factory defaults. You want a remote way to reset them. Therefore, they put backdoor passwords to do the reset. Such passwords are easy for hackers to reverse-engineer, and hence, take control of millions of cameras across the Internet.
The same reasoning applies to “default” passwords. Many users will not change the defaults, leaving a ton of devices hackers can hack.

Masscan and background radiation of the Internet

I’ve written a tool that can easily scan the entire Internet in a short period of time. It surprises people that this is possible, but it’s obvious from the numbers. Internet addresses are only 32-bits long, or roughly 4 billion combinations. A fast Internet link can easily handle 1 million packets-per-second, so the entire Internet can be scanned in 4,000 seconds, little more than an hour. It’s basic math.
Because it’s so easy, many people do it. If you monitor your Internet link, you’ll see a steady trickle of packets coming in from all over the Internet, especially Russia and China, from hackers scanning the Internet for things they can hack.
People’s reaction to this scanning is weirdly emotional, taking it personally, such as:
  1. Why are they hacking me? What did I do to them?
  2. Great! They are hacking me! That must mean I’m important!
  3. Grrr! How dare they?! How can I hack them back for some retribution!?

I find this odd, because obviously such scanning isn’t personal; the hackers have no idea who you are.

Tools: masscan, firewalls

Packet-sniffing, sidejacking

If you connect to the Starbucks WiFi, a hacker nearby can easily eavesdrop on your network traffic, because it’s not encrypted. Windows even warns you about this, in case you weren’t sure.

At DefCon, they have a “Wall of Sheep”, where they show passwords from people who logged onto stuff using the insecure “DefCon-Open” network, calling them “sheep” for not grasping the basic fact that unencrypted traffic is unencrypted.

To be fair, it’s actually non-obvious to many people. Even if the WiFi itself is not encrypted, SSL traffic is. They expect their services to be encrypted, without them having to worry about it. And in fact, most are, especially Google, Facebook, Twitter, Apple, and other major services that won’t allow you to log in anymore without encryption.

But many services (especially old ones) may not be encrypted. Unless users check and verify them carefully, they’ll happily expose passwords.

What’s interesting is that 10 years ago, most services only used SSL to encrypt the passwords, but then used unencrypted connections after that, relying on “cookies”. This allowed the cookies to be sniffed and stolen, allowing other people to share the login session. I used this on stage at BlackHat to connect to somebody’s GMail session. Google, and other major websites, fixed this soon after. But it should never have been a problem — because the sidejacking of cookies should have been obvious.

Tools: Wireshark, dsniff

Stuxnet LNK vulnerability

Again, this issue isn’t obvious to the public, but it should’ve been obvious to anybody who knew how Windows works.
When Windows loads a .dll, it first calls the function DllMain(). A Windows link file (.lnk) can load icons/graphics from the resources in a .dll file. It does this by loading the .dll file, thus calling DllMain(). Thus, a hacker could put a .lnk file on a USB drive pointing to a .dll file, and thus cause arbitrary code execution as soon as a user inserted the drive.
I say this is obvious because I did this, created .lnks that pointed to .dlls, but without hostile DllMain code. The consequence should’ve been obvious to me, but I totally missed the connection. We all missed the connection, for decades.

Social Engineering and Tech Support [* * *]

After posting this, many people have pointed out “social engineering”, especially of “tech support”. This probably should be up near #1 in terms of obviousness.

The classic example of social engineering is when you call tech support and tell them you’ve lost your password, and they reset it for you with a minimum of questions proving who you are. For example, you set the volume on your computer really loud and play the sound of a crying baby in the background and appear to be a bit frazzled and incoherent, which explains why you aren’t answering the questions they are asking. They, understanding your predicament as a new parent, will go the extra mile in helping you, resetting “your” password.

One of the interesting consequences is how it affects domain names (DNS). It’s quite easy in many cases to call up the registrar and convince them to transfer a domain name. This has been used in lots of hacks. It’s really hard to defend against. If a registrar charges only $9/year for a domain name, then it really can’t afford to provide very good tech support — or very secure tech support — to prevent this sort of hack.

Social engineering is such a huge problem, and obvious problem, that it’s outside the scope of this document. Just google it to find example after example.

A related issue that perhaps deserves its own section is OSINT [*], or “open-source intelligence”, where you gather public information about a target. For example, on the day the bank manager is out on vacation (which you got from their Facebook post) you show up and claim to be a bank auditor, and are shown into their office where you grab their backup tapes. (We’ve actually done this).

More: Wikipedia on Social Engineering, Wikipedia on OSINT, “How I Won the Defcon Social Engineering CTF” — blogpost (2011), “Questioning 42: Where’s the Engineering in Social Engineering of Namespace Compromises” — BSidesLV talk (2016)

Blue-boxes (historical) [*]

Telephones historically used what we call “in-band signaling”. That’s why when you dial on an old phone, it makes sounds — those sounds are sent no differently than the way your voice is sent. Thus, it was possible to make tone generators to do things other than simply dial calls. Early hackers (in the 1970s) would make tone-generators called “blue-boxes” and “black-boxes” to make free long distance calls, for example.

These days, “signaling” and “voice” are digitized, then sent as separate channels or “bands”. This is called “out-of-band signaling”. You can’t trick the phone system by generating tones. When your iPhone makes sounds when you dial, it’s entirely for your benefit and has nothing to do with how it signals the cell tower to make a call.

Early hackers, like the founders of Apple, are famous for having started their careers making such “boxes” for tricking the phone system. The problem was obvious back in the day, which is why, as the phone system moved from analog to digital, the problem was fixed.

More: Wikipedia on blue box, Wikipedia article on Steve Wozniak.

Thumb drives in parking lots [*]

A simple trick is to put a virus on a USB flash drive, and drop it in a parking lot. Somebody is bound to notice it, stick it in their computer, and open the file.

This can be extended with tricks. For example, you can put a file labeled “third-quarter-salaries.xlsx” on the drive that requires macros to be enabled in order to open. It’s irresistible to other employees who want to know what their peers are being paid, so they’ll bypass any warning prompts in order to see the data.

Another example is to go online and get custom USB sticks made with the logo of the target company printed on them, making them seem more trustworthy.

We also did a trick of taking the Adobe Flash game “Punch the Monkey” and replacing the monkey with the logo of a competitor of our target. Employees not only played the game (infecting themselves with our virus), but gave it to others inside the company to play, infecting others, including the CEO.

Thumb drives like this have been used in many incidents, such as Russians hacking military headquarters in Afghanistan. It’s really hard to defend against.

More: “Computer Virus Hits U.S. Military Base in Afghanistan” — USNews (2008), “The Return of the Worm That Ate The Pentagon” — Wired (2011), DoD Bans Flash Drives — Stripes (2008)

Googling [*]

Search engines like Google will index your website — your entire website. Frequently companies put things on their website without much protection because they are nearly impossible for users to find. But Google finds them, then indexes them, causing them to pop up with innocent searches.
There are books written on “Google hacking” explaining what search terms to look for, like “not for public release”, in order to find such documents.

More: Wikipedia entry on Google Hacking, “Google Hacking” book.

URL editing [*]

At the top of every browser is what’s called the “URL”. You can change it. Thus, if you see a URL that looks like this:

http://www.example.com/documents?id=138493

Then you can edit it to see the next document on the server:

http://www.example.com/documents?id=138494

The owner of the website may think they are secure, because nothing points to this document, so the Google search won’t find it. But that doesn’t stop a user from manually editing the URL.
An example of this is a big Fortune 500 company that posts the quarterly results to the website an hour before the official announcement. Simply editing the URL from previous financial announcements allows hackers to find the document, then buy/sell the stock as appropriate in order to make a lot of money.
Another example is the classic case of Andrew “Weev” Auernheimer who did this trick in order to download the account email addresses of early owners of the iPad, including movie stars and members of the Obama administration. It’s an interesting legal case because on one hand, techies consider this so obvious as to not be “hacking”. On the other hand, non-techies, especially judges and prosecutors, believe this to be obviously “hacking”.
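A minimal sketch of the technique looks like the following; it reuses the example URL above, the ID range is arbitrary, and the third-party requests library is assumed.

import requests

BASE = "http://www.example.com/documents?id={}"

# Walk sequential document IDs and note which ones the server returns.
for doc_id in range(138490, 138500):
    resp = requests.get(BASE.format(doc_id))
    if resp.status_code == 200:
        print(f"id={doc_id}: {len(resp.content)} bytes returned")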

DDoS, spoofing, and amplification [*]

For decades now, online gamers have figured out an easy way to win: just flood the opponent with Internet traffic, slowing their network connection. This is called a DoS, which stands for “Denial of Service”. DoSing game competitors is often a teenager’s first foray into hacking.
A variant of this is when you hack a bunch of other machines on the Internet, then command them to flood your target. (The hacked machines are often called a “botnet”, a network of robot computers). This is called DDoS, or “Distributed DoS”. At this point, it gets quite serious: instead of targeting competitive gamers, hackers can take down entire businesses. Extortion scams, in which hackers DDoS a website and then demand payment to stop, are a common way hackers earn money.
Another form of DDoS is “amplification”. Sometimes when you send a packet to a machine on the Internet it’ll respond with a much larger response, either a very large packet or many packets. The hacker can then send a packet to many of these sites, “spoofing” or forging the IP address of the victim. This causes all those sites to then flood the victim with traffic. Thus, with a small amount of outbound traffic, the hacker can flood the inbound traffic of the victim.
This is one of those things that has worked for 20 years, because it’s so obvious teenagers can do it, yet there is no obvious solution. President Trump’s executive order on cybersecurity specifically demanded that his government come up with a report on how to address this, but it’s unlikely that they’ll come up with any useful strategy.
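The arithmetic behind amplification fits in a few lines; the request and response sizes below are purely illustrative, not measurements of any particular protocol.

# A small spoofed request that triggers a much larger response multiplies
# the attacker's effective bandwidth at the victim.
request_bytes = 60        # illustrative size of a small UDP query
response_bytes = 3000     # illustrative size of the reply sent to the victim
attacker_mbps = 10        # attacker's own upstream bandwidth

factor = response_bytes / request_bytes
print(f"amplification factor: {factor:.0f}x")
print(f"victim receives roughly {attacker_mbps * factor:.0f} Mbps of flood")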

More: Wikipedia on DDoS, Wikipedia on Spoofing

Conclusion

Tweet me (@ErrataRob) your obvious hacks, so I can add them to the list.

A proposal to remerge OpenWRT and LEDE

Post Syndicated from corbet original https://lwn.net/Articles/722135/rss

It appears that the OpenWRT and LEDE communities are about to vote on a proposal covering many of the details behind merging the two projects (which forked one year ago) back together. The plan appears to be to go forward with the OpenWRT name, but with the LEDE repository; domain names would be transferred to SPI.

In Case You Missed These: AWS Security Blog Posts from January, February, and March

Post Syndicated from Craig Liebendorfer original https://aws.amazon.com/blogs/security/in-case-you-missed-these-aws-security-blog-posts-from-january-february-and-march/


In case you missed any AWS Security Blog posts published so far in 2017, they are summarized and linked to below. The posts are shown in reverse chronological order (most recent first), and the subject matter ranges from protecting dynamic web applications against DDoS attacks to monitoring AWS account configuration changes and API calls to Amazon EC2 security groups.

March

March 22: How to Help Protect Dynamic Web Applications Against DDoS Attacks by Using Amazon CloudFront and Amazon Route 53
Using a content delivery network (CDN) such as Amazon CloudFront to cache and serve static text and images or downloadable objects such as media files and documents is a common strategy to improve webpage load times, reduce network bandwidth costs, lessen the load on web servers, and mitigate distributed denial of service (DDoS) attacks. AWS WAF is a web application firewall that can be deployed on CloudFront to help protect your application against DDoS attacks by giving you control over which traffic to allow or block by defining security rules. When users access your application, the Domain Name System (DNS) translates human-readable domain names (for example, www.example.com) to machine-readable IP addresses (for example, 192.0.2.44). A DNS service, such as Amazon Route 53, can effectively connect users’ requests to a CloudFront distribution that proxies requests for dynamic content to the infrastructure hosting your application’s endpoints. In this blog post, I show you how to deploy CloudFront with AWS WAF and Route 53 to help protect dynamic web applications (with dynamic content such as a response to user input) against DDoS attacks. The steps shown in this post are key to implementing the overall approach described in AWS Best Practices for DDoS Resiliency and enable the built-in, managed DDoS protection service, AWS Shield.

March 21: New AWS Encryption SDK for Python Simplifies Multiple Master Key Encryption
The AWS Cryptography team is happy to announce a Python implementation of the AWS Encryption SDK. This new SDK helps manage data keys for you, and it simplifies the process of encrypting data under multiple master keys. As a result, this new SDK allows you to focus on the code that drives your business forward. It also provides a framework you can easily extend to ensure that you have a cryptographic library that is configured to match and enforce your standards. The SDK also includes ready-to-use examples. If you are a Java developer, you can refer to this blog post to see specific Java examples for the SDK. In this blog post, I show you how you can use the AWS Encryption SDK to simplify the process of encrypting data and how to protect your encryption keys in ways that help improve application availability by not tying you to a single region or key management solution.
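As a rough illustration (not code from the post itself), encrypting and decrypting under a single KMS master key with the SDK’s 1.x Python interface looked roughly like the sketch below; the KMS key ARN is a placeholder.

import aws_encryption_sdk

# Master key provider backed by a (placeholder) KMS customer master key.
key_provider = aws_encryption_sdk.KMSMasterKeyProvider(key_ids=[
    "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
])

ciphertext, encrypt_header = aws_encryption_sdk.encrypt(
    source=b"hello world", key_provider=key_provider
)
plaintext, decrypt_header = aws_encryption_sdk.decrypt(
    source=ciphertext, key_provider=key_provider
)
assert plaintext == b"hello world"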

March 21: Updated CJIS Workbook Now Available by Request
The need for guidance when implementing Criminal Justice Information Services (CJIS)–compliant solutions has become of paramount importance as more law enforcement customers and technology partners move to store and process criminal justice data in the cloud. AWS services allow these customers to easily and securely architect a CJIS-compliant solution when handling criminal justice data, creating a durable, cost-effective, and secure IT infrastructure that better supports local, state, and federal law enforcement in carrying out their public safety missions. AWS has created several documents (collectively referred to as the CJIS Workbook) to assist you in aligning with the FBI’s CJIS Security Policy. You can use the workbook as a framework for developing CJIS-compliant architecture in the AWS Cloud. The workbook helps you define and test the controls you operate, and document the dependence on the controls that AWS operates (compute, storage, database, networking, regions, Availability Zones, and edge locations).

March 9: New Cloud Directory API Makes It Easier to Query Data Along Multiple Dimensions
Today, we made available a new Cloud Directory API, ListObjectParentPaths, that enables you to retrieve all available parent paths for any directory object across multiple hierarchies. Use this API when you want to fetch all parent objects for a specific child object. The order of the paths and objects returned is consistent across iterative calls to the API, unless objects are moved or deleted. In case an object has multiple parents, the API allows you to control the number of paths returned by using a paginated call pattern. In this blog post, I use an example directory to demonstrate how this new API enables you to retrieve data across multiple dimensions to implement powerful applications quickly.
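A hedged boto3 sketch of calling the new API might look like the following; the directory ARN and object selector are placeholders, and the response field name reflects my recollection of the API shape rather than anything taken from the post.

import boto3

clouddirectory = boto3.client("clouddirectory")

response = clouddirectory.list_object_parent_paths(
    DirectoryArn="arn:aws:clouddirectory:us-east-1:111122223333:directory/EXAMPLE",
    ObjectReference={"Selector": "/org/employees/jane"},
    MaxResults=10,
)

# Each entry pairs a parent path with the object identifiers along it.
for entry in response.get("PathToObjectIdentifiersList", []):
    print(entry)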

March 8: How to Access the AWS Management Console Using AWS Microsoft AD and Your On-Premises Credentials
AWS Directory Service for Microsoft Active Directory, also known as AWS Microsoft AD, is a managed Microsoft Active Directory (AD) hosted in the AWS Cloud. Now, AWS Microsoft AD makes it easy for you to give your users permission to manage AWS resources by using on-premises AD administrative tools. With AWS Microsoft AD, you can grant your on-premises users permissions to resources such as the AWS Management Console instead of adding AWS Identity and Access Management (IAM) user accounts or configuring AD Federation Services (AD FS) with Security Assertion Markup Language (SAML). In this blog post, I show how to use AWS Microsoft AD to enable your on-premises AD users to sign in to the AWS Management Console with their on-premises AD user credentials to access and manage AWS resources through IAM roles.

March 7: How to Protect Your Web Application Against DDoS Attacks by Using Amazon Route 53 and an External Content Delivery Network
Distributed Denial of Service (DDoS) attacks are attempts by a malicious actor to flood a network, system, or application with more traffic, connections, or requests than it is able to handle. To protect your web application against DDoS attacks, you can use AWS Shield, a DDoS protection service that AWS provides automatically to all AWS customers at no additional charge. You can use AWS Shield in conjunction with DDoS-resilient web services such as Amazon CloudFront and Amazon Route 53 to improve your ability to defend against DDoS attacks. Learn more about architecting for DDoS resiliency by reading the AWS Best Practices for DDoS Resiliency whitepaper. You also have the option of using Route 53 with an externally hosted content delivery network (CDN). In this blog post, I show how you can help protect the zone apex (also known as the root domain) of your web application by using Route 53 to perform a secure redirect to prevent discovery of your application origin.


February

February 27: Now Generally Available – AWS Organizations: Policy-Based Management for Multiple AWS Accounts
Today, AWS Organizations moves from Preview to General Availability. You can use Organizations to centrally manage multiple AWS accounts, with the ability to create a hierarchy of organizational units (OUs). You can assign each account to an OU, define policies, and then apply those policies to an entire hierarchy, specific OUs, or specific accounts. You can invite existing AWS accounts to join your organization, and you can also create new accounts. All of these functions are available from the AWS Management Console, the AWS Command Line Interface (CLI), and through the AWS Organizations API. To read the full AWS Blog post about today’s launch, see AWS Organizations – Policy-Based Management for Multiple AWS Accounts.

February 23: s2n Is Now Handling 100 Percent of SSL Traffic for Amazon S3
Today, we’ve achieved another important milestone for securing customer data: we have replaced OpenSSL with s2n for all internal and external SSL traffic in Amazon Simple Storage Service (Amazon S3) commercial regions. This was implemented with minimal impact to customers, and multiple means of error checking were used to ensure a smooth transition, including client integration tests, catching potential interoperability conflicts, and identifying memory leaks through fuzz testing.

February 22: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials. IAM roles for EC2 make it easier for your applications to make API requests securely from an instance because they do not require you to manage AWS security credentials that the applications use. Recently, we enabled you to use temporary security credentials for your applications by attaching an IAM role to an existing EC2 instance by using the AWS CLI and SDK. To learn more, see New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI. Starting today, you can attach an IAM role to an existing EC2 instance from the EC2 console. You can also use the EC2 console to replace an IAM role attached to an existing instance. In this blog post, I will show how to attach an IAM role to an existing EC2 instance from the EC2 console.

February 22: How to Audit Your AWS Resources for Security Compliance by Using Custom AWS Config Rules
AWS Config Rules enables you to implement security policies as code for your organization and evaluate configuration changes to AWS resources against these policies. You can use Config rules to audit your use of AWS resources for compliance with external compliance frameworks such as CIS AWS Foundations Benchmark and with your internal security policies related to the US Health Insurance Portability and Accountability Act (HIPAA), the Federal Risk and Authorization Management Program (FedRAMP), and other regimes. AWS provides some predefined, managed Config rules. You also can create custom Config rules based on criteria you define within an AWS Lambda function. In this post, I show how to create a custom rule that audits AWS resources for security compliance by enabling VPC Flow Logs for an Amazon Virtual Private Cloud (VPC). The custom rule meets requirement 4.3 of the CIS AWS Foundations Benchmark: “Ensure VPC flow logging is enabled in all VPCs.”

February 13: AWS Announces CISPE Membership and Compliance with First-Ever Code of Conduct for Data Protection in the Cloud
I have two exciting announcements today, both showing AWS’s continued commitment to ensuring that customers can comply with EU Data Protection requirements when using our services.

February 13: How to Enable Multi-Factor Authentication for AWS Services by Using AWS Microsoft AD and On-Premises Credentials
You can now enable multi-factor authentication (MFA) for users of AWS services such as Amazon WorkSpaces and Amazon QuickSight and their on-premises credentials by using your AWS Directory Service for Microsoft Active Directory (Enterprise Edition) directory, also known as AWS Microsoft AD. MFA adds an extra layer of protection to a user name and password (the first “factor”) by requiring users to enter an authentication code (the second factor), which has been provided by your virtual or hardware MFA solution. These factors together provide additional security by preventing access to AWS services, unless users supply a valid MFA code.

February 13: How to Create an Organizational Chart with Separate Hierarchies by Using Amazon Cloud Directory
Amazon Cloud Directory enables you to create directories for a variety of use cases, such as organizational charts, course catalogs, and device registries. Cloud Directory offers you the flexibility to create directories with hierarchies that span multiple dimensions. For example, you can create an organizational chart that you can navigate through separate hierarchies for reporting structure, location, and cost center. In this blog post, I show how to use Cloud Directory APIs to create an organizational chart with two separate hierarchies in a single directory. I also show how to navigate the hierarchies and retrieve data. I use the Java SDK for all the sample code in this post, but you can use other language SDKs or the AWS CLI.

February 10: How to Easily Log On to AWS Services by Using Your On-Premises Active Directory
AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as Microsoft AD, now enables your users to log on with just their on-premises Active Directory (AD) user name—no domain name is required. This new domainless logon feature makes it easier to set up connections to your on-premises AD for use with applications such as Amazon WorkSpaces and Amazon QuickSight, and it keeps the user logon experience free from network naming. This new interforest trusts capability is now available when using Microsoft AD with Amazon WorkSpaces and Amazon QuickSight Enterprise Edition. In this blog post, I explain how Microsoft AD domainless logon works with AD interforest trusts, and I show an example of setting up Amazon WorkSpaces to use this capability.

February 9: New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
AWS Identity and Access Management (IAM) roles enable your applications running on Amazon EC2 to use temporary security credentials that AWS creates, distributes, and rotates automatically. Using temporary credentials is an IAM best practice because you do not need to maintain long-term keys on your instance. Using IAM roles for EC2 also eliminates the need to use long-term AWS access keys that you have to manage manually or programmatically. Starting today, you can enable your applications to use temporary security credentials provided by AWS by attaching an IAM role to an existing EC2 instance. You can also replace the IAM role attached to an existing EC2 instance. In this blog post, I show how you can attach an IAM role to an existing EC2 instance by using the AWS CLI.
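A rough boto3 equivalent of the CLI workflow is sketched below; the instance ID and instance profile name are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Associate an existing instance profile (wrapping the IAM role) with a
# running instance, so its applications can pick up temporary credentials.
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "MyExistingInstanceProfile"},
    InstanceId="i-0123456789abcdef0",
)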

February 8: How to Remediate Amazon Inspector Security Findings Automatically
The Amazon Inspector security assessment service can evaluate the operating environments and applications you have deployed on AWS for common and emerging security vulnerabilities automatically. As an AWS-built service, Amazon Inspector is designed to exchange data and interact with other core AWS services not only to identify potential security findings but also to automate addressing those findings. Previous related blog posts showed how you can deliver Amazon Inspector security findings automatically to third-party ticketing systems and automate the installation of the Amazon Inspector agent on new Amazon EC2 instances. In this post, I show how you can automatically remediate findings generated by Amazon Inspector. To get started, you must first run an assessment and publish any security findings to an Amazon Simple Notification Service (SNS) topic. Then, you create an AWS Lambda function that is triggered by those notifications. Finally, the Lambda function examines the findings and then implements the appropriate remediation based on the type of issue.

February 6: How to Simplify Security Assessment Setup Using Amazon EC2 Systems Manager and Amazon Inspector
In a July 2016 AWS Blog post, I discussed how to integrate Amazon Inspector with third-party ticketing systems by using Amazon Simple Notification Service (SNS) and AWS Lambda. This AWS Security Blog post continues in the same vein, describing how to use Amazon Inspector to automate various aspects of security management. In this post, I show you how to install the Amazon Inspector agent automatically through the Amazon EC2 Systems Manager when a new Amazon EC2 instance is launched. In a subsequent post, I will show you how to update EC2 instances automatically that run Linux when Amazon Inspector discovers a missing security patch.


January

January 30: How to Protect Data at Rest with Amazon EC2 Instance Store Encryption
Encrypting data at rest is vital for regulatory compliance to ensure that sensitive data saved on disks is not readable by any user or application without a valid key. Some compliance regulations such as PCI DSS and HIPAA require that data at rest be encrypted throughout the data lifecycle. To this end, AWS provides data-at-rest options and key management to support the encryption process. For example, you can encrypt Amazon EBS volumes and configure Amazon S3 buckets for server-side encryption (SSE) using AES-256 encryption. Additionally, Amazon RDS supports Transparent Data Encryption (TDE). Instance storage provides temporary block-level storage for Amazon EC2 instances. This storage is located on disks attached physically to a host computer. Instance storage is ideal for temporary storage of information that frequently changes, such as buffers, caches, and scratch data. By default, files stored on these disks are not encrypted. In this blog post, I show a method for encrypting data on Linux EC2 instance stores by using Linux built-in libraries. This method encrypts files transparently, which protects confidential data. As a result, applications that process the data are unaware of the disk-level encryption.

January 27: How to Detect and Automatically Remediate Unintended Permissions in Amazon S3 Object ACLs with CloudWatch Events
Amazon S3 Access Control Lists (ACLs) enable you to specify permissions that grant access to S3 buckets and objects. When S3 receives a request for an object, it verifies whether the requester has the necessary access permissions in the associated ACL. For example, you could set up an ACL for an object so that only the users in your account can access it, or you could make an object public so that it can be accessed by anyone. If the number of objects and users in your AWS account is large, ensuring that you have attached correctly configured ACLs to your objects can be a challenge. For example, what if a user were to call the PutObjectAcl API call on an object that is supposed to be private and make it public? Or, what if a user were to call the PutObject with the optional Acl parameter set to public-read, therefore uploading a confidential file as publicly readable? In this blog post, I show a solution that uses Amazon CloudWatch Events to detect PutObject and PutObjectAcl API calls in near-real time and helps ensure that the objects remain private by making automatic PutObjectAcl calls, when necessary.
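A minimal sketch of what the remediation Lambda function could look like follows; the event shape assumes a CloudWatch Events rule matching the CloudTrail records for these S3 object-level API calls, and error handling is omitted.

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Pull the bucket and key out of the CloudTrail record carried in the event.
    params = event["detail"]["requestParameters"]
    bucket = params["bucketName"]
    key = params["key"]

    # Force the object's ACL back to private.
    s3.put_object_acl(Bucket=bucket, Key=key, ACL="private")
    return f"reset ACL on s3://{bucket}/{key}"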

January 26: Now Available: Amazon Cloud Directory—A Cloud-Native Directory for Hierarchical Data
Today we are launching Amazon Cloud Directory. This service is purpose-built for storing large amounts of strongly typed hierarchical data. With the ability to scale to hundreds of millions of objects while remaining cost-effective, Cloud Directory is a great fit for all sorts of cloud and mobile applications.

January 24: New SOC 2 Report Available: Confidentiality
As with everything at Amazon, the success of our security and compliance program is primarily measured by one thing: our customers’ success. Our customers drive our portfolio of compliance reports, attestations, and certifications that support their efforts in running a secure and compliant cloud environment. As a result of our engagement with key customers across the globe, we are happy to announce the publication of our new SOC 2 Confidentiality report. This report is available now through AWS Artifact in the AWS Management Console.

January 18: Compliance in the Cloud for New Financial Services Cybersecurity Regulations
Financial regulatory agencies are focused more than ever on ensuring responsible innovation. Consequently, if you want to achieve compliance with financial services regulations, you must be increasingly agile and employ dynamic security capabilities. AWS enables you to achieve this by providing you with the tools you need to scale your security and compliance capabilities on AWS. The following breakdown of the most recent cybersecurity regulations, NY DFS Rule 23 NYCRR 500, demonstrates how AWS continues to focus on your regulatory needs in the financial services sector.

January 9: New Amazon GameDev Blog Post: Protect Multiplayer Game Servers from DDoS Attacks by Using Amazon GameLift
In online gaming, distributed denial of service (DDoS) attacks target a game’s network layer, flooding servers with requests until performance degrades considerably. These attacks can limit a game’s availability to players and limit the player experience for those who can connect. Today’s new Amazon GameDev Blog post uses a typical game server architecture to highlight DDoS attack vulnerabilities and discusses how to stay protected by using built-in AWS Cloud security, AWS security best practices, and the security features of Amazon GameLift. Read the post to learn more.

January 6: The Top 10 Most Downloaded AWS Security and Compliance Documents in 2016
The following list includes the 10 most downloaded AWS security and compliance documents in 2016. Using this list, you can learn about what other people found most interesting about security and compliance last year.

January 6: FedRAMP Compliance Update: AWS GovCloud (US) Region Receives a JAB-Issued FedRAMP High Baseline P-ATO for Three New Services
Three new services in the AWS GovCloud (US) region have received a Provisional Authority to Operate (P-ATO) from the Joint Authorization Board (JAB) under the Federal Risk and Authorization Management Program (FedRAMP). JAB issued the authorization at the High baseline, which enables US government agencies and their service providers the capability to use these services to process the government’s most sensitive unclassified data, including Personal Identifiable Information (PII), Protected Health Information (PHI), Controlled Unclassified Information (CUI), criminal justice information (CJI), and financial data.

January 4: The Top 20 Most Viewed AWS IAM Documentation Pages in 2016
The following 20 pages were the most viewed AWS Identity and Access Management (IAM) documentation pages in 2016. I have included a brief description with each link to give you a clearer idea of what each page covers. Use this list to see what other people have been viewing and perhaps to pique your own interest about a topic you’ve been meaning to research.

January 3: The Most Viewed AWS Security Blog Posts in 2016
The following 10 posts were the most viewed AWS Security Blog posts that we published during 2016. You can use this list as a guide to catch up on your blog reading or even read a post again that you found particularly useful.

January 3: How to Monitor AWS Account Configuration Changes and API Calls to Amazon EC2 Security Groups
You can use AWS security controls to detect and mitigate risks to your AWS resources. The purpose of each security control is defined by its control objective. For example, the control objective of an Amazon VPC security group is to permit only designated traffic to enter or leave a network interface. Let’s say you have an Internet-facing e-commerce website, and your security administrator has determined that only HTTP (TCP port 80) and HTTPS (TCP 443) traffic should be allowed access to the public subnet. As a result, your administrator configures a security group to meet this control objective. What if, though, someone were to inadvertently change this security group’s rules and enable FTP or other protocols to access the public subnet from any location on the Internet? That expanded access could weaken the security posture of your assets. Consequently, your administrator might need to monitor the integrity of your company’s security controls so that the controls maintain their desired effectiveness. In this blog post, I explore two methods for detecting unintended changes to VPC security groups. The two methods address not only control objectives but also control failures.

If you have questions about or issues with implementing the solutions in any of these posts, please start a new thread on the forum identified near the end of each post.

– Craig

How to Help Protect Dynamic Web Applications Against DDoS Attacks by Using Amazon CloudFront and Amazon Route 53

Post Syndicated from Holly Willey original https://aws.amazon.com/blogs/security/how-to-protect-dynamic-web-applications-against-ddos-attacks-by-using-amazon-cloudfront-and-amazon-route-53/

Using a content delivery network (CDN) such as Amazon CloudFront to cache and serve static text and images or downloadable objects such as media files and documents is a common strategy to improve webpage load times, reduce network bandwidth costs, lessen the load on web servers, and mitigate distributed denial of service (DDoS) attacks. AWS WAF is a web application firewall that can be deployed on CloudFront to help protect your application against DDoS attacks by giving you control over which traffic to allow or block by defining security rules. When users access your application, the Domain Name System (DNS) translates human-readable domain names (for example, www.example.com) to machine-readable IP addresses (for example, 192.0.2.44). A DNS service, such as Amazon Route 53, can effectively connect users’ requests to a CloudFront distribution that proxies requests for dynamic content to the infrastructure hosting your application’s endpoints.

In this blog post, I show you how to deploy CloudFront with AWS WAF and Route 53 to help protect dynamic web applications (with dynamic content such as a response to user input) against DDoS attacks. The steps shown in this post are key to implementing the overall approach described in AWS Best Practices for DDoS Resiliency and enable the built-in, managed DDoS protection service, AWS Shield.

Background

AWS hosts CloudFront and Route 53 services on a distributed network of proxy servers in data centers throughout the world called edge locations. Using the global Amazon network of edge locations for application delivery and DNS service plays an important part in building a comprehensive defense against DDoS attacks for your dynamic web applications. These web applications can benefit from the increased security and availability provided by CloudFront and Route 53, as well as from an improved end-user experience due to reduced latency.

The following screenshot of an Amazon.com webpage shows how static and dynamic content can compose a dynamic web application that is delivered via HTTPS protocol for the encryption of user page requests as well as the pages that are returned by a web server.

Screenshot of an Amazon.com webpage with static and dynamic content

The following map shows the global Amazon network of edge locations available to serve static content and proxy requests for dynamic content back to the origin as of the writing of this blog post. For the latest list of edge locations, see AWS Global Infrastructure.

Map showing Amazon edge locations

How AWS Shield, CloudFront, and Route 53 work to help protect against DDoS attacks

To help keep your dynamic web applications available when they are under DDoS attack, the steps in this post enable AWS Shield Standard by configuring your applications behind CloudFront and Route 53. AWS Shield Standard protects your resources from common, frequently occurring network and transport layer DDoS attacks. Attack traffic can be geographically isolated and absorbed using the capacity in edge locations close to the source. Additionally, you can configure geographical restrictions to help block attacks originating from specific countries.

The request-routing technology in CloudFront connects each client to the nearest edge location, as determined by continuously updated latency measurements. HTTP and HTTPS requests sent to CloudFront can be monitored, and access to your application resources can be controlled at edge locations using AWS WAF. Based on conditions that you specify in AWS WAF, such as the IP addresses that requests originate from or the values of query strings, traffic can be allowed, blocked, or allowed and counted for further investigation or remediation. The following diagram shows how static and dynamic web application content can originate from endpoint resources within AWS or your corporate data center. For more details, see How CloudFront Delivers Content and How CloudFront Works with Regional Edge Caches.

Route 53 DNS requests and subsequent application traffic routed through CloudFront are inspected inline. Always-on monitoring, anomaly detection, and mitigation against common infrastructure DDoS attacks such as SYN/ACK floods, UDP floods, and reflection attacks are built into both Route 53 and CloudFront. For a review of common DDoS attack vectors, see How to Help Prepare for DDoS Attacks by Reducing Your Attack Surface. When the SYN flood attack threshold is exceeded, SYN cookies are activated to avoid dropping connections from legitimate clients. Deterministic packet filtering drops malformed TCP packets and invalid DNS requests, only allowing traffic to pass that is valid for the service. Heuristics-based anomaly detection evaluates attributes such as type, source, and composition of traffic. Traffic is scored across many dimensions, and only the most suspicious traffic is dropped. This method allows you to avoid false positives while protecting application availability.

Route 53 is also designed to withstand DNS query floods, which are real DNS requests that can continue for hours and attempt to exhaust DNS server resources. Route 53 uses shuffle sharding and anycast striping to spread DNS traffic across edge locations and help protect the availability of the service.

The next four sections provide guidance about how to deploy CloudFront, Route 53, AWS WAF, and, optionally, AWS Shield Advanced.

Deploy CloudFront

To take advantage of application delivery with DDoS mitigations at the edge, start by creating a CloudFront distribution and configuring origins:

  1. Sign in to the AWS Management Console and open the CloudFront console.
  2. Choose Create Distribution.
  3. On the first page of the Create Distribution Wizard, in the Web section, choose Get Started.
  4. Specify origin settings for the distribution. The following screenshot of the CloudFront console shows an example CloudFront distribution configured with an Elastic Load Balancing load balancer origin, as shown in the previous diagram. I have configured this example to set the Origin SSL Protocols to use TLSv1.2 and the Origin Protocol Policy to HTTP Only. For more information about creating an HTTPS listener for your ELB load balancer and requesting a certificate from AWS Certificate Manager (ACM), see Getting Started with Elastic Load Balancing, Supported Regions, and Requiring HTTPS for Communication Between CloudFront and Your Custom Origin.
  5. Specify cache behavior settings for the distribution, as shown in the following screenshot. You can configure each URL path pattern with a set of associated cache behaviors. For dynamic web applications, set the Minimum TTL to 0 so that CloudFront will make a GET request with an If-Modified-Since header back to the origin. When CloudFront proxies traffic to the origin from edge locations and back, multiple concurrent requests for the same object are collapsed into a single request. The request is sent over a persistent connection from the edge location to the region over networks monitored by AWS. The use of a large initial TCP window size in CloudFront maximizes the available bandwidth, and TCP Fast Open (TFO) reduces latency.
  6. To ensure that all traffic to CloudFront is encrypted and to enable SSL termination from clients at global edge locations, specify Redirect HTTP to HTTPS for Viewer Protocol Policy. Moving SSL termination to CloudFront offloads computationally expensive SSL negotiation, helps mitigate SSL abuse, and reduces latency with the use of OCSP stapling and session tickets. For more information about options for serving HTTPS requests, see Choosing How CloudFront Serves HTTPS Requests. For dynamic web applications, set Allowed HTTP Methods to include all methods, set Forward Headers to All, and for Query String Forwarding and Caching, choose Forward all, cache based on all.
  7. Specify distribution settings for the distribution, as shown in the following screenshot. Enter your domain names in the Alternate Domain Names box and choose Custom SSL Certificate.
  8. Choose Create Distribution. Note the x.cloudfront.net Domain Name of the distribution. In the next section, you will configure Route 53 to route traffic to this CloudFront distribution domain name.

Configure Route 53

When you created a web distribution in the previous section, CloudFront assigned a domain name to the distribution, such as d111111abcdef8.cloudfront.net. You can use this domain name in the URLs for your content, such as: http://d111111abcdef8.cloudfront.net/logo.jpg.

Alternatively, you might prefer to use your own domain name in URLs, such as: http://example.com/logo.jpg. You can accomplish this by creating a Route 53 alias resource record set that routes dynamic web application traffic to your CloudFront distribution by using your domain name. Alias resource record sets are virtual records specific to Route 53 that map your domain name to your CloudFront distribution. Alias resource record sets are similar to CNAME records except there is no charge for DNS queries to Route 53 alias resource record sets mapped to AWS services. Alias resource record sets are also not visible to resolvers, and they can be created for the root domain (zone apex) as well as subdomains.

A hosted zone, similar to a DNS zone file, is a collection of records that belongs to a single parent domain name. Each hosted zone has four nonoverlapping name servers in a delegation set. If a DNS query is dropped, the client automatically retries the next name server. If you have not already registered a domain name and have not configured a hosted zone for your domain, complete these two prerequisite steps before proceeding:

After you have registered your domain name and configured your public hosted zone, follow these steps to create an alias resource record set:

  1. Sign in to the AWS Management Console and open the Route 53 console.
  2. In the navigation pane, choose Hosted Zones.
  3. Choose the name of the hosted zone for the domain that you want to use to route traffic to your CloudFront distribution.
  4. Choose Create Record Set.
  5. Specify the following values:
    • Name – Type the domain name that you want to use to route traffic to your CloudFront distribution. The default value is the name of the hosted zone. For example, if the name of the hosted zone is example.com and you want to use acme.example.com to route traffic to your distribution, type acme.
    • Type – Choose A – IPv4 address. If IPv6 is enabled for the distribution and you are creating a second resource record set, choose AAAA – IPv6 address.
    • Alias – Choose Yes.
    • Alias Target – In the CloudFront distributions section, choose the name that CloudFront assigned to the distribution when you created it.
    • Routing Policy – Accept the default value of Simple.
    • Evaluate Target Health – Accept the default value of No.
  6. Choose Create.
  7. If IPv6 is enabled for the distribution, repeat Steps 4 through 6. Specify the same settings except for the Type field, as explained in Step 5.

The following screenshot of the Route 53 console shows a Route 53 alias resource record set that is configured to map a domain name to a CloudFront distribution.
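If you prefer to script this step, the same alias record can be created with the AWS SDK. The boto3 sketch below uses placeholder values for the hosted zone ID, record name, and distribution domain name; Z2FDTNDATAQYW2 is the fixed hosted zone ID that CloudFront alias targets use.

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLEZONEID",           # your hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "acme.example.com.",  # record to route to CloudFront
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",
                    "DNSName": "d111111abcdef8.cloudfront.net.",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)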

If your dynamic web application requires geo redundancy, you can use latency-based routing in Route 53 to run origin servers in different AWS regions. Route 53 is integrated with CloudFront to collect latency measurements from each edge location. With Route 53 latency-based routing, each CloudFront edge location goes to the region with the lowest latency for the origin fetch.

Enable AWS WAF

AWS WAF is a web application firewall that helps detect and mitigate web application layer DDoS attacks by inspecting traffic inline. Application layer DDoS attacks use well-formed but malicious requests to evade mitigation and consume application resources. You can define custom security rules (also called web ACLs) that contain a set of conditions, rules, and actions to block attacking traffic. After you define web ACLs, you can apply them to CloudFront distributions, and web ACLs are evaluated in the priority order you specified when you configured them. Real-time metrics and sampled web requests are provided for each web ACL.

You can configure AWS WAF whitelisting or blacklisting in conjunction with CloudFront geo restriction to prevent users in specific geographic locations from accessing your application. The AWS WAF API supports security automation such as blacklisting IP addresses that exceed request limits, which can be useful for mitigating HTTP flood attacks. Use the AWS WAF Security Automations Implementation Guide to implement rate-based blacklisting.

The following diagram shows how the (a) flow of CloudFront access log files to an Amazon S3 bucket (b) provides the source data for the Lambda log parser function (c) to identify HTTP flood traffic and update AWS WAF web ACLs. As CloudFront receives requests on behalf of your dynamic web application, it sends access logs to an S3 bucket, triggering the Lambda log parser. The Lambda function parses CloudFront access logs to identify suspicious behavior, such as an unusual number of requests or errors, and it automatically updates your AWS WAF rules to block subsequent requests from the IP addresses in question for a predefined amount of time that you specify.

Diagram of the process
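A stripped-down sketch of the blocking step that the log parser performs is shown below, using the WAF classic API through boto3; the IP set ID and offending address are placeholders, and the real solution adds expiration logic around this.

import boto3

waf = boto3.client("waf")  # global (CloudFront-scoped) WAF endpoint

def block_ip(ip_set_id, ip_address):
    # Every WAF mutation requires a fresh change token.
    token = waf.get_change_token()["ChangeToken"]
    waf.update_ip_set(
        IPSetId=ip_set_id,
        ChangeToken=token,
        Updates=[{
            "Action": "INSERT",
            "IPSetDescriptor": {"Type": "IPV4", "Value": f"{ip_address}/32"},
        }],
    )

block_ip("example-ip-set-id", "192.0.2.44")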

In addition to automated rate-based blacklisting to help protect against HTTP flood attacks, prebuilt AWS CloudFormation templates are available to simplify the configuration of AWS WAF for a proactive application-layer security defense. The following diagram provides an overview of CloudFormation template input into the creation of the CommonAttackProtection stack that includes AWS WAF web ACLs used to block, allow, or count requests that meet the criteria defined in each rule.

Diagram of CloudFormation template input into the creation of the CommonAttackProtection stack

To implement these application layer protections, follow the steps in Tutorial: Quickly Setting Up AWS WAF Protection Against Common Attacks. After you have created your AWS WAF web ACLs, you can assign them to your CloudFront distribution by updating the settings.

  1. Sign in to the AWS Management Console and open the CloudFront console.
  2. Choose the link under the ID column for your CloudFront distribution.
  3. Choose Edit under the General tab.
  4. Choose your AWS WAF Web ACL from the drop-down list.
  5. Choose Yes, Edit.

Activate AWS Shield Advanced (optional)

Deploying CloudFront, Route 53, and AWS WAF as described in this post enables the built-in DDoS protections for your dynamic web applications that are included with AWS Shield Standard. (There is no upfront cost or charge for AWS Shield Standard beyond the normal pricing for CloudFront, Route 53, and AWS WAF.) AWS Shield Standard is designed to meet the needs of many dynamic web applications.

For dynamic web applications that have a high risk or history of frequent, complex, or high volume DDoS attacks, AWS Shield Advanced provides additional DDoS mitigation capacity, attack visibility, cost protection, and access to the AWS DDoS Response Team (DRT). For more information about AWS Shield Advanced pricing, see AWS Shield Advanced pricing. To activate advanced protection services, follow these steps:

  1. Sign in to the AWS Management Console and open the AWS WAF console.
  2. If this is your first time signing in to the AWS WAF console, choose Get started with AWS Shield Advanced. Otherwise, choose Protected resources.
  3. Choose Activate AWS Shield Advanced.
  4. Choose the resource type and resource to protect.
  5. For Name, enter a friendly name that will help you identify the AWS resources that are protected. For example, My CloudFront AWS Shield Advanced distributions.
  6. (Optional) For Web DDoS attack, select Enable. You will be prompted to associate an existing web ACL with these resources, or create a new ACL if you don’t have any yet.
  7. Choose Add DDoS protection.

Summary

In this blog post, I outline the steps to deploy CloudFront and configure Route 53 in front of your dynamic web application to leverage the global Amazon network of edge locations for DDoS resiliency. The post also provides guidance about enabling AWS WAF for application layer traffic monitoring and automated rules creation to block malicious traffic. I also cover the optional steps to activate AWS Shield Advanced, which helps build a more comprehensive defense against DDoS attacks for your dynamic web applications.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please open a new thread on the AWS WAF forum.

– Holly

How to Protect Your Web Application Against DDoS Attacks by Using Amazon Route 53 and an External Content Delivery Network

Post Syndicated from Shawn Marck original https://aws.amazon.com/blogs/security/how-to-protect-your-web-application-against-ddos-attacks-by-using-amazon-route-53-and-a-content-delivery-network/

Distributed Denial of Service (DDoS) attacks are attempts by a malicious actor to flood a network, system, or application with more traffic, connections, or requests than it is able to handle. To protect your web application against DDoS attacks, you can use AWS Shield, a DDoS protection service that AWS provides automatically to all AWS customers at no additional charge. You can use AWS Shield in conjunction with DDoS-resilient web services such as Amazon CloudFront and Amazon Route 53 to improve your ability to defend against DDoS attacks. Learn more about architecting for DDoS resiliency by reading the AWS Best Practices for DDoS Resiliency whitepaper.

In this blog post, I show how you can help protect the zone apex (also known as the root domain) of your web application by using Route 53 to perform a secure redirect to your externally hosted content delivery network (CDN) distribution.

Background

When browsing the Internet, a user might type example.com instead of www.example.com. To make sure these requests are routed properly, it is necessary to create a Route 53 alias resource record set for the zone apex. For example.com, this would be an alias resource record set without any subdomain (www) defined. With Route 53, you can use an alias resource record set to point www or your zone apex directly at a CloudFront distribution. As a result, anyone resolving example.com or www.example.com will see only the CloudFront distribution. This makes it difficult for a malicious actor to find and attack your application origin.

You can also use Route 53 to route end users to a CDN outside AWS. The CDN provider will ask you to create a CNAME record to point www.example.com to your CDN distribution’s hostname. Unfortunately, you cannot point your zone apex at the CDN the same way, because a zone apex cannot be a CNAME. As a result, users who type example.com without www will not be routed to your web application unless you point the zone apex directly to your application origin.

The benefit of a secure redirect from the zone apex to www is that it helps protect your origin from being exposed to direct attacks.

Solution overview

The following solution diagram shows the AWS services this solution uses and how the solution uses them.

Diagram showing how AWS services are used in this post's solution

Here is how the process works:

  1. A user’s browser makes a DNS request to Route 53.
  2. Route 53 has a hosted zone for the example.com domain.
  3. The hosted zone serves the record:
    1. If the request is for the zone apex, the alias resource record set for the CloudFront distribution is served.
    2. If the request is for the www subdomain, the CNAME for the externally hosted CDN is served.
  4. CloudFront forwards the request to Amazon S3.
  5. S3 performs a secure redirect from example.com to www.example.com.

Note: All of the steps in this blog post’s solution use example.com as a domain name. You must replace this domain name with your own domain name.

AWS services used in this solution

You will use three AWS services in this walkthrough to build your zone apex–to–external CDN distribution redirect:

  • Route 53 – This post assumes that you are already using Route 53 to route users to your web application, which provides you with protection against common DDoS attacks, including DNS query floods. To learn more about migrating to Route 53, see Getting Started with Amazon Route 53.
  • S3 – S3 is object storage with a simple web service interface to store and retrieve any amount of data from anywhere on the web. S3 also allows you to configure a bucket for website hosting. In this walkthrough, you will use the S3 website hosting feature to redirect users from example.com to www.example.com, which points to your externally hosted CDN.
  • CloudFront – When architecting your application for DDoS resiliency, it is important to protect origin resources, such as S3 buckets, from discovery by a malicious actor. This is known as obfuscation. In this walkthrough, you will use a CloudFront distribution to obfuscate your S3 bucket.

Prerequisites

The solution in this blog post assumes that you already have the following components as part of your architecture:

  1. A Route 53 hosted zone for your domain.
  2. A CNAME alias resource record set pointing to your CDN.

Deploy the solution

In this solution, you:

  1. Create an S3 bucket with HTTP redirection. This allows requests made to your zone apex to be redirected to your www subdomain.
  2. Create and configure a CloudFront web distribution. I use a CloudFront distribution in front of my S3 web redirect so that I can leverage the advanced DDoS protection and scale that is native to CloudFront.
  3. Configure an alias resource record set in your hosted zone. Alias resource record sets are similar to CNAME records, but you can set them at the zone apex.
  4. Validate that the redirect is working.

Step 1: Create an S3 bucket with HTTP redirection

The following steps show how to configure your S3 bucket as a static website that will perform HTTP redirects to your www URL:

  1. Open the AWS Management Console. Navigate to the S3 console and create an S3 bucket in the region of your choice.
  2. Configure static website hosting to redirect all requests to another host name:
    1. Choose the S3 bucket you just created and then choose Properties.
      Screenshot showing choosing the S3 bucket and the Properties button
    2. Choose Static Website Hosting.
      Screenshot of choosing Static Website Hosting
    3. Choose Redirect all requests to another host name, and type the www host name for your domain (for this walkthrough, www.example.com) in the Redirect all requests to box, as shown in the following screenshot.
      Screenshot of Static Website Hosting settings to choose

Note: At the top of this tab, you will see an endpoint. Copy the endpoint because you will need it in Step 2 when you configure the CloudFront distribution. In this example, the endpoint is example-com.s3-website-us-east-1.amazonaws.com.
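If you prefer to script this step, the same redirect configuration can be applied with boto3; the bucket name is a placeholder, and the target host name matches the www record described in this walkthrough.

import boto3

s3 = boto3.client("s3")

# Configure the bucket as a static website that redirects every request
# to the www host name.
s3.put_bucket_website(
    Bucket="example-com",
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {
            "HostName": "www.example.com",
            "Protocol": "http",
        }
    },
)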

Step 2: Create and configure a CloudFront web distribution

The following steps show how to create a CloudFront web distribution that protects the S3 bucket:

  1. From the AWS Management Console, choose CloudFront.
  2. On the first page of the Create Distribution Wizard, in the Web section, choose Get Started.
  3. The Create Distribution page has many values you can specify. For this walkthrough, you need to specify only two settings:
    1. Origin Settings:
      • Origin Domain Name – When you click in this box, a menu appears with AWS resources you can choose. Choose the S3 bucket you created in Step 1, or paste the endpoint URL you copied in Step 1. In this example, the endpoint is example-com.s3-website-us-east-1.amazonaws.com.
        Screenshot of Origin Domain Name
    1. Distribution Settings:
      • Alternate Domain Names (CNAMEs) – Type your root domain (for this walkthrough, it is example.com).
        Screenshot of Alternate Domain Names
  4. Click Create Distribution.
  5. Wait for the CloudFront distribution to deploy completely before proceeding to Step 3. After CloudFront creates your distribution, the value of the Status column for your distribution will change from InProgress to Deployed. The distribution is then ready to process requests.
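
If you want to create the distribution programmatically, here is a boto3 sketch of the same two settings. It is an illustration rather than part of the original walkthrough; the caller reference, comment, and origin values are placeholders, and depending on current CloudFront requirements you may also need an ACM certificate that covers the alternate domain name.

import time
import boto3

cloudfront = boto3.client("cloudfront")

# Website endpoint copied in Step 1 (placeholder).
origin_domain = "example-com.s3-website-us-east-1.amazonaws.com"

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),   # any unique string
        "Comment": "Redirect zone apex to www",
        "Enabled": True,
        # Alternate Domain Names (CNAMEs): the zone apex served by this distribution.
        "Aliases": {"Quantity": 1, "Items": ["example.com"]},
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "s3-website-redirect",
                    "DomainName": origin_domain,
                    # An S3 website endpoint is configured as a custom origin over HTTP.
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "http-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-website-redirect",
            "ViewerProtocolPolicy": "allow-all",
            # Legacy cache settings; a managed cache policy could be used instead.
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
    }
)

# Use this domain name as the alias target in Step 3.
print(response["Distribution"]["DomainName"])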

Step 3: Configure an alias resource record set in your hosted zone

In this step, you use Route 53 to configure an alias resource record set for your zone apex that resolves to the CloudFront distribution you made in Step 2:

  1. From the AWS Management Console, choose Route 53 and choose Hosted zones.
  2. On the Hosted zones page, choose your domain. This takes you to the Record sets page.
    Screenshot of choosing the domain on the Hosted zones page
  3. Click Create Record Set.
  4. Leave the Name box blank and choose Alias: Yes.
  5. Click the Alias Target box, and choose the CloudFront distribution you created in Step 2. If the distribution does not appear in the list automatically, you can copy and paste the name exactly as it appears in the CloudFront console.
  6. Click Create.
    Screenshot of creating the record set
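
As a scripted alternative, the boto3 sketch below creates the same alias record. The hosted zone ID and distribution domain are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID that Route 53 uses for all CloudFront alias targets.

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z1234567890ABC"                     # your hosted zone (placeholder)
CLOUDFRONT_ALIAS_ZONE_ID = "Z2FDTNDATAQYW2"           # fixed value for CloudFront distributions
DISTRIBUTION_DOMAIN = "d1234abcd5678.cloudfront.net"  # from Step 2 (placeholder)

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Comment": "Alias the zone apex to the CloudFront distribution",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "example.com.",
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": CLOUDFRONT_ALIAS_ZONE_ID,
                        "DNSName": DISTRIBUTION_DOMAIN,
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ],
    },
)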

Step 4: Validate that the redirect is working

To confirm that you have correctly configured all components of this solution and your zone apex is redirecting to the www domain as expected, open a browser and navigate to your zone apex. In this walkthrough, the zone apex is http://example.com and it should redirect automatically to http://www.example.com.
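
You can also check the redirect without a browser. The short Python check below (an illustration added here, not part of the original walkthrough) requests the zone apex without following redirects and prints the status code and Location header, which should be 301 and the www URL.

import http.client

conn = http.client.HTTPConnection("example.com", 80, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()

# Expect a 301 status and a Location header pointing at http://www.example.com/
print(response.status, response.getheader("Location"))
conn.close()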

Summary

In this post, I showed how you can help protect your web application against DDoS attacks by using Route 53, S3, and CloudFront to redirect your zone apex to your externally hosted CDN. This helps protect your origin from being exposed to direct DDoS attacks.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions about implementing the solution in this blog post, start a new thread in the Route 53 forum.

– Shawn

How to Easily Log On to AWS Services by Using Your On-Premises Active Directory

Post Syndicated from Ron Cully original https://aws.amazon.com/blogs/security/how-to-easily-log-on-to-aws-services-by-using-your-on-premises-active-directory/

AWS Directory Service for Microsoft Active Directory (Enterprise Edition), also known as Microsoft AD, now enables your users to log on with just their on-premises Active Directory (AD) user name—no domain name is required. This new domainless logon feature makes it easier to set up connections to your on-premises AD for use with applications such as Amazon WorkSpaces and Amazon QuickSight, and it keeps domain naming details out of the user logon experience. This new interforest trusts capability is now available when using Microsoft AD with Amazon WorkSpaces and Amazon QuickSight Enterprise Edition.

In this blog post, I explain how Microsoft AD domainless logon works with AD interforest trusts, and I show an example of setting up Amazon WorkSpaces to use this capability.

To follow along, you must have already implemented an on-premises AD infrastructure. You will also need to have an AWS account with an Amazon Virtual Private Cloud (Amazon VPC). I start with some basic concepts to explain domainless logon. If you have prior knowledge of AD domain names, NetBIOS names, logon names, and AD trusts, you can skip the following “Concepts” section and move ahead to the “Interforest Trust with Domainless Logon” section.

Concepts: AD domain names, NetBIOS names, logon names, and AD trusts

AD directories are distributed hierarchical databases that run on one or more domain controllers. AD directories comprise a forest that contains one or more domains. Each forest has a root domain and a global catalog that runs on at least one domain controller. Optionally, a forest may contain child domains as a way to organize and delegate administration of objects. The domains contain user accounts each with a logon name. Domains also contain objects such as groups, computers, and policies; however, these are outside the scope of this blog post. When child domains exist in a forest, root domains are frequently unused for user accounts. The global catalog contains a list of all user accounts for all domains within the forest, similar to a searchable phonebook listing of all domain accounts. The following diagram illustrates the basic structure and naming of a forest for the company example.com.

Diagram of basic structure and naming of forest for example.com

Domain names

AD domains are Domain Name Service (DNS) names, and domain names are used to locate user accounts and other objects in the directory. A forest has one root domain, and its name consists of a prefix name and a suffix name. Often administrators configure their forest suffix to be the registered DNS name for their organization (for example, example.com) and the prefix is a name associated with their forest root domain (for example, us). Child domain names consist of a prefix followed by the root domain name. For example, let’s say you have a root domain us.example.com, and you created a child domain for your sales organization with a prefix of sales. The FQDN is the domain prefix of the child domain combined with the root domain prefix and the organization suffix, all separated by periods (“.”). In this example, the FQDN for the sales domain is sales.us.example.com.

NetBIOS names

NetBIOS is a legacy application programming interface (API) that worked over network protocols. NetBIOS names were used to locate services in the network and, for compatibility with legacy applications, AD associates a NetBIOS name with each domain in the directory. Today, NetBIOS names continue to be used as simplified names to find user accounts and services that are managed within AD and must be unique within the forest and any trusted forests (see “Interforest trusts” section that follows). NetBIOS names must be 15 or fewer characters long.

For this post, I have chosen the following strategy to ensure that my NetBIOS names are unique across all domains and all forests. For my root domain, I concatenate the root domain prefix with the forest suffix, without the .com and without the periods. In this case, usexample is the NetBIOS name for my root domain us.example.com. For my child domains, I concatenate the child domain prefix with the root domain prefix without periods. This results in salesus as the NetBIOS name for the child domain sales.us.example.com. For my example, I can use the NetBIOS name salesus instead of the FQDN sales.us.example.com when searching for users in the sales domain.
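
To make this naming strategy concrete, here is a small Python helper (an example I am adding for illustration, not something from the AWS tooling) that builds the candidate NetBIOS names and enforces the 15-character limit.

def netbios_name(*prefixes, max_len=15):
    """Concatenate domain prefixes, drop periods, and enforce the NetBIOS length limit."""
    name = "".join(p.replace(".", "") for p in prefixes).lower()
    if len(name) > max_len:
        raise ValueError(f"{name!r} exceeds the {max_len}-character NetBIOS limit")
    return name

print(netbios_name("us", "example"))   # usexample -> root domain us.example.com
print(netbios_name("sales", "us"))     # salesus   -> child domain sales.us.example.com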

Logon names

Logon names are used to log on to Active Directory and must be 20 or fewer characters long (for example, jsmith or dadams). Logon names must be unique within a domain, but they do not have to be unique between different domains in the same forest. For example, there can be only one dadams in the sales.us.example.com (salesus) domain, but there could also be a dadams in the hr.us.example.com (hrus) domain. When possible, it is a best practice for logon names to be unique across all forests and domains in your AD infrastructure. By doing so, you can typically use the AD logon name as a person’s email name (the local-part of an email address), and your forest suffix as the email domain (for example, [email protected]). This way, end users only have one name to remember for email and logging on to AD. Failure to use unique logon names results in some people having different logon and email names.

For example, let’s say there is a Daryl Adams in hrus with a logon name of dadams and a Dale Adams in salesus with a logon name of dadams. The company is using example.com as its email domain. Because email requires addresses to be unique, you can only have one [email protected] email address. Therefore, you would have to give one of these two people (let’s say Dale Adams) a different email address such as [email protected]. Now Dale has to remember to logon to the network as dadams (the AD logon name) but have an email name of daleadams. If unique user names were assigned instead, Dale could have a logon name of daleadams and an email name of daleadams.

Logging on to AD

To allow AD to find user accounts in the forest during logon, users must include their logon name and the FQDN or the NetBIOS name for the domain where their account is located. Frequently, the computers used by people are joined to the same domain as the user’s account. The Windows desktop logon screen chooses the computer’s domain as the default domain for logon, so users typically only need to type their logon name and password. However, if the computer is joined to a different domain than the user’s account, the FQDN or NetBIOS name of the user’s domain is also required.

For example, suppose jsmith has an account in sales.us.example.com, and the domain has a NetBIOS name salesus. Suppose jsmith tries to log on using a shared computer that is in the computers.us.example.com domain with a NetBIOS name of uscomputers. The computer defaults the logon domain to uscomputers, but jsmith does not exist in the uscomputers domain. Therefore, jsmith must type her logon name and her domain’s FQDN or NetBIOS name in the user name field of the Windows logon screen. Windows supports multiple syntaxes to do this, including NetBIOS\username (salesus\jsmith) and FQDN\username (sales.us.example.com\jsmith).

Interforest trusts

Most organizations have a single AD forest in which to manage user accounts, computers, printers, services, and other objects. Within a single forest, AD uses a transitive trust between all of its domains. A transitive trust means that within a trust, domains trust users, computers, and services that exist in other domains in the same forest. For example, a printer in printers.us.example.com trusts sales.us.example.com\jsmith. As long as jsmith is given permissions to do so, jsmith can use the printer in printers.us.example.com.

An organization at times might need two or more forests. When multiple forests are used, it is often desirable to allow a user in one forest to access a resource, such as a web application, in a different forest. However, trusts do not work between forests unless the administrators of the two forests agree to set up a trust.

For example, suppose a company that has a root domain of us.example.com has another forest in the EU with a root domain of eu.example.com. The company wants to let users from both forests share the same printers to accommodate employees who travel between locations. By creating an interforest trust between the two forests, this can be accomplished. In the following diagram, I illustrate that us.example.com trusts users from eu.example.com, and the forest eu.example.com trusts users from us.example.com through a two-way forest trust.

Diagram of a two-way forest trust

In rare cases, an organization may require three or more forests. Unlike domain trusts within a single forest, interforest trusts are not transitive. That means, for example, that if the forest us.example.com trusts eu.example.com, and eu.example.com trusts jp.example.com, us.example.com does not automatically trust jp.example.com. For us.example.com to trust jp.example.com, an explicit, separate trust must be created between these two forests.

When setting up trusts, there is a notion of trust direction. The direction of the trust determines which forest is trusting and which forest is trusted. In a one-way trust, one forest is the trusting forest, and the other is the trusted forest. The direction of the trust is from the trusting forest to the trusted forest. A two-way trust is simply two one-way trusts going in opposite directions; in this case, both forests are both trusting and trusted.

Microsoft Windows and AD use an authentication technology called Kerberos. After a user logs on to AD, Kerberos gives the user’s Windows account a Kerberos ticket that can be used to access services. Within a forest, the ticket can be presented to services such as web applications to prove who the user is, without the user providing a logon name and password again. Without a trust, the Kerberos ticket from one forest will not be honored in a different forest. In a trust, the trusting forest agrees to trust users who have logged on to the trusted forest, by trusting the Kerberos ticket from the trusted forest. With a trust, the user account associated with the Kerberos ticket can access services in the trusting forest if the user account has been granted permissions to use the resource in the trusting forest.

Interforest Trust with Domainless Logon

For many users, remembering domain names or NetBIOS names has been a source of numerous technical support calls. With the new updates to Microsoft AD, AWS applications such as Amazon WorkSpaces can be updated to support domainless logon through interforest trusts between Microsoft AD and your on-premises AD. Domainless logon eliminates the need for people to enter a domain name or a NetBIOS name to log on if their logon name is unique across all forests and all domains.

As described in the “Concepts” section earlier in this post, AD authentication requires a logon name to be presented with an FQDN or NetBIOS name. If AD does not receive an FQDN or NetBIOS name, it cannot find the user account in the forest. Windows can partially hide domain details from users if the Windows computer is joined to the same domain in which the user’s account is located. For example, if jsmith in salesus uses a computer that is joined to the sales.us.example.com domain, jsmith does not have to remember her domain name or NetBIOS name. Instead, Windows uses the domain of the computer as the default domain to try when jsmith enters only her logon name. However, if jsmith is using a shared computer that is joined to the computers.us.example.com domain, jsmith must log on by specifying her domain of sales.us.example.com or her NetBIOS name salesus.

With domainless logon, Microsoft AD takes advantage of global catalogs, and because most user names are unique across an entire organization, the need for an FQDN or NetBIOS name for most users to log on is eliminated.

Let’s look at how domainless logon works.

AWS applications that use Directory Service use a similar AWS logon page and identical logon process. Unlike a Windows computer joined to a domain, the AWS logon page is associated with a Directory Service directory, but it is not associated with any particular domain. When Microsoft AD is used, the User name field of the logon page accepts an FQDN\logon name, NetBIOS\logon name, or just a logon name. For example, the logon screen will accept sales.us.example.com\jsmith, salesus\jsmith, or jsmith.

In the following example, the company example.com has forests in the US and the EU, and one in AWS using Microsoft AD. To make NetBIOS names unique, I use my naming strategy described earlier in the section “NetBIOS names.” For the US root domain, the FQDN is us.example.com and the NetBIOS name is usexample. For the EU, the FQDN is eu.example.com and the NetBIOS name is euexample. For AWS, the FQDN is aws.example.com and the NetBIOS name is awsexample. Continuing with my naming strategy, my unique child domains have the NetBIOS names salesus, hrus, saleseu, and hreu. Each of the forests has a global catalog that lists all users from all domains within the forest. The following graphic illustrates the forest configuration.

Diagram of the forest configuration

As shown in the preceding diagram, the global catalog for the US forest contains a jsmith in sales and dadams in hr. For the EU, there is a dadams in sales and a tpella in hr, and the AWS forest has a bharvey. The users shown in green type (jsmith, tpella, and bharvey) have unique names across all forests in the trust and qualify for domainless logon. The two dadams shown in red do not qualify for domainless logon because the user name is not unique across all trusted forests.

As shown in the following diagram, when a user types in only a logon name (such as jsmith or dadams) without an FQDN or NetBIOS name, domainless logon simultaneously searches for a matching logon name in the global catalogs of the Microsoft AD forest (aws.example.com) and all trusted forests (us.example.com and eu.example.com). For jsmith, domainless logon finds a single user account that matches the logon name in sales.us.example.com and adds the domain to the logon name before authenticating. If no accounts match the logon name, the logon fails without an authentication attempt. If dadams in the EU attempts to use only his logon name, domainless logon finds two dadams users, one in hr.us.example.com and one in sales.eu.example.com. This ambiguity prevents domainless logon. To log on, dadams must provide his FQDN or NetBIOS name (in other words, sales.eu.example.com\dadams or saleseu\dadams).

Diagram showing when a user types in only a logon name without an FQDN or NetBIOS name
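
To make the lookup rule concrete, here is a simplified Python sketch of the logic just described. It only models the ambiguity rule and is not AWS’s actual implementation; the catalog contents mirror the example forests above.

# Hypothetical global catalog contents for the three example forests.
GLOBAL_CATALOGS = {
    "aws.example.com": {"bharvey"},
    "us.example.com": {"jsmith", "dadams"},   # jsmith in sales, dadams in hr
    "eu.example.com": {"dadams", "tpella"},   # dadams in sales, tpella in hr
}

def resolve_logon(logon_name):
    """Return forest\\logon_name if the name is unique across all trusted forests."""
    matches = [forest for forest, users in GLOBAL_CATALOGS.items() if logon_name in users]
    if len(matches) == 1:
        return f"{matches[0]}\\{logon_name}"   # unique: the domain is added automatically
    if not matches:
        raise LookupError("no matching account; the logon fails")
    raise LookupError("ambiguous logon name; an FQDN or NetBIOS name is required")

print(resolve_logon("jsmith"))      # us.example.com\jsmith
try:
    resolve_logon("dadams")
except LookupError as err:
    print(err)                      # ambiguous logon name; an FQDN or NetBIOS name is required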

Upon successful logon, the logon page caches in a cookie the logon name and domain that were used. In subsequent logons, the end user does not have to type anything except their password. Also, because the domain is cached, the global catalogs do not need to be searched on subsequent logons. This minimizes global catalog searching, maximizes logon performance, and eliminates the need for users to remember domains (in most cases).

To maximize security associated with domainless logon, all authentication failures result in an identical failure notification that tells the user to check their domain name, user name, and password before trying again. This prevents hackers from using error codes or failure messages to glean information about logon names and domains in your AD directory.

If you follow best practices so that all user names are unique across all domains and all forests, domainless logon eliminates the requirement for your users to remember their FQDN or NetBIOS name to log on. This simplifies the logon experience for end users and can reduce your technical support resources that you use currently to help end users with logging on.

Solution overview

In this example of domainless logon, I show how Amazon WorkSpaces can use your existing on-premises AD user accounts through Microsoft AD. This example requires:

  1. An AWS account with an Amazon VPC.
  2. An AWS Microsoft AD directory in your Amazon VPC.
  3. An existing AD deployment in your on-premises network.
  4. A secured network connection from your on-premises network to your Amazon VPC.
  5. A two-way AD trust between your Microsoft AD and your on-premises AD.

I configure Amazon WorkSpaces to use a Microsoft AD directory that exists in the same Amazon VPC. The Microsoft AD directory is configured to have a two-way trust to the on-premises AD. Amazon WorkSpaces uses Microsoft AD and the two-way trust to find users in your on-premises AD and create Amazon WorkSpaces instances. After the instances are created, I send end users an invitation to use their Amazon WorkSpaces. The invitation includes a link for them to complete their configuration and a link to download an Amazon WorkSpaces client to their device. When the user logs in to their Amazon WorkSpaces account, the user specifies the login name and password for their on-premises AD user account. Through the two-way trust between Microsoft AD and the on-premises AD, the user is authenticated and gains access to their Amazon WorkSpaces desktop.

Getting started

Now that we have covered how the pieces fit together and you understand how FQDN, NetBIOS, and logon names are used, let’s walk through the steps to use Microsoft AD with domainless logon to your on-premises AD for Amazon WorkSpaces.

Step 1 – Set up your Microsoft AD in your Amazon VPC

If you already have a Microsoft AD directory running, skip to Step 2. If you do not have a Microsoft AD directory to use with Amazon WorkSpaces, you can create the directory in the Directory Service console and attach to it from the Amazon WorkSpaces console, or you can create the directory within the Amazon WorkSpaces console.

To create the directory from Amazon WorkSpaces (as shown in the following screenshot):

  1. Sign in to the AWS Management Console.
  2. Under All services, choose WorkSpaces from the Desktop & App Streaming section.
  3. Choose Get Started Now.
  4. Choose Launch next to Advanced Setup, and then choose Create Microsoft AD.

To create the directory from the Directory Service console:

  1. Sign in to the AWS Management Console.
  2. Under Security & Identity, choose Directory Service.
  3. Choose Get Started Now.
  4. Choose Create Microsoft AD.
    Screenshot of choosing "Create Microsoft AD"

In this example, I use example.com as my organization name. The Directory DNS is the FQDN for the root domain, and it is aws.example.com in this example. For my NetBIOS name, I follow the naming model I showed earlier and use awsexample. Note that the Organization Name shown in the following screenshot is required only when creating a directory from Amazon WorkSpaces; it is not required when you create a Microsoft AD directory from the AWS Directory Service workflow.

Screenshot of establishing directory details

For more details about Microsoft AD creation, review the steps in AWS Directory Service for Microsoft Active Directory (Enterprise Edition). After you enter the required parameters, it may take up to 40 minutes for the directory to become active, so you might want to exit the console and come back later.

Note: First-time directory users receive 750 free directory hours.
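
If you script your infrastructure, the directory can also be created through the AWS Directory Service API. The boto3 sketch below mirrors the console values used in this example; the password, VPC ID, and subnet IDs are placeholders you would replace with your own.

import boto3

ds = boto3.client("ds")

response = ds.create_microsoft_ad(
    Name="aws.example.com",        # Directory DNS (FQDN of the AWS forest root domain)
    ShortName="awsexample",        # NetBIOS name, following the naming model above
    Password="REPLACE_WITH_A_STRONG_ADMIN_PASSWORD",   # placeholder
    Description="Microsoft AD for Amazon WorkSpaces",
    VpcSettings={
        "VpcId": "vpc-0123456789abcdef0",                     # placeholder
        "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # two subnets in different AZs
    },
)
print(response["DirectoryId"])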

Step 2 – Create a trust relationship between your Microsoft AD and on-premises domains

To create a trust relationship between your Microsoft AD and on-premises domains:

  1. From the AWS Management Console, open Directory Service.
  2. Locate the Microsoft AD directory to use with Amazon WorkSpaces and choose its Directory ID link (as highlighted in the following screenshot).
    Screenshot of Directory ID link
  3. Choose the Trust relationships tab for the directory and follow the steps in Create a Trust Relationship (Microsoft AD) to create the trust relationships between your Microsoft AD and your on-premises domains.

For details about creating the two-way trust to your on-premises AD forest, see Tutorial: Create a Trust Relationship Between Your Microsoft AD on AWS and Your On-Premises Domain.
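
The same trust can also be created programmatically once the on-premises side of the trust has been configured. In this boto3 sketch, the directory ID, trust password, and DNS forwarder addresses are placeholders.

import boto3

ds = boto3.client("ds")

ds.create_trust(
    DirectoryId="d-1234567890",                         # Microsoft AD directory from Step 1 (placeholder)
    RemoteDomainName="us.example.com",                  # FQDN of the on-premises forest root
    TrustPassword="REPLACE_WITH_THE_TRUST_PASSWORD",    # must match the trust password set on-premises
    TrustDirection="Two-Way",
    TrustType="Forest",
    ConditionalForwarderIpAddrs=["10.0.10.10", "10.0.10.11"],   # on-premises DNS servers (placeholders)
)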

Step 3 – Create Amazon Workspaces for on-premises users

For details about getting started with Amazon WorkSpaces, see Getting Started with Amazon WorkSpaces. The following are the setup steps.

  1. From the AWS Management Console, choose WorkSpaces from the Desktop & App Streaming section.
  2. Choose Directories in the left pane.
  3. Locate and select the Microsoft AD directory that you set up in Steps 1 and 2.
  4. If the Registered status for the directory says No, open the Actions menu and choose Register.
    Screenshot of "Register" in "Actions" menu
  5. Wait until the Registered status changes to Yes. The status change should take only a few seconds.
  6. Choose WorkSpaces in the left pane.
  7. Choose Launch WorkSpaces.
  8. Select the Microsoft AD directory that you set up in Steps 1 and 2 and choose Next Step.
    Screenshot of choosing the Microsoft AD directory
  9. In the Select Users from Directory section, type a partial or full logon name, email address, or user name for an on-premises user for whom you want to create an Amazon WorkSpace and choose Search. The returned list of users should be the users from your on-premises AD forest.
  10. In the returned results, scroll through the list and select the users for whom to create an Amazon WorkSpace and choose Add Selected. You may repeat the search and select processes until up to 20 users appear in the Amazon WorkSpaces list at the bottom of the screen. When finished, choose Next Step.
    Screenshot of identifying users for whom to create a WorkSpace
  11. Select a bundle to be used for the Amazon WorkSpaces you are creating and choose Next Step.
  12. Choose the Running Mode, Encryption settings, and configure any Tags. Choose Next Step.
  13. Review the configuration of the Amazon WorkSpaces and click Launch WorkSpaces. It may take up to 20 minutes for the Amazon WorkSpaces to be available.
    Screenshot of reviewing the WorkSpaces configuration
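
The console flow above can also be automated. A minimal boto3 sketch follows, assuming the directory has already been registered with Amazon WorkSpaces; the directory ID, user name, and bundle ID are placeholders.

import boto3

workspaces = boto3.client("workspaces")

response = workspaces.create_workspaces(
    Workspaces=[
        {
            "DirectoryId": "d-1234567890",    # registered Microsoft AD directory (placeholder)
            "UserName": "jsmith",             # on-premises AD logon name found through the trust
            "BundleId": "wsb-0123456789abc",  # bundle chosen for the WorkSpace (placeholder)
        }
    ]
)

# Creation is asynchronous; report anything that was rejected immediately.
for failed in response.get("FailedRequests", []):
    print(failed["ErrorCode"], failed["ErrorMessage"])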

Step 4 – Invite the users to log in to their Amazon Workspaces

  1. From the AWS Management Console, choose WorkSpaces from the Desktop & App Streaming section.
  2. Choose the WorkSpaces menu item in the left pane.
  3. Select the Amazon WorkSpaces you created in Step 3. Then choose the Actions menu and choose Invite User. A login email is sent to the users.
  4. Copy the text from the Invite screen, then paste the text into an email to the user.

Step 5 – Users log in to their Amazon WorkSpace

  1. The users receive their Amazon WorkSpaces invitations in email and follow the instructions to launch the Amazon WorkSpaces login screen.
  2. Each user enters their user name and password.
  3. After a successful login, future Amazon WorkSpaces logins from the same computer will present what the user last typed on the login screen. The user only needs to provide their password to complete the login. If the user provided only a login name at the last successful login, the domain for the user account is silently added to the subsequent login attempt.

To learn more about Directory Service, see the AWS Directory Service home page. If you have questions about Directory Service products, please post them on the Directory Service forum. To learn more about Amazon WorkSpaces, visit the Amazon WorkSpaces home page. For questions related to Amazon WorkSpaces, please post them on the Amazon WorkSpaces forum.

– Ron

Let’s Encrypt 2016 In Review

Post Syndicated from Let's Encrypt - Free SSL/TLS Certificates original https://letsencrypt.org/2017/01/06/le-2016-in-review.html

Our first full year as a live CA was an exciting one. I’m incredibly proud of what our team and community accomplished during 2016. I’d like to share some thoughts about how we’ve changed, what we’ve accomplished, and what we’ve learned.

At the start of 2016, Let’s Encrypt certificates had been available to the public for less than a month and we were supporting approximately 240,000 active (unexpired) certificates. That seemed like a lot at the time! Now we’re frequently issuing that many new certificates in a single day while supporting more than 20,000,000 active certificates in total. We’ve issued more than a million certificates in a single day a few times recently. We’re currently serving an average of 6,700 OCSP responses per second. We’ve done a lot of optimization work, we’ve had to add some hardware, and there have been some long nights for our staff, but we’ve been able to keep up and we’re ready for another year of strong growth.

Let's Encrypt certificate issuance statistics.

We added a number of new features during the past year, including support for the ACME DNS challenge, ECDSA signing, IPv6, and Internationalized Domain Names.

When 2016 started, our root certificate had not been accepted into any major root programs. Today we’ve been accepted into the Mozilla, Apple, and Google root programs. We’re close to announcing acceptance into another major root program. These are major steps towards being able to operate as an independent CA. You can read more about why here.

The ACME protocol for issuing and managing certificates is at the heart of how Let’s Encrypt works. Having a well-defined and heavily audited specification developed in public on a standards track has been a major contributor to our growth and the growth of our client ecosystem. Great progress was made in 2016 towards standardizing ACME in the IETF ACME working group. We’re hoping for a final document around the end of Q2 2017, and we’ll announce plans for implementation of the updated protocol around that time as well.

Supporting the kind of growth we saw in 2016 meant adding staff, and during the past year Internet Security Research Group (ISRG), the non-profit entity behind Let’s Encrypt, went from four full-time employees to nine. We’re still a pretty small crew given that we’re now one of the largest CAs in the world (if not the largest), but it works because of our intense focus on automation, the fact that we’ve been able to hire great people, and because of the incredible support we receive from the Let’s Encrypt community.

Let’s Encrypt exists in order to help create a 100% encrypted Web. Our own metrics can be interesting, but they’re only really meaningful in terms of the impact they have on progress towards a more secure and privacy-respecting Web. The metric we use to track progress towards that goal is the percentage of page loads using HTTPS, as seen by browsers. According to Firefox Telemetry, the Web has gone from approximately 39% of page loads using HTTPS each day to just about 49% during the past year. We’re incredibly close to a Web that is more encrypted than not. We’re proud to have been a big part of that, but we can’t take credit for all of it. Many people and organizations around the globe have come to realize that we need to invest in a more secure and privacy-respecting Web, and have taken steps to secure their own sites as well as their customers’. Thank you to everyone that has advocated for HTTPS this year, or helped to make it easier for people to make the switch.

We learned some lessons this year. When we had service interruptions they were usually related to managing the rapidly growing database backing our CA. Also, while most of our code had proper tests, some small pieces didn’t and that led to incidents that shouldn’t have happened. That said, I’m proud of the way we handle incidents promptly, including quick and transparent public disclosure.

We also learned a lot about our client ecosystem. At the beginning of 2016, ISRG / Let’s Encrypt provided client software called letsencrypt. We’ve always known that we would never be able to produce software that would work for every Web server/stack, but we felt that we needed to offer a client that would work well for a large number of people and that could act as a reference client. By March of 2016, earlier than we had foreseen, it had become clear that our community was up to the task of creating a wide range of quality clients, and that our energy would be better spent fostering that community than producing our own client. That’s when we made the decision to hand off development of our client to the Electronic Frontier Foundation (EFF). EFF renamed the client to Certbot and has been doing an excellent job maintaining and improving it as one of many client options.

As exciting as 2016 was for Let’s Encrypt and encryption on the Web, 2017 seems set to be an even more incredible year. Much of the infrastructure and many of the plans necessary for a 100% encrypted Web came into being or solidified in 2016. More and more hosting providers and CDNs are supporting HTTPS with one click or by default, often without additional fees. It has never been easier for people and organizations running their own sites to find the tools, services, and information they need to move to HTTPS. Browsers are planning to update their user interfaces to better reflect the risks associated with non-secure connections.

We’d like to thank our community, including our sponsors, for making everything we did this past year possible. Please consider getting involved or making a donation, and if your company or organization would like to sponsor Let’s Encrypt please email us at [email protected].

Amazon Lightsail – The Power of AWS, the Simplicity of a VPS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-lightsail-the-power-of-aws-the-simplicity-of-a-vps/

Some people like to assemble complex systems (houses, computers, or furniture) from parts. They relish the planning process, carefully researching each part and selecting those that give them the desired balance of power and flexibility. With planning out of the way, they enjoy the process of assembling the parts into a finished unit. Other people do not find this do-it-yourself (DIY) approach attractive or worthwhile, and are simply interested in getting to the results as quickly as possible without having to make too many decisions along the way.

Sound familiar?

I believe that this model applies to systems architecture and system building as well. Sometimes you want to take the time to hand-select individual AWS components (servers, storage, IP addresses, and so forth) and put them together on your own. At other times you simply need a system that is preconfigured and preassembled, and is ready to run your web applications with no system-building effort on your part.

In many cases, those seeking a preassembled system turned to a Virtual Private Server, or VPS. With a VPS, you are presented with a handful of options, each ready to run, and available to you for a predictable monthly fee.

While the VPS is a perfect getting-started vehicle, over time the environment can become constrained. At a certain point you may need to step outside the boundaries of the available plans as your needs grow, only to find that you have no options for incremental improvement, and are faced with the need to make a disruptive change. Or, you may find that your options for automated scaling or failover are limited, and that you need to set it all up yourself.

Introducing Amazon Lightsail
Today we are launching Amazon Lightsail. With a couple of clicks you can choose a configuration from a menu and launch a virtual machine preconfigured with SSD-based storage, DNS management, and a static IP address. You can launch your favorite operating system (Amazon Linux AMI, Ubuntu, CentOS, FreeBSD, or Debian), developer stack (LAMP, LEMP, MEAN, or Node.js), or application (Drupal, Joomla, Redmine, GitLab, and many others), with flat-rate pricing plans that start at $5 per month including a generous allowance for data transfer.

Here are the plans and the configurations:

You get the simplicity of a VPS, backed by the power, reliability, and security of AWS. As your needs grow, you will have the ability to smoothly step outside of the initial boundaries and connect to additional AWS database, messaging, and content distribution services.

All in all, Lightsail is the easiest way for you to get started on AWS and jumpstart your cloud projects, while giving you a smooth, clear path into the future.

A Quick Tour
Let’s take a quick tour of Amazon Lightsail! Each page of the Lightsail console includes a Quick Assist tab. You can click on it at any time in order to access context-sensitive documentation that will help you to get the most out of Lightsail:

I start at the main page. I have no instances or other resources at first:

I click on Create Instance to get moving. I choose my machine image (an App and an OS, or simply an OS), select an instance plan, and give my instance a name, all on one page:

I can launch multiple instances, set up a configuration script, or specify an alternate SSH keypair if I’d like. I can also choose an Availability Zone. I’ll choose WordPress on the $10 plan, leave everything else as-is, and click on Create. It is up and running within seconds:

I can manage the instance by clicking on it:

My instance has a public IP address that I can open in my browser. WordPress is already installed, configured, and running:

I’ll need the WordPress password in order to finish setting it up. I click on Connect using SSH on the instance management page and I’m connected via a browser-based SSH terminal window without having to do any key management or install any browser plugins. The WordPress admin password is stored in file bitnami_application_password in the ~bitnami directory (the image below shows a made-up password):

You can bookmark the terminal window in order to be able to access it later with just a click or two.

I can manage my instance from the menu bar:

For example, I can access the performance metrics for my instance:

And I can manage my firewall settings:

I can capture the state of my instance by taking a Snapshot:

Later, I can restore the snapshot to a fresh instance:

I can also create static IP addresses and make use of domain names:

Advanced Lightsail – APIs and VPC Peering
Before I wrap up, let’s talk about a few of the more advanced features of Amazon Lightsail – APIs and VPC Peering.

As is almost always the case with AWS, there’s a full set of APIs behind all of the console functionality that we just reviewed. Here are just a few of the more interesting functions:

  • GetBundles – Get a list of the bundles (machine configurations).
  • CreateInstances – Create one or more Lightsail instances.
  • GetInstances – Get a list of all Lightsail instances.
  • GetInstance – Get information about a specific instance.
  • CreateInstanceSnapshot – Create a snapshot of an instance.
  • CreateInstancesFromSnapshot – Create one or more Lightsail instances from a snapshot.
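
To give a feel for these APIs, here is a short boto3 sketch that lists bundles, launches an instance, and snapshots it. The blueprint, bundle, and instance names are placeholders; look up current IDs with GetBlueprints and GetBundles before running anything like this.

import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

# List the available bundles (machine configurations).
for bundle in lightsail.get_bundles()["bundles"]:
    print(bundle["bundleId"], bundle["price"], bundle["ramSizeInGb"])

# Launch a WordPress instance (blueprint and bundle IDs are placeholders).
lightsail.create_instances(
    instanceNames=["my-wordpress"],
    availabilityZone="us-east-1a",
    blueprintId="wordpress",
    bundleId="micro_1_0",
)

# Inspect the instance and snapshot it once it is running.
print(lightsail.get_instance(instanceName="my-wordpress")["instance"]["state"])
lightsail.create_instance_snapshot(
    instanceName="my-wordpress",
    instanceSnapshotName="my-wordpress-snap-1",
)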

All of the Lightsail instances within an account run within a “shadow” VPC that is not visible in the AWS Management Console. If the code that you are running on your Lightsail instances needs access to other AWS resources, you can set up VPC peering between the shadow VPC and another one in your account, and create the resources therein. Click on Account (top right), scroll down to Advanced features, and check VPC peering:

You can now connect your Lightsail apps to other AWS resources that are running within a VPC.

Pricing and Availability
We are launching Amazon Lightsail today in the US East (Northern Virginia) Region, and plan to expand it to other regions in the near future.

Prices start at $5 per month.

Jeff;

Another Shadow Brokers Leak

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/11/another_shadow_.html

There’s another leak of NSA hacking tools and data from the Shadow Brokers. This one includes a list of hacked sites.

According to analyses from researchers here and here, Monday’s dump contains 352 distinct IP addresses and 306 domain names that purportedly have been hacked by the NSA. The timestamps included in the leak indicate that the servers were targeted between August 22, 2000 and August 18, 2010. The addresses include 32 .edu domains and nine .gov domains. In all, the targets were located in 49 countries, with the top 10 being China, Japan, Korea, Spain, Germany, India, Taiwan, Mexico, Italy, and Russia. Vitali Kremez, a senior intelligence analyst at security firm Flashpoint, also provides useful analysis here.

The dump also includes various other pieces of data. Chief among them are configuration settings for an as-yet unknown toolkit used to hack servers running Unix operating systems. If valid, the list could be used by various organizations to uncover a decade’s worth of attacks that until recently were closely guarded secrets. According to this spreadsheet, the servers were mostly running Solaris, an operating system from Sun Microsystems that was widely used in the early 2000s. Linux and FreeBSD are also shown.

The data is old, but you can see if you’ve been hacked.

Honestly, I am surprised by this release. I thought that the original Shadow Brokers dump was everything. Now that we know they held things back, there could easily be more releases.

EDITED TO ADD (11/6): More on the NSA targets. Note that the Hague-based Organization for the Prohibition of Chemical Weapons is on the list, hacked in 2000.