Tag Archives: domain name

Implementing Default Directory Indexes in Amazon S3-backed Amazon CloudFront Origins Using Lambda@Edge

Post Syndicated from Ronnie Eichler original https://aws.amazon.com/blogs/compute/implementing-default-directory-indexes-in-amazon-s3-backed-amazon-cloudfront-origins-using-lambdaedge/

With the recent launch of Lambda@Edge, it’s now possible for you to provide even more robust functionality to your static websites. Amazon CloudFront is a content delivery network (CDN) service. In this post, I show how you can use Lambda@Edge along with the CloudFront origin access identity (OAI) for Amazon S3 and still provide simple URLs (such as www.example.com/about/ instead of www.example.com/about/index.html).

Background

Amazon S3 is a great platform for hosting a static website. You don’t need to worry about managing servers or underlying infrastructure—you just publish your static content to an S3 bucket. S3 provides a DNS name such as <bucket-name>.s3-website-<AWS-region>.amazonaws.com. Use this name for your website by creating a CNAME record in your domain’s DNS environment (or Amazon Route 53) as follows:

www.example.com -> <bucket-name>.s3-website-<AWS-region>.amazonaws.com
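
If you manage DNS in Route 53, a minimal CLI sketch for that record might look like the following; the hosted zone ID, record name, and bucket website endpoint are placeholders rather than values from this walkthrough:

# Point www.example.com at the S3 website endpoint (all values are placeholders)
aws route53 change-resource-record-sets \
    --hosted-zone-id Z1EXAMPLEZONE \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "www.example.com",
          "Type": "CNAME",
          "TTL": 300,
          "ResourceRecords": [{ "Value": "<bucket-name>.s3-website-<AWS-region>.amazonaws.com" }]
        }
      }]
    }'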

You can also put CloudFront in front of S3 to further scale the performance of your site and cache the content closer to your users. CloudFront can enable HTTPS-hosted sites, either by using a custom Secure Sockets Layer (SSL) certificate or a managed certificate from AWS Certificate Manager. CloudFront also offers integration with AWS WAF, a web application firewall. As you can see, it’s possible to achieve some robust functionality by using S3, CloudFront, and other managed services and not have to worry about maintaining underlying infrastructure.

One of the key concerns that you might have when implementing any type of WAF or CDN is that you want to force your users to go through the CDN. If you implement CloudFront in front of S3, you can achieve this by using an OAI. However, in order to do this, you cannot use the HTTP endpoint that is exposed by S3’s static website hosting feature. Instead, CloudFront must use the S3 REST endpoint to fetch content from your origin so that the request can be authenticated using the OAI. This presents some challenges in that the REST endpoint does not support redirection to a default index page.

CloudFront does allow you to specify a default root object (index.html), but it only works on the root of the website (such as http://www.example.com > http://www.example.com/index.html). It does not work on any subdirectory (such as http://www.example.com/about/). If you were to attempt to request this URL through CloudFront, CloudFront would make an S3 GetObject API call against a key that does not exist.

Of course, it is a bad user experience to expect users to always type index.html at the end of every URL (or even know that it should be there). Until now, there has not been an easy way to provide these simpler URLs (equivalent to the DirectoryIndex directive in an Apache web server configuration) to users through CloudFront, at least not if you still want to restrict access to the S3 origin using an OAI. However, with the release of Lambda@Edge, you can use a JavaScript function running on the CloudFront edge nodes to look for these patterns and request the appropriate object key from the S3 origin.

Solution

In this example, you use the compute power at the CloudFront edge to inspect the request as it comes in from the client, then rewrite it so that CloudFront requests a default index object (index.html in this case) for any request URI that ends in ‘/’.

When a request is made against a web server, the client specifies the object to obtain in the request. You can use this URI and apply a regular expression to it so that these URIs get resolved to a default index object before CloudFront requests the object from the origin. Use the following code:

'use strict';
exports.handler = (event, context, callback) => {
    
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '\/index.html');
    
    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);
    
    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;
    
    // Return to CloudFront
    return callback(null, request);

};

To get started, create an S3 bucket to be the origin for CloudFront:

Create bucket

On the other screens, you can just accept the defaults for the purposes of this walkthrough. If this were a production implementation, I would recommend enabling bucket logging and specifying an existing S3 bucket as the destination for access logs. These logs can be useful if you need to troubleshoot issues with your S3 access.
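
If you prefer to script this step, a minimal AWS CLI sketch is shown below; the bucket names and Region are placeholders, not values from this walkthrough:

# Create the origin bucket (names and Region are placeholders)
aws s3api create-bucket --bucket my-example-origin-bucket --region us-east-1

# Optional: send access logs to an existing logging bucket
aws s3api put-bucket-logging \
    --bucket my-example-origin-bucket \
    --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"my-example-log-bucket","TargetPrefix":"origin-access/"}}'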

Now, put some content into your S3 bucket. For this walkthrough, create two simple webpages to demonstrate the functionality: a page that resides at the website root, and another that is in a subdirectory.

<s3bucketname>/index.html

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Root home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the root directory.</p>
    </body>
</html>

<s3bucketname>/subdirectory/index.html

<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>

When uploading the files into S3, you can accept the defaults. You add a bucket policy as part of the CloudFront distribution creation that allows CloudFront to access the S3 origin. You should now have an S3 bucket that looks like the following:

Root of bucket

Subdirectory in bucket
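
If you’d rather upload the two pages from the command line than through the console, a sketch like this works; the local file layout and bucket name are assumptions:

# Upload both pages with an explicit HTML content type (bucket name is a placeholder)
aws s3 cp index.html s3://my-example-origin-bucket/index.html --content-type text/html
aws s3 cp subdirectory/index.html s3://my-example-origin-bucket/subdirectory/index.html --content-type text/html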

Next, create a CloudFront distribution that your users will use to access the content. Open the CloudFront console, and choose Create Distribution. For Select a delivery method for your content, under Web, choose Get Started.

On the next screen, you set up the distribution. Below are the options to configure:

  • Origin Domain Name:  Select the S3 bucket that you created earlier.
  • Restrict Bucket Access: Choose Yes.
  • Origin Access Identity: Create a new identity.
  • Grant Read Permissions on Bucket: Choose Yes, Update Bucket Policy.
  • Object Caching: Choose Customize (I am changing the behavior to avoid having CloudFront cache objects, as this could affect your ability to troubleshoot while implementing the Lambda code).
    • Minimum TTL: 0
    • Maximum TTL: 0
    • Default TTL: 0

You can accept all of the other defaults. Again, this is a proof-of-concept exercise. After you are comfortable that the CloudFront distribution is working properly with the origin and Lambda code, you can revisit the preceding values and make changes before implementing it in production.

CloudFront distributions can take several minutes to deploy (because the changes have to propagate out to all of the edge locations). After that’s done, test the functionality of the S3-backed static website. Looking at the distribution, you can see that CloudFront assigns a domain name:

CloudFront Distribution Settings

Try to access the website using a combination of various URLs:

http://<domainname>/:  Works

› curl -v http://d3gt20ea1hllb.cloudfront.net/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET / HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "cb7e2634fe66c1fd395cf868087dd3b9"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: -D2FSRwzfcwyKZKFZr6DqYFkIf4t7HdGw2MkUF5sE6YFDxRJgi0R1g==
< Content-Length: 209
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:16 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Root home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the root directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

This works because CloudFront is configured to request a default root object (index.html) from the origin.

http://<domainname>/subdirectory/:  Doesn’t work

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< x-amz-server-side-encryption: AES256
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: Iqf0Gy8hJLiW-9tOAdSFPkL7vCWBrgm3-1ly5tBeY_izU82ftipodA==
< Content-Length: 0
< Content-Type: application/x-directory
< Last-Modified: Wed, 19 Jul 2017 19:21:24 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

If you use a tool such as cURL to test this, you notice that CloudFront and S3 are returning a blank response. The reason for this is that the subdirectory does exist, but it does not resolve to an S3 object. Keep in mind that S3 is an object store, so there are no real directories. User interfaces such as the S3 console present a hierarchical view of a bucket with folders based on the presence of forward slashes, but behind the scenes the bucket is just a collection of keys that represent stored objects.

http://<domainname>/subdirectory/index.html:  Works

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/index.html
*   Trying 54.192.192.130...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.130) port 80 (#0)
> GET /subdirectory/index.html HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 20:35:15 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: RefreshHit from cloudfront
< X-Amz-Cf-Id: bkh6opXdpw8pUomqG3Qr3UcjnZL8axxOH82Lh0OOcx48uJKc_Dc3Cg==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3f2788d309d30f41de96da6f931d4ede.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

This request works as expected because you are referencing the object directly. Now, you implement the Lambda@Edge function to return the default index.html page for any subdirectory. Looking at the example JavaScript code, here’s where the magic happens:

var newuri = olduri.replace(/\/$/, '\/index.html');

You are going to use a JavaScript regular expression to match any ‘/’ that occurs at the end of the URI and replace it with ‘/index.html’. This is equivalent to what S3 does on its own with static website hosting. However, as I mentioned earlier, you can’t rely on that behavior if you want to use a bucket policy to restrict access so that users must go through CloudFront; in that case, all requests to the S3 bucket must be authenticated using the S3 REST API. Because of this, you implement a Lambda@Edge function that takes any client request ending in ‘/’ and appends a default ‘index.html’ to the request before CloudFront fetches the object from the origin.
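
If you want to sanity-check the rewrite before deploying anything, a quick Node.js one-liner shows what the regular expression does to a trailing-slash URI (this assumes Node.js is installed locally):

# Prints "/subdirectory/index.html"
node -e 'console.log("/subdirectory/".replace(/\/$/, "/index.html"))'

# A URI that does not end in "/" passes through unchanged and prints "/about.html"
node -e 'console.log("/about.html".replace(/\/$/, "/index.html"))'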

In the Lambda console, choose Create function. On the next screen, skip the blueprint selection and choose Author from scratch, as you’ll use the sample code provided.

Next, configure the trigger. Choosing the empty box shows a list of available triggers. Choose CloudFront and select your CloudFront distribution ID (created earlier). For this example, leave Cache Behavior as * and CloudFront Event as Origin Request. Select the Enable trigger and replicate box and choose Next.

Lambda Trigger

Next, give the function a name and a description. Then, copy and paste the following code:

'use strict';
exports.handler = (event, context, callback) => {
    
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '\/index.html');
    
    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);
    
    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;
    
    // Return to CloudFront
    return callback(null, request);

};

Next, define a role that grants permissions to the Lambda function. For this example, choose Create new role from template, Basic Edge Lambda permissions. This creates a new IAM role for the Lambda function and grants the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}

In a nutshell, these are the permissions that the function needs to create the necessary CloudWatch log group and log stream, and to put the log events so that the function is able to write logs when it executes.
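
To read those logs after a test request, keep in mind that Lambda@Edge writes them to CloudWatch Logs in the Region closest to the edge location that executed the function. The commands below are a sketch; the log group name pattern and Region are assumptions to verify in your own account:

# Find the replicated function's log groups in a Region near your test client (pattern is an assumption)
aws logs describe-log-groups --region us-east-1 --log-group-name-prefix "/aws/lambda/us-east-1."

# Pull the console.log output from the function's executions
aws logs filter-log-events --region us-east-1 \
    --log-group-name "/aws/lambda/us-east-1.<function-name>" \
    --filter-pattern "URI"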

After the function has been created, you can go back to the browser (or cURL) and re-run the test for the subdirectory request that failed previously:

› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.202...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.202) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 21:18:44 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: rwFN7yHE70bT9xckBpceTsAPcmaadqWB9omPBv2P6WkIfQqdjTk_4w==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3572de112011f1b625bb77410b0c5cca.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
    <head>
        <meta charset="utf-8">
        <title>Subdirectory home page</title>
    </head>
    <body>
        <p>Hello, this page resides in the /subdirectory/ directory.</p>
    </body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact

You have now configured a way for CloudFront to return a default index page for subdirectories in S3!

Summary

In this post, you used Lambda@Edge to serve a default index object on subdirectory URLs while still using CloudFront with an S3 origin access identity. To find out more about this use case, see Lambda@Edge integration with CloudFront in our documentation.

If you have questions or suggestions, feel free to comment below. For troubleshooting or implementation help, check out the Lambda forum.

Pirate Bay’s Iconic .SE Domain has Expired (Updated)

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-bays-iconic-se-domain-has-expired-and-is-for-sale-171016/

When The Pirate Bay first came online during the summer of 2003, its main point of access was thepiratebay.org.

Since then the site has burnt through more than a dozen domains, trying to evade seizures or other legal threats.

For many years thepiratebay.se operated as the site’s main domain name. Earlier this year the site moved back to the good old .org again, and from the looks of it, TPB is ready to say farewell to the Swedish domain.

Thepiratebay.se expired last week and, if nothing happens, it will be deactivated tomorrow. This means that the site might lose control over a piece of its history.

The torrent site moved from the ORG to the SE domain in 2012, fearing that US authorities would seize the former. Around that time the Department of Homeland Security took hundreds of sites offline and the Pirate Bay team feared that they would be next.

Thepiratebay.se has expired

Ironically, however, the next big threat came from Sweden, the Scandinavian country where the site once started.

In 2013, a local anti-piracy group filed a motion targeting two of The Pirate Bay’s domains, ThePirateBay.se and PirateBay.se, a case that has been dragging on for years now.

During this time TPB moved back and forth between domains but the .se domain turned out to be a safer haven than most alternatives, despite the legal issues. Many other domains were simply seized or suspended without prior notice.

When the Swedish Court of Appeal eventually ruled that The Pirate Bay’s domain had to be confiscated and forfeited to the state, the site’s operators moved back to the .org domain, where it all started.

Although a Supreme Court appeal is still pending, according to a report from IDG earlier this year the court has placed a lock on the domain. This prevents the owner from changing or transferring it, which may explain why it has expired.

The lock is relevant, as the domain has not only expired but has also been put up for sale again in the SEDO marketplace, with a minimum bid of $90. This sale would be impossible if the domain were locked.

Thepiratebay.se for sale

Perhaps most ironic of all is the fact that TPB moved to .se because it feared that the US-controlled .org domain was easy prey.

Fast forward half a decade and over a dozen domains have come and gone while thepiratebay.org still stands strong, despite entertainment industry pressure.

Update: We updated the article to mention that the domain name is locked by the Swedish Supreme Court. This means that it can’t be updated and would explain why it has expired.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Application Load Balancers Now Support Multiple TLS Certificates With Smart Selection Using SNI

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/

Today we’re launching support for multiple TLS/SSL certificates on Application Load Balancers (ALB) using Server Name Indication (SNI). You can now host multiple TLS secured applications, each with its own TLS certificate, behind a single load balancer. In order to use SNI, all you need to do is bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client. These new features are provided at no additional charge.

If you’re looking for a TL;DR on how to use this new feature just click here. If you’re like me and you’re a little rusty on the specifics of Transport Layer Security (TLS) then keep reading.

TLS? SSL? SNI?

People tend to use the terms SSL and TLS interchangeably even though the two are technically different. SSL technically refers to a predecessor of the TLS protocol. To keep things simple I’ll be using the term TLS for the rest of this post.

TLS is a protocol for securely transmitting data like passwords, cookies, and credit card numbers. It enables privacy, authentication, and integrity of the data being transmitted. TLS uses certificate based authentication where certificates are like ID cards for your websites. You trust the person that signed and issued the certificate, the certificate authority (CA), so you trust that the data in the certificate is correct. When a browser connects to your TLS-enabled ALB, ALB presents a certificate that contains your site’s public key, which has been cryptographically signed by a CA. This way the client can be sure it’s getting the ‘real you’ and that it’s safe to use your site’s public key to establish a secure connection.

With SNI support we’re making it easy to use more than one certificate with the same ALB. The most common reason you might want to use multiple certificates is to handle different domains with the same load balancer. It’s always been possible to use wildcard and subject-alternate-name (SAN) certificates with ALB, but these come with limitations. Wildcard certificates only work for related subdomains that match a simple pattern and while SAN certificates can support many different domains, the same certificate authority has to authenticate each one. That means you have to reauthenticate and reprovision your certificate every time you add a new domain.

One of our most frequent requests on forums, reddit, and in my e-mail inbox has been to use the Server Name Indication (SNI) extension of TLS to choose a certificate for a client. Since TLS operates at the transport layer, below HTTP, it doesn’t see the hostname requested by a client. SNI works by having the client tell the server “This is the domain I expect to get a certificate for” when it first connects. The server can then choose the correct certificate to respond to the client. All modern web browsers and a large majority of other clients support SNI. In fact, today we see SNI supported by over 99.5% of clients connecting to CloudFront.

Smart Certificate Selection on ALB

ALB’s smart certificate selection goes beyond SNI. In addition to containing a list of valid domain names, certificates also describe the type of key exchange and cryptography that the server supports, as well as the signature algorithm (SHA2, SHA1, MD5) used to sign the certificate. To establish a TLS connection, a client starts a TLS handshake by sending a “ClientHello” message that outlines the capabilities of the client: the protocol versions, extensions, cipher suites, and compression methods. Based on what an individual client supports, ALB’s smart selection algorithm chooses a certificate for the connection and sends it to the client. ALB supports both the classic RSA algorithm and the newer, hipper, and faster Elliptic-curve based ECDSA algorithm. ECDSA support among clients isn’t as prevalent as SNI, but it is supported by all modern web browsers. Since it’s faster and requires less CPU, it can be particularly useful for ultra-low latency applications and for conserving the amount of battery used by mobile applications. Since ALB can see what each client supports from the TLS handshake, you can upload both RSA and ECDSA certificates for the same domains and ALB will automatically choose the best one for each client.

Using SNI with ALB

I’ll use a few example websites like VimIsBetterThanEmacs.com and VimIsTheBest.com. I’ve purchased and hosted these domains on Amazon Route 53, and provisioned two separate certificates for them in AWS Certificate Manager (ACM). If I want to securely serve both of these sites through a single ALB, I can quickly add both certificates in the console.

First, I’ll select my load balancer in the console, go to the listeners tab, and select “view/edit certificates”.

Next, I’ll use the “+” button in the top left corner to select some certificates then I’ll click the “Add” button.

There are no more steps. If you’re not really a GUI kind of person you’ll be pleased to know that it’s also simple to add new certificates via the AWS Command Line Interface (CLI) (or SDKs).

aws elbv2 add-listener-certificates --listener-arn <listener-arn> --certificates CertificateArn=<cert-arn>
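
Afterward, you can confirm what is bound to the listener with the matching describe call (the listener ARN is a placeholder):

# List every certificate attached to the secure listener, including the default
aws elbv2 describe-listener-certificates --listener-arn <listener-arn>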

Things to know

  • ALB Access Logs now include the client’s requested hostname and the certificate ARN used. If the “hostname” field is empty (represented by a “-”), the client did not use the SNI extension in their request.
  • You can use any of your certificates in ACM or IAM.
  • You can bind multiple certificates for the same domain(s) to a secure listener. Your ALB will choose the optimal certificate based on multiple factors including the capabilities of the client.
  • If the client does not support SNI your ALB will use the default certificate (the one you specified when you created the listener).
  • There are three new ELB API calls: AddListenerCertificates, RemoveListenerCertificates, and DescribeListenerCertificates.
  • You can bind up to 25 certificates per load balancer (not counting the default certificate).
  • These new features are supported by AWS CloudFormation at launch.

You can see an example of these new features in action with a set of websites created by my colleague Jon Zobrist: https://www.exampleloadbalancer.com/.
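
If you want to see for yourself which certificate a given hostname receives, openssl can send the SNI extension explicitly; this is a sketch, and the ALB DNS name and hostname are placeholders based on the examples in this post:

# Ask the load balancer for the certificate matching a specific hostname via SNI
openssl s_client -connect <alb-dns-name>:443 -servername vimisbetterthanemacs.com </dev/null 2>/dev/null \
    | openssl x509 -noout -subject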

Overall, I will personally use this feature and I’m sure a ton of AWS users will benefit from it as well. I want to thank the Elastic Load Balancing team for all their hard work in getting this into the hands of our users.

Randall

SOPA Ghosts Hinder U.S. Pirate Site Blocking Efforts

Post Syndicated from Ernesto original https://torrentfreak.com/sopa-ghosts-hinder-u-s-pirate-site-blocking-efforts-171008/

Website blocking has become one of the entertainment industries’ favorite anti-piracy tools.

All over the world, major movie and music industry players have gone to court demanding that ISPs take action, often with great success.

Internal MPAA research showed that website blockades help to deter piracy and former boss Chris Dodd said that they are one of the most effective anti-piracy tools available.

While not everyone is in agreement on this, the numbers are used to lobby politicians and convince courts. Interestingly, however, nothing is happening in the United States, which is where most pirate site visitors come from.

This is baffling to many people. Why would US-based companies go out of their way to demand ISP blocking in the most exotic locations, but fail to do the same at home?

We posed this question to Neil Turkewitz, RIAA’s former Executive Vice President International, who currently runs his own consulting group.

The main reason why pirate site blocking requests have not yet been made in the United States is down to SOPA. When the proposed SOPA legislation made headlines five years ago there was a massive backlash against website blocking, which isn’t something copyright groups want to reignite.

“The legacy of SOPA is that copyright industries want to avoid resurrecting the ghosts of SOPA past, and principally focus on ways to creatively encourage cooperation with platforms, and to use existing remedies,” Turkewitz tells us.

Instead of taking the likes of Comcast and Verizon to court, the entertainment industries focused on voluntary agreements, such as the now-defunct Copyright Alerts System. However, that doesn’t mean that website blocking and domain seizures are not an option.

“SOPA made ‘website blocking’ as such a four-letter word. But this is actually fairly misleading,” Turkewitz says.

“There have been a variety of civil and criminal actions addressing the conduct of entities subject to US jurisdiction facilitating piracy, regardless of the source, including hundreds of domain seizures by DHS/ICE.”

Indeed, there are plenty of legal options already available to do much of what SOPA promised. ABS-CBN has taken over dozens of pirate site domain names through the US court system. Most recently even through an ex-parte order, meaning that the site owners had no option to defend themselves before they lost their domains.

ISP and search engine blocking is also around the corner. As we reported earlier this week, a Virginia magistrate judge recently recommended an injunction which would require search engines and Internet providers to prevent users from accessing Sci-Hub.

Still, the major movie and music companies are not yet using these tools to take on The Pirate Bay or other major pirate sites. If it’s so easy, then why not? Apparently, SOPA may still be in the back of their minds.

Interestingly, the RIAA’s former top executive wasn’t a fan of SOPA when it was first announced, as it wouldn’t do much to extend the legal remedies that were already available.

“I actually didn’t like SOPA very much since it mostly reflected existing law and maintained a paradigm that didn’t involve ISP’s in creative interdiction, and simply preserved passivity. To see it characterized as ‘copyright gone wild’ was certainly jarring and incongruous,” Turkewitz says.

Ironically, it looks like a bill that failed to pass, and didn’t impress some copyright holders to begin with, is still holding them back after five years. They’re certainly not using all the legal options available to avoid SOPA comparison. The question is, for how long?

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

RIAA Identifies Top YouTube MP3 Rippers and Other Pirate Sites

Post Syndicated from Ernesto original https://torrentfreak.com/riaa-identifies-top-youtube-mp3-rippers-and-other-pirate-sites-171006/

Around the same time as Hollywood’s MPAA, the RIAA has also submitted its overview of “notorious markets” to the Office of the US Trade Representative (USTR).

These submissions help to guide the U.S. Government’s position toward foreign countries when it comes to copyright enforcement.

The RIAA’s overview begins positively, announcing two major successes achieved over the past year.

The first is the shutdown of sites such as Emp3world, AudioCastle, Viperial, Album Kings, and im1music. These sites all used the now-defunct Sharebeast platform, whose operator pleaded guilty to criminal copyright infringement.

Another victory followed a few weeks ago when YouTube-MP3.org shut down its services after being sued by the RIAA.

“The most popular YouTube ripping site, youtube-mp3.org, based in Germany and included in last year’s list of notorious markes [sic], recently shut down in response to a civil action brought by major record labels,” the RIAA writes.

This case also had an effect on similar services. Some stream ripping services that were reported to the USTR last year no longer permit the conversion and download of music videos on YouTube, the RIAA reports. However, they add that the problem is far from over.

“Unfortunately, several other stream-ripping sites have ‘doubled down’ and carry on in this illegal behavior, continuing to make this form of theft a major concern for the music industry,” the music group writes.

“The overall popularity of these sites and the staggering volume of traffic it attracts evidences the enormous damage being inflicted on the U.S. record industry.”

The music industry group is tracking more than 70 of these stream ripping sites and the most popular ones are listed in the overview of notorious markets. These are Mp3juices.cc, Convert2mp3.net, Savefrom.net, Ytmp3.cc, Convertmp3.io, Flvto.biz, and 2conv.com.

Youtube2mp3’s listing

The RIAA notes that many sites use domain privacy services to hide their identities, as well as Cloudflare to obscure the sites’ true hosting locations. This frustrates efforts to take action against these sites, they say.

Popular torrent sites are also highlighted, including The Pirate Bay. These sites regularly change domain names to avoid ISP blockades and domain seizures, and also use Cloudflare to hide their hosting location.

“BitTorrent sites, like many other pirate sites, are increasing [sic] turning to Cloudflare because routing their site through Cloudflare obfuscates the IP address of the actual hosting provider, masking the location of the site.”

Finally, the RIAA reports several emerging threats reported to the Government. Third party app stores, such as DownloadAtoZ.com, reportedly offer a slew of infringing apps. In addition, there’s a boom of Nigerian pirate sites that flood the market with free music.

“The number of such infringing sites with a Nigerian operator stands at over 200. Their primary method of promotion is via Twitter, and most sites make use of the Nigerian operated ISP speedhost247.com,” the report notes.

The full list of RIAA’s “notorious” pirate sites, which also includes several cyberlockers, MP3 search and download sites, as well as unlicensed pay services, can be found below. The full report is available here (pdf).

Stream-Ripping Sites

– Mp3juices.cc
– Convert2mp3.net
– Savefrom.net
– Ytmp3.cc
– Convertmp3.io
– Flvto.biz
– 2conv.com

Search-and-Download Sites

– Newalbumreleases.net
– Rnbxclusive.top
– DNJ.to

BitTorrent Indexing and Tracker Sites

– Thepiratebay.org
– Torrentdownloads.me
– Rarbg.to
– 1337x.to

Cyberlockers

– 4shared.com
– Uploaded.net
– Zippyshare.com
– Rapidgator.net
– Dopefile.pk
– Chomikuj.pl

Unlicensed Pay-for-Download Sites

– Mp3va.com
– Mp3fiesta.com

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Judge Recommends ISP and Search Engine Blocking of Sci-Hub in the US

Post Syndicated from Ernesto original https://torrentfreak.com/judge-recommends-isp-search-engine-blocking-sci-hub-us-171003/

Earlier this year the American Chemical Society (ACS), a leading source of academic publications in the field of chemistry, filed a lawsuit against Sci-Hub and its operator Alexandra Elbakyan.

The non-profit organization publishes tens of thousands of articles a year in its peer-reviewed journals. Because many of these are available for free on Sci-Hub, ACS wants to be compensated.

Sci-Hub was made aware of the legal proceedings but did not appear in court. As a result, a default was entered against the site. In addition to millions of dollars in damages, ACS also requested third-party Internet intermediaries to take action against the site.

While the request is rather unprecedented for the US, as it includes search engine and ISP blocking, Magistrate Judge John Anderson has included these measures in his recommendations.

Judge Anderson agrees that Sci-Hub is guilty of copyright and trademark infringement. In addition to $4,800,000 in statutory damages, he recommends a broad injunction that would require search engines, ISPs, domain registrars and other services to block Sci-Hub’s domain names.

“… the undersigned recommends that it be ordered that any person or entity in privity with Sci-Hub and with notice of the injunction, including any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries, cease facilitating access to any or all domain names and websites through which Sci-Hub engages in unlawful access to, use, reproduction, and distribution of ACS’s trademarks or copyrighted works.”

The recommendation

In addition to the above, domain registries and registrars will also be required to suspend Sci-Hub’s domain names. This also happened previously in a different lawsuit, but Sci-Hub swiftly moved to a new domain at the time.

“Finally, the undersigned recommends that it be ordered that the domain name registries and/or registrars for Sci-Hub’s domain names and websites, or their technical administrators, shall place the domain names on registryHold/serverHold or such other status to render the names/sites non-resolving,” the recommendation adds.

If the U.S. District Court Judge adopts this recommendation, it would mean that Internet providers such as Comcast could be ordered to block users from accessing Sci-Hub. That’s a big deal since pirate site blockades are not common in the United States.

This would likely trigger a response from affected Internet services, who generally want to avoid being dragged into these cases. They certainly don’t want such a far-reaching measure to be introduced through a default order.

Sci-Hub itself doesn’t seem to be too bothered by the blocking prospect or the millions in damages it faces. The site has a Tor version which can’t be blocked by Internet providers, so determined scientists will still be able to access the site if they want.

Magistrate Judge John Anderson’s full findings of fact and recommendations are available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

US Court Orders Dozens of “Pirate” Site Domain Seizures

Post Syndicated from Ernesto original https://torrentfreak.com/us-court-orders-dozens-of-pirate-site-domain-seizures-170927/

ABS-CBN, the largest media and entertainment company in the Philippines, has delivered another strike to pirate sites in the United States.

Last week a federal court in Florida signed a default judgment against 43 websites that offered copyright-infringing streams of ABS-CBN owned movies, including Star Cinema titles.

The order was signed exactly one day after the complaint was filed, in what appears to be a streamlined process.

The media company accused the websites of trademark and copyright infringement by making free streams of its content available without permission. It then asked the court for assistance to shut these sites down as soon as possible.

“Defendants’ websites operating under the Subject Domain Names are classic examples of pirate operations, having no regard whatsoever for the rights of ABS-CBN and willfully infringing ABS-CBN’s intellectual property.

“As a result, ABS-CBN requires this Court’s intervention if any meaningful stop is to be put to Defendants’ piracy,” ABS-CBN wrote.

Instead of a lengthy legal process that can take years to complete, ABS-CBN went for an “ex-parte” request for domain seizures, which means that the websites in question are not notified or involved in the process before the order is issued.

After reviewing the proposed injunction, US District Judge Beth Bloom signed off on it. This means that all the associated registrars must hand over the domain names in question.

“The domain name registrars for the Subject Domain Names shall immediately assist in changing the registrar of record for the Subject Domain Names, to a holding account with a registrar of Plaintiffs’ choosing..,” the order (pdf) reads.

In the days that followed, several streaming-site domains were indeed taken over. Movieonline.io, 1movies.tv, 123movieshd.us, 4k-movie.us, icefilms.ws and others are now linking to a notice page with information about the lawsuit instead.

The notice

Gomovies.es, which is also included, has not been transferred yet, but the operator appears to be aware of the lawsuit as the site now redirects to Gomovies.vg. Other domains, such as Onlinefullmovie.me, Putlockerm.live and Newasiantv.io remain online as well.

While the targeted sites together are good for thousands of daily visitors, they’re certainly not the biggest fish.

That said, the most significant thing about the case is not that these domain names have been taken offline. What stands out is the ability of an ex-parte request from a copyright holder to easily take out dozens of sites in one swoop.

Given ABS-CBN’s legal track record, this is likely not the last effort of this kind. The question now is if others will follow suit.

The full list of targeted domains is as follows.

1 movieonline.io
2 1movies.tv
3 gomovies.es
4 123movieshd.us
5 4k-movie.us
6 desitvflix.net
7 globalpinoymovies.com
8 icefilms.ws
9 jhonagemini.com
10 lambinganph.info
11 mrkdrama.com
12 newasiantv.me
13 onlinefullmovie.me
14 pariwiki.net
15 pinoychannel.live
16 pinoychannel.mobi
17 pinoyfullmovies.net
18 pinoyhdtorrent.com
19 pinoylibangandito.pw
20 pinoymoviepedia.ch
21 pinoysharetv.com
22 pinoytambayanhd.com
23 pinoyteleseryerewind.info
24 philnewsnetwork.com
25 pinoytvrewind.info
26 pinoytzater.com
27 subenglike.com
28 tambayantv.org
29 teleseryi.com
30 thepinoy1tv.com
31 thepinoychannel.com
32 tvbwiki.com
33 tvnaa.com
34 urpinoytv.com
35 vikiteleserye.com
36 viralsocialnetwork.com
37 watchpinoymoviesonline.com
38 pinoysteleserye.xyz
39 pinoytambayan.world
40 lambingan.lol
41 123movies.film
42 putlockerm.live
43 yonip.zone
43 yonipzone.rocks

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Peru Authorities Shut Down First ‘Pirate’ Websites, Three Arrested

Post Syndicated from Andy original https://torrentfreak.com/peru-authorities-shut-down-first-pirate-websites-three-arrested-170925/

For a country with a soaring crime rate, where violent carjackings and other crimes are reportedly commonplace, Internet piracy isn’t something that’s been high on the agenda in Peru.

Nevertheless, under pressure from rightsholders, local authorities have now taken decisive action against the country’s most popular ‘pirate’ sites.

On the orders of prosecutor Miguel Ángel Puicón, a specialized police unit carried out searches earlier this month looking for the people behind Pelis24 (Movies24) and Series24, sites that are extremely popular across all of South America, not just Peru.

Local media reports that an initial search took place in the Los Olivos district of the Lima Province where two people were arrested in connection with the sites. On the same day, a second search was executed in the town of Rimac where a third person was detained.

The case was launched following a rightsholder complaint to the Special Prosecutor’s Office for Customs Crimes and Intellectual Property in Lima. It stated that three domains – pelis24.com, pelis24.tv and series24.tv – were offering unlicensed movies and TV shows to the public.

“In view of the abundant evidence, the office requested rights-limiting measures from the criminal judge. A search of the property was carried out, and the preliminary 48-hour detention of the people under investigation was requested,” authorities said in a statement.

The warrant not only covered seizure of physical items but also the domain names associated with the platforms. As shown in the image below, they now display the following seizure banner (translated from Spanish).

Pelis24/Series24 Seizure Banner

Authorities say that a detailed preliminary investigation took place in order to corroborate the information provided by the complainant. Once the measures were approved by a judge, the Prosecutor’s Office acted in coordination with the Investigations Division of the High Technology Crimes unit to carry out the operation.

According to Puicón, this is the first action against the operators of a pirate site in Peru.

“The purpose was to have the detainees close the sites voluntarily after providing us with the login codes,” he said. “We do not have a technology department, so the specialized high-tech police and complainants were present to preserve evidence.”

Local sources indicate that sentences for piracy can be as long as six years in serious cases. However, Peru has been exclusively tackling counterfeiting of physical discs, with online piracy being allowed to run rampant.

“The Office of the Prosecutor has the competency to deal with crimes against intellectual property but has been working exclusively in cases of physical piracy,” Puicón says.

“Online piracy has another connotation, we must use other procedures, another form of investigation and another strategy. Therefore, the authorities that are aware of these crimes must be trained on technological issues.”

It’s believed that at least a million Peruvians download infringing content from the Internet each week, a problem that will need to be tackled moving forward, when the authorities can gather the expertise to do so.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Russia’s Largest Torrent Site Celebrates 13 Years Online in a Chinese Restaurant

Post Syndicated from Andy original https://torrentfreak.com/russias-largest-torrent-site-celebrates-13-years-online-in-a-chinese-restaurant-170923/

For most torrent fans around the world, The Pirate Bay is the big symbol of international defiance. Over the years the site has fought, avoided, and snubbed its nose at dozens of battles, yet still remains online today.

But there is another site, located somewhere in the east, that has been online for nearly as long, has millions more registered members, and has proven just as defiant.

RuTracker, for those who haven’t yet found it, is a Russian-focused treasure trove of both local and international content. For many years the site was frequented only by native speakers but with the wonders of tools like Google Translate, anyone can use the site at the flick of the switch. When people are struggling to find content, it’s likely that RuTracker has it.

This position has attracted the negative attention of a wide range of copyright holders and thanks to legislation introduced during 2013, the site is now subject to complete blocking in Russia. In fact, RuTracker has proven so stubborn to copyright holder demands, it is now permanently blocked in the region by all ISPs.

Surprisingly, especially given the enthusiasm for blockades among copyright holders, this doesn’t seem to have dampened demand for the site’s services. According to SimilarWeb, against all the odds the site is still pulling in around 90 million visitors per month. But the impressive stats don’t stop there.

Impressive stats for a permanently blocked site

This week, RuTracker celebrates its 13th birthday, a relative lifetime for a site that has been front and center of Russia’s most significant copyright battles, trouble which doesn’t look like stopping anytime soon.

Back in 2010, for example, RU-Center, Russia’s largest domain name registrar and web-hosting provider, pulled the plug on the site’s former Torrents.ru domain. The Director of Public Relations at RU-Center said that the domain had been blocked on the orders of the Investigative Division of the regional prosecutor’s office in Moscow. The site never got its domain back but carried on regardless, despite the setbacks.

Back then the site had around 4,000,000 members but now, seven years on, its ranks have swelled to a reported 15,382,907. According to figures published by the site this week, 778,317 of those members signed up this year during a period the site was supposed to be completely inaccessible. Needless to say, its operators remain defiant.

“Today we celebrate the 13th anniversary of our tracker, which is the largest Russian (and not only) -language media library on this planet. A tracker strangely banished in the country where most of its audience is located – in Russia,” a site announcement reads.

“But, despite the prohibitions, with all these legislative obstacles, with all these technical difficulties, we see that our tracker still exists and is successfully developing. And we still believe that the library should be open and free for all, and not be subject to censorship or a victim of legislative and executive power lobbied by the monopolists of the media industry.”

It’s interesting to note the tone of the RuTracker announcement. On any other day it could’ve been written by the crew of The Pirate Bay who, in their prime, loved to stick a finger or two up to the copyright lobby and then rub their noses in it. For the team at RuTracker, that still appears to be one of the main goals.

Like The Pirate Bay but unlike many of the basic torrent indexers that have sprung up in recent years, RuTracker relies on users to upload its content. They certainly haven’t been sitting back. RuTracker reveals that during the past year and despite all the problems, users uploaded a total of 171,819 torrents – on average, 470 torrents per day.

Interestingly, the content most uploaded to the site also points to the growing internationalization of RuTracker. During the past year, the NBA / NCAA section proved most popular, closely followed by non-Russian rock music and NHL games. Non-Russian movies accounted for almost 2,000 fresh torrents in just 12 months.

“It is thanks to you this tracker lives!” the site’s operators informed the users.

“It is thanks to you that it was, is, and, for sure, will continue to offer the most comprehensive, diverse and, most importantly, quality content in the Russian Internet. You stayed with us when the tracker lost its original name: torrents.ru. You stayed with us when access to a new name was blocked in Russia: rutracker.org. You stayed with us when [the site’s trackers] were blocked. We will stay with you as long as you need us!”

So as RuTracker plans for another year online, all that remains is to celebrate its 13th birthday in style. That will be achieved tonight when every adult member of RuTracker is invited to enjoy a Chinese meal at the Tian Jin Chinese Restaurant in St. Petersburg.

Turn up early, seating is limited.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Block The Pirate Bay Within 10 Days, Dutch Court Tells ISPs

Post Syndicated from Andy original https://torrentfreak.com/block-the-pirate-bay-within-10-days-dutch-court-tells-isps-170922/

Three years ago in 2014, The Court of The Hague handed down its decision in a long-running case which had previously forced two Dutch ISPs, Ziggo and XS4ALL, to block The Pirate Bay.

Ruling against local anti-piracy outfit BREIN, which brought the case, the Court decided that a blockade would be ineffective and also restrict the ISPs’ entrepreneurial freedoms.

The Pirate Bay was unblocked while BREIN took its case to the Supreme Court, which in turn referred the matter to the EU Court of Justice for clarification. This June, the ECJ ruled that as a platform effectively communicating copyright works to the public, The Pirate Bay can indeed be blocked.

The ruling meant there were no major obstacles preventing the Dutch Supreme Court from ordering a future ISP blockade. Clearly, however, BREIN wanted a blocking decision more quickly. A decision handed down today means the anti-piracy group will achieve that in just a few days’ time.

The Hague Court of Appeal today ruled (Dutch) that the 2014 decision, which lifted the blockade against The Pirate Bay, is now largely obsolete.

“According to the Court of Appeal, the Hague Court did not give sufficient weight to the interests of the beneficiaries represented by BREIN,” BREIN said in a statement.

“The Court also wrongly looked at whether torrent traffic had been reduced by the blockade. It should have also considered whether visits to the website of The Pirate Bay itself decreased with a blockade, which speaks for itself.”

As a result, an IP address and DNS blockade of The Pirate Bay, similar to those already in place in the UK and other EU countries, will soon be put in place. BREIN says that four IP addresses will be affected along with hundreds of domain names through which the torrent platform can be reached.

The ISPs have been given just 10 days to put the blocks in place and if they fail there are fines of 2,000 euros per day, up to a maximum of one million euros.

“It is nice that obviously harmful and illegal sites like The Pirate Bay will be blocked again in the Netherlands,” says BREIN chief Tim Kuik.

“A very bad time for our culture, which was free to access via these sites, is now happily behind us.”

Today’s interim decision by the Court of Appeal will stand until the Supreme Court hands down its decision in the main case between BREIN and Ziggo / XS4ALL.

Looking forward, it seems extremely unlikely that the Supreme Court will hand down a conflicting decision, so we’re probably already looking at the beginning of the end for direct accessibility of The Pirate Bay in the Netherlands.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Manage Kubernetes Clusters on AWS Using CoreOS Tectonic

Post Syndicated from Arun Gupta original https://aws.amazon.com/blogs/compute/kubernetes-clusters-aws-coreos-tectonic/

There are multiple ways to run a Kubernetes cluster on Amazon Web Services (AWS). The first post in this series explained how to manage a Kubernetes cluster on AWS using kops. This second post explains how to manage a Kubernetes cluster on AWS using CoreOS Tectonic.

Tectonic overview

Tectonic delivers the most current upstream version of Kubernetes with additional features. It is a commercial offering from CoreOS and adds the following features over the upstream:

  • Installer
    Comes with a graphical installer that installs a highly available Kubernetes cluster. Alternatively, the cluster can be installed using AWS CloudFormation templates or Terraform scripts.
  • Operators
    An operator is an application-specific controller that extends the Kubernetes API to create, configure, and manage instances of complex stateful applications on behalf of a Kubernetes user. This release includes an etcd operator for rolling upgrades and a Prometheus operator for monitoring capabilities.
  • Console
    A web console provides a full view of applications running in the cluster. It also allows you to deploy applications to the cluster and start the rolling upgrade of the cluster.
  • Monitoring
    Node CPU and memory metrics are powered by the Prometheus operator. The graphs are available in the console. A large set of preconfigured Prometheus alerts are also available.
  • Security
    Tectonic ensures that the cluster is always up to date with the most recent patches/fixes. Tectonic clusters also enable role-based access control (RBAC). Different roles can be mapped to an LDAP service.
  • Support
    CoreOS provides commercial support for clusters created using Tectonic.

Tectonic can be installed on AWS using a GUI installer or Terraform scripts. The installer prompts you for the information needed to boot the Kubernetes cluster, such as AWS access and secret key, number of master and worker nodes, and instance size for the master and worker nodes. The cluster can be created after all the options are specified. Alternatively, Terraform assets can be downloaded and the cluster can be created later. This post shows using the installer.

CoreOS License and Pull Secret

Even though Tectonic is a commercial offering, a cluster of up to 10 nodes can be created with a free account at Get Tectonic for Kubernetes. After signup, CoreOS License and Pull Secret files are provided on your CoreOS account page. Download these files, as they are needed by the installer to boot the cluster.

IAM user permission

The IAM user to create the Kubernetes cluster must have access to the following services and features:

  • Amazon Route 53
  • Amazon EC2
  • Elastic Load Balancing
  • Amazon S3
  • Amazon VPC
  • Security groups

Use the aws-policy policy to grant the required permissions for the IAM user.

DNS configuration

A subdomain is required to create the cluster, and it must be registered as a public Route 53 hosted zone. The zone is used to host and expose the console web application. It is also used as the static namespace for the Kubernetes API server. This allows kubectl to talk directly with the master.

The domain may be registered using Route 53. Alternatively, a domain may be registered at a third-party registrar. This post uses a kubernetes-aws.io domain registered at a third-party registrar and a tectonic subdomain within it.

Generate a Route 53 hosted zone using the AWS CLI. Download jq to run this command:

ID=$(uuidgen) && \
aws route53 create-hosted-zone \
--name tectonic.kubernetes-aws.io \
--caller-reference $ID \
| jq .DelegationSet.NameServers

The command shows an output such as the following:

[
  "ns-1924.awsdns-48.co.uk",
  "ns-501.awsdns-62.com",
  "ns-1259.awsdns-29.org",
  "ns-749.awsdns-29.net"
]

Create NS records for the domain with your registrar. Make sure that the NS records can be resolved using a utility such as dig or a web-based dig interface. A sample output would look like the following:

The bottom of the screenshot shows NS records configured for the subdomain.
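
From a terminal, an equivalent check with dig might look like this, using the subdomain from this post; the answer should list the four name servers returned by the create-hosted-zone call:

# Confirm that the NS delegation for the subdomain resolves publicly
dig NS tectonic.kubernetes-aws.io +short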

Download and run the Tectonic installer

Download the Tectonic installer (version 1.7.1) and extract it. The latest installer can always be found at coreos.com/tectonic. Start the installer:

./tectonic/tectonic-installer/$PLATFORM/installer

Replace $PLATFORM with either darwin or linux. The installer opens your default browser and prompts you to select the cloud provider. Choose Amazon Web Services as the platform. Choose Next Step.

Specify the Access Key ID and Secret Access Key for the IAM user that you created earlier. This allows the installer to create resources required for the Kubernetes cluster. This also gives the installer full access to your AWS account. Alternatively, to protect the integrity of your main AWS credentials, use a temporary session token to generate temporary credentials.

You also need to choose a region in which to install the cluster. For the purpose of this post, I chose a region close to where I live, Northern California. Choose Next Step.

Give your cluster a name. This name is part of the static namespace for the master and the address of the console.

To enable in-place updates to the Kubernetes cluster, select the checkbox next to Automated Updates. This also enables updates to the etcd and Prometheus operators. This feature may become a default in future releases.

Choose Upload “tectonic-license.txt” and upload the previously downloaded license file.

Choose Upload “config.json” and upload the previously downloaded pull secret file. Choose Next Step.

Let the installer generate a CA certificate and key. In this case, the browser may not recognize this certificate, which I discuss later in the post. Alternatively, you can provide a CA certificate and a key in PEM format issued by an authorized certificate authority. Choose Next Step.

Use the SSH key for the region specified earlier. You also have an option to generate a new key. This allows you to later use SSH to connect to the Amazon EC2 instances provisioned by the cluster. Here is the command that can be used to log in:

ssh -i <key> core@<ec2-instance-ip>

Choose Next Step.

Define the number and instance type of master and worker nodes. In this case, create a six-node cluster. Make sure that the worker nodes have enough processing power and memory to run the containers.

An etcd cluster is used as persistent storage for all of Kubernetes API objects. This cluster is required for the Kubernetes cluster to operate. There are three ways to use the etcd cluster as part of the Tectonic installer:

  • (Default) Provision the cluster using EC2 instances. Additional EC2 instances are used in this case.
  • Use alpha support for provisioning the cluster with the etcd operator. The etcd operator automates operation of the etcd nodes backing the cluster itself, as well as of etcd instances created for application use. The etcd cluster is provisioned within the Tectonic installer.
  • Bring your own pre-provisioned etcd cluster.

Use the first option in this case.

For more information about choosing the appropriate instance type, see the etcd hardware recommendation. Choose Next Step.

Specify the networking options. The installer can create a new public VPC or use a pre-existing public or private VPC. Make sure that the VPC requirements are met for an existing VPC.

Give a DNS name for the cluster. Choose the domain for which the Route 53 hosted zone was configured earlier, such as tectonic.kubernetes-aws.io. Multiple clusters may be created under a single domain. The cluster name and the DNS name would typically match each other.

To select the CIDR range, choose Show Advanced Settings. You can also choose the Availability Zones for the master and worker nodes. By default, the master and worker nodes are spread across multiple Availability Zones in the chosen region. This makes the cluster highly available.

Leave the other values as default. Choose Next Step.

Specify an email address and password to be used as credentials to log in to the console. Choose Next Step.

At any point during the installation, you can choose Save progress. This allows you to save configurations specified in the installer. This configuration file can then be used to restore progress in the installer at a later point.

To start the cluster installation, choose Submit. Alternatively, you can download the Terraform assets by choosing Manually boot, which allows you to boot the cluster later.

The logs from the Terraform scripts are shown in the installer. When the installation is complete, the installer shows that the Terraform scripts were successfully applied, that the domain name was resolved, and that the console has started. The domain resolves only if the DNS delegation configured earlier worked, and it is the address at which the console is accessible.

Choose Download assets to download assets related to your cluster. It contains your generated CA, kubectl configuration file, and the Terraform state. This download is an important step as it allows you to delete the cluster later.

Choose Next Step for the final installation screen. It allows you to access the Tectonic console, gives you instructions about how to configure kubectl to manage this cluster, and finally deploys an application using kubectl.

Choose Go to my Tectonic Console. In our case, it is also accessible at http://cluster.tectonic.kubernetes-aws.io/.

As I mentioned earlier, the browser does not recognize the self-generated CA certificate. Choose Advanced and connect to the console. Enter the login credentials specified earlier in the installer and choose Login.

The Kubernetes upstream and console version are shown under Software Details. Cluster health shows All systems go, which means that the API server and the backend API can be reached.

To view different Kubernetes resources in the cluster, choose the resource in the left navigation bar. For example, all deployments can be seen by choosing Deployments.

By default, resources in all namespaces are shown. Other namespaces may be chosen from a menu at the top of the screen. Administration tasks such as managing namespaces, listing nodes, and configuring RBAC can be performed here as well.

Download and run Kubectl

Kubectl is required to manage the Kubernetes cluster. The latest stable version of kubectl for macOS (darwin/amd64) can be downloaded using the following command; for a Linux client, replace darwin with linux in the URL:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl

It can also be conveniently installed using the Homebrew package manager. To find and access a cluster, Kubectl needs a kubeconfig file. By default, this configuration file is at ~/.kube/config. This file is created when a Kubernetes cluster is created from your machine. However, in this case, download this file from the console.

In the console, choose admin, My Account, Download Configuration and follow the steps to download the kubectl configuration file. Move this file to ~/.kube/config. If kubectl has already been used on your machine before, then this file already exists. Make sure to take a backup of that file first.
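Before querying resources, you can confirm that kubectl can reach the cluster. This is a minimal sanity check and assumes the downloaded configuration file is now in place at ~/.kube/config:

# Show the API server endpoint that kubectl is configured to use
kubectl cluster-info

# List the master and worker nodes registered with the cluster
kubectl get nodes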

Now you can run the commands to view the list of deployments:

~ $ kubectl get deployments --all-namespaces
NAMESPACE         NAME                                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system       etcd-operator                           1         1         1            1           43m
kube-system       heapster                                1         1         1            1           40m
kube-system       kube-controller-manager                 3         3         3            3           43m
kube-system       kube-dns                                1         1         1            1           43m
kube-system       kube-scheduler                          3         3         3            3           43m
tectonic-system   container-linux-update-operator         1         1         1            1           40m
tectonic-system   default-http-backend                    1         1         1            1           40m
tectonic-system   kube-state-metrics                      1         1         1            1           40m
tectonic-system   kube-version-operator                   1         1         1            1           40m
tectonic-system   prometheus-operator                     1         1         1            1           40m
tectonic-system   tectonic-channel-operator               1         1         1            1           40m
tectonic-system   tectonic-console                        2         2         2            2           40m
tectonic-system   tectonic-identity                       2         2         2            2           40m
tectonic-system   tectonic-ingress-controller             1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-alertmanager   1         1         1            1           40m
tectonic-system   tectonic-monitoring-auth-prometheus     1         1         1            1           40m
tectonic-system   tectonic-prometheus-operator            1         1         1            1           40m
tectonic-system   tectonic-stats-emitter                  1         1         1            1           40m

This output is similar to the one shown in the console earlier. Now, this kubectl can be used to manage your resources.
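As a quick example, a recent kubectl can create and scale a test deployment directly from the command line. This is only a sketch; the nginx image, deployment name, and replica count are arbitrary choices:

# Create a deployment from the public nginx image
kubectl create deployment nginx --image=nginx

# Scale it to three replicas and verify the rollout
kubectl scale deployment nginx --replicas=3
kubectl get deployments nginx

# Remove the test deployment when done
kubectl delete deployment nginx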

Upgrade the Kubernetes cluster

Tectonic allows the in-place upgrade of the cluster. This is an experimental feature as of this release. Clusters can be updated either automatically or with manual approval.

To perform the update, choose Administration, Cluster Settings. If an earlier Tectonic installer, version 1.6.2 in this case, is used to install the cluster, then this screen would look like the following:

Choose Check for Updates. If any updates are available, choose Start Upgrade. After the upgrade is completed, the screen is refreshed.

This is an experimental feature in this release, so it should only be used on clusters that can be easily replaced. It may become fully supported in a future release. For more information about the upgrade process, see Upgrading Tectonic & Kubernetes.

Delete the Kubernetes cluster

Typically, the Kubernetes cluster is a long-running cluster to serve your applications. After its purpose is served, you may delete it. It is important to delete the cluster as this ensures that all resources created by the cluster are appropriately cleaned up.

The easiest way to delete the cluster is using the assets downloaded in the last step of the installer. Extract the downloaded zip file. This creates a directory like <cluster-name>_TIMESTAMP. In that directory, run the following command to delete the cluster:

TERRAFORM_CONFIG=$(pwd)/.terraformrc terraform destroy --force

This destroys the cluster and all associated resources.

If you forgot to download the assets, a copy is kept in the directory tectonic/tectonic-installer/darwin/clusters. In this directory, another directory with the name <cluster-name>_TIMESTAMP contains your assets, and the cluster can be destroyed from there as shown below.
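Assuming the installer was run on macOS (hence the darwin directory above), the cluster can be destroyed from that backup location with the same command:

# Change into the saved assets for the cluster and destroy it
cd tectonic/tectonic-installer/darwin/clusters/<cluster-name>_TIMESTAMP
TERRAFORM_CONFIG=$(pwd)/.terraformrc terraform destroy --force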

Conclusion

This post explained how to create and manage Kubernetes clusters on AWS using the CoreOS Tectonic graphical installer. For more details, see Graphical Installer with AWS. If the installation does not succeed, see the helpful Troubleshooting tips. After the cluster is created, see the Tectonic tutorials to learn how to deploy, scale, version, and delete an application.

Future posts in this series will explain other ways of creating and running a Kubernetes cluster on AWS.

Arun

dcrawl – Web Crawler For Unique Domains

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/09/dcrawl-web-crawler-unique-domains/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed

dcrawl – Web Crawler For Unique Domains

dcrawl is a simple, but smart, multithreaded web crawler for randomly gathering huge lists of unique domain names.

How does dcrawl work?

dcrawl takes one site URL as input and detects all a href= links in the site’s body. Each found link is put into the queue. Successively, each queued link is crawled in the same way, branching out to more URLs found in links on each site’s body.

dcrawl Web Crawler Features

  • Branches out only to a predefined number of links found per hostname.

Read the rest of dcrawl – Web Crawler For Unique Domains now! Only available at Darknet.

Sci-Hub Faces $4,8 Million Piracy Damages and ISP Blocking

Post Syndicated from Ernesto original https://torrentfreak.com/sci-hub-faces-48-million-piracy-damages-and-isp-blocking-170905/

In June, a New York District Court handed down a default judgment against Sci-Hub.

The pirate site, operated by Alexandra Elbakyan, was ordered to pay $15 million in piracy damages to academic publisher Elsevier.

With the ink on this order barely dry, another publisher soon tagged on with a fresh complaint. The American Chemical Society (ACS), a leading source of academic publications in the field of chemistry, also accused Sci-Hub of mass copyright infringement.

Founded more than 140 years ago, the non-profit organization has around 157,000 members and researchers who publish tens of thousands of articles a year in its peer-reviewed journals. Because many of its works are available for free on Sci-Hub, ACS wants to be compensated.

Sci-Hub was made aware of the legal proceedings but did not appear in court. As a result, a default was entered against the site, and a few days ago ACS specified its demands, which include $4.8 million in piracy damages.

“Here, ACS seeks a judgment against Sci-Hub in the amount of $4,800,000—which is based on infringement of a representative sample of publications containing the ACS Copyrighted Works multiplied by the maximum statutory damages of $150,000 for each publication,” they write.

The publisher notes that the maximum statutory damages are only requested for 32 of its 9,000 registered works. This still adds up to a significant sum of money, of course, but that is needed as a deterrent, ACS claims.

“Sci-Hub’s unabashed flouting of U.S. Copyright laws merits a strong deterrent. This Court has awarded a copyright holder maximum statutory damages where the defendant’s actions were ‘clearly willful’ and maximum damages were necessary to ‘deter similar actors in the future’,” they write.

Although the deterrent effect may sound plausible in most cases, another $4.8 million in debt is unlikely to worry Sci-Hub’s owner, as she can’t pay it off anyway. However, there’s also a broad injunction on the table that may be more of a concern.

The requested injunction prohibits Sci-Hub’s owner from continuing her work on the site. It also bars a wide range of other service providers from assisting others to access it.

Specifically, it restrains “any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries, to cease facilitating access to any or all domain names and websites through which Defendant Sci-Hub engages in unlawful access to [ACS’s works].”

The above suggests that search engines may have to remove the site from their indexes while ISPs could be required to block their users’ access to the site as well, which goes quite far.

Since Sci-Hub is in default, ACS is likely to get what it wants. However, if the organization intends to enforce the order in full, it’s likely that some of these third-party services, including Internet providers, will have to spring into action.

While domain name registries are regularly ordered to suspend domains, search engine removals and ISP blocking are not common in the United States. It would, therefore, be no surprise if this case lingers a little while longer.

A copy of ACS’s proposed default judgment, obtained by TorrentFreak, is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

YouTube-MP3 Settles With RIAA, Site Will Shut Down

Post Syndicated from Ernesto original https://torrentfreak.com/youtube-mp3-settles-with-riaa-site-will-shut-down-170904/

With millions of visitors per day, YouTube-MP3.org is one of the most visited websites on the Internet.

The site allows its visitors to convert YouTube videos to MP3 files, which they can then listen to where and whenever they want. The music industry sees such “stream ripping” sites as a serious threat to its revenues, worse than traditional pirate sites.

In an attempt to do something about it, a coalition of record labels, represented by the RIAA, took YouTube-MP3 to court last year.

A complaint filed in a California federal court accused the site’s operator of various types of copyright infringement. In addition, the labels accused the site of circumventing YouTube’s copy protection mechanism, in violation of the DMCA.

“Through the promise of illicit delivery of free music, Defendants have attracted millions of users to the [YouTube-MP3] website, which in turn generates advertising revenues for Defendants,” the labels complained.

Today, a year later, both parties have settled their differences. While there haven’t been many updates in the court docket, a recent filing states that both parties have agreed to a settlement.

The details of the deal are not public, but YouTube-MP3 is willing to take all the blame. In a proposed final judgment, both parties ask the court to rule in favor of the labels on all counts of the complaint. In addition, the site’s owner Philip Matesanz agreed to pay a settlement amount.

On all counts

In addition to the order, a proposed injunction will prohibit the site’s operator from “knowingly designing, developing, offering, or operating any technology or service that allows or facilitates the practice commonly known as “streamripping,” or any other type of copyright infringement for that matter.

This injunction, which RIAA and YouTube-MP3 both agreed on, also states that the site’s domain name will be handed over to one of the record labels.

“Defendants are ordered to transfer the domain name www.youtube-mp3.org to the Plaintiff identified in, and in accordance with the terms of, the confidential Settlement Agreement among the parties,” it reads.

If the owner refuses to comply, the registrar will be ordered to sign over the domain name, which means that there’s no escaping.

While the court has yet to sign the proposed judgment and injunction (pdf), it is clear that YouTube-MP3 has thrown in the towel and will shut down. At the time of writing the site remains online, but this likely won’t be for long.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Torrent Sites Suffer DDoS Attacks and Other Trouble

Post Syndicated from Ernesto original https://torrentfreak.com/torrent-sites-suffer-ddos-attacks-and-other-trouble-170901/

It’s not uncommon for torrent sites to suffer downtime due to technical issues. That happens pretty much every day.

But when close to a dozen large sites go offline, people start to ask questions.

This is exactly what happened this week. As reported previously, The Pirate Bay was hard to reach earlier, after a surge of traffic and a subsequent DDoS attack overloaded its servers. And they were not alone.

TorrentFreak spoke to several torrent site admins who noticed an increase in suspicious traffic, which slowed down or toppled their sites, at least temporarily. While most have recovered, some sites remain offline today.

TorrentProject.se, one of the most used torrent search engines, has been down for nearly three days now. The site currently shows a “403 Forbidden” error message. Whether this is a harmless technical issue, the result of a DDoS attack, or worse, is unknown.

TorrentFreak reached out to the owner of the site but we have yet to hear back.

403 error

Another site that appears to be in trouble is WorldWideTorrents. This site, which was started after the KAT shutdown last year, is a home to many comic book fans. However, over the past few days the site has become unresponsive.

Based on WHOIS data, the site’s domain name has been suspended. The name servers were changed to “suspended-domain.com,” which means that it’s unlikely to be reinstated. WorldWideTorrents will reportedly return with a new domain but which one is currently unknown.

Popular uploaders on the site such as Nemesis43, meanwhile, are still active on other sites.

WorldWideTorrents Whois

Then there’s also Isohunt.to, which has been unresponsive for over a week. The search engine, which launched in 2013 less than two weeks after isoHunt.com shut down, has now vanished itself.

With no word from the operators, we can only speculate what happened. The site has seen a sharp decline in traffic over the past year, so it could be that they simply lost interest.

Isohunt.to is not responding

Those who now search for IsoHunt on Google are instead pointed to isohunts.to, which is a scam site advising users to download a “binary client,” which is little more than an ad.

The above shows that the torrent ecosystem remains vulnerable. DDoS attacks and domain issues are nothing new, but after the shutdown of KAT, Torrentz, Extratorrent, and other giants, the remaining sites have to carry a larger burden.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How to Configure an LDAPS Endpoint for Simple AD

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-configure-an-ldaps-endpoint-for-simple-ad/

Simple AD, which is powered by Samba  4, supports basic Active Directory (AD) authentication features such as users, groups, and the ability to join domains. Simple AD also includes an integrated Lightweight Directory Access Protocol (LDAP) server. LDAP is a standard application protocol for the access and management of directory information. You can use the BIND operation from Simple AD to authenticate LDAP client sessions. This makes LDAP a common choice for centralized authentication and authorization for services such as Secure Shell (SSH), client-based virtual private networks (VPNs), and many other applications. Authentication, the process of confirming the identity of a principal, typically involves the transmission of highly sensitive information such as user names and passwords. To protect this information in transit over untrusted networks, companies often require encryption as part of their information security strategy.

In this blog post, we show you how to configure an LDAPS (LDAP over SSL/TLS) encrypted endpoint for Simple AD so that you can extend Simple AD over untrusted networks. Our solution uses Elastic Load Balancing (ELB) to send decrypted LDAP traffic to HAProxy running on Amazon EC2, which then sends the traffic to Simple AD. ELB offers integrated certificate management, SSL/TLS termination, and the ability to use a scalable EC2 backend to process decrypted traffic. ELB also tightly integrates with Amazon Route 53, enabling you to use a custom domain for the LDAPS endpoint. The solution needs the intermediate HAProxy layer because ELB can direct traffic only to EC2 instances. To simplify testing and deployment, we have provided an AWS CloudFormation template to provision the ELB and HAProxy layers.

This post assumes that you have an understanding of concepts such as Amazon Virtual Private Cloud (VPC) and its components, including subnets, routing, Internet and network address translation (NAT) gateways, DNS, and security groups. You should also be familiar with launching EC2 instances and logging in to them with SSH. If needed, you should familiarize yourself with these concepts and review the solution overview and prerequisites in the next section before proceeding with the deployment.

Note: This solution is intended for use by clients requiring an LDAPS endpoint only. If your requirements extend beyond this, you should consider accessing the Simple AD servers directly or by using AWS Directory Service for Microsoft AD.

Solution overview

The following diagram and description illustrate the Simple AD LDAPS environment. The CloudFormation template creates the items designated by the bracket (internal ELB load balancer and two HAProxy nodes configured in an Auto Scaling group).

Diagram of the Simple AD LDAPS environment

Here is how the solution works, as shown in the preceding numbered diagram:

  1. The LDAP client sends an LDAPS request to ELB on TCP port 636.
  2. ELB terminates the SSL/TLS session and decrypts the traffic using a certificate. ELB sends the decrypted LDAP traffic to the EC2 instances running HAProxy on TCP port 389.
  3. The HAProxy servers forward the LDAP request to the Simple AD servers listening on TCP port 389 in a fixed Auto Scaling group configuration.
  4. The Simple AD servers send an LDAP response through the HAProxy layer to ELB. ELB encrypts the response and sends it to the client.

Note: Amazon VPC prevents a third party from intercepting traffic within the VPC. Because of this, the VPC protects the decrypted traffic between ELB and HAProxy and between HAProxy and Simple AD. The ELB encryption provides an additional layer of security for client connections and protects traffic coming from hosts outside the VPC.
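The HAProxy configuration is generated for you by the CloudFormation template, but conceptually it is a plain TCP proxy. The following excerpt is only a sketch of what such a configuration might look like; the backend IP addresses are placeholders for your two Simple AD DNS addresses:

# haproxy.cfg (excerpt) - forward decrypted LDAP traffic to Simple AD
frontend ldap_in
    bind *:389
    mode tcp
    default_backend simple_ad

backend simple_ad
    mode tcp
    balance roundrobin
    server ad1 10.0.10.10:389 check    # primary Simple AD IP (placeholder)
    server ad2 10.0.11.10:389 check    # secondary Simple AD IP (placeholder)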

Prerequisites

  1. Our approach requires an Amazon VPC with two public and two private subnets. The previous diagram illustrates the environment’s VPC requirements. If you do not yet have these components in place, follow these guidelines for setting up a sample environment:
    1. Identify a region that supports Simple AD, ELB, and NAT gateways. The NAT gateways are used with an Internet gateway to allow the HAProxy instances to access the internet to perform their required configuration. You also need to identify the two Availability Zones in that region for use by Simple AD. You will supply these Availability Zones as parameters to the CloudFormation template later in this process.
    2. Create or choose an Amazon VPC in the region you chose. In order to use Route 53 to resolve the LDAPS endpoint, make sure you enable DNS support within your VPC. Create an Internet gateway and attach it to the VPC, which will be used by the NAT gateways to access the internet.
    3. Create a route table with a default route to the Internet gateway. Create two NAT gateways, one per Availability Zone in your public subnets to provide additional resiliency across the Availability Zones. Together, the routing table, the NAT gateways, and the Internet gateway enable the HAProxy instances to access the internet.
    4. Create two private routing tables, one per Availability Zone. Create two private subnets, one per Availability Zone. The dual routing tables and subnets allow for a higher level of redundancy. Add each subnet to the routing table in the same Availability Zone. Add a default route in each routing table to the NAT gateway in the same Availability Zone. The Simple AD servers use subnets that you create.
    5. The LDAP service requires a DNS domain that resolves within your VPC and from your LDAP clients. If you do not have an existing DNS domain, follow the steps to create a private hosted zone and associate it with your VPC. To avoid encryption protocol errors, you must ensure that the DNS domain name is consistent across your Route 53 zone and in the SSL/TLS certificate (see Step 2 in the “Solution deployment” section).
  2. Make sure you have completed the Simple AD Prerequisites.
  3. We will use a self-signed certificate for ELB to perform SSL/TLS decryption. You can use a certificate issued by your preferred certificate authority or a certificate issued by AWS Certificate Manager (ACM).
    Note: To prevent unauthorized connections directly to your Simple AD servers, you can modify the Simple AD security group on port 389 to block traffic from locations outside of the Simple AD VPC. You can find the security group in the EC2 console by creating a search filter for your Simple AD directory ID. It is also important to allow the Simple AD servers to communicate with each other as shown on Simple AD Prerequisites.

Solution deployment

This solution includes five main parts:

  1. Create a Simple AD directory.
  2. Create a certificate.
  3. Create the ELB and HAProxy layers by using the supplied CloudFormation template.
  4. Create a Route 53 record.
  5. Test LDAPS access using an Amazon Linux client.

1. Create a Simple AD directory

With the prerequisites completed, you will create a Simple AD directory in your private VPC subnets:

  1. In the Directory Service console navigation pane, choose Directories and then choose Set up directory.
  2. Choose Simple AD.
    Screenshot of choosing "Simple AD"
  3. Provide the following information:
    • Directory DNS – The fully qualified domain name (FQDN) of the directory, such as corp.example.com. You will use the FQDN as part of the testing procedure.
    • NetBIOS name – The short name for the directory, such as CORP.
    • Administrator password – The password for the directory administrator. The directory creation process creates an administrator account with the user name Administrator and this password. Do not lose this password because it is nonrecoverable. You also need this password for testing LDAPS access in a later step.
    • Description – An optional description for the directory.
    • Directory Size – The size of the directory.
      Screenshot of the directory details to provide
  4. Provide the following information in the VPC Details section, and then choose Next Step:
    • VPC – Specify the VPC in which to install the directory.
    • Subnets – Choose two private subnets for the directory servers. The two subnets must be in different Availability Zones. Make a note of the VPC and subnet IDs for use as CloudFormation input parameters. In the following example, the Availability Zones are us-east-1a and us-east-1c.
      Screenshot of the VPC details to provide
  5. Review the directory information and make any necessary changes. When the information is correct, choose Create Simple AD.

It takes several minutes to create the directory. From the AWS Directory Service console, refresh the screen periodically and wait until the directory Status value changes to Active before continuing. Choose your Simple AD directory and note the two IP addresses in the DNS address section. You will enter them when you run the CloudFormation template later.
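If you prefer the AWS CLI, the same details are available from the describe-directories call; the Stage field corresponds to the Status value shown in the console:

# Show each directory's name, creation stage, and DNS IP addresses
aws ds describe-directories \
  --query 'DirectoryDescriptions[].{Name:Name,Stage:Stage,DnsIpAddrs:DnsIpAddrs}'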

Note: Full administration of your Simple AD implementation is out of scope for this blog post. See the documentation to add users, groups, or instances to your directory. Also see the previous blog post, How to Manage Identities in Simple AD Directories.

2. Create a certificate

In the previous step, you created the Simple AD directory. Next, you will generate a self-signed SSL/TLS certificate using OpenSSL. You will use the certificate with ELB to secure the LDAPS endpoint. OpenSSL is a standard, open source library that supports a wide range of cryptographic functions, including the creation and signing of x509 certificates. You then import the certificate into ACM that is integrated with ELB.

  1. You must have a system with OpenSSL installed to complete this step. If you do not have OpenSSL, you can install it on Amazon Linux by running the command, sudo yum install openssl. If you do not have access to an Amazon Linux instance you can create one with SSH access enabled to proceed with this step. Run the command, openssl version, at the command line to see if you already have OpenSSL installed.
    $ openssl version
    OpenSSL 1.0.1k-fips 8 Jan 2015

  2. Create a private key using the openssl genrsa command.
    $ openssl genrsa 2048 > privatekey.pem
    Generating RSA private key, 2048 bit long modulus
    ......................................................................................................................................................................+++
    ..........................+++
    e is 65537 (0x10001)

  3. Generate a certificate signing request (CSR) using the openssl req command. Provide the requested information for each field. The Common Name is the FQDN for your LDAPS endpoint (for example, ldap.corp.example.com). The Common Name must use the domain name you will later register in Route 53. You will encounter certificate errors if the names do not match.
    $ openssl req -new -key privatekey.pem -out server.csr
    You are about to be asked to enter information that will be incorporated into your certificate request.

  4. Use the openssl x509 command to sign the certificate. The following example uses the private key from the previous step (privatekey.pem) and the signing request (server.csr) to create a public certificate named server.crt that is valid for 365 days. This certificate must be updated within 365 days to avoid disruption of LDAPS functionality.
    $ openssl x509 -req -sha256 -days 365 -in server.csr -signkey privatekey.pem -out server.crt
    Signature ok
    subject=/C=XX/L=Default City/O=Default Company Ltd/CN=ldap.corp.example.com
    Getting Private key

  5. You should see three files: privatekey.pem, server.crt, and server.csr.
    $ ls
    privatekey.pem server.crt server.csr

    Restrict access to the private key.

    $ chmod 600 privatekey.pem

    Keep the private key and public certificate for later use. You can discard the signing request because you are using a self-signed certificate and not using a Certificate Authority. Always store the private key in a secure location and avoid adding it to your source code.

  6. In the ACM console, choose Import a certificate.
  7. Using your favorite Linux text editor, paste the contents of your server.crt file in the Certificate body box.
  8. Using your favorite Linux text editor, paste the contents of your privatekey.pem file in the Certificate private key box. For a self-signed certificate, you can leave the Certificate chain box blank.
  9. Choose Review and import. Confirm the information and choose Import.
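Alternatively, the certificate and private key can be imported with the AWS CLI instead of the console. This sketch assumes it is run from the directory containing the files created above; note the certificate ARN in the response, because the CloudFormation template asks for it later:

# Import the self-signed certificate and private key into ACM
aws acm import-certificate \
  --certificate fileb://server.crt \
  --private-key fileb://privatekey.pem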

3. Create the ELB and HAProxy layers by using the supplied CloudFormation template

Now that you have created your Simple AD directory and SSL/TLS certificate, you are ready to use the CloudFormation template to create the ELB and HAProxy layers.

  1. Load the supplied CloudFormation template to deploy an internal ELB and two HAProxy EC2 instances into a fixed Auto Scaling group. After you load the template, provide the following input parameters. Note: You can find the parameters relating to your Simple AD from the directory details page by choosing your Simple AD in the Directory Service console.
    • HAProxyInstanceSize – The EC2 instance size for the HAProxy servers. The default size is t2.micro and can scale up for large Simple AD environments.
    • MyKeyPair – The SSH key pair for the EC2 instances. If you do not have an existing key pair, you must create one.
    • VPCId – The target VPC for this solution. It must be the VPC where you deployed Simple AD; this information is available on your Simple AD directory details page.
    • SubnetId1 – The Simple AD primary subnet. This information is available on your Simple AD directory details page.
    • SubnetId2 – The Simple AD secondary subnet. This information is available on your Simple AD directory details page.
    • MyTrustedNetwork – The trusted network Classless Inter-Domain Routing (CIDR) range allowed to connect to the LDAPS endpoint. For example, use the VPC CIDR to allow clients in the VPC to connect.
    • SimpleADPriIP – The primary Simple AD server IP. This information is available on your Simple AD directory details page.
    • SimpleADSecIP – The secondary Simple AD server IP. This information is available on your Simple AD directory details page.
    • LDAPSCertificateARN – The Amazon Resource Name (ARN) for the SSL certificate. This information is available in the ACM console.
  2. Enter the input parameters and choose Next.
  3. On the Options page, accept the defaults and choose Next.
  4. On the Review page, confirm the details and choose Create. The stack will be created in approximately 5 minutes.
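If you would rather script the deployment than use the console, the stack can also be launched with the AWS CLI. The stack name, template file name, and parameter values below are placeholders for the values from your own environment; depending on the resources the template creates, you may also need to add --capabilities CAPABILITY_IAM:

# Launch the ELB and HAProxy stack from the supplied template
aws cloudformation create-stack \
  --stack-name simplead-ldaps \
  --template-body file://simplead-ldaps.template \
  --parameters \
    ParameterKey=VPCId,ParameterValue=<vpc-id> \
    ParameterKey=SubnetId1,ParameterValue=<subnet-id-1> \
    ParameterKey=SubnetId2,ParameterValue=<subnet-id-2> \
    ParameterKey=MyKeyPair,ParameterValue=<key-pair-name> \
    ParameterKey=MyTrustedNetwork,ParameterValue=<trusted-cidr> \
    ParameterKey=SimpleADPriIP,ParameterValue=<primary-simple-ad-ip> \
    ParameterKey=SimpleADSecIP,ParameterValue=<secondary-simple-ad-ip> \
    ParameterKey=LDAPSCertificateARN,ParameterValue=<certificate-arn>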

4. Create a Route 53 record

The next step is to create a Route 53 record in your private hosted zone so that clients can resolve your LDAPS endpoint.

  1. If you do not have an existing DNS domain for use with LDAP, create a private hosted zone and associate it with your VPC. The hosted zone name should be consistent with your Simple AD (for example, corp.example.com).
  2. When the CloudFormation stack is in CREATE_COMPLETE status, locate the value of the LDAPSURL on the Outputs tab of the stack. Copy this value for use in the next step.
  3. On the Route 53 console, choose Hosted Zones and then choose the zone you used for the Common Name box for your self-signed certificate. Choose Create Record Set and enter the following information:
    1. Name – The label of the record (such as ldap).
    2. Type – Leave as A – IPv4 address.
    3. Alias – Choose Yes.
    4. Alias Target – Paste the value of the LDAPSURL on the Outputs tab of the stack.
  4. Leave the defaults for Routing Policy and Evaluate Target Health, and choose Create.
    Screenshot of finishing the creation of the Route 53 record
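The same record can be created with the AWS CLI. This is a sketch only: the change batch file name is arbitrary, the hosted zone IDs and DNS name are placeholders, and the load balancer's own hosted zone ID can be retrieved from aws elb describe-load-balancers (the CanonicalHostedZoneNameID field):

# ldaps-alias.json - alias the ldap record to the internal load balancer
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "ldap.corp.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<elb-canonical-hosted-zone-id>",
        "DNSName": "<LDAPSURL-value>",
        "EvaluateTargetHealth": false
      }
    }
  }]
}

# Apply the change to your private hosted zone
aws route53 change-resource-record-sets \
  --hosted-zone-id <private-hosted-zone-id> \
  --change-batch file://ldaps-alias.json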

5. Test LDAPS access using an Amazon Linux client

At this point, you have configured your LDAPS endpoint and now you can test it from an Amazon Linux client.

  1. Create an Amazon Linux instance with SSH access enabled to test the solution. Launch the instance into one of the public subnets in your VPC. Make sure the IP assigned to the instance is in the trusted IP range you specified in the CloudFormation parameter MyTrustedNetwork in Step 3.b.
  2. SSH into the instance and complete the following steps to verify access.
    1. Install the openldap-clients package and any required dependencies:
      sudo yum install -y openldap-clients
    2. Add the server.crt file to the /etc/openldap/certs/ directory so that the LDAPS client will trust your SSL/TLS certificate. You can copy the file using Secure Copy (SCP) or create it using a text editor.
    3. Edit the /etc/openldap/ldap.conf file and define the BASE, URI, and TLS_CACERT options.
      • The value for BASE should match the configuration of the Simple AD directory name.
      • The value for URI should match your DNS alias.
      • The value for TLS_CACERT is the path to your public certificate.

Here is an example of the contents of the file.

BASE dc=corp,dc=example,dc=com
URI ldaps://ldap.corp.example.com
TLS_CACERT /etc/openldap/certs/server.crt

To test the solution, query the directory through the LDAPS endpoint, as shown in the following command. Replace corp.example.com with your domain name and use the Administrator password that you configured with the Simple AD directory.

$ ldapsearch -D "Administrator@corp.example.com" -W sAMAccountName=Administrator

You should see a response similar to the following response, which provides the directory information in LDAP Data Interchange Format (LDIF) for the administrator distinguished name (DN) from your Simple AD LDAP server.

# extended LDIF
#
# LDAPv3
# base <dc=corp,dc=example,dc=com> (default) with scope subtree
# filter: sAMAccountName=Administrator
# requesting: ALL
#

# Administrator, Users, corp.example.com
dn: CN=Administrator,CN=Users,DC=corp,DC=example,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
description: Built-in account for administering the computer/domain
instanceType: 4
whenCreated: 20170721123204.0Z
uSNCreated: 3223
name: Administrator
objectGUID:: l3h0HIiKO0a/ShL4yVK/vw==
userAccountControl: 512
…

You can now use the LDAPS endpoint for directory operations and authentication within your environment. If you would like to learn more about how to interact with your LDAPS endpoint within a Linux environment, the openldap-clients utilities (such as ldapsearch, ldapmodify, and ldapwhoami) and their man pages are a good place to start.

Troubleshooting

If you receive an error such as the following error when issuing the ldapsearch command, there are a few things you can do to help identify issues.

ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
  • You might be able to obtain additional error details by adding the -d1 debug flag to the ldapsearch command in the previous section.
    $ ldapsearch -D "Administrator@corp.example.com" -W sAMAccountName=Administrator -d1

  • Verify that the parameters in ldap.conf match your configured LDAPS URI endpoint and that all parameters can be resolved by DNS. You can use the following dig command, substituting your configured endpoint DNS name.
    $ dig ldap.corp.example.com

  • Confirm that the client instance from which you are connecting is in the CIDR range of the CloudFormation parameter, MyTrustedNetwork.
  • Confirm that the path to your public SSL/TLS certificate configured in ldap.conf as TLS_CACERT is correct. You configured this in Step 5.b.3. You can check your SSL/TLS connection with the following command, substituting your configured endpoint DNS name for the string after -connect.
    $ echo -n | openssl s_client -connect ldap.corp.example.com:636

  • Verify that your HAProxy instances have the status InService in the EC2 console: Choose Load Balancers under Load Balancing in the navigation pane, highlight your LDAPS load balancer, and then choose the Instances tab.
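The same check can be made from the AWS CLI; the load balancer name below is a placeholder for the name created by the CloudFormation stack:

# List the health state of the registered HAProxy instances
aws elb describe-instance-health --load-balancer-name <ldaps-load-balancer-name>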

Conclusion

You can use ELB and HAProxy to provide an LDAPS endpoint for Simple AD and transport sensitive authentication information over untrusted networks. You can explore using LDAPS to authenticate SSH users or integrate with other software solutions that support LDAP authentication. This solution’s CloudFormation template is available on GitHub.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the Directory Service forum.

– Cameron and Jeff

Florida Court Orders ‘Pirate’ Site KissAsian to Pay 1.8M in Damages

Post Syndicated from Ernesto original https://torrentfreak.com/florida-court-orders-pirate-site-kissasian-to-pay-1-8m-in-damages-170825/

ABS-CBN, the largest media and entertainment company in the Philippines, continues its legal campaign against pirate sites in the US.

The company has singled out dozens of streaming sites that offer access to ‘Pinoy’ content without permission, both in the US and abroad.

This week a federal court in Florida signed a default judgment against KissAsian, one of the biggest targets thus far. Since the defendants failed to show up it was a relatively easy win.

The lawsuit in question was filed in February and accused KissAsian of both copyright and trademark infringement. According to ABS-CBN, the site was using its trademarks and copyrighted content to draw visitors and generate profit.

“ABS-CBN is suffering irreparable and indivisible injury and has suffered substantial damages as a result of Defendant’s unauthorized and unlawful use of the ABS-CBN Marks and Copyrighted Works,” the complaint read.

When the operators of the pirate site failed to respond to the allegations, the media company asked for a default judgment. United States District Judge William Dimitrouleas has now approved the company’s request, granting it $1 million in trademark damages, and another $810,000 for copyright infringement.

The order (pdf)

In addition, the judge granted a request to hand over the KissAsian.com domain name to ABS-CBN, which hasn’t happened thus far.

While the order is a clear win for the Philippine media conglomerate, it might be hard to recoup the damages from the unknown operators of the site. In fact, it doesn’t appear that the site is going to cease its activities anytime soon, as the order requires.

Soon after KissAsian.com was put at risk, the site’s operators simply relocated to a new domain name: KissAsian.ch.

“We are transferring domain, new domain is kissasian.ch, and kissasian beta mirror is not working temporarily, it will be done in next 5-10mins. Sorry for the inconvenience!” a statement on Facebook reads.

And so it continues.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.