Tag Archives: ip address

Cloudflare Fights RIAA’s Piracy Blocking Demands in Court

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-fights-riaas-piracy-blocking-demands-in-court-160823/

Representing various major record labels, the RIAA filed a lawsuit against MP3Skull last year.

With millions of visitors per month the MP3 download site had been one of the prime sources of pirated music for a long time, frustrating many music industry insiders.

Although the site was facing a claim of millions of dollars in damages, the owners failed to respond in court. This prompted the RIAA to file for a default judgment, with success.

Earlier this year a Florida federal court awarded the labels more than $22 million in damages. In addition, it issued a permanent injunction which allowed the RIAA to take over the site’s domain names.

However, despite the million dollar verdict, MP3Skull still continues to operate today. The site actually never stopped and simply added several new domain names to its arsenal, with mp3skull.vg as the most recent.

MP3Skull’s most recent home


The RIAA is not happy with MP3Skull’s contempt of court and has asked Cloudflare to help out. As a CDN provider, Cloudflare relays traffic of millions of websites through its network, including many pirate sites.

According to the RIAA, Cloudflare should stop offering its services to any MP3Skull websites, but the CDN provider has thus far refused to do so without a proper court order.

To resolve this difference of opinion, the RIAA has asked the Florida federal court for a “clarification” of the existing injunction, so it applies to Cloudflare as well.

In practice, this would mean that Cloudflare has to block all currently active domains, as well as any future domains with the keyword “MP3Skull,” which are tied to the site’s known IP-addresses.

“Cloudflare should be required to cease its provision of services to any of the Active MP3Skull Domains, as well as any website at either of the site’s known IP addresses that includes ‘MP3Skull’ in its name,” RIAA argued.

RIAA’s request


However, Cloudflare believes that this goes too far. While the company doesn’t object to disconnecting existing accounts if ordered to by a court, it opposes a requirement to block sites based on a keyword and IP-address combination.

The proposed injunction goes well beyond the scope of the DMCA, the CDN provider informs the court in an opposition brief this week (pdf).

“…Plaintiffs’ proposed injunction would force Cloudflare —which provides services to millions of websites— to investigate open-ended domain letter-string and IP address combinations to comply with the injunction.

“Cloudflare believes that this Court should hold the Plaintiffs accountable for following clear rules of the road,” Cloudflare adds.

The company suggests that the court could require it to terminate specific accounts that are found to be infringing, but doesn’t want to become the RIAA’s copyright cop.

“What Cloudflare cannot do, and which the Court should not require, is to serve as a deputy for the Plaintiffs and their RIAA trade association in investigating and identifying further targets of an injunction.”

To outsiders the difference between the RIAA’s request and Cloudflare’s suggestion may seem small, but the company draws a clear line to avoid having to proactively scan for pirate sites. This, it feels, could turn into a slippery slope toward censorship.

This isn’t the first time that the RIAA has requested a keyword ban. In a similar case last year Cloudflare was ordered to terminate any accounts with the term “grooveshark” in them. However, in that case the RIAA owned the trademark, which makes it substantially different as it didn’t involve the DMCA.

The EFF applauds Cloudflare’s actions and hopes the court will properly limit the scope of these and other blocking efforts.

“The limits on court orders against intermediaries are vital safeguards against censorship, especially where the censorship is done on behalf of a well-financed party,” EFF’s Mitch Stoltz writes.

“That’s why it’s important for courts to uphold those limits even in cases where copyright or trademark infringement seems obvious,” he adds.

The Florida court is expected to rule on the RIAA’s injunction demands in the days to come, a decision that will significantly impact future blocking requests.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Powering Secondary DNS in a VPC using AWS Lambda and Amazon Route 53 Private Hosted Zones

Post Syndicated from Bryan Liston original https://aws.amazon.com/blogs/compute/powering-secondary-dns-in-a-vpc-using-aws-lambda-and-amazon-route-53-private-hosted-zones/

Mark Statham, Senior Cloud Architect

When you implement hybrid connectivity between existing on-premises environments and AWS, there are a number of approaches to provide DNS resolution of both on-premises and VPC resources. In a hybrid scenario, you likely require resolution of on-premises resources, AWS services deployed in VPCs, AWS service endpoints, and your own resources created in your VPCs.

You can leverage Amazon Route 53 private hosted zones to provide private DNS zones for your VPC resources and dynamically register resources, as shown in a previous post, Building a Dynamic DNS for Route 53 using CloudWatch Events and Lambda.

Ultimately, this complex DNS resolution scenario requires that you deploy and manage additional DNS infrastructure, running on EC2 resources, in your VPC to handle DNS requests from either VPCs or on-premises networks. While this is a familiar approach, it adds cost and operational complexity where a solution built on AWS managed services could be used instead.

In this post, we explore how you can leverage Route 53 private hosted zones with AWS Lambda and Amazon CloudWatch Events to mirror on-premises DNS zones, which can then be natively resolved from within your VPCs without the need for additional DNS forwarding resources.

Route 53 private hosted zones

Route 53 offers the convenience of domain name services without having to build a globally distributed highly reliable DNS infrastructure. It allows instances within your VPC to resolve the names of resources that run within your AWS environment. It also lets clients on the Internet resolve names of your public-facing resources. This is accomplished by querying resource record sets that reside within a Route 53 public or private hosted zone.

A private hosted zone is basically a container that holds information about how you want to route traffic for a domain and its subdomains within one or more VPCs and is only resolvable from the VPCs you specify; whereas a public hosted zone is a container that holds information about how you want to route traffic from the Internet.

Route 53 has a programmable API that can be used to automate the creation and removal of record sets, which we’re going to leverage later in this post.
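For instance, a single record set can be created or updated with one CLI call. This is an illustrative invocation; the hosted zone ID, record name, TTL, and value are placeholders, not values from this example’s setup:

```
aws route53 change-resource-record-sets \
--hosted-zone-id Z148QEXAMPLE8V \
--change-batch '{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {"Name": "www.mydomain.com", "Type": "A", "TTL": 300, "ResourceRecords": [{"Value": "10.0.0.5"}]}}]}'
```

The UPSERT action creates the record if it doesn’t exist and updates it if it does, which is what makes the API convenient for automated mirroring.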

Using Lambda with VPC support and scheduled events

AWS Lambda is a compute service where you can upload your code and the service runs the code on your behalf using AWS infrastructure. You can create a Lambda function and execute it on a regular schedule. You can specify a fixed rate (for example, execute a Lambda function every hour or 15 minutes), or you can specify a cron expression. This functionality is underpinned by CloudWatch Events.

Lambda runs your function code securely within a VPC by default. However, to enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect securely to other resources within your private VPC or reach back into your own network via AWS Direct Connect or VPN.

Each ENI is assigned a private IP address from the IP address range within the subnets that you specify, but is not assigned any public IP addresses. You cannot use an Internet gateway attached to your VPC, as that requires the ENI to have public IP addresses. Therefore, if your Lambda function requires Internet access, for example to access AWS APIs, you can use the Amazon VPC NAT gateway. Alternatively, you can leverage a proxy server to handle HTTPS calls, such as those used by the AWS SDK or CLI.

Building an example system

When you combine the power of Route 53 private hosted zones and Lambda, you can create a system that closely mimics the behavior of a stealth DNS to provide resolution of on-premises domains via VPC DNS.

For example, it is possible to schedule a Lambda function that executes every 15 minutes to perform a zone transfer from an on-premises DNS server, using a full zone transfer query (AXFR). The function can check the retrieved zone for differences from a previous version. Changes can then be populated into a Route 53 private hosted zone, which is only resolvable from within your VPCs, effectively mirroring the on-premises master to Route 53.

This allows resources deployed in your VPC to use VPC DNS alone to resolve on-premises, VPC, and Internet resource records, without the need for any forwarding infrastructure to on-premises DNS.

The following example is based on python code running as a Lambda function, invoked using CloudWatch Events with constant text to provide customizable parameters to support the mirroring of multiple zones for both forward and reverse domains.

Prerequisites for the example

Before you get started, make sure you have all the prerequisites in place including installing the AWS CLI and creating a VPC.

  • Region

    Check that the region where your VPC is deployed has the Lambda and CloudWatch Events services available.

  • AWS Command Line Interface (AWS CLI)

    This example makes use of the AWS CLI; however, all actions can be performed via the AWS console as well. Make sure you have the latest version installed, which provides support for creating Lambda functions in a VPC and the required IAM permissions to create resources required. For more information, see Getting Set Up with the AWS Command Line Interface.

  • VPC

    For this example, create or use a VPC configured with at least two private subnets in different Availability Zones and connectivity to the source DNS server. If you are building a new VPC, see Scenario 4: VPC with a Private Subnet Only and Hardware VPN Access.

    Ensure that the VPC has the DNS resolution and DNS hostnames options set to yes, and that you have both connectivity to your source DNS server and the ability to access the AWS APIs. You can create an AWS managed NAT gateway to provide Internet access to AWS APIs or as an alternative leverage a proxy server.

    You may wish to consider creating subnets specifically for your Lambda function, allowing you to restrict the IP address ranges that need access to the source DNS server and configure network access controls accordingly.

    After the subnets are created, take note of them as you’ll need them later to set up the Lambda function: they are in the format subnet-ab12cd34. You also need a security group to assign to the Lambda function; this can be the default security group for the VPC or one you create with limited outbound access to your source DNS: the format is sg-ab12cd34.

  • DNS server

    You need to make sure that you modify DNS zone transfer settings so that your DNS server accepts AXFR queries from the Lambda function. Also, ensure that security groups or firewall policies allow connection via TCP port 53 from the VPC subnet IP ranges created above.
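As an illustration of those transfer settings, a BIND-based source server might be configured as follows (this assumes BIND; the zone name and subnet CIDRs are placeholders that should match your VPC subnets):

```
zone "mydomain.com" {
    type master;
    file "db.mydomain.com";
    // Allow AXFR only from the VPC subnets used by the Lambda function
    allow-transfer { 10.0.1.0/24; 10.0.2.0/24; };
};
```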

Setting up the example Lambda function

Before you get started, it’s important to understand how the Lambda function works and interacts with the other AWS services and your network resources:

  1. The Lambda function is invoked by CloudWatch Events and configured based on a JSON string passed to the function. This sets a number of parameters, including the DNS domain, source DNS server, and Route 53 zone ID. This allows a single Lambda function to be reused for multiple zones.
  2. A new ENI is created in your VPC subnets and attached to the Lambda function; this allows your function to access your internal network resources based on the security group that you defined.
  3. The Lambda function then transfers the source DNS zone from the IP specified in the JSON parameters. You need to ensure that your DNS server is configured to accept full zone transfer (AXFR) queries from the function, which happen over TCP port 53.
  4. The Route 53 DNS zone is retrieved via API.
  5. The two zone files are compared; the resulting differences are returned as a set of actions to be performed against Route 53.
  6. Updates to the Route 53 zone are made via API and, finally, the SOA is updated to match the source version.
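The comparison in step 5 can be sketched in Python. This is an illustrative model, not the repo’s actual code: the record representation ((name, type) mapped to a tuple of values) and the change format are simplified stand-ins for what the Route 53 ChangeResourceRecordSets API expects.

```python
def diff_zones(source_records, mirror_records):
    """Compare the source zone against the Route 53 mirror and return
    the list of change actions needed to bring the mirror up to date."""
    changes = []
    # Anything new or changed at the source becomes an UPSERT.
    for key, values in source_records.items():
        if mirror_records.get(key) != values:
            changes.append({"Action": "UPSERT", "Record": key, "Values": values})
    # Anything no longer present at the source is deleted from the mirror.
    for key, values in mirror_records.items():
        if key not in source_records:
            changes.append({"Action": "DELETE", "Record": key, "Values": values})
    return changes

# Toy data: the source zone changed www and added db; old was removed.
source = {("www", "A"): ("10.0.0.5",), ("db", "A"): ("10.0.0.9",)}
mirror = {("www", "A"): ("10.0.0.4",), ("old", "A"): ("10.0.0.1",)}
for change in diff_zones(source, mirror):
    print(change)
```

Batching the resulting actions into a single API call keeps the zone update atomic, which is why the function applies all differences before touching the SOA record in step 6.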

You’re now ready to set up the example using the following instructions.

Step 1 – Create a Route 53 hosted zone

Before you create the Lambda function, there needs to be a target Route 53 hosted zone to mirror the DNS zone records into. This can either be a public or private zone; however, for the purposes of this example, you will create a private hosted zone that only responds to queries from the VPC you specify.

To create a Route 53 private hosted zone associated with your VPC, provide the region and VPC ID as part of the following command:

aws route53 create-hosted-zone \
--name <domainname> \
--vpc VPCRegion=<region>,VPCId=<vpc-aa11bb22> \
--caller-reference mirror-dns-lambda \
--hosted-zone-config Comment="My DNS Domain"

Save the HostedZone Id returned, since you will need it for future steps.

Step 2 – Create an IAM role for the Lambda function

In this step, you use the AWS CLI to create the Identity and Access Management (IAM) role that the Lambda function assumes when the function is invoked. You need to create an IAM policy with the required permissions and then attach this policy to the role.

Download the mirror-dns-policy.json and mirror-dns-trust.json files from the aws-lambda-ddns-function AWS Labs GitHub repo.


The policy includes EC2 permissions to create and manage ENIs required for the Lambda function to access your VPC, and Route 53 permissions to list and create resource records. The policy also allows the function to create log groups and log events as per standard Lambda functions.

    {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        }, {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterface",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DeleteNetworkInterface"
            ],
            "Resource": "*"
        }, {
            "Sid": "Manage Route 53 records",
            "Effect": "Allow",
            "Action": [
                "route53:GetHostedZone",
                "route53:ListResourceRecordSets",
                "route53:ChangeResourceRecordSets"
            ],
            "Resource": ["*"]
        }]
    }
To restrict the Lambda function access, you can control the scope of changes to Route 53 by specifying the hosted zones that are being managed in the format "arn:aws:route53:::hostedzone/Z148QEXAMPLE8V". This policy can be updated later if additional hosted zone IDs are added.


The mirror-dns-trust.json file contains the trust policy that grants the Lambda service permission to assume the role; this is standard for creating Lambda functions.

    {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }]
    }

Create IAM entities

The next step is to create the following IAM entities for the Lambda function:

  • IAM policy

    Create the IAM policy using the policy document in the mirror-dns-policy.json file, replacing <LOCAL PATH> with the local path to the file. The output of the create-policy command includes the Amazon Resource Name (ARN). Save the ARN, as you need it for future steps.

    aws iam create-policy \
    --policy-name mirror-dns-lambda-policy \
    --policy-document file://<LOCAL PATH>/mirror-dns-policy.json

  • IAM role

    Create the IAM role using the trust policy in the mirror-dns-trust.json file, replacing <LOCAL PATH> with the local path to the file. The output of the create-role command includes the ARN associated with the role that you created. Save this ARN, as you need it when you create the Lambda function in the next section.

    aws iam create-role \
    --role-name mirror-dns-lambda-role \
    --assume-role-policy-document file://<LOCAL PATH>/mirror-dns-trust.json

    Attach the policy to the role. Use the ARN returned when you created the IAM policy for the
    --policy-arn input parameter.

    aws iam attach-role-policy \
    --role-name mirror-dns-lambda-role \
    --policy-arn <enter-your-policy-arn-here>

Step 3 – Create the Lambda function

The Lambda function uses modules included in the Python 2.7 Standard Library and the AWS SDK for Python module (boto3), which is preinstalled as part of the Lambda service. Additionally, the function uses the dnspython module, which provides DNS handling functions, and an externalized lookup function.

The additional libraries and functions require that we create a deployment package for this example as follows:

  1. Create a new directory for the Lambda function and download the Python scripts lambda_function.py and lookup_rdtype.py from the aws-lambda-ddns-function AWS Labs GitHub repo. Alternatively, clone the repo locally.
  2. Install the additional dnspython module locally using the pip command. This creates a copy of the required module local to the function.

    pip install dnspython -t .

  3. Update the lambda_function.py to specify proxy server configuration, if required.

  4. Create a Lambda deployment package using the following command:

    zip -rq mirror-dns-lambda.zip lambda_function.py \
    lookup_rdtype.py dns*

Then, use the AWS CLI to create the Lambda function and upload the deployment package by executing the following command. Note that you need to update the command to use the ARN of the IAM role that you created earlier, as well as the local path to the Lambda deployment file containing the Python code for the function.

aws lambda create-function --function-name mirror-dns-lambda \
--runtime python2.7 \
--role <enter-your-role-arn-here> \
--handler lambda_function.lambda_handler \
--timeout 60 \
--vpc-config SubnetIds=comma-separated-vpc-subnet-ids,SecurityGroupIds=comma-separated-security-group-ids \
--memory-size 128 \
--description "DNS Mirror Function" \
--zip-file fileb://<LOCAL PATH>/mirror-dns-lambda.zip

The output of the command returns the FunctionArn of the newly-created function. Save this ARN, as you need it in the next section.

Configure a test event in order to validate that your Lambda function works; it should be in JSON format, similar to the following. All keys are required, as are values for Domain, MasterDns, and ZoneId.

    {
        "Domain": "mydomain.com",
        "MasterDns": "",
        "ZoneId": "AA11BB22CC33DD",
        "IgnoreTTL": "False",
        "ZoneSerial": ""
    }

Invoke the Lambda function to test that everything is working; after the function has been invoked, check the file named output to see if the function has worked (you should see a 200 return code). Alternatively, you can test in the AWS console, using the test event to see the log output.

aws lambda invoke \
--function-name mirror-dns-lambda \
--payload fileb://event.json output

Congratulations, you’ve now created a secondary mirrored DNS accessible to your VPC without the need for any servers!

Step 4 – Create the CloudWatch Events rule

After you’ve confirmed that the Lambda function is executing correctly, you can create the CloudWatch Events rule that triggers the Lambda function on a scheduled basis. First, create a new rule with a unique name and schedule expression. You can create rules that self-trigger on schedule in CloudWatch Events, using cron or rate expressions. The following example uses a rate expression to run every 15 minutes.

aws events put-rule \
--name mirror-dns-lambda-rule \
--schedule-expression 'rate(15 minutes)'

The output of the command returns the ARN to the newly-created CloudWatch Events rule. Save the ARN, as you need it to associate the rule with the Lambda function and to set the appropriate Lambda permissions.

Next, add the permissions required for the CloudWatch Events rule to execute the Lambda function. Note that you need to provide a unique value for the --statement-id input parameter. You also need to provide the ARN of the CloudWatch Events rule that you created.

aws lambda add-permission \
--function-name mirror-dns-lambda \
--statement-id Scheduled01 \
--action 'lambda:InvokeFunction' \
--principal events.amazonaws.com \
--source-arn <enter-your-cloudwatch-events-rule-arn-here>

Finally, set the target of the rule to be the Lambda function. Because you are going to pass parameters via a JSON string, the value for --targets also needs to be in JSON format. You need to construct a file containing a unique identifier for the target, the ARN of the Lambda function previously created, and the constant text that contains the function parameters. An example targets.json file would look similar to the following; note that every quote (") in the Input value must be escaped.

    [{
        "Id": "RuleStatementId01",
        "Arn": "<arn-of-lambda-function>",
        "Input": "{\"Domain\": \"mydomain.com\",\"MasterDns\": \"\",\"ZoneId\": \"AA11BB22CC33DD\",\"IgnoreTTL\": \"False\",\"ZoneSerial\": \"\"}"
    }]

Activate the scheduled event by adding the following target:

aws events put-targets \
--rule mirror-dns-lambda-rule \
--targets file://targets.json

Because a single rule can have multiple targets, every domain that you want to mirror can be defined as another target with a different set of parameters; change the ID and Input values in the target JSON file.
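As a sketch, a targets.json file mirroring two domains might look like the following (the ARNs, domain names, and zone IDs are placeholders, not values from an actual deployment):

```
[{
    "Id": "RuleStatementId01",
    "Arn": "<arn-of-lambda-function>",
    "Input": "{\"Domain\": \"mydomain.com\",\"MasterDns\": \"\",\"ZoneId\": \"AA11BB22CC33DD\",\"IgnoreTTL\": \"False\",\"ZoneSerial\": \"\"}"
}, {
    "Id": "RuleStatementId02",
    "Arn": "<arn-of-lambda-function>",
    "Input": "{\"Domain\": \"otherdomain.com\",\"MasterDns\": \"\",\"ZoneId\": \"DD33CC22BB11AA\",\"IgnoreTTL\": \"False\",\"ZoneSerial\": \"\"}"
}]
```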


Now that you’ve seen how you can combine various AWS services to automate the mirroring of DNS to Amazon Route 53 hosted zones, we hope that you are inspired to create your own solutions using Lambda in a VPC to enable hybrid integration. Lambda lets you build highly scalable serverless infrastructures that reduce cost and operational complexity while providing high availability. Coupled with CloudWatch Events, you can respond to events in real time, such as when an instance changes its state or when a customer event is pushed to the CloudWatch Events service.

To learn more about Lambda and serverless infrastructures, see the AWS Lambda Developer Guide and the "Microservices without the Servers" blog post. To learn more about CloudWatch Events, see Using CloudWatch Events in the Amazon CloudWatch Developer Guide.

We’ve open-sourced the code used in this example in the aws-lambda-ddns-function AWS Labs GitHub repo and can’t wait to see your feedback and your ideas about how to improve the solution.

Reddit Refuses to Disclose Alleged Music Leaker’s IP Address

Post Syndicated from Andy original https://torrentfreak.com/reddit-refuses-to-disclose-alleged-music-leakers-ip-address-160816/

Back in June, Atlantic Records were in the final stages of releasing the track ‘Heathens’ by the platinum-certified band Twenty One Pilots. Things didn’t go to plan.

The track, which was also set to appear on “Suicide Squad: The Album”, was leaked online, first appearing on an anonymous Slovakian file-hosting service called Dropfile.to.

From there it’s claimed that the alleged leaker advertised that file on Reddit, posting a link which enabled any viewer to download it for free. The posting, which was made on the ‘Twenty One Pilots’ subreddit by a user called ‘twentyoneheathens’, caught the eye of Atlantic Records.

Earlier this month in the Supreme Court of the State of New York, Atlantic described how the leak had ruined its plans for the release and promotion of the track. Underlying these complaints was the belief that the leak originated close to home.

The label said it had provided an early release copy “to an extremely limited number of individuals”, including members of 21 Pilots, their manager, Atlantic and [record label] Fueled by Ramen executives, plus members of Atlantic’s radio field staff.

According to Atlantic, all of its employees who were aware of the impending release were “contractually obligated and/or under a fiduciary obligation” not to disclose its existence until June 24.

So, in order to find out who was responsible for the pre-release, Atlantic asked the Court to force Reddit to hand over the presumed leaker’s details, including his or her IP address. Reddit, however, doesn’t want to play ball.


In a response to the Court, Reddit’s legal team at Harris Beach PLLC say that Atlantic’s claims fail to reach the standards required for discovery.

“In order to obtain pre-action discovery, Atlantic must demonstrate now that it has meritorious claims against the Reddit user. However, Atlantic has failed to show that its claims are meritorious for two, simple reasons,” Reddit begins (pdf).

“First, it has failed to establish that it has a contractual relationship with the Reddit user. Second, it has failed to establish that it has a fiduciary relationship with the Reddit user. Because Atlantic has not demonstrated that it has meritorious causes of action against the unidentified Reddit user, its petition for pre-action discovery related to such user should be denied.”

The problem lies with Atlantic’s allegation that the person responsible for the leak and the link on Reddit is under contract with the company. Reddit’s lawyers point out that while the label is clear about what action it would take in that instance, it has made no statement detailing what it would do if the person who posted the link on Reddit is disconnected from the initial leak.

“Atlantic does not describe the claims it would bring against a non-employee Reddit user who discovered the link on Dropfile.to and posted it to Reddit.com without assistance from an Atlantic employee or an employee of Fueled by Ramen, the members of Twenty One Pilots, or their manager, each of whom had access to the song at the time of the leak,” Reddit writes.

Underlining its concerns, Reddit points out that Atlantic provides no proof to back up its claims that the “individual or individuals” who uploaded the file to Dropfile.to also posted the link to Reddit.

“[T]he Reddit user may have been a member of the general public, who, after discovering the Dropfile.to link on another publicly available website, decided to resubmit it to Reddit.com. A member of the public would not likely have a contractual relationship with Atlantic that was breached and Atlantic has not alleged as much.”

Furthermore, Reddit says Atlantic has not advised the Court of any efforts made to obtain the alleged poster’s details from Dropfile.to. While that might indeed be the case, the operator of Dropfile previously informed TorrentFreak that his site is completely anonymous and carries no logs, so identifying any user would be impossible.

In closing, Reddit describes Atlantic’s effort as an “impermissible fishing expedition” and asks for its petition for pre-action discovery to be denied. However, should the Court decide otherwise, Reddit has asked for a cap to be placed on the amount of data it must hand over.

“Presently, Atlantic’s subpoena requests not only information related to the user twentyoneheathens, but also for information related to ‘all and any other Reddit accounts which accessed [Reddit’s] service from the same IP address on or about June 15, 2016’,” Reddit notes.

“While such users may share an IP address, they otherwise have no relationship among them. For this reason, any order requiring pre-action discovery should be limited to information directly related to the user twentyoneheathens and not violate the privacy interests of any Reddit users sharing the IP address.”

While Reddit is digging in its heels now, it seems likely that at some point the Court will indeed order the alleged leaker’s IP address to be handed over. However, only time will tell what action Atlantic will publicly take. Leaks are potentially embarrassing, so making their findings widely known may not be a priority.


‘Mutable’ Torrents Proposal Makes BitTorrent More Resilient

Post Syndicated from Andy original https://torrentfreak.com/mutable-torrents-proposal-makes-bittorrent-resilient-160813/

Regardless of differing opinions on what kind of content should be shifted around using the protocol, few will contest the beauty of BitTorrent.

Thanks to the undoubted genius of creator Bram Cohen, it is still extremely robust some 15 years after its debut.

But while some may assume that BitTorrent is no longer under development, the opposite is true. Behind the scenes, groups of developers are working to further develop the protocol via BitTorrent Enhancement Proposals (BEPs).

Early BEPs, such as those covering DHT, PEX and private torrents, have long since been implemented but the process continues today.

Just one of the P2P developers involved is Luca Matteis. He lives in Rome, Italy, where he studies Computer Science at Sapienza University and works part-time on various projects.

Passionate about P2P and decentralized systems, Luca informs TorrentFreak that his goal is to enable people to share and communicate in a censorship resistant manner. His fresh proposal, Updating Torrents Via DHT Mutable Items, was submitted last month and aims to live up to that billing.

We asked Luca to explain what his group’s proposal (it’s a team effort) is all about and he kindly obliged. It begins with the Distributed Hash Table (DHT) and a previous enhancement proposal.

“So currently the DHT in BitTorrent is used as a peer discovery mechanism for torrents, and it has really nice decentralized properties. It works just like a tracker, with the difference being that trackers are on central servers with a domain name, and therefore can be easily shut down,” Luca begins.

“[An earlier enhancement proposal] BEP44 added some interesting properties to the DHT network, namely the feature of being able to store arbitrary data. So instead of just storing IP addresses of people downloading specific torrents, we can now store any kind of data (max 1000 bytes per item).”

Luca says that so far this functionality hasn’t been used by torrent clients. uTorrent apparently has it under the hood, with some developers believing it’s there for reasons connected to BitTorrent Inc’s Bleep software. At this point, however, it only exists at the network level.

Importantly, however, Luca says that BEP44 allows one to store changing values under a key.

“We call these mutable items. So what you could do is generate a public key, which can be thought of as your address, and share this with the world. Then you use this public key to store stuff in BitTorrent’s DHT network. And, because it’s your public key, you (and only you) can change the value pointed by your public key.”

As mentioned earlier, only 1,000 bytes (less than 1 kB) can be stored, but Luca points out that this is enough to hold the info hash of a torrent, 79816060EA56D56F2A2148CD45705511079F9BCA, for example. Now things get interesting.

“At this point, your public key has very similar properties to an HTTP URL [a website address], with the difference that (just like trackers before) the value does not exist on a single computer/server, but is constantly shared across the DHT network,” he explains.

“Our BEP46 extension is an actual standardization of what the value, pointed by your public key, should look like. Our standard says it should be an info hash of a torrent. This allows for a multitude of use cases, but more practically it allows for torrents to automatically change what they’re downloading based on the public key value inside the DHT.”
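The mechanics can be sketched with a toy in-memory model. This is only an illustration of the idea, not the real protocol: actual BEP44 items are bencoded messages signed with ed25519 keys so that only the key holder can publish updates, whereas this sketch keeps just the key-derived addressing and the monotonic sequence-number rule that makes items mutable.

```python
import hashlib

class ToyDHT:
    """In-memory stand-in for the DHT's mutable-item storage."""

    def __init__(self):
        self.items = {}  # 20-byte target -> (seq, value)

    @staticmethod
    def target(public_key: bytes) -> bytes:
        # A mutable item's DHT address is derived from the public key.
        return hashlib.sha1(public_key).digest()

    def put_mutable(self, public_key: bytes, seq: int, infohash: str) -> bool:
        """Store a new value at the key's address; stale updates are rejected."""
        t = self.target(public_key)
        current = self.items.get(t)
        if current is not None and seq <= current[0]:
            return False  # an older or replayed update loses
        self.items[t] = (seq, infohash)
        return True

    def get_mutable(self, public_key: bytes):
        item = self.items.get(self.target(public_key))
        return None if item is None else item[1]

dht = ToyDHT()
site_key = b"torrent-site-public-key"  # placeholder for a real public key
# The site publishes the infohash of its current database-dump torrent...
dht.put_mutable(site_key, 1, "79816060EA56D56F2A2148CD45705511079F9BCA")
# ...and later points followers at a new dump by bumping the sequence number.
dht.put_mutable(site_key, 2, "AAAA060EA56D56F2A2148CD45705511079F9BCA0")
print(dht.get_mutable(site_key))
```

A client that bookmarked the public key simply re-queries the same address and always receives the infohash of the latest torrent, which is exactly the behavior the BEP46 magnet links described below rely on.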

While the technically minded out there might already know where this is going, Luca is kind enough to spell it out.

“Torrent sites (such as The Pirate Bay) could share a magnet link they control, which contains their public key. What they would store at this ‘address’ is the infohash of a torrent which contains a database of all their torrents,” he says.

“Users who trust them would bookmark the magnet link, and when they click on it, a torrent will start downloading. Specifically, they’d start downloading the database dump of the torrent site.”

While that might not yet sound like magic, the ability to change the value held in the DHT proves extremely useful.

“The cool thing is that when the torrent site decides to share more torrents (new releases, better quality stuff, more quality reviews), all they need to do is update the value in the DHT with a new torrent containing a new .rss file.

“At this point, all the users downloading from their magnet link will automatically be downloading the new torrent and will always have an up-to-date .rss dump of torrents,” he says.

But while this would be useful to users, Luca says that sites like The Pirate Bay could also benefit.

“For torrent sites, this would be an attractive solution because they wouldn’t need to maintain a central HTTP server which implies costs and can be easily shut down. On the other hand, their mutable torrent magnet link cannot be easily shut down, does not imply maintenance costs, and cannot be easily tracked down,” he concludes.

For those interested in the progress of this enhancement proposal and others like it, all BEPs can be found here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Snowball Update – Job Management API & S3 Adapter

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-snowball-update-job-management-api-s3-adapter/

We introduced AWS Import/Export Snowball last fall from the re:Invent stage. The Snowball appliance is designed for customers who need to transfer large amounts of data into or out of AWS on a one-time or recurring basis (read AWS Import/Export Snowball – Transfer 1 Petabyte Per Week Using Amazon-Owned Storage Appliances to learn more).

Today we are launching two important additions to Snowball. Here’s the scoop:

  • Snowball Job Management API – The new Snowball API lets you build applications that create and manage Snowball jobs.
  • S3 Adapter – The new Snowball S3 Adapter lets you access a Snowball appliance as if it were an S3 endpoint.

Time to dive in!

Snowball Job Management API
The original Snowball model was interactive and console-driven. You could create a job (basically “Send me a Snowball”) and then monitor its progress, tracking the shipment, transit, delivery, and return to AWS visually. This was great for one-off jobs, but did not meet the needs of customers who wanted to integrate Snowball into their existing backup or data transfer model. Based on the requests that we received from these customers and from our Storage Partners, we are introducing a Snowball Job Management API today.

The Snowball Job Management API gives our customers and partners the power to make Snowball an intrinsic, integrated part of their data management solutions. Here are the primary functions:

  • CreateJob – Create an import or export job and initiate shipment of an appliance.
  • ListJobs – Fetch a list of jobs and associated job states.
  • DescribeJob – Fetch information about a specific job.

Read the API Reference to learn more!
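As a rough sketch of how the CreateJob call might be driven from code (the field names follow the Snowball API reference, but every ARN and ID below is a placeholder):

```python
def build_create_job_request(bucket_arn, role_arn, address_id,
                             job_type="IMPORT", capacity="T80"):
    """Assemble the parameter dict for a Snowball CreateJob call.

    The result can be passed to an SDK client, e.g.
    snowball.create_job(**params) in boto3. All ARNs and IDs used
    here are placeholders, not real resources.
    """
    return {
        "JobType": job_type,  # "IMPORT" or "EXPORT"
        "Resources": {"S3Resources": [{"BucketArn": bucket_arn}]},
        "RoleARN": role_arn,
        "AddressId": address_id,
        "SnowballCapacityPreference": capacity,
    }

params = build_create_job_request(
    "arn:aws:s3:::example-import-bucket",
    "arn:aws:iam::123456789012:role/snowball-import-role",
    "ADID00000000-0000-0000-0000-000000000000",
)
print(params["JobType"])
```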

I’m looking forward to reading about creative and innovative applications that make use of this new API! Leave me a comment and let me know what you come up with.

S3 Adapter
The new Snowball S3 Adapter allows you to access a Snowball as if it were an Amazon S3 endpoint running on-premises. This allows you to use your existing, S3-centric tools to move data to or from a Snowball.

The adapter is available for multiple Linux distributions and Windows releases, and is easy to install:

  1. Download the appropriate file from the Snowball Tools page and extract its contents to a local directory.
  2. Verify that the adapter’s configuration is appropriate for your environment (the adapter listens on port 8080 by default).
  3. Connect your Snowball to your network and get its IP address from the built-in display on the appliance.
  4. Visit the Snowball Console to obtain the unlock code and the job manifest.
  5. Launch the adapter, providing it with the IP address, unlock code, and manifest file.

With the adapter up and running, you can use your existing S3-centric tools by simply configuring them to use the local endpoint (the IP address of the on-premises host and the listener port). For example, here’s how you would run the s3 ls command on the on-premises host:

$ aws s3 ls --endpoint-url http://localhost:8080

After you copy your files to the Snowball, you can easily verify that the expected number of files were copied:

$ snowball validate

The initial release of the adapter supports a subset of the S3 API including GET on buckets and on the service, HEAD on a bucket and on objects, PUT and DELETE on objects, and all of the multipart upload operations. If you plan to access the adapter using your own code or third party tools, some testing is advisable.
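To make that subset concrete, a pre-flight check in your own tooling might look something like this sketch; the operation labels are informal descriptions, not official API identifiers, and the exact multipart coverage is an assumption based on "all of the multipart upload operations":

```python
# Informal labels for the S3 operations the initial adapter release supports.
ADAPTER_SUPPORTED = {
    "GET Service", "GET Bucket",
    "HEAD Bucket", "HEAD Object",
    "PUT Object", "DELETE Object",
    "Initiate Multipart Upload", "Upload Part",
    "Complete Multipart Upload", "Abort Multipart Upload",
}

def adapter_supports(operation: str) -> bool:
    """Return True if an operation falls inside the supported subset."""
    return operation in ADAPTER_SUPPORTED

print(adapter_supports("PUT Object"))
print(adapter_supports("PUT Bucket ACL"))
```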

To learn more, read about the Snowball Transfer Adapter.

Available Now
These new features are available now and you can start using them today!



Now Available – IPv6 Support for Amazon S3

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-available-ipv6-support-for-amazon-s3/

As you probably know, every server and device that is connected to the Internet must have a unique IP address. Way back in 1981, RFC 791 (“Internet Protocol”) defined an IP address as a 32-bit entity, with three distinct network and subnet sizes (Classes A, B, and C – essentially large, medium, and small) designed for organizations with requirements for different numbers of IP addresses. In time, this format came to be seen as wasteful and the more flexible CIDR (Classless Inter-Domain Routing) format was standardized and put into use. The 32-bit entity (commonly known as an IPv4 address) has served the world well, but the continued growth of the Internet means that all available IPv4 addresses will ultimately be assigned and put to use.

In order to accommodate this growth and to pave the way for future developments, networks, devices, and service providers are now in the process of moving to IPv6. With 128 bits per IP address, IPv6 has plenty of address space (according to my rough calculation, 128 bits is enough to give 3.5 billion IP addresses to every one of the 100 octillion or so stars in the universe). While the huge address space is the most obvious benefit of IPv6, there are other more subtle benefits as well. These include extensibility, better support for dynamic address allocation, and additional built-in support for security.
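That rough calculation is easy to verify:

```python
total_addresses = 2 ** 128   # size of the IPv6 address space
stars = 10 ** 29             # roughly 100 octillion stars
per_star = total_addresses // stars
print(per_star)              # about 3.4 billion addresses per star
```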

Today I am happy to announce that objects in Amazon S3 buckets are now accessible over IPv6 through new “dual-stack” endpoints. When a DNS lookup is performed on an endpoint of this type, it returns an “A” record with an IPv4 address and an “AAAA” record with an IPv6 address. In most cases the network stack in the client environment will automatically prefer the AAAA record and make a connection using the IPv6 address.

Accessing S3 Content via IPv6
In order to start accessing your content via IPv6, you need to switch to new dual-stack endpoints that look like this:

https://s3.dualstack.<region>.amazonaws.com/<bucket>

or this:

https://<bucket>.s3.dualstack.<region>.amazonaws.com
If you are using the AWS Command Line Interface (CLI) or the AWS Tools for Windows PowerShell you can use the --enabledualstack flag to switch to the dual-stack endpoints.

We are currently updating the AWS SDKs to support the use_dualstack_endpoint setting and expect to push them out to production by the middle of next week. Until then, refer to the developer guide for your SDK to learn how to enable this feature.
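Until the SDK updates land, a small helper can assemble the dual-stack hostnames directly; this sketch assumes the documented s3.dualstack.<region>.amazonaws.com endpoint format and an example bucket name:

```python
def dualstack_endpoint(region, bucket=None):
    """Return the S3 dual-stack endpoint URL for a region.

    With a bucket name, the virtual-hosted-style URL is returned;
    without one, the path-style service endpoint.
    """
    host = "s3.dualstack.{}.amazonaws.com".format(region)
    if bucket:
        return "https://{}.{}".format(bucket, host)
    return "https://" + host

print(dualstack_endpoint("us-west-2"))
print(dualstack_endpoint("us-west-2", "examplebucket"))
```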

Things to Know
Here are some things that you need to know in order to make a smooth transition to IPv6:

Bucket and IAM Policies – If you use policies to grant or restrict access via IP address, update them to include the desired IPv6 ranges before you switch to the new endpoints. If you don’t do this, clients may incorrectly gain or lose access to the AWS resources. Update any policies that exclude access from certain IPv4 addresses by adding the corresponding IPv6 addresses.
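For example, a bucket policy that restricts access by source IP needs both address families in its condition; the ranges below are documentation placeholders, not real networks:

```python
import json

# 203.0.113.0/24 and 2001:DB8::/32 are ranges reserved for documentation.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowFromKnownRanges",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::examplebucket/*",
        "Condition": {
            "IpAddress": {
                "aws:SourceIp": [
                    "203.0.113.0/24",           # existing IPv4 range
                    "2001:DB8:1234:5678::/64",  # matching IPv6 range
                ]
            }
        },
    }],
}
print(json.dumps(policy, indent=2))
```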

IPv6 Connectivity – Because the network stack will prefer an IPv6 address to an IPv4 address, an unusual situation can arise under certain circumstances. The client system can be configured for IPv6 but connected to a network that is not configured to route IPv6 packets to the Internet. Be sure to test for end-to-end connectivity before you switch to the dual-stack endpoints.

Log Entries – Log entries will include the IPv4 or IPv6 address, as appropriate. If you analyze your log files using internal or third-party applications, you should ensure that they are able to recognize and process entries that include an IPv6 address.

S3 Feature Support – IPv6 support is available for all S3 features with the exception of Website Hosting, S3 Transfer Acceleration, and access via BitTorrent.

Region Support – IPv6 support is available in all commercial AWS Regions and in AWS GovCloud (US). It is not available in the China (Beijing) Region.


Study Highlights Serious Security Threat to Many Internet Users (UCR Today)

Post Syndicated from ris original http://lwn.net/Articles/696847/rss

UCR Today reports that
researchers at the University of California, Riverside have identified a
weakness in the Transmission Control Protocol (TCP) in Linux that enables
attackers to hijack users’ internet communications remotely. “The
UCR researchers didn’t rely on chance, though. Instead, they identified a
subtle flaw (in the form of ‘side channels’) in the Linux software that
enables attackers to infer the TCP sequence numbers associated with a
particular connection with no more information than the IP address of the
communicating parties. This means that given any two arbitrary machines on
the internet, a remote blind attacker, without being able to eavesdrop on
the communication, can track users’ online activity, terminate connections
with others and inject false material into their communications.”

Pi 3 booting part II: Ethernet

Post Syndicated from Gordon Hollingworth original https://www.raspberrypi.org/blog/pi-3-booting-part-ii-ethernet-all-the-awesome/

Yesterday, we introduced the first of two new boot modes which have now been added to the Raspberry Pi 3. Today, we introduce an even more exciting addition: network booting a Raspberry Pi with no SD card.

Again, rather than go through a description of the boot mode here, we’ve written a fairly comprehensive guide on the Raspberry Pi documentation pages, and you can find a tutorial to get you started here. Below are answers to what we think will be common questions, and a look at some limitations of the boot mode.

Note: this is still in beta testing and uses the “next” branch of the firmware. If you’re unsure about using the new boot modes, it’s probably best to wait until we release it fully.

What is network booting?

Network booting is a computer’s ability to load all its software over a network. This is useful in a number of cases, such as remotely operated systems or those in data centres; network booting means they can be updated, upgraded, and completely re-imaged, without anyone having to touch the device!

The main advantages when it comes to the Raspberry Pi are:

  1. SD cards are difficult to make reliable unless they are treated well; they must be powered down correctly, for example. A Network File System (NFS) is much better in this respect, and is easy to fix remotely.
  2. NFS file systems can be shared between multiple Raspberry Pis, meaning that you only have to update and upgrade a single Pi, and are then able to share users in a single file system.
  3. Network booting allows for completely headless Pis with no external access required. The only desirable addition would be an externally controlled power supply.

I’ve tried doing things like this before and it’s really hard editing DHCP configurations!

It can be quite difficult to edit DHCP configurations to allow your Raspberry Pi to boot, while not breaking the whole network in the process. Because of this, and thanks to input from Andrew Mulholland, I added support for proxy DHCP, as used by PXE-booting computers.

What’s proxy DHCP and why does it make it easier?

Standard DHCP is the protocol that gives a system an IP address when it powers up. It’s one of the most important protocols, because it allows all the different systems to coexist. The problem is that if you edit the DHCP configuration, you can easily break your network.

So proxy DHCP is a special protocol: instead of handing out IP addresses, it only hands out the TFTP server address. This means it will only reply to devices trying to do netboot. This is much easier to enable and manage, because we’ve given you a tutorial!
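As a concrete illustration, a proxy-DHCP setup can be a handful of dnsmasq lines. This sketch assumes dnsmasq, a 192.168.1.0/24 network, and a TFTP root at /tftpboot; the "Raspberry Pi Boot" service name is the string the Pi 3 boot ROM looks for:

```ini
# /etc/dnsmasq.conf sketch: proxy DHCP hands out only TFTP details,
# leaving the existing DHCP server to assign IP addresses as before.
dhcp-range=192.168.1.255,proxy
log-dhcp
enable-tftp
tftp-root=/tftpboot
pxe-service=0,"Raspberry Pi Boot"
```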

Are there any bugs?

At the moment we know of three problems which need to be worked around:

  • When the boot ROM enables the Ethernet link, it first waits for the link to come up, then sends its first DHCP request packet. This is sometimes too quick for the switch to which the Raspberry Pi is connected: we believe that the switch may throw away packets it receives very soon after the link first comes up.
  • The second bug is in the retransmission of the DHCP packet: the retransmission loop is not timing out correctly, so the DHCP packet will not be retransmitted.

The solution to both these problems is to find a suitable switch which works with the Raspberry Pi boot system. We have been using a Netgear GS108 without a problem.

  • Finally, the failing timeout has a knock-on effect. This means it can require the occasional random packet to wake it up again, so having the Raspberry Pi network wired up to a general network with lots of other computers actually helps!

Can I use network boot with Raspberry Pi / Pi 2?

Unfortunately, because the code is actually in the boot ROM, this won’t work with Pi 1, Pi B+, Pi 2, and Pi Zero. But as with the MSD instructions, there’s a special mode in which you can copy the ‘next’ firmware bootcode.bin to an SD card on its own, and then it will try and boot from the network.

This is also useful if you’re having trouble with the bugs above, since I’ve fixed them in the bootcode.bin implementation.

Finally, I would like to thank my Slack beta testing team who provided a great testing resource for this work. It’s been a fun few weeks! Thanks in particular to Rolf Bakker for this current handy status reference…

Current state of network boot on all Pis

The post Pi 3 booting part II: Ethernet appeared first on Raspberry Pi.

Atlantic Records Subpoenas Reddit to Identify Music Leaker

Post Syndicated from Andy original https://torrentfreak.com/atlantic-records-subpoenas-reddit-to-identify-music-leaker-160803/

Music gets uploaded to the Internet every minute of every day, much to the irritation of recording labels. These uploads are largely dealt with via takedown notices but occasionally there is a desire to track down the individual behind the unauthorized distribution.

One such case currently before the Supreme Court of the State of New York sees Atlantic Recording Corporation trying to obtain the identity of a person who uploaded some of their copyrighted content to the Internet in June.

The complaint concerns the track ‘Heathens’ by the platinum-certified band Twenty One Pilots.

“Prior to June 15, 2016, Atlantic had provided access to a digital copy of Heathens only to an extremely limited number of individuals,” the complaint reads.

“These individuals included members of 21 Pilots, their manager, Atlantic and [record label] Fueled by Ramen executives and members of Atlantic’s radio field staff. In each such case, the individual was barred from distributing the recording until the scheduled release date of June 24, 2016.”

Atlantic says that all of its employees who were aware of the impending release were “contractually obligated and/or under a fiduciary obligation” not to disclose its existence until June 24. However, things didn’t go to plan.

On or around June 15, someone with early access to the track posted it to a file-hosting service called Dropfile.to. Following that upload it’s alleged that the poster then advertised the track for download on Reddit.

“The Poster posted a link to the file he or she uploaded to Dropfile.to to the Twenty One Pilots subreddit (a publicly accessible message board, hosted by Reddit Inc.), with the title ‘[Leak] New Song – Heathens’,” the complaint reads.


As illustrated in the image above, the Reddit thread was indeed started on June 15 and can still be found today. It was posted by a user calling him/herself “twentyoneheathens.”

That user account is still on Reddit but its solitary purpose appears to have been to advertise the availability of the track on Dropfile. No other actions are registered against the account, a hindrance to anyone trying to find out who is behind it.

After becoming aware of the leak, Atlantic says it tried to stop distribution but had minimal success.

“Upon becoming aware of the Posting, Atlantic attempted to have the illegally distributed copies of Heathens removed from the Internet. Despite expending significant effort and funds in this attempt, the removal efforts were ultimately unsuccessful in curtailing further widespread distribution,” the label says.

TorrentFreak has been able to confirm a fairly broad takedown campaign which began with RIAA action on June 16. From there, the UK’s BPI appears to have taken over, sending hundreds of takedowns to Google referencing dozens of sites.

Fighting a losing battle, Atlantic took the decision to release the track on June 16, the day after its leak online and well ahead of its planned August 5 album debut. The track had been scheduled to appear on “Suicide Squad: The Album” this week to coincide with the release of the movie of the same name.

Atlantic says this early release frustrated its marketing efforts, something which directly hit sales.

“Following the June 16, 2016, release, sales of the Heathens single, which were unsupported by Atlantic’s carefully-planned marketing strategy, failed to reach predicted levels, causing substantial harm to Atlantic in the form of lost single and album sales revenue,” the complaint reads.

So now, as expected, Atlantic is on the warpath. In its complaint the company asks the Court to force Reddit to hand over the information, IP addresses included, it holds on the person who uploaded the link to the track.


Suspecting that Atlantic would also try to get information from Dropfile, TorrentFreak contacted the site’s operator for comment. He informs us that the label hasn’t made contact with him.

“[This news] comes as a surprise to me, we have not heard from anyone about it prior to this,” he says.

“I guess that Atlantic Records figured out it will be easier to get the poster’s data from Reddit or they will use official authorities to contact us in that matter (which might take months).”

Further complicating any retrieval of data from Dropfile are the site’s logging policies.

“Dropfile is a simple service for anonymous sharing of files that need to be placed online only temporarily,” the site explains.

“We keep no logs on our side whatsoever. We don’t use cookies, any kind of traffic tracking (Google Analytics), social media buttons that could track you (Facebook, Twitter) and have no ads that could track you.”

Furthermore, Dropfile is located in Slovakia where there is no mandatory requirement to log visitor data.

From Atlantic’s complaint, it seems clear the label is expecting the culprit to be close to home.

“If the Poster is not an Atlantic employee, then he or she likely obtained the Recording from an Atlantic employee, who would have breached his or her contract and/or fiduciary duties to Atlantic by providing the Poster access to the Recording. Atlantic is unaware of the true identity of the Poster and is unable to ascertain that information from any source other than Reddit,” the label adds.

It seems likely that Reddit will comply with the subpoena but only time will tell whether it will lead to the leaker. The original track uploaded to Dropfile has now expired.

“Files are physically removed from servers after 24 hours from their upload or when reported. After that, we have no clue what the file was. And we never knew who uploaded it,” Dropfile concludes.


How to Remove Single Points of Failure by Using a High-Availability Partition Group in Your AWS CloudHSM Environment

Post Syndicated from Tracy Pierce original https://blogs.aws.amazon.com/security/post/Tx7VU4QS5RCK7Q/How-to-Remove-Single-Points-of-Failure-by-Using-a-High-Availability-Partition-Gr

A hardware security module (HSM) is a hardware device designed with the security of your data and cryptographic key material in mind. It is tamper-resistant hardware that prevents unauthorized users from prying open the device, plugging in extra devices to access data or keys (such as subtokens), or damaging the outside housing. If any such interference occurs, the device wipes all information stored so that unauthorized parties do not gain access to your data or cryptographic key material. A high-availability (HA) setup could be beneficial because, with multiple HSMs kept in different data centers and all data synced between them, the loss of one HSM does not mean the loss of your data.

In this post, I will walk you through steps to remove single points of failure in your AWS CloudHSM environment by setting up an HA partition group. Single points of failure occur when a single CloudHSM device fails in a non-HA configuration, which can result in the permanent loss of keys and data. The HA partition group, however, allows for one or more CloudHSM devices to fail, while still keeping your environment operational.


You will need a few things to build your HA partition group with CloudHSM:

  • 2 CloudHSM devices. AWS offers a free two-week trial. AWS will provision the trial for you and send you the CloudHSM information such as the Elastic Network Interface (ENI) and the private IP address assigned to the CloudHSM device so that you may begin testing. If you have used CloudHSM before, another trial cannot be provisioned, but you can set up production CloudHSM devices on your own. See Provisioning Your HSMs.
  • A client instance from which to access your CloudHSM devices. You can create this manually, or via an AWS CloudFormation template. You can connect to this instance in your public subnet, and then it can communicate with the CloudHSM devices in your private subnets.
  • An HA partition group, which ensures the syncing and load balancing of all CloudHSM devices you have created.

The CloudHSM setup process takes about 30 minutes from beginning to end. By the end of this how-to post, you should be able to set up multiple CloudHSM devices and an HA partition group in AWS with ease. Keep in mind that each production CloudHSM device you provision comes with an up-front fee of $5,000. You are not charged for any CloudHSM devices provisioned for a trial, unless you decide to move them to production when the trial ends.

If you decide to move your provisioned devices to your production environment, you will be billed $5,000 per device. If you decide to stop the trial so as not to be charged, you have up to 24 hours after the trial ends to let AWS Support know of your decision.

Solution overview

How HA works

HA is a feature of the Luna SA 7000 HSM hardware device AWS uses for its CloudHSM service. (Luna SA 7000 HSM is also known as the “SafeNet Network HSM” in more recent SafeNet documentation. Because AWS documentation refers to this hardware as “Luna SA 7000 HSM,” I will use this same product name in this post.) This feature allows more than one CloudHSM device to be placed as members in a load-balanced group setup. By having more than one device on which all cryptographic material is stored, you remove any single points of failure in your environment.

You access your CloudHSM devices in this HA partition group through one logical endpoint, which distributes traffic to the CloudHSM devices that are members of this group in a load-balanced fashion. Even though traffic is balanced between the HA partition group members, any new data or changes in data that occur on any CloudHSM device will be mirrored for continuity to the other members of the HA partition group. A single HA partition group is logically represented by a slot, which is physically composed of multiple partitions distributed across all HA nodes. Traffic is sent through the HA partition group, and then distributed to the partitions that are linked. All partitions are then synced so that data is persistent on each one identically.

The following diagram illustrates the HA partition group functionality.

  1. Application servers send traffic to your HA partition group endpoint.
  2. The HA partition group takes all requests and distributes them evenly between the CloudHSM devices that are members of the HA partition group.
  3. Each CloudHSM device mirrors itself to each other member of the HA partition group to ensure data integrity and availability.

Automatic recovery

If you ever lose data, you want a hands-off, quick recovery. Before autoRecovery was introduced, you could take advantage of the redundancy and performance HA partition groups offer, but you were still required to manually intervene when a group member was lost.

HA partition group members may fail for a number of reasons, including:

  • Loss of power to a CloudHSM device.
  • Loss of network connectivity to a CloudHSM device. If network connectivity is lost, it will be seen as a failed device and recovery attempts will be made.

Recovery of partition group members will only work if the following are true:

  • HA autoRecovery is enabled.
  • There are at least two nodes (CloudHSM devices) in the HA partition group.
  • Connectivity is established at startup.
  • The recover retry limit is not reached (if reached or exceeded, the only option is manual recovery).

HA autoRecovery is not enabled by default and must be explicitly enabled by running the following command, which is found in Enabling Automatic Recovery.

> vtl haAdmin -autoRecovery -retry <count>

When enabling autoRecovery, set the -retry and -interval parameters. The -retry parameter can be a value between 0 and 500 (or -1 for infinite retries), and equals the number of times the CloudHSM device will attempt automatic recovery. The -interval parameter is in seconds and can be any value between 60 and 1200. This is the amount of time the CloudHSM device will wait between automatic recovery attempts.
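A quick sketch that encodes those documented ranges, handy as a sanity check before running the command (the command string mirrors the vtl syntax above):

```python
def auto_recovery_command(retry, interval=60):
    """Validate HA autoRecovery parameters and build the vtl command line.

    retry: -1 for infinite retries, otherwise 0 to 500 attempts.
    interval: seconds between attempts, 60 to 1200.
    """
    if retry != -1 and not 0 <= retry <= 500:
        raise ValueError("retry must be -1 (infinite) or between 0 and 500")
    if not 60 <= interval <= 1200:
        raise ValueError("interval must be between 60 and 1200 seconds")
    return "vtl haAdmin -autoRecovery -retry {} -interval {}".format(retry, interval)

print(auto_recovery_command(3, 120))
```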

Setting up two Production CloudHSM devices in AWS

Now that I have discussed how HA partition groups work and why they are useful, I will show how to set up your CloudHSM environment and the HA partition group itself. To create an HA partition group environment, you need a minimum of two CloudHSM devices. You can have as many as 16 CloudHSM devices associated with an HA partition group at any given time. These must be associated with the same account and region, but can be spread across multiple Availability Zones, which is the ideal setup for an HA partition group. Automatic recovery is great for larger HA partition groups because it allows the devices to quickly attempt recovery and resync data in the event of a failure, without requiring manual intervention.

Set up the CloudHSM environment

To set up the CloudHSM environment, you must have a few things already in place:

  • An Amazon VPC.
  • At least one public subnet and two private subnets.
  • An Amazon EC2 client instance (m1.small running Amazon Linux x86 64-bit) in the public subnet, with the SafeNet client software already installed. This instance uses the key pair that you specified during creation of the CloudFormation stack. You can find a ready-to-use Amazon Machine Image (AMI) in our Community AMIs. Simply log into the EC2 console, choose Launch Instance, click Community AMIs, and search for CloudHSM. Because we regularly release new AMIs with software updates, searching for CloudHSM will show all available AMIs for a region. Select the AMI with the most recent client version.
  • Two security groups, one for the client instance and one for the CloudHSM devices. The security group for the client instance, which resides in the public subnet, will allow SSH on port 22 from your local network. The security group for the CloudHSM devices, which resides in the private subnet, will allow SSH on port 22 and NTLS on port 1792 from your public subnet. These will both be ingress rules (egress rules allow all traffic).
  • An Elastic IP address for the client instance.
  • An IAM role that delegates AWS resource access to CloudHSM. You can create this role in the IAM console:

    1. Click Roles and then click Create New Role.
    2. Type a name for the role and then click Next Step.
    3. Under AWS Service Roles, click Select next to AWS CloudHSM.
    4. In the Attach Policy step, select AWSCloudHSMRole as the policy. Click Next Step.
    5. Click Create Role.

We have a CloudFormation template available that will set up the CloudHSM environment for you:

  1. Go to the CloudFormation console.
  2. Choose Create Stack. Specify https://cloudhsm.s3.amazonaws.com/cloudhsm-quickstart.json as the Amazon S3 template URL.
  3. On the next two pages, specify parameters such as the Stack name, SSH Key Pair, Tags, and SNS Topic for alerts. You will find SNS Topic under the Advanced arrow. Then, click Create.

When the new stack is in the CREATE_COMPLETE state, you will have the IAM role to be used for provisioning your CloudHSM devices, the private and public subnets, your client instance with Elastic IP (EIP), and the security groups for both the CloudHSM devices and the client instance. The CloudHSM security group will already have its necessary rules in place to permit SSH and NTLS access from your public subnet; however, you still must add the rules to the client instance’s security group to permit SSH access from your allowed IPs. To do this:

  1. In the VPC console, make sure you select the same region as the region in which your HSM VPC resides.
  2. Select the security group in your HSM VPC that will be used for the client instance.
  3. On the Inbound tab, from the Create a new rule list, select SSH, and enter the IP address range of the local network from which you will connect to your client instance.
  4. Click Add Rule, and then click Apply Rule Changes.

After adding the IP rules for SSH (port 22) to your client instance’s security group, test the connection by attempting to make a SSH connection locally to your client instance EIP. Make sure to write down all the subnet and role information, because you will need this later.

Create an SSH key pair

The SSH key pair that you will now create will be used by CloudHSM devices to authenticate the manager account when connecting from your client instance. The manager account is simply the user that is permitted to SSH to your CloudHSM devices. Before provisioning the CloudHSM devices, you create the SSH key pair so that you can provide the public key to the CloudHSM during setup. The private key remains on your client instance to complete the authentication process. You can generate the key pair on any computer, as long as you ensure the client instance has the private key copied to it. You can also create the key pair on Linux or Windows. I go over both processes in this section of this post.

In Linux, you will use the ssh-keygen command. By typing just this command into the terminal window, you will receive output similar to the following.

$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/user/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/user/.ssh/id_rsa.
Your public key has been saved in /home/user/.ssh/id_rsa.pub.
The key fingerprint is: df:c4:49:e9:fe:8e:7b:eb:28:d5:1f:72:82:fb:f2:69
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|             .   |
|            o    |
|           + .   |
|        S   *.   |
|         . =.o.o |
|          ..+ +..|
|          .o Eo .|
|           .OO=. |
+-----------------+

In Windows, use PuTTYgen to create your key pair:

  1. Start PuTTYgen. For Type of key to generate, select SSH-2 RSA.
  2. In the Number of bits in a generated key field, specify 2048.
  3. Click Generate.
  4. Move your mouse pointer around in the blank area of the Key section below the progress bar (to generate some randomness) until the progress bar is full.
  5. A private/public key pair has now been generated.
  6. In the Key comment field, type a name for the key pair that you will remember.
  7. Click Save public key and name your file.
  8. Click Save private key and name your file. It is imperative that you do not lose this key, so make sure to store it somewhere safe.
  9. Right-click the text field labeled Public key for pasting into OpenSSH authorized_keys file and choose Select All.
  10. Right-click again in the same text field and choose Copy.

The following screenshot shows what the PuTTYgen output will look like after you have created the key pair.

You must convert the keys created by PuTTYgen to OpenSSH format for use with other clients by using the following command.

ssh-keygen -i -f puttygen_key > openssh_key

The public key will be used to provision the CloudHSM device and the private key will be stored on the client instance to authenticate the SSH sessions. The public SSH key will look something like the following. If it does not, it is not in the correct format and must be converted using the preceding procedure.

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA6bUsFjDSFcPC/BZbIAv8cAR5syJMB GiEqzFOIEHbm0fPkkQ0U6KppzuXvVlc2u7w0mg
PMhnkEfV6j0YBITu0Rs8rNHZFJs CYXpdoPxMMgmCf/FaOiKrb7+1xk21q2VwZyj13GPUsCxQhRW7dNidaaYTf14sbd9A qMUH4UOUjs
27MhO37q8/WjV3wVWpFqexm3f4HPyMLAAEeExT7UziHyoMLJBHDKMN7 1Ok2kV24wwn+t9P/Va/6OR6LyCmyCrFyiNbbCDtQ9JvCj5
RVBla5q4uEkFRl0t6m9 XZg+qT67sDDoystq3XEfNUmDYDL4kq1xPM66KFk3OS5qeIN2kcSnQ==
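The format requirement above can also be sanity-checked programmatically. The following Python sketch is my own illustration (the helper name is not from this post): it verifies that a public key line is in OpenSSH RSA format by checking for the `ssh-rsa` prefix both in the text and inside the decoded base64 blob.

```python
import base64
import struct

def is_openssh_rsa_public_key(line: str) -> bool:
    """Return True if line looks like an OpenSSH-format RSA public key.

    An OpenSSH public key line is "ssh-rsa <base64-blob> [comment]";
    the decoded blob itself begins with the length-prefixed string
    "ssh-rsa", which this check also verifies.
    """
    parts = line.strip().split()
    if len(parts) < 2 or parts[0] != "ssh-rsa":
        return False
    try:
        blob = base64.b64decode(parts[1], validate=True)
    except Exception:
        return False
    if len(blob) < 4:
        return False
    (name_len,) = struct.unpack(">I", blob[:4])
    return blob[4:4 + name_len] == b"ssh-rsa"
```

A PuTTYgen-native key file (which begins with `---- BEGIN SSH2 PUBLIC KEY ----`) fails this check and needs the `ssh-keygen -i` conversion described earlier.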

Whether you are saving the private key on your local computer or moving it to the client instance, you must ensure that the file permissions are correct. You can do this by running the following commands (throughout this post, be sure to replace placeholder content with your own values). The first command sets the necessary permissions; the second command adds the private key to the authentication agent.

$ chmod 600 ~/.ssh/<private_key_file>
$ ssh-add ~/.ssh/<private_key_file>

Set up the AWS CLI tools

Now that you have your SSH key pair ready, you can set up the AWS CLI tools so that you can provision and manage your CloudHSM devices. If you used the CloudFormation template or the CloudHSM AMI to set up your client instance, you already have the CLI installed. You can check this by running cloudhsm version at the command prompt; the resulting output should include "Version": "3.0.5", the current version. If you chose to use your own AMI and install the Luna SA software, you can install the CloudHSM CLI tools with the following steps.

$ wget https://s3.amazonaws.com/cloudhsm-software/CloudHsmCLI.egg
$ sudo easy_install-2.7 -s /usr/local/bin CloudHsmCLI.egg
$ cloudhsm version
      "Version": "<version>"

You must also set up file and directory ownership for your user on the client instance and the Chrystoki.conf file. The Chrystoki.conf file is the configuration file for the CloudHSM device. By default, CloudHSM devices come from the factory ready to store cryptographic keys and perform cryptographic operations on data, but they must be configured to connect to your client instances:

  1. On the client instance, set the owner and write permission on the Chrystoki.conf file.
$ sudo chown <owner> /etc/Chrystoki.conf
$ sudo chmod +w /etc/Chrystoki.conf

The <owner> can be either the user or a group the user belongs to (for example, ec2-user).

  2. On the client instance, set the owner of the Luna client directory:
$ sudo chown <owner> -R <luna_client_dir>

The <owner> should be the same as the <owner> of the Chrystoki.conf file. The <luna_client_dir> differs based on the version of the LunaSA client software installed. If these are new setups, use version 5.3 or newer; however, if you have older clients with version 5.1 installed, use version 5.1:

  • Client software version 5.3: /usr/safenet/lunaclient/
  • Client software version 5.1: /usr/lunasa/

You also must configure the AWS CLI tools with AWS credentials to use for the API calls. These can be set by config files, by passing the credentials in the commands, or by instance profile. The most secure option, which eliminates the need to hard-code credentials in a config file, is to use an instance profile on your client instance. All CLI commands in this post are performed on a client instance launched with an IAM role that has CloudHSM permissions. If you want to set your credentials in a config file instead, remember that each CLI command must include --profile <profilename>, with <profilename> being the name you assigned in the config file for these credentials. See Configuring the AWS CloudHSM CLI Tools for help with setting up the AWS CLI tools.

You will then set up a persistent SSH tunnel for all CloudHSM devices to communicate with the client instance. This is done by editing the ~/.ssh/config file. Replace <CloudHSM_ip_address> with the private IP of your CloudHSM device, and replace <private_key_file> with the file location of your SSH private key created earlier (for example, /home/user/.ssh/id_rsa).

Host <CloudHSM_ip_address>
User manager
IdentityFile <private_key_file>

Also necessary for the client instance to authenticate with the CloudHSM partitions or partition group are client certificates. Depending on the LunaSA client software you are using, the location of these files can differ. Again, if these are new setups, use version 5.3 or newer; however, if you have older clients with version 5.1 installed, use version 5.1:

  • Linux clients

    • Client software version 5.3: /usr/safenet/lunaclient/cert
    • Client software version 5.1: /usr/lunasa/cert
  • Windows clients

    • Client software version 5.3: %ProgramFiles%\SafeNet\LunaClient\cert
    • Client software version 5.1: %ProgramFiles%\LunaSA\cert

To create the client certificates, you can use the OpenSSL Toolkit or the LunaSA client-side vtl commands. The OpenSSL Toolkit is a program that allows you to manage TLS (Transport Layer Security) and SSL (Secure Sockets Layer) protocols. It is commonly used to create SSL certificates for secure communication between internal network devices. The LunaSA client-side vtl commands are installed on your client instance along with the Luna software. If you used either CloudFormation or the CloudHSM AMI, the vtl commands are already installed for you. If you chose to launch a different AMI, you can download the Luna software. After you download the software, run the command linux/64/install.sh as root on a Linux instance and install the Luna SA option. If you install the software on a Windows instance, run the command windows\64\LunaClient.msi to install the Luna SA option. I show certificate creation with both the OpenSSL Toolkit and LunaSA in the following section.

OpenSSL Toolkit

      $ openssl genrsa -out <luna_client_cert_dir>/<client_name>Key.pem 2048
      $ openssl req -new -x509 -days 3650 -key <luna_client_cert_dir>/<client_name>Key.pem -out <client_name>.pem

The <luna_client_cert_dir> is the LunaSA Client certificate directory on the client and the <client_name> can be whatever you choose.


LunaSA

      $ sudo vtl createCert -n <client_name>

The output of the preceding LunaSA command will be similar to the following.

Private Key created and written to:


Certificate created and written to:


You will need these key file locations later on, so make sure you write them down or save them to a file. One last thing to do at this point is create the client Amazon Resource Name (ARN), which you do by running the following command.

$ cloudhsm create-client --certificate-file <luna_client_cert_dir>/<client_name>.pem

      "ClientARN": "<client_arn>",
      "RequestId": "<request_id>"

Also write down the client ARN in a safe location because you will need it when registering your client instances to the HA partition group.

Provision your CloudHSM devices

Now for the fun and expensive part. Always remember that for each CloudHSM device you provision to your production environment, there is an upfront fee of $5,000. Because you need more than one CloudHSM device to set up an HA partition group, provisioning two CloudHSM devices to production will cost an upfront fee of $10,000.

If this is your first time trying out CloudHSM, you can have a two-week trial provisioned for you at no cost. The only cost will occur if you decide to keep your CloudHSM devices and move them into production. If you are unsure of the usage in your company, I highly suggest doing a trial first. You can open a support case requesting a trial at any time. You must have a paid support plan to request a CloudHSM trial.

To provision the two CloudHSM devices, SSH into your client instance and run the following CLI command.

$ cloudhsm create-hsm \
--subnet-id <subnet_id> \
--ssh-public-key-file <public_key_file> \
--iam-role-arn <iam_role_arn> \
--syslog-ip <syslog_ip_address>

The response should resemble the following.

      "CloudHSMArn": "<CloudHSM_arn>",
      "RequestId": "<request_id>"

Make note of each CloudHSM ARN because you will need them to initialize the CloudHSM devices and later add them to the HA partition group.

Initialize the CloudHSM devices

Configuring your CloudHSM devices, or initializing them as the process is formally called, is what allows you to set up the configuration files, certificate files, and passwords on the CloudHSM itself. Because you already have your CloudHSM ARNs from the previous section, you can run the describe-hsm command to get the EniId and the EniIp of the CloudHSM devices. Your results should be similar to the following.

$ cloudhsm describe-hsm --hsm-arn <CloudHSM_arn>
     "EniId": "<eni_id>",   
     "EniIp": "<eni_ip>",    
     "CloudHSMArn": "<CloudHSM_arn>",    
     "IamRoleArn": "<iam_role_arn>",    
     "Partitions": [],    
     "RequestId": "<request_id>",    
     "SerialNumber": "<serial_number>",    
     "SoftwareVersion": "5.1.3-1",    
     "SshPublicKey": "<public_key_text>",    
     "Status": "<status>",    
     "SubnetId": "<subnet_id>",    
     "SubscriptionStartDate": "2014-02-05T22:59:38.294Z",    
     "SubscriptionType": "PRODUCTION",    
     "VendorName": "SafeNet Inc."

Now that you know the EniId of each CloudHSM, you need to apply the CloudHSM security group to them. This ensures that connections can occur from any instance with the client security group assigned. When a trial is provisioned for you, or when you provision CloudHSM devices yourself, the default security group of the VPC is automatically assigned to the ENI. You must change this to the security group that permits ingress on ports 22 and 1792 from your client instance.

To apply a CloudHSM security group to an EniId:

  1. Go to the EC2 console, and choose Network Interfaces in the left pane.
  2. Select the EniId of the CloudHSM.
  3. From the Actions drop-down list, choose Change Security Groups. Choose the security group for your CloudHSM device, and then click Save.

To complete the initialization process, you must ensure a persistent SSH connection is in place from your client to the CloudHSM. Remember the ~/.ssh/config file you edited earlier? Now that you have the IP address of the CloudHSM devices and the location of the private SSH key file, go back and fill in that config file’s parameters by using your favorite text editor.

Now, initialize using the initialize-hsm command with the information you gathered from the provisioning steps. The placeholder values in the following example should be replaced with your own naming and password conventions during the initialization of the CloudHSM devices.

$ cloudhsm initialize-hsm \
--hsm-arn <CloudHSM_arn> \
--label <label> \
--cloning-domain <cloning_domain> \
--so-password <so_password>

The <label> is a unique name for the CloudHSM device that should be easy to remember. You can also use this name as a descriptive label that tells what the CloudHSM device is for. The <cloning_domain> is a secret used to control cloning of key material from one CloudHSM to another. This can be any unique name that fits your company’s naming conventions. Examples could be exampleproduction or exampledevelopment. If you are going to set up an HA partition group environment, the <cloning_domain> must be the same across all CloudHSMs. The <so_password> is the security officer password for the CloudHSM device, and for ease of remembrance, it should be the same across all devices as well. It is important you use passwords and cloning domain names that you will remember, because they are unrecoverable and the loss of them means loss of all data on a CloudHSM device. For your use, we do supply a Password Worksheet if you want to write down your passwords and store the printed page in a secure place.

Configure the client instance

Configuring the client instance is important because it is the secure link between you, your applications, and the CloudHSM devices. The client instance opens a secure channel to the CloudHSM devices and sends all requests over this channel so that the CloudHSM device can perform the cryptographic operations and key storage. Because you already have launched the client instance and mostly configured it, the only step left is to create the Network Trust Link (NTL) between the client instance and the CloudHSM. For this, we will use the LunaSA vtl commands again.

  1. Copy the server certificate from the CloudHSM to the client.
$ scp -i ~/.ssh/<private_key_file> manager@<CloudHSM_ip_address>:server.pem .
  2. Register the CloudHSM certificate with the client.
$ sudo vtl addServer -n <CloudHSM_ip_address> -c server.pem
New server <CloudHSM_ip_address> successfully added to server list.
  3. Copy the client certificate to the CloudHSM.
$ scp -i ~/.ssh/<private_key_file> <client_cert_directory>/<client_name>.pem manager@<CloudHSM_ip_address>:
  4. Connect to the CloudHSM.
$ ssh -i ~/.ssh/<private_key_file> manager@<CloudHSM_ip_address>
  5. Register the client.
lunash:> client register -client <client_id> -hostname <client_name>

The <client_id> and <client_name> should be the same for ease of use, and this should be the same as the name you used when you created your client certificate.

  6. On the CloudHSM, log in with the SO password.
lunash:> hsm login
  7. Create a partition on each CloudHSM (use the same name for ease of remembrance).
lunash:> partition create -partition <name> -password <partition_password> -domain <cloning_domain>

The <partition_password> does not have to be the same as the SO password, and for security purposes, it should be different.

  8. Assign the client to the partition.
lunash:> client assignPartition -client <client_id> -partition <partition_name>
  9. Verify that the partition assignment went correctly.
lunash:> client show -client <client_id>
  10. Log in to the client and verify it has been properly configured.
$ vtl verify
The following Luna SA Slots/Partitions were found:
Slot    Serial #         Label
====    =========        ============
1      <serial_num1>     <partition_name>
2      <serial_num2>     <partition_name>

You should see an entry for each partition created on each CloudHSM device. This step lets you know that the CloudHSM devices and client instance were properly configured.

The partitions created and assigned via the previous steps are for testing purposes only and will not be used in the HA partition group setup. The HA partition group workflow will automatically create a partition on each CloudHSM device for its purposes. At this point, you have created the client and at least two CloudHSM devices. You also have set up and tested for validity the connection between the client instance and the CloudHSM devices. The next step is to ensure fault tolerance by setting up the HA partition group.

Set up the HA partition group for fault tolerance

Now that you have provisioned multiple CloudHSM devices in your account, you will add them to an HA partition group. As I explained earlier in this post, an HA partition group is a virtual partition that represents a group of partitions distributed over many physical CloudHSM devices for HA. Automatic recovery is also a key factor in ensuring HA and data integrity across your HA partition group members. If you followed the previous procedures in this post, setting up the HA partition group should be relatively straightforward.

Create the HA partition group

First, you will create the actual HA partition group itself. Using the CloudHSM CLI on your client instance, run the following command to create the HA partition group and name it per your company’s naming conventions. In the following command, replace <label> with the name you chose.

$ cloudhsm create-hapg --group-label <label>

Register the CloudHSM devices with the HA partition group

Now, add the already initialized CloudHSM devices to the HA partition group. You will need to run the following command for each CloudHSM device you want to add to the HA partition group.

$ cloudhsm add-hsm-to-hapg \
--hsm-arn <CloudHSM_arn> \
--hapg-arn <hapg_arn> \
--cloning-domain <cloning_domain> \
--partition-password <partition_password> \
--so-password <so_password>

You should see output similar to the following after each successful addition to the HA partition group.

      "Status": "Addition of CloudHSM <CloudHSM_arn> to HA partition group <hapg_arn> successful"

Register the client with the HA partition group

The last step is to register the client with the HA partition group. You will need the client ARN from earlier in the post, and you will use the CloudHSM CLI command register-client-to-hapg to complete this process.

$ cloudhsm register-client-to-hapg \
--client-arn <client_arn> \
--hapg-arn <hapg_arn>
      "Status": "Registration of the client <client_arn> to the HA partition group <hapg_arn> successful"

After you register the client with the HA partition group, you must retrieve the client configuration file and the server certificates. You do this by using the get-client-configuration AWS CLI command.

$ cloudhsm get-client-configuration \
--client-arn <client_arn> \
--hapg-arn <hapg_arn> \
--cert-directory <server_cert_location> \
--config-directory /etc/

The configuration file has been copied to /etc/
The server certificate has been copied to <server_cert_location>

The <server_cert_location> will differ depending on the LunaSA client software you are using:

  • Client software version 5.3: /usr/safenet/lunaclient/cert/server
  • Client software version 5.1: /usr/lunasa/cert/server

Lastly, to verify the client configuration, run the following LunaSA vtl command.

$ vtl haAdmin show

In the output, you will see a heading, HA Group and Member Information. Ensure that the number of group members equals the number of CloudHSM devices you added to the HA partition group. If the number does not match what you have provisioned, you might have missed a step in the provisioning process. Going back through the provisioning process usually repairs this. However, if you still encounter issues, opening a support case is the quickest way to get assistance.

Another way to verify the HA partition group setup is to check the /etc/Chrystoki.conf file for output similar to the following.

VirtualToken = {
   VirtualToken00Label = hapg1;
   VirtualToken00SN = 1529127380;
   VirtualToken00Members = 475256026,511541022;
}
HASynchronize = {
   hapg1 = 1;
}
HAConfiguration = {
   reconnAtt = -1;
   AutoReconnectInterval = 60;
   HAOnly = 1;
}


You have now completed the process of provisioning CloudHSM devices, the client instance for connection, and your HA partition group for fault tolerance. You can begin using an application of your choice to access the CloudHSM devices for key management and encryption. By accessing CloudHSM devices via the HA partition group, you ensure that all traffic is load balanced between all backing CloudHSM devices. The HA partition group will ensure that each CloudHSM has identical information so that it can respond to any request issued.

Now that you have an HA partition group set up with automatic recovery, if a CloudHSM device fails, the device will attempt to recover itself, and all traffic will be rerouted to the remaining CloudHSM devices in the HA partition group so as not to interrupt traffic. After recovery (manual or automatic), all data will be replicated across the CloudHSM devices in the HA partition group to ensure consistency.

If you have questions about any part of this blog post, please post them on the IAM forum.

– Tracy

How to Audit Cross-Account Roles Using AWS CloudTrail and Amazon CloudWatch Events

Post Syndicated from Michael Raposa original https://blogs.aws.amazon.com/security/post/Tx3FNU2RFN0BW3W/How-to-Audit-Cross-Account-Roles-Using-AWS-CloudTrail-and-Amazon-CloudWatch-Even

You can use AWS Identity and Access Management (IAM) roles to grant access to resources in your AWS account, another AWS account you own, or a third-party account. For example, you may have an AWS account used for production resources and a separate AWS account for development resources. Throughout this post, I will refer to these as the Production account and the Development account. Developers often want some degree of access to production resources in the Production account. To control access, you would create a role in the Production account that allows restricted access to resources in that account. Developers can assume the role and access resources in the Production account.

In this blog post, I will walk through the process of auditing access across AWS accounts by a cross-account role. This process links API calls that assume a role in one account to resource-related API calls in a different account. To develop this process, I will use AWS CloudTrail, Amazon CloudWatch Events, and AWS Lambda functions. When complete, the process will provide a full audit chain from end user to resource access across separate AWS accounts.

The following diagram shows the process workflow of the solution discussed in this post.

In this workflow:

  1. An IAM user connects to the AWS Security Token Service (AWS STS) and assumes a role in the Production account.
  2. AWS STS returns a set of temporary credentials.
  3. The IAM user uses the set of temporary credentials to access resources and services in the production account.

You can extend this model to third-party accounts. For example, you may want to give a managed services provider (MSP) limited access to your AWS account to manage your resources. This scenario is similar to the first in that the MSP assumes a role in your AWS account to gain access to resources.

For more information about these two use cases, see Tutorial: Delegate Access Across AWS Accounts Using IAM Roles and How to Use an External ID When Granting Access to Your AWS Resources to a Third Party.

Solution overview

Security teams often want to track and audit who is assuming a role and what specifically that person did to resources after they assumed the role. CloudTrail provides a partial solution to this problem by recording AWS API calls for your account and delivering log files to you. The recorded information includes the identity of the API caller, the time of the API call, the source IP address of the API caller, the request parameters, and the response elements returned by the AWS service.

However, as I will explain below, there are some missing pieces. Here is a sample CloudTrail record from the Production account with the fictitious account ID 999999999999.

   "arn":"arn:aws:sts::999999999999:assumed-role/CrossAccountTest/TestSessionCrossAccount",
   "sourceIPAddress":"AWS Internal",
   "userAgent":"[Boto3/1.3.0 Python/3.4.4 Darwin/15.4.0 Botocore/1.4.8 Resource]",

In the preceding example, I can see that someone used an AssumedRole credential with the CrossAccountTest role to make the ListBuckets API call. However, what is missing from the CloudTrail record is information about the user that assumed the role in the first place. I need this information to determine who made the ListBuckets API call.

The information about who assumed the role is only available in the Development account. The following is a sample CloudTrail record from that account.

{
    "version": "0",
    "id": "c204c067-a376-47a8-a760-f0bf97b89aae",
    "detail-type": "AWS API Call via CloudTrail",
    "source": "aws.sts",
    "account": "111111111111",
    "time": "2016-04-05T20:39:37Z",
    "region": "us-east-1",
    "resources": [],
    "detail": {
        "eventVersion": "1.04",
        "userIdentity": {
            "type": "IAMUser",
            "principalId": "AIDAIDVUOOO7V6R6HKL6E",
            "arn": "arn:aws:iam::111111111111:user/jsmith",
            "accountId": "111111111111",
            "accessKeyId": "AKIAJ2DZP3QVQ3D6VJBQ",
            "userName": "jsmith"
        },
        "eventTime": "2016-04-05T20:39:37Z",
        "eventSource": "sts.amazonaws.com",
        "eventName": "AssumeRole",
        "awsRegion": "global",
        "sourceIPAddress": "",
        "userAgent": "Boto3/1.3.0 Python/3.4.4 Darwin/15.4.0 Botocore/1.4.8",
        "requestParameters": {
            "roleArn": "arn:aws:iam::999999999999:role/CrossAccountTest",
            "roleSessionName": "TestSessionCrossAccount",
            "externalId": "3414"
        },
        "responseElements": {
            "credentials": {
                "accessKeyId": "ASIAJJQOJ64OAM7C65AA",
                "expiration": "Apr 5, 2016 9:39:37 PM",
                "sessionToken": "FQoDYXdzEH4aDLnt4a+IhSowXRB+0iLXATIl"
            },
            "assumedRoleUser": {
                "assumedRoleId": "AROAICKBBQTXWLOLJLHW4:TestSessionCrossAccount",
                "arn": "arn:aws:sts::999999999999:assumed-role/CrossAccountTest/TestSessionCrossAccount"
            }
        },
        "requestID": "83a263cd-fb6e-11e5-88cf-c19f9d99b57d",
        "eventID": "e5a09871-dc41-4979-8cd3-e6e0fdf5ebaf",
        "eventType": "AwsApiCall"
    }
}
In this record, I see that the user jsmith called the AssumeRole API for the role CrossAccountTest. This role exists in the Production account. We now know user jsmith assumed the role and then later called ListBuckets in the Production account.

Because multiple users are able to assume the same role, I need to determine that it was jsmith and not some other user. To do this, make careful note of the accessKeyId (highlighted in green in the preceding record). AWS STS provided this key to jsmith when he assumed the role. This is a temporary set of credentials that jsmith then used in the Production account. Notice that the two accessKeyIds are the same in the two CloudTrail records. This pairing is how I linked the ListBucket action to the user performing the action, jsmith.
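The pairing just described can be expressed in a few lines of code. This Python sketch is illustrative only (the function is mine, not from the post): it links an STS AssumeRole record to a later API call by comparing the issued temporary accessKeyId with the one used.

```python
def records_are_linked(assume_role_record: dict, api_call_record: dict) -> bool:
    """Return True if the API call was made with the temporary credentials
    issued by the given AssumeRole event (matched on accessKeyId)."""
    # The Development-account record carries the issued key in responseElements.
    issued_key = (assume_role_record
                  .get("responseElements", {})
                  .get("credentials", {})
                  .get("accessKeyId"))
    # The Production-account record carries the used key in userIdentity.
    used_key = api_call_record.get("userIdentity", {}).get("accessKeyId")
    return issued_key is not None and issued_key == used_key
```

Applied to the two sample records, the shared key ASIAJJQOJ64OAM7C65AA is what ties jsmith's AssumeRole call to the later ListBuckets call.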

Now that I have all the information I need, I can look at how a security operations person in the Production account can easily link the two records. The previous method requires manual reviews of CloudTrail records. This becomes complicated if I extend the use case to third parties. In this case, instead of a Development account that the security professional may have access to, a third party owns the account and will not allow access to their internal CloudTrail records.

Therefore, how do you transfer the CloudTrail records for this role from one account to another? I will use the majority of this blog post to walk through this solution. The solution involves using CloudWatch Events in the Development account to publish CloudTrail records to Amazon Simple Notification Service (Amazon SNS). The SNS-Cross-Account Lambda function in the Production account then subscribes to this Amazon SNS topic and receives the CloudTrail records related to assuming the specific role in question. Using the S3-Cross-Account Lambda function in the Production account, I parse the CloudTrail logs, looking for relevant CloudTrail events related to the specific role. Finally, for auditing and reporting, I store the two audit records in Amazon DynamoDB where the two records are linked together.

The following diagram shows the entire process workflow.

In this workflow:

  1. An IAM user connects to AWS STS and assumes a role in the Production account.
  2. AWS STS returns a set of temporary credentials.
  3. The IAM user uses the set of temporary credentials to access resources and services in the Production account.
  4. CloudWatch Events detects the event emitted by CloudTrail when the AssumeRole API is called in Step 1.
  5. CloudWatch Events then publishes a message to SNS.
  6. SNS in the Development account then triggers the SNS-Cross-Account Lambda function in the Production account.
  7. The SNS-Cross-Account Lambda function extracts the CloudTrail record and saves the information to DynamoDB. The information saved includes information about the IAM user who assumed the role in the Production account.
  8. When the IAM user assumed the role (Step 2) and uses temporary credentials to access resources in the Production account, a record of the API call is logged to CloudTrail.
  9. CloudTrail saves the records to an Amazon S3 bucket.
  10. Using S3 event notifications, S3 triggers the S3-Cross-Account Lambda function each time CloudTrail saves records to S3.
  11. The S3-Cross-Account Lambda function downloads the CloudTrail records from S3, unzips them, and parses the logs for records related to the role in the Production account. The S3-Cross-Account Lambda function saves in DynamoDB any CloudTrail-relevant records. The information saved will be used to link to the information saved in Step 8 and to track to the IAM user who assumed the role.
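The filtering at the heart of step 11 can be sketched as follows. This is my own illustrative Python, not the post's actual Lambda code, and it omits the boto3 S3 download: CloudTrail delivers gzipped JSON objects containing a top-level Records list, which is filtered for events performed with the cross-account role's temporary credentials.

```python
import gzip
import json

# Fragment of the assumed-role ARN for the role used in this walkthrough.
ROLE_ARN_FRAGMENT = "assumed-role/CrossAccountTest"

def extract_role_events(log_object_bytes: bytes,
                        arn_fragment: str = ROLE_ARN_FRAGMENT) -> list:
    """Unzip one CloudTrail log object and return only the records whose
    caller identity ARN references the cross-account role."""
    body = json.loads(gzip.decompress(log_object_bytes))
    return [
        record for record in body.get("Records", [])
        if arn_fragment in record.get("userIdentity", {}).get("arn", "")
    ]
```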

Note that I could have built a similar workflow using CloudWatch Logs destinations. For more information, see Cross-Account Log Data Sharing with Subscriptions.

Deploying the solution


You will need two AWS accounts for the following walkthrough. For simplicity, the two accounts will be labeled Production and Development with fictitious AWS account IDs of 999999999999 and 111111111111, respectively. These two accounts will be abbreviated prod and dev.

This walkthrough assumes that you have configured separate AWS CLI named profiles for each of these two accounts. This is important because you will be switching between accounts, and using profiles will make this much easier. For more information, see Named Profiles.

The walkthrough

Step 1: Create the cross-account role

I put the following trust policy document in a file named cross_account_role_trust_policy.json. (Throughout the rest of this blog post, be sure to replace placeholder values, such as account IDs and the ExternalId, with your own values.)

   "Version": "2012-10-17",
   "Statement": [
       "Effect": "Allow",
       "Principal": {
         "AWS": "arn:aws:iam::111111111111:root"
       "Action": "sts:AssumeRole",
       "Condition": {
         "StringEquals": {
           "sts:ExternalId": "3414"

Note that I am specifying an ExternalId to further restrict who can assume the role. With this restriction in place, to assume the role a user must be a member of the Development account and know the ExternalId value.
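The effect of this trust policy can be sketched in plain Python. This is an illustrative evaluator only, not IAM's actual policy engine; the helper name and the simplified principal check are mine:

```python
# Minimal sketch of how the trust policy's StringEquals condition on
# sts:ExternalId gates the AssumeRole call. Purely illustrative.

TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "3414"}},
    }],
}

def may_assume(policy, caller_account, external_id):
    """Return True if the lone statement allows this AssumeRole request."""
    stmt = policy["Statement"][0]
    principal_ok = caller_account in stmt["Principal"]["AWS"]
    condition = stmt["Condition"]["StringEquals"]
    condition_ok = condition["sts:ExternalId"] == external_id
    return stmt["Effect"] == "Allow" and principal_ok and condition_ok

print(may_assume(TRUST_POLICY, "111111111111", "3414"))  # expected: True
print(may_assume(TRUST_POLICY, "111111111111", "9999"))  # wrong ExternalId
```

A caller from any other account, or one that does not present the shared ExternalId value, is denied.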

I then create the cross-account role in the prod account.

aws iam create-role --profile prod \
     --role-name CrossAccountTest \
     --assume-role-policy-document file://cross_account_role_trust_policy.json

Next, I need to attach a permissions policy to the role so that it can access resources in the Production account. Here, I attach the AWS managed policy AmazonS3ReadOnlyAccess. You can attach a more restrictive policy to meet your organization's needs.

aws iam attach-role-policy --profile prod \
     --role-name CrossAccountTest \
     --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess

Step 2: Create a DynamoDB table

I create the DynamoDB table in the prod account to store the CloudTrail records from both the Development and Production accounts. I use eventID as the HASH key and eventTime, the time the event occurred, as the RANGE key. I also save the accessKeyId as an attribute, which will make reporting and event correlation easier.

aws dynamodb create-table --profile prod \
     --table-name CrossAccountAuditing \
     --attribute-definitions \
         AttributeName=eventID,AttributeType=S \
         AttributeName=eventTime,AttributeType=S \
     --key-schema AttributeName=eventID,KeyType=HASH AttributeName=eventTime,KeyType=RANGE \
     --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
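The Lambda functions created later will write items shaped to match this key schema. As a quick illustration (the helper name and the sample event are mine), here is the item layout with eventID as the HASH key and eventTime as the RANGE key:

```python
import json

def to_audit_item(cw_event):
    # Build a DynamoDB item matching the table's key schema:
    # eventID is the HASH key, eventTime the RANGE key, and
    # accessKeyId is kept as a plain attribute for correlation.
    detail = cw_event["detail"]
    return {
        "eventID": detail["eventID"],
        "eventTime": detail["eventTime"],
        "accessKeyId": detail["responseElements"]["credentials"]["accessKeyId"],
        "record": json.dumps(cw_event),
    }

sample = {
    "detail": {
        "eventID": "e5a09871-dc41-4979-8cd3-e6e0fdf5ebaf",
        "eventTime": "2016-04-05T20:39:37Z",
        "responseElements": {"credentials": {"accessKeyId": "ASIAJJQOJ64OAM7C65AA"}},
    }
}
item = to_audit_item(sample)
print(sorted(item))  # ['accessKeyId', 'eventID', 'eventTime', 'record']
```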

Step 3: Create the SNS topic

CloudWatch Events will send a record to an SNS topic. This SNS topic will then publish messages to the SNS-Cross-Account Lambda function in the Production account. To create the SNS topic in the dev account, I run the following command.

aws sns create-topic --name CrossAccountSNS --profile dev

Step 4: Create CloudWatch rule

The CloudWatch rule will detect the AssumeRole API call in the Development account and then publish details for this API call to the SNS topic I just created. To do this, I first create the CloudWatch rule.

aws events put-rule --profile dev \
           --name CrossAccountRoleToProd \
           --event-pattern  file://cw_event_pattern.json \
           --state ENABLED \
           --description "Detects AssumeRole API calls and sends event to Production Account"

Here, I am referencing a JSON document that has the following rule-matching pattern. I put the document in a file named cw_event_pattern.json.

   "detail-type": [
     "AWS API Call via CloudTrail"
   "detail": {
     "eventSource": [
     "eventName": [
       "AssumeRole"     ],
     "requestParameters": {
       "externalId": [

The pattern matches on an event source of AWS STS, the AssumeRole API call, and the externalId of 3414, which is passed as a parameter to the AssumeRole call. All of these filters must match for the rule to trigger.
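The matching semantics can be approximated in a few lines of Python: each listed value in the pattern is a set of alternatives, and nested objects are matched recursively. This is a simplified sketch, not the actual CloudWatch Events implementation:

```python
# Simplified re-implementation of CloudWatch Events pattern matching,
# to show why all three filters must line up. Illustrative only.

def matches(pattern, event):
    for key, expected in pattern.items():
        if key not in event:
            return False
        if isinstance(expected, dict):
            if not matches(expected, event[key]):
                return False
        else:  # a list of allowed values
            if event[key] not in expected:
                return False
    return True

PATTERN = {
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["sts.amazonaws.com"],
        "eventName": ["AssumeRole"],
        "requestParameters": {"externalId": ["3414"]},
    },
}

event = {
    "detail-type": "AWS API Call via CloudTrail",
    "detail": {
        "eventSource": "sts.amazonaws.com",
        "eventName": "AssumeRole",
        "requestParameters": {"externalId": "3414", "roleSessionName": "x"},
    },
}
print(matches(PATTERN, event))  # True
```

Change any one of the three values (for example, the externalId) and the event no longer matches.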

I could have matched on other fields as well. The following is a sample CloudWatch Events record.

     "version": "0",
     "id": "c204c067-a376-47a8-a760-f0bf97b89aae",
     "detail-type": "AWS API Call via CloudTrail",
     "source": "aws.sts",
     "account": "844014150832",
     "time": "2016-04-05T20:39:37Z",
     "region": "us-east-1",
     "resources": [],
     "detail": {
         "eventVersion": "1.04",
         "userIdentity": {
             "type": "IAMUser",
             "principalId": "AIDAIDVUOOO7V6R6HKL6E",
             "arn": "arn:aws:iam::111111111111:user/jsmith",
             "accountId": "111111111111",
             "accessKeyId": "AKIAJ2DZP3QVQ3D6VJBQ",
             "userName": "jsmith "
         "eventTime": "2016-04-05T20:39:37Z",
         "eventSource": "sts.amazonaws.com",
         "eventName": "AssumeRole",
         "awsRegion": "global",
         "sourceIPAddress": "",
         "userAgent": "Boto3/1.3.0 Python/3.4.4 Darwin/15.4.0 Botocore/1.4.8",
         "requestParameters": {
             "roleArn": "arn:aws:iam::999999999999:role/CrossAccountTest",
             "roleSessionName": "TestSessionCrossAccount",
             "externalId": "3414"
         "responseElements": {
             "credentials": {
                 "accessKeyId": "ASIAJJQOJ64OAM7C65AA",
                 "expiration": "Apr 5, 2016 9:39:37 PM",
                 "sessionToken": "FQoDYXdzEH4aDLnt4a+IhSowXRB+0iLXATIl"
             "assumedRoleUser": {
                 "assumedRoleId": "AROAICKBBQTXWLOLJLHW4:TestSessionCrossAccount",
                 "arn": "arn:aws:sts::9999999999999:assumed-role/CrossAccountTest/TestSessionCrossAccount"
         "requestID": "83a263cd-fb6e-11e5-88cf-c19f9d99b57d",
         "eventID": "e5a09871-dc41-4979-8cd3-e6e0fdf5ebaf",
         "eventType": "AwsApiCall"

Instead of matching on the externalId, I could have matched on the specific role and captured only events related to that role. To configure such a rule, I would have changed the matching event pattern to the following.

   "detail-type": [
     "AWS API Call via CloudTrail"
   "detail": {
     "eventSource": [
       "sts.amazonaws.com"     ],
     "eventName": [
       "AssumeRole"     ],
     "requestParameters": {
       "roleArn": [

Now that I have created the rule, I need to add a target to the rule. This tells the CloudWatch rule where to send matching events. In this case, I am sending matching events to the SNS topic that I created previously.

aws events put-targets --profile dev \
           --rule CrossAccountRoleToProd \
           --targets Id=1,Arn=arn:aws:sns:us-east-1:111111111111:CrossAccountSNS

Step 5: Create the Lambda-SNS function

I will subscribe the SNS-Cross-Account Lambda function in the Production account to the SNS topic in the Development account. The SNS-Cross-Account function will process CloudWatch Events that are triggered when the role is assumed.

The SNS-Cross-Account Lambda function parses the incoming SNS message and then saves the CloudTrail record to DynamoDB. Save this code as lambda_function.py. Then zip that file into LambdaWithSNS.zip.

import json
import logging
import boto3

DYNAMODB_TABLE_NAME = "CrossAccountAuditing"

logger = logging.getLogger()

DYNAMO = boto3.resource("dynamodb")

logger.info('Loading function')

def save_record(record):
    """
    Save the record to DynamoDB
    :param record:
    """
    logger.info("Saving record to DynamoDB...")
    table = DYNAMO.Table(DYNAMODB_TABLE_NAME)
    table.put_item(
        Item={
            'accessKeyId': record['detail']['responseElements']['credentials']['accessKeyId'],
            'eventTime': record['detail']['eventTime'],
            'eventID': record['detail']['eventID'],
            'record': json.dumps(record)
        }
    )
    logger.info("Saved record to DynamoDB")

def lambda_handler(event, context):
    # Loop through records delivered by SNS
    for record in event['Records']:
        # Extract the SNS message from the record
        message = record['Sns']['Message']
        logger.info("SNS Message: {}".format(message))
        # The message body is the CloudWatch Events record as JSON
        save_record(json.loads(message))

Next, I need to create the execution role that Lambda will use when it runs. To do this, I first create the role.

aws iam create-role --profile prod \
     --role-name LambdaSNSExecutionRole \
     --assume-role-policy-document file://lambda_trust_policy.json

The trust policy for this role allows Lambda to assume the role. I put the following trust policy document in a file named lambda_trust_policy.json.

   "Version": "2012-10-17",
   "Statement": [
       "Sid": "",
       "Effect": "Allow",
       "Principal": {
         "Service": "lambda.amazonaws.com"
       "Action": "sts:AssumeRole"

When creating the following required access policy, I give the SNS-Cross-Account Lambda function the minimum rights required to save its logs to CloudWatch Logs and additional rights to the DynamoDB table. I save the following access policy document in a file named lambda_sns_access_policy.json.

     "Version": "2012-10-17",
     "Statement": [
             "Action": [
             "Effect": "Allow",
             "Resource": "arn:aws:logs:*:*:*"
             "Sid": "PutUpdateDeleteOnCrossAccountAuditing",
             "Effect": "Allow",
             "Action": [
             "Resource": "arn:aws:dynamodb:us-east-1:999999999999:table/CrossAccountAuditing"

I then create an access policy and attach it to the role in the prod account.

aws iam create-policy --profile prod \
     --policy-name LambdaSNSExecutionRolePolicy \
     --policy-document file://lambda_sns_access_policy.json 

aws iam attach-role-policy --profile prod \
     --role-name LambdaSNSExecutionRole \
     --policy-arn arn:aws:iam::999999999999:policy/LambdaSNSExecutionRolePolicy

Finally, I can create the SNS-Cross-Account Lambda function in the prod account.

aws lambda create-function --profile prod \
     --function-name SNS-Cross-Account \
     --runtime python2.7 \
     --role arn:aws:iam::999999999999:role/LambdaSNSExecutionRole \
     --handler lambda_function.lambda_handler \
     --description "SNS Cross Account Function" \
     --timeout 60 \
     --memory-size 128 \
     --zip-file fileb://LambdaWithSNS.zip

Step 6: Subscribe the Lambda function cross-account to SNS

I now need to subscribe the SNS-Cross-Account Lambda function that I just created to the SNS topic. This is usually a straightforward process. However, it is a bit more complicated than usual in this use case because the SNS topic and the SNS-Cross-Account Lambda function are in two different AWS accounts.

First, I need to add permission to the SNS topic in the dev account to allow access from the prod account.

aws sns add-permission --profile dev \
     --topic-arn arn:aws:sns:us-east-1:111111111111:CrossAccountSNS \
     --label lambda-access \
     --aws-account-id 999999999999 \
     --action-name Subscribe ListSubscriptionsByTopic Receive   

Next, I add a permission to the Lambda function in the prod account to allow the SNS topic in the dev account to invoke the function.

aws lambda add-permission --profile prod \
     --function-name SNS-Cross-Account \
     --statement-id SNS-Cross-Account \
     --action "lambda:InvokeFunction" \
     --principal sns.amazonaws.com \
     --source-arn arn:aws:sns:us-east-1:111111111111:CrossAccountSNS 

Finally, I subscribe the SNS-Cross-Account Lambda function to the SNS topic.

aws sns subscribe --profile prod \
     --topic-arn arn:aws:sns:us-east-1:111111111111:CrossAccountSNS  \
     --protocol lambda \
     --notification-endpoint arn:aws:lambda:us-east-1:999999999999:function:SNS-Cross-Account

Step 7: Create the Lambda-S3-CloudTrail function

CloudTrail logs are saved to S3, and each time logs are saved, an S3 event is generated.

I will create a second Lambda function called S3-Cross-Account, which will be executed by this S3 event.

The S3-Cross-Account function will parse the CloudTrail records that were saved to S3 and look for any relevant entries for the assumed role. Those relevant resource access–related entries will be stored in DynamoDB. These entries will be “linked” to the AssumeRole entries stored by the SNS-Cross-Account Lambda function.

The code of the S3-Cross-Account Lambda function follows. Edit the placeholder value in the CROSS_ACCOUNT_ROLE_ARN parameter, and save this code as lambda_function.py. Then zip that file into a file called LambdaWithS3.zip.

import json
import urllib
import boto3
import zlib
import logging

DYNAMODB_TABLE_NAME = "CrossAccountAuditing"
CROSS_ACCOUNT_ROLE_ARN = 'arn:aws:sts::999999999999:assumed-role/CrossAccountTest'

logger = logging.getLogger()

S3 = boto3.resource('s3')
DYNAMO = boto3.resource("dynamodb")

logger.info('Loading function')

def parse_s3_cloudtrail(bucket_name, key):
    """
    Parse the CloudTrail record and store it in DynamoDB
    :param bucket_name:
    :param key:
    """
    s3_object = S3.Object(bucket_name, key)
    # Download the file contents from S3 to memory
    payload_gz = s3_object.get()['Body'].read()
    # The CloudTrail record is gzipped by default so it must be decompressed
    payload = zlib.decompress(payload_gz, 16 + zlib.MAX_WBITS)
    # Convert the text JSON into a Python dictionary
    payload_json = json.loads(payload)
    # Loop through the records in the CloudTrail file
    for record in payload_json['Records']:
        # If a record matching the role is found, save it to DynamoDB
        if CROSS_ACCOUNT_ROLE_ARN in record['userIdentity']['arn']:
            logger.info("Access Key ID: {}".format(record['userIdentity']['accessKeyId']))
            logger.info("Record: {}".format(record))
            save_record(record)

def save_record(record):
    """
    Save record to DynamoDB
    :param record:
    """
    logger.info("Saving record to DynamoDB...")
    table = DYNAMO.Table(DYNAMODB_TABLE_NAME)
    table.put_item(
        Item={
            'accessKeyId': record['userIdentity']['accessKeyId'],
            'eventTime': record['eventTime'],
            'eventID': record['eventID'],
            'record': json.dumps(record)
        }
    )
    logger.info("Saved record to DynamoDB")

def lambda_handler(event, context):
    # Get the object from the S3 event
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = urllib.unquote_plus(record['s3']['object']['key']).decode('utf8')
        parse_s3_cloudtrail(bucket, key)
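The 16 + zlib.MAX_WBITS argument used in parse_s3_cloudtrail tells zlib to expect a gzip header, which is the format CloudTrail uses for its log files. A quick round trip with synthetic data (written in Python 3 here, unlike the python2.7 Lambda above) demonstrates the same decompression call:

```python
import gzip
import io
import json
import zlib

# Synthetic stand-in for a CloudTrail log file: a gzipped JSON document
records = {"Records": [{"eventName": "ListBuckets"}]}

buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
    gz.write(json.dumps(records).encode("utf-8"))
payload_gz = buf.getvalue()

# Same call as in the Lambda function: 16 + MAX_WBITS = gzip framing
payload = zlib.decompress(payload_gz, 16 + zlib.MAX_WBITS)
print(json.loads(payload) == records)  # True
```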

Next, I need to create the execution role that Lambda will use when it runs. First, I create the role.

aws iam create-role --profile prod \
     --role-name LambdaS3ExecutionRole \
     --assume-role-policy-document file://lambda_trust_policy.json

Note that the trust policy is the same one used for the SNS-Cross-Account Lambda function.

When creating the following required access policy, I give the S3-Cross-Account Lambda function the minimum rights required to save its logs to CloudWatch Logs, plus additional rights to the DynamoDB table and S3. For S3, I grant only read access to the bucket that holds the CloudTrail records. Your bucket name will be different, so edit the Resource ARN in the following document accordingly. I put the access policy document in a file named lambda_s3_access_policy.json.

     "Version": "2012-10-17",
     "Statement": [
             "Action": [
             "Effect": "Allow",
             "Resource": "arn:aws:logs:*:*:*"
           "Effect": "Allow",
           "Action": [
           "Resource": "arn:aws:s3:::cloudtrailbucket/*"
             "Sid": "PutUpdateDeleteOnCrossAccountAuditing",
             "Effect": "Allow",
             "Action": [
             "Resource": "arn:aws:dynamodb:us-east-1:999999999999:table/CrossAccountAuditing"

I then create an access policy and attach it to the role.

aws iam create-policy --profile prod \
     --policy-name LambdaS3ExecutionRolePolicy \
     --policy-document file://lambda_s3_access_policy.json
aws iam attach-role-policy --profile prod \
     --role-name LambdaS3ExecutionRole \
     --policy-arn arn:aws:iam::999999999999:policy/LambdaS3ExecutionRolePolicy

I next create the S3-Cross-Account function.

aws lambda create-function --profile prod \
     --function-name S3-Cross-Account \
     --runtime python2.7 \
     --role arn:aws:iam::999999999999:role/LambdaS3ExecutionRole \
     --handler lambda_function.lambda_handler \
     --description "S3 X Account Function" \
     --timeout 60 \
     --memory-size 128 \
     --zip-file fileb://LambdaWithS3.zip

Finally, I add an S3 event to the S3 CloudTrail bucket that will trigger the S3-Cross-Account function when new CloudTrail records are put in the bucket. To do this, I first add a permission allowing S3 to invoke the S3-Cross-Account function. You will need to change the source-arn to the CloudTrail bucket for your account.

aws lambda add-permission --profile prod \
     --function-name S3-Cross-Account \
     --statement-id Id-1 \
     --action "lambda:InvokeFunction" \
     --principal s3.amazonaws.com \
     --source-arn arn:aws:s3:::cloudtrailbucket \
     --source-account 999999999999

I put the following policy document in a file named notification.json.

   "CloudFunctionConfiguration": {
     "Id": "ObjectCreatedEvents",
     "Events": [ "s3:ObjectCreated:*" ],
     "CloudFunction": "arn:aws:lambda:us-east-1:999999999999:function:S3-Cross-Account"

In the configuration file, the CloudFunction is the ARN of the S3-Cross-Account function I just created.

Next, I add an S3 event that will be triggered when an object is added to the bucket. You will need to change the bucket name to the CloudTrail bucket for your account.

aws s3api put-bucket-notification --profile prod \
     --bucket cloudtrailbucket \
     --notification-configuration file://notification.json

Step 8: Test everything

To test that everything is working properly, the first step is to assume the role.

aws sts assume-role --profile dev \
     --role-arn "arn:aws:iam::999999999999:role/CrossAccountTest" \
     --role-session-name "CrossAccountTest" \
     --external-id 3414

The preceding command will output the temporary credentials that allow access to the Production account. Here is a sample of the output.

     "AssumedRoleUser": {
         "AssumedRoleId": "AROAJZRFWJRM4FHMGQ74K:CrossAccountTest",
         "Arn": "arn:aws:sts::999999999999:assumed-role/CrossAccountTest/CrossAccountTest"
     "Credentials": {
         "SecretAccessKey": "EXAMPLEWEl02dd2kXW6d/Z7F/voe0TNR/G2gD/fB",
         "SessionToken": "EXAMPLE//////////wEaDO441gL1nAGt9M3XjyLQAR",
         "Expiration": "2016-04-08T18:08:46Z",
         "AccessKeyId": "ASIAJPVGV2XRFLHSWW7Q"

Using the temporary credentials generated by the AssumeRole call, you can access the Production account and work with the S3 resources there, such as calling ListBuckets. You should start to see entries appear in the DynamoDB table, as shown in the following screenshot from the DynamoDB console.

Note how the accessKeyId is the same for both eventIDs. Opening the first eventID, I can see that this record is for the AssumeRole API call and that user jsmith is making the call.

When I open the second eventID, I see that jsmith has used the cross-account role to list the buckets in the Production account. Note how the accessKeyId is the same in both screenshots, indicating the link between the AssumeRole and the ListBuckets.
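What the screenshots illustrate can be expressed as a simple group-by on accessKeyId: the temporary access key issued by AssumeRole is the join key between the Development-account AssumeRole record and the later Production-account ListBuckets record. A minimal sketch, with made-up item data standing in for the DynamoDB rows:

```python
from collections import defaultdict

# Two audit items sharing one temporary access key, as in the screenshots
items = [
    {"accessKeyId": "ASIAJPVGV2XRFLHSWW7Q", "eventID": "id-1",
     "eventName": "AssumeRole"},
    {"accessKeyId": "ASIAJPVGV2XRFLHSWW7Q", "eventID": "id-2",
     "eventName": "ListBuckets"},
]

def correlate(items):
    """Group audit items by the temporary access key that links them."""
    sessions = defaultdict(list)
    for item in items:
        sessions[item["accessKeyId"]].append(item["eventName"])
    return dict(sessions)

print(correlate(items))  # {'ASIAJPVGV2XRFLHSWW7Q': ['AssumeRole', 'ListBuckets']}
```

Every API call made with the assumed role's credentials lands in the same group as the AssumeRole record that created them, which is exactly the audit trail this workflow provides.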


In this post, I have shown you how to provide end-to-end auditing of cross-account roles. With this method in place, you have a full audit trail whenever a user accesses your AWS resources via a cross-account role. To build this workflow, I relied heavily on Lambda, CloudTrail, and CloudWatch Events, and on SNS to exchange data across accounts.

If you have comments about this blog post, submit them in the “Comments” section below. If you have questions, please start a new thread on the CloudTrail forum.

– Michael

Tracking the Owner of Kickass Torrents

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2016/07/tracking_the_ow.html

Here’s the story of how it was done. First, a fake ad on torrent listings linked the site to a Latvian bank account, an e-mail address, and a Facebook page.

Using basic website-tracking services, Der-Yeghiayan was able to uncover (via a reverse DNS search) the hosts of seven apparent KAT website domains: kickasstorrents.com, kat.cr, kickass.to, kat.ph, kastatic.com, thekat.tv and kickass.cr. This dug up two Chicago IP addresses, which were used as KAT name servers for more than four years. Agents were then able to legally gain a copy of the server’s access logs (explaining why it was federal authorities in Chicago that eventually charged Vaulin with his alleged crimes).

Using similar tools, Homeland Security investigators also performed something called a WHOIS lookup on a domain that redirected people to the main KAT site. A WHOIS search can provide the name, address, email and phone number of a website registrant. In the case of kickasstorrents.biz, that was Artem Vaulin from Kharkiv, Ukraine.

Der-Yeghiayan was able to link the email address found in the WHOIS lookup to an Apple email address that Vaulin purportedly used to operate KAT. It’s this Apple account that appears to tie all of the pieces of Vaulin’s alleged involvement together.

On July 31st 2015, records provided by Apple show that the me.com account was used to purchase something on iTunes. The logs show that the same IP address was used on the same day to access the KAT Facebook page. After KAT began accepting Bitcoin donations in 2012, $72,767 was moved into a Coinbase account in Vaulin’s name. That Bitcoin wallet was registered with the same me.com email address.

Another article.

Scalable and Secure MQTT Load Balancing with Elastic Beam and HiveMQ

Post Syndicated from The HiveMQ Team original http://www.hivemq.com/blog/scalable-and-secure-mqtt-load-balancing-with-elastic-beam-and-hivemq/


A key challenge for a scalable and resilient MQTT broker infrastructure is load balancing the MQTT broker cluster nodes to ensure optimal performance and maximum reliability. Historically, load balancing strategies for MQTT have involved L4 load balancing, meaning the load balancing takes place at the Transport layer of the OSI model, which has only limited value for MQTT broker clusters.

Elastic Beam™ Secure Proxy is one of the first products that supports first-class MQTT routing features out-of-the-box to overcome MQTT load balancing limits. This blog post shows how HiveMQ and Elastic Beam can be used together to create truly resilient and secure MQTT cloud infrastructures.

Why are load balancers beneficial for MQTT?

Load balancers play a significant role in traffic routing and traffic shaping for the Internet and the IoT. Most load balancer products focus on L4 load balancing that routes traffic based on information like IP address, port and protocol (e.g. TCP or UDP).

L4 load balancing is typically pretty simple, and only a few traffic delivery strategies are supported (e.g. round robin or sticky IP). It’s important to note that such an L4 load balancer is not aware of the Layer 7 protocol in use (e.g. MQTT) and cannot make delivery decisions based on higher-level protocol information.

Key advantages using a load balancer in MQTT deployments are:

  • TLS offloading: Expensive cryptographic operations take place on the load balancer and not on the brokers
  • Perfect for broker clusters: An MQTT client does not need to be aware of the MQTT broker topology; it connects to the load balancer, and the load balancer is responsible for establishing a connection with the “right” broker
  • First line of defense: The MQTT brokers are not exposed directly to the Internet and – depending on the load balancing product – sophisticated attack prevention mechanisms on different levels of the OSI stack are used. Malicious clients won’t be able to hit the brokers directly
  • Failover: When an MQTT broker node is unavailable, the load balancer will route traffic to healthy nodes to compensate for the unavailable node

Elastic Beam and HiveMQ

For sophisticated MQTT broker cluster implementations like HiveMQ, next-generation load balancers are needed to bring additional value to the table. This is where Elastic Beam, a commercial load balancer and IoT proxy router, comes into play. Beside the typical MQTT load balancing advantages we discussed above, the following additional advantages are available when combining Elastic Beam and HiveMQ:

  • L7 MQTT load balancing: Elastic Beam understands MQTT natively and can make sophisticated routing decisions based on MQTT characteristics (e.g. client identifier)
  • No single point of failure: Elastic Beam can be clustered and high availability can be achieved with additional mechanisms like DNS round robin for the load balancers. This means the broker cluster is highly available and the load balancer is also highly available
  • Hybrid cloud support: Elastic Beam supports all major cloud providers and data center deployments – at the same time. Your MQTT clients from on-premise installations can communicate with the brokers easily as well as clients with Internet connectivity
  • Additional security: Elastic Beam implements state-of-the-art security mechanisms as well as innovative features like machine learning for intrusion detection
  • MQTT over websockets: Elastic Beam has first-class websocket support, which enables MQTT clients to use MQTT over (secure) websockets
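To make the L4-versus-L7 distinction concrete, here is a toy sketch (the hashing scheme is mine and purely illustrative, not Elastic Beam's actual algorithm): an L4 balancer can only key on transport metadata such as the source IP, while an L7 balancer can pin a session to a broker by MQTT client identifier, so the mapping survives a client changing networks:

```python
BROKERS = ["broker-1", "broker-2", "broker-3"]

def hash_str(s):
    # Deterministic string hash (Python's built-in hash() is salted per process)
    h = 0
    for ch in s:
        h = (h * 31 + ord(ch)) % 2**32
    return h

def pick_l4(source_ip):
    # Sticky IP: same source address -> same broker; a new IP may land elsewhere
    return BROKERS[hash_str(source_ip) % len(BROKERS)]

def pick_l7(mqtt_client_id):
    # L7 routing: same MQTT client identifier -> same broker, regardless of IP
    return BROKERS[hash_str(mqtt_client_id) % len(BROKERS)]

# A client that roams to a new IP keeps its broker under L7 routing:
print(pick_l7("sensor-42") == pick_l7("sensor-42"))  # True
print(pick_l4("10.0.0.5"), pick_l4("192.168.1.9"))   # may be different brokers
```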

Reference Architecture with Elastic Beam and HiveMQ


The reference architecture of Elastic Beam and HiveMQ as joint solution includes these components:

  • A variety of MQTT clients connected to the backend with either plain MQTT (with TLS) or websockets (with TLS). Communication via both channels is possible simultaneously
  • One or more Elastic Beam Secure Proxy nodes terminate TLS traffic, block compromised clients and route the traffic to the MQTT brokers
  • Multiple HiveMQ MQTT broker cluster nodes for high availability and scalability
  • (optional) Enterprise applications that are connected to the MQTT brokers either via Enterprise Integrations, plain MQTT or Shared Subscriptions. They can connect directly to the MQTT brokers or via the Elastic Beam load balancer, depending on the requirements

Easy Integration

Both products, Elastic Beam and HiveMQ, are very easy to install: you literally just need to download the software and run the start script for each.

For a kickstart with Elastic Beam and HiveMQ, there is an official HiveMQ Elastic Beam Integration Plugin available. With that HiveMQ plugin installed, Elastic Beam is able to integrate with the MQTT broker and can detect topology changes if nodes are unavailable.

Application Note

To learn more about the Elastic Beam and HiveMQ integration, we recommend downloading the application note, which includes details about the solution, the reference architecture, and benchmarks.

Download the application note

KickassTorrents’ Connections to the US Doomed the Site

Post Syndicated from Andy original https://torrentfreak.com/kickasstorrents-connections-to-the-us-doomed-the-site-160723/

katTo the huge disappointment of millions of BitTorrent users, KickassTorrents disappeared this week following an investigation by the Department of Homeland Security in the United States.

With a huge hole now present at the top of the torrent landscape, other sites plus interested groups and individuals will be considering their options. Step up their game and take over the top slot? Cautiously maintain the status quo? Or pull out altogether…

Make no mistake, this is a game of great reward, matched only by the risk. If the DHS complaint is to be believed, Kickass made dozens of millions of euros, enough to tempt even the nerviest of individuals. But while that might attract some, is avoiding detection almost impossible these days?

The complaint against KAT shows that while not inevitable, it’s becoming increasingly difficult. It also shows that carelessness plays a huge part in undermining security and that mistakes made by others in the past are always worth paying attention to.

Servers in the United States

Perhaps most tellingly, in the first instance KAT failed to learn from the ‘mistakes’ made by Megaupload. While the cases are somewhat dissimilar, both entities chose to have a US presence for at least some of their servers. This allowed US authorities to get involved. Not a great start.

“[Since 2008], KAT has relied on a network of computer servers around the world to operate, including computer servers located in Chicago, Illinois,” the complaint against the site reads.

The Chicago servers weren’t trivial either.

“According to a reverse DNS search conducted by the hosting company on or about May 5, 2015, that server was the mail client ‘mail.kat.ph’.”

Torrent site mail servers. In the United States. What could possibly go wrong?

In a word? Everything. In January 2016, DHS obtained a search warrant and cloned the Chicago servers. Somewhat unsurprisingly this gifted investigating agent Jared Der-Yeghiayan (the same guy who infiltrated Silk Road) valuable information.

“I located multiple files that contained unique user information, access logs, and other information. These files include a file titled ‘passwd’ located in the ‘etc’ directory, which was last accessed on or about January 13, 2016, and which identified the users who had access to the operating system,” Der-Yeghiayan said.

Servers in Canada

KAT also ran several servers hosted with Montreal-based Netelligent Hosting Services. There too, KAT was vulnerable.

In response to a Mutual Legal Assistance Treaty request, in April 2016 the Royal Canadian Mounted Police obtained business records associated with KAT’s account and made forensic images of the torrent site’s hard drives.

Why KAT chose Netelligent isn’t clear, but the site should have been aware that the hosting company would be forced to comply with law enforcement requests. After all, it had happened at least once before in a case involving Swedish torrent site, Sparvar.

Mistakes at the beginning

When pirate sites first launch, few admins expect them to become world leaders. If they did, they’d probably approach things a little differently at the start. In KAT’s case, alleged founder Artem Vaulin registered several of the site’s domains in his own name, information that was happily handed to the DHS by US-based hosting company GoDaddy.

Vaulin also used a Gmail account, operated by US-based Google. The complaint doesn’t explicitly say that Google handed over information, but it’s a distinct possibility. In any event, an email sent from that account in 2009 provided a helpful bridge to investigators.

“I changed my gmail. now it’s admin@kickasstorrents.com,” it read.

Forging further connections from his private email accounts to those operated from KAT, in 2012 Vaulin sent ‘test’ emails from KAT email addresses to his Apple address. This, HSI said, signaled the point that Vaulin began using KAT emails for business.

No time to relax, even socially

In addition to using an email account operated by US-based Apple, (in which HSI found Vaulin’s passport and driver’s license details, plus his banking info), the Ukranian also had an iTunes account.

Purchases he made there were logged by Apple, down to the IP address. Then, thanks to information provided by US-based Facebook (notice the recurring Stateside theme?), HSI were able to match that same IP address against a login to KAT’s Facebook page.

Anonymous Bitcoin – not quite

If the irony of the legitimate iTunes purchases didn’t quite hit the spot, the notion that Bitcoin could land someone in trouble should tick all the boxes. According to the complaint, US-based Bitcoin exchange Coinbase handed over information on Vaulin’s business to HSI.

“Records received from the bitcoin exchange company Coinbase revealed that the KAT Bitcoin Donation Address sent bitcoins it received to a user’s account maintained at Coinbase. This account was identified as belonging to Artem Vaulin located in Kharkov, Ukraine,” it reads.

Final thoughts

For a site that the US Government had always insisted was operating overseas, KickassTorrents clearly had a huge number of United States connections. This appears to have made the investigation much simpler than it would have been had the site and its owner maintained a presence solely in Eastern Europe.

Why the site chose to maintain these connections despite the risks might never be answered, but history has shown us time and again that US-based sites are not only vulnerable but also open to the wrath of the US Government. With decades of prison time at stake, that is clearly bad news.

But for now at least, Vaulin is being detained in Poland, waiting to hear of his fate. Whether or not he’ll quickly be sent to the United States is unclear, but it seems unlikely that a massively prolonged Kim Dotcom-style extradition battle is on the agenda. A smaller one might be, however.

While the shutdown of KAT and the arrest of its owner came out of the blue, the writing has always been on the wall. The shutdown is just one of several momentous ‘pirate’ events in the past 18 months including the closure (and resurrection) of The Pirate Bay, the dismantling of the main Popcorn Time fork, and the end of YTS/YIFY.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Solarmovie Disappears Following KAT Shutdown

Post Syndicated from Andy original https://torrentfreak.com/solarmovie-disappears-following-kat-shutdown-160721/

In the most dramatic turn of events since the raid of The Pirate Bay in December 2014, KickassTorrents went dark yesterday.

Previously the world’s largest torrent site, KAT shut down following the arrest of its alleged founder. Artem Vaulin, a 30-year-old from Ukraine, was arrested in Poland after his entire operation had been well and truly compromised by the Department of Homeland Security (DHS).

When large sites are raided it is common for other sites in a similar niche to consider their positions. This phenomenon was illustrated perfectly when the 2012 raids on Megaupload resulted in sites such as BTjunkie taking the decision to shut down.

At this point, most other torrent sites seem fairly stable but there appears to have been at least one ‘pirate’ casualty following yesterday’s drama.

For many years, Solarmovie has been one of the most visible and visited ‘pirate’ streaming portals. Like many others, the site has had its fair share of domain issues, starting out at .COM and more recently ending up at .PH. However, sometime during the past few hours, Solarmovie disappeared.


No official announcement concerning the site’s fate has been made but it’s clear from the criminal complaint filed against KickassTorrents that Artem Vaulin had close connections to Solarmovie.

As reported yesterday, the Department of Homeland Security obtained a copy of KickassTorrents’ servers from its Canadian host and also gained access to the site’s servers in Chicago. While conducting his inquiries, the Special Agent handling the case spotted an email address for the person responsible for renting KAT’s servers.

Further investigation of Vaulin’s Apple email account showed the Ukrainian corresponding with this person back in 2010.

“The subject of the email was ‘US Server’ and stated: ‘Hello, here is access to the new server’ followed by a private and public IP address located in Washington DC, along with the user name ‘root’ and a password,” the complaint reveals.

Perhaps tellingly, the IP address provided by this individual to Vaulin was found to have hosted Solarmovie.com from August 2010 through to April 2011. Furthermore, up until just last month, the IP address was just one away from an IP address used to host KickassTorrents.

“As of on or about June 27, 2016, one of the IP addresses hosting solarmovie.ph was one IP address away ( from an IP address that was being used to host KAT ( and,” the complaint adds.

While none of the above is proof alone that Vaulin was, for example, the owner of Solarmovie, it’s clear that at some point he at least had some connections with the site or its operator.

On the other hand, in torrent and streaming circles it’s common for people to use services already being used by others they know and trust, so that might provide an explanation for the recent IP address proximity.

In any event, last night’s shutdown of Solarmovie probably indicates that the heat in the kitchen has become just a little too much. Expect more fallout in the days to come.


Canadian Man Behind Popular ‘Orcus RAT’

Post Syndicated from BrianKrebs original https://krebsonsecurity.com/2016/07/canadian-man-is-author-of-popular-orcus-rat/

Far too many otherwise intelligent and talented software developers these days apparently think they can get away with writing, selling and supporting malicious software and then couching their commerce as a purely legitimate enterprise. Here’s the story of how I learned the real-life identity of a Canadian man who’s laboring under that same illusion as proprietor of one of the most popular and affordable tools for hacking into someone else’s computer.

Earlier this week I heard from Daniel Gallagher, a security professional who occasionally enjoys analyzing new malicious software samples found in the wild. Gallagher said he and members of @malwrhunterteam and @MalwareTechBlog recently got into a Twitter fight with the author of Orcus RAT, a tool they say was explicitly designed to help users remotely compromise and control computers that don’t belong to them.

A still frame from a Youtube video demonstrating Orcus RAT’s keylogging ability to steal passwords from Facebook and other sites.

The author of Orcus — a person going by the nickname “Ciriis Mcgraw” a.k.a. “Armada” on Twitter and other social networks — claimed that his RAT was in fact a benign “remote administration tool” designed for use by network administrators and not a “remote access Trojan” as critics charged. Gallagher and others took issue with that claim, pointing out that they were increasingly encountering computers that had been infected with Orcus unbeknownst to the legitimate owners of those machines.

The malware researchers noted another reason that Mcgraw couldn’t so easily distance himself from how his clients used the software: He and his team are providing ongoing technical support and help to customers who have purchased Orcus and are having trouble figuring out how to infect new machines or hide their activities online.

What’s more, the range of features and plugins supported by Armada, they argued, go well beyond what a system administrator would look for in a legitimate remote administration client like Teamviewer, including the ability to launch a keylogger that records the victim’s every computer keystroke, as well as a feature that lets the user peek through a victim’s Web cam and disable the light on the camera that alerts users when the camera is switched on.

A new feature of Orcus announced July 7 lets users configure the RAT so that it evades digital forensics tools used by malware researchers, including an anti-debugger and an option that prevents the RAT from running inside of a virtual machine.

Other plugins offered directly from Orcus’s tech support page (PDF) and authored by the RAT’s support team include a “survey bot” designed to “make all of your clients do surveys for cash;” a “USB/.zip/.doc spreader,” intended to help users “spread a file of your choice to all clients via USB/.zip/.doc macros;” a “Virustotal.com checker” made to “check a file of your choice to see if it had been scanned on VirusTotal;” and an “Adsense Injector,” which will “hijack ads on pages and replace them with your Adsense ads and disable adblocker on Chrome.”


Gallagher said he was so struck by the guy’s “smugness” and sheer chutzpah that he decided to look closer at any clues that Ciriis Mcgraw might have left behind as to his real-world identity and location. Sure enough, he found that Ciriis Mcgraw also has a Youtube account under the same name, and that a video Mcgraw posted in July 2013 pointed to a 33-year-old security guard from Toronto, Canada.

Gallagher noticed that the video, a bystander recording on the scene of a police shooting of a Toronto man, included a link to the domain policereview[dot]info. A search of the registration records attached to that Web site name shows that the domain was registered to a John Revesz in Toronto and to the email address john.revesz@gmail.com.

A reverse WHOIS lookup ordered from Domaintools.com shows the same john.revesz@gmail.com address was used to register at least 20 other domains, including “thereveszfamily.com,” “johnrevesz.com, revesztechnologies[dot]com,” and — perhaps most tellingly —  “lordarmada.info“.

Johnrevesz[dot]com is no longer online, but this cached copy of the site from the indispensable archive.org includes his personal résumé, which states that John Revesz is a network security administrator whose most recent job in that capacity was as an IT systems administrator for TD Bank. Revesz’s LinkedIn profile indicates that for the past year at least he has served as a security guard for GardaWorld International Protective Services, a private security firm based in Montreal.

Revesz’s CV also says he’s the owner of the aforementioned Revesz Technologies, but it’s unclear whether that business actually exists; the company’s Web site currently redirects visitors to a series of sites promoting spammy and scammy surveys, come-ons and giveaways.


Contacted by KrebsOnSecurity, Revesz seemed surprised that I’d connected the dots, but beyond that did not try to disavow ownership of the Orcus RAT.

“Profit was never the intentional goal, however with the years of professional IT networking experience I have myself, knew that proper correct development and structure to the environment is no free venture either,” Revesz wrote in reply to questions about his software. “Utilizing my 15+ years of IT experience I have helped manage Orcus through its development.”

Revesz continued:

“As for your legalities question.  Orcus Remote Administrator in no ways violates Canadian laws for software development or sale.  We neither endorse, allow or authorize any form of misuse of our software.  Our EULA [end user license agreement] and TOS [terms of service] is very clear in this matter. Further we openly and candidly work with those prudent to malware removal to remove Orcus from unwanted use, and lock out offending users which may misuse our software, just as any other company would.”

Revesz said none of the aforementioned plugins were supported by Orcus, and were all developed by third-party developers, and that “Orcus will never allow implementation of such features, and or plugins would be outright blocked on our part.”

In an apparent contradiction to that claim, plugins that allow Orcus users to disable the Webcam light on a computer running the software and one that enables the RAT to be used as a “stresser” to knock sites and individual users offline are available directly from Orcus Technologies’ Github page.

Revesz also offers a service to help people cover their tracks online. Using his alter ego “Armada” on the hacker forum Hackforums[dot]net, he sells a “bulletproof dynamic DNS service” that promises not to keep records of customer activity.

Dynamic DNS services allow users to have Web sites hosted on servers that frequently change their Internet addresses. This type of service is useful for people who want to host a Web site on a home-based Internet address that may change from time to time, because dynamic DNS services can be used to easily map the domain name to the user’s new Internet address whenever it happens to change.
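Conceptually, a dynamic DNS update is nothing more than remapping a hostname to whatever address the operator currently controls. A toy sketch of that idea (an in-memory stand-in for a real DNS zone; the hostname and addresses are illustrative, and real dynamic DNS of course works through update APIs against authoritative name servers):

```python
# Toy model of a dynamic DNS zone: a mutable hostname -> address mapping.
# This only illustrates the remapping idea, not the DNS protocol itself.

class DynamicZone:
    def __init__(self):
        self.records = {}

    def update(self, hostname, ip):
        """Point hostname at a new address (the 'dynamic' update)."""
        self.records[hostname] = ip

    def resolve(self, hostname):
        return self.records.get(hostname)

zone = DynamicZone()
zone.update("home.example.net", "198.51.100.7")   # initial home IP
print(zone.resolve("home.example.net"))           # -> 198.51.100.7

# The ISP reassigns the connection a new address; one update re-points the name.
zone.update("home.example.net", "203.0.113.42")
print(zone.resolve("home.example.net"))           # -> 203.0.113.42
```

The same single-update property is what makes the service attractive to attackers: re-pointing a name takes one call, regardless of how many victims resolve it.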


Unfortunately, these dynamic DNS providers are extremely popular in the attacker community, because they allow bad guys to keep their malware and scam sites up even when researchers manage to track the attacking IP address and convince the ISP responsible for that address to disconnect the malefactor. In such cases, dynamic DNS allows the owner of the attacking domain to simply re-route the attack site to another Internet address that he controls.

Free dynamic DNS providers tend to report or block suspicious or outright malicious activity on their networks, and may well share evidence about the activity with law enforcement investigators. In contrast, Armada’s dynamic DNS service is managed solely by him, and he promises in his ad on Hackforums that the service — to which he sells subscriptions of various tiers for between $30-$150 per year — will not log customer usage or report anything to law enforcement.

According to writeups by Kaspersky Lab and Heimdal Security, Revesz’s dynamic DNS service has been seen used in connection with malicious botnet activity by another RAT known as Adwind.  Indeed, Revesz’s service appears to involve the domain “nullroute[dot]pw”, which is one of 21 domains registered to a “Ciriis Mcgraw,” (as well as orcus[dot]pw and orcusrat[dot]pw).

I asked Gallagher (the researcher who originally tipped me off about Revesz’s activities) whether he was persuaded at all by Revesz’s arguments that Orcus was just a tool and that Revesz wasn’t responsible for how it was used.

Gallagher said he and his malware researcher friends had private conversations with Revesz in which he seemed to acknowledge that some aspects of the RAT went too far, and promised to release software updates to remove certain objectionable functionalities. But Gallagher said those promises felt more like the actions of someone trying to cover himself.

“I constantly try to question my assumptions and make sure I’m playing devil’s advocate and not jumping the gun,” Gallagher said. “But I think he’s well aware that what he’s doing is hurting people, it’s just now he knows he’s under the microscope and trying to do and say enough to cover himself if it ever comes down to him being questioned by law enforcement.”

Can KickassTorrents Make a Comeback?

Post Syndicated from Ernesto original https://torrentfreak.com/can-kickasstorrents-make-a-comeback-160721/

Founded in 2009, KickassTorrents (KAT) grew to become the largest torrent site on the Internet, with millions of visitors a day.

As a result, copyright holders and law enforcement have taken aim at the site in recent years. This resulted in several ISP blockades around the world, but yesterday the big hit came when the site’s alleged founder was arrested in Poland.

Soon after the news was made public KAT disappeared, leaving its users without their favorite site. The question that’s on many people’s minds right now is whether the site will make a Pirate Bay-style comeback.

While it’s impossible to answer this question with certainty, the odds can be more carefully weighed by taking a closer look at the events that led up to the bust and what may follow.

First off, KickassTorrents is now down across all the site’s official domain names. This downtime seems to be voluntary in part, as the authorities haven’t seized the servers. Also, several domains are still in the hands of the KAT-team.

That said, the criminal complaint filed in the U.S. District Court in Chicago does reveal that KAT has been heavily compromised (pdf).

According to the feds, Artem Vaulin, a 30-year-old from Ukraine, is the key player behind the site. Over the years, he obfuscated his connections to the site, but several security holes eventually revealed his identity.

With help from several companies in the United States and abroad, Homeland Security Investigations (HSI) agent Jared Der-Yeghiayan identifies the Ukrainian as the driving force behind the site.

The oldest traces leading to Vaulin are the WHOIS records for various domains, registered in his name in early 2009.

“A review of historical Whois information for KAT….identified that it was registered on or about January 19, 2009, to Artem Vaulin with an address located in Kharkiv, Ukraine,” the affidavit reads.

This matches with records obtained from domain registrar GoDaddy, which indicate that Vaulin purchased three KAT-related domain names around the same time.

The agent further uncovered that the alleged KAT founder used an email address with the nickname “tirm.” The same name was listed as KAT’s “owner” on the site’s “People” page in the early days, but was eventually removed in 2011.

Tirm on KAT’s people page


The HSI agent also looked at several messages posted on KAT, which suggest that “tirm” was actively involved in operating the site.

“As part of this investigation, I also reviewed historical messages posted by tirm, KAT’s purported ‘Owner.’ These postings and others indicate that tirm was actively engaged in the early running of KAT in addition to being listed as an administrator and the website’s owner,” the HSI agent writes.

Assisted by Apple and Facebook the feds were then able to strengthen the link between Vaulin, tirm, and his involvement in the site.

Facebook, for example, handed over IP-address logs from the KAT fanpage. With help from Apple, the investigator was then able to cross-reference this with an IP-address Vaulin used for an iTunes transaction.

“Records provided by Apple showed that tirm@me.com conducted an iTunes transaction using IP Address on or about July 31, 2015. The same IP Address was used on the same day to login into the KAT Facebook Account.”

In addition, Apple appears to have handed over private email conversations which reference KAT, dating back several years. These emails also mention a “kickasstorrent payment,” which is believed to be revenue related.

“I identified a number of emails in the tirm@me.com account relating to Vaulin’s operation of KAT. In particular, between on or about June 8, 2010, and on or about September 3, 2010,” the HSI agent writes.

More recent records show that an IP-address linked to KAT’s Facebook page was also used to access Vaulin’s Coinbase account, suggesting that the Bitcoin wallet also assisted in the investigation.

“Notably, IP address accessed the KAT Facebook Account about a dozen times in September and October 2015. This same IP Address was used to login to Vaulin’s Coinbase account 47 times between on or about January 28, 2014, through on or about November 13, 2014.”

As for the business side, the complaint mentions a variety of ad payments, suggesting that KAT made over a dozen million dollars in revenue per year.

It also identifies the company Cryptoneat as KAT’s front. The Cryptoneat.com domain was registered by Vaulin and LinkedIn lists several employees of the company who were involved in the early development of the site.

“Many of the employees found on LinkedIn who present themselves as working for Cryptoneat are the same employees who received assignments from Vaulin in the KAT alert emails,” the complaint reads.

Interestingly, none of the other employees are identified or charged.

To gather further information on the money side, the feds also orchestrated an undercover operation in which they posed as an advertiser. This revealed details of several bank accounts, with one receiving more than €28 million in less than seven months.

“Those records reflect that the Subject Account received a total of approximately €28,411,357 in deposits between on or about August 28, 2015, and on or about March 10, 2016.”

Bank account


Finally, and crucially, the investigators issued a warrant directed at the Canadian webhost of KickassTorrents. This was one of the biggest scores as it provided them with full copies of KAT’s hard drives, including the email server.

“I observed […] that they were all running the same Linux Gentoo operating system, and that they contained files with user information, SSH access logs, and other information, including a file titled ‘passwd’ located in the ‘etc’ directory,” the HSI agent writes.

“I also located numerous files associated with KAT, including directories and logs associated to their name servers, emails and other files,” he adds.

Considering all the information U.S. law enforcement has in its possession, it’s doubtful that KAT will resume its old operation anytime soon.

Technically it won’t be hard to orchestrate a Pirate Bay-style comeback, as there are probably some backups available. However, now that the site has been heavily compromised and an ongoing criminal investigation is underway, it would be a risky endeavor.

Similarly, uploaders and users may also worry about what information the authorities have in their possession. The complaint cites private messages that were sent through KAT, suggesting that the authorities have access to a significant amount of data.

While regular users are unlikely to be targeted, the information may prove useful for future investigations into large-scale uploaders. More clarity on this, the site’s future, and what it means for the torrent ecosystem, is expected when the dust settles.


Feds Seize KickassTorrents Domains, Arrest Owner

Post Syndicated from Ernesto original https://torrentfreak.com/feds-seize-kickasstorrents-domains-charge-owner-160720/

With millions of unique visitors per day, KickassTorrents (KAT) has become the most-used torrent site on the Internet, beating even The Pirate Bay.

Today, however, the site has run into a significant roadblock after U.S. authorities announced the arrest of the site’s alleged owner.

The 30-year-old Artem Vaulin, from Ukraine, was arrested today in Poland from where the United States has requested his extradition.

In a criminal complaint filed in U.S. District Court in Chicago, the owner is charged with conspiracy to commit criminal copyright infringement, conspiracy to commit money laundering, and two counts of criminal copyright infringement.


The complaint further reveals that the feds posed as an advertiser, which revealed a bank account associated with the site.

It also shows that Apple handed over personal details of Vaulin after the investigator cross-referenced an IP-address used for an iTunes transaction with an IP-address that was used to login to KAT’s Facebook account.

“Records provided by Apple showed that tirm@me.com conducted an iTunes transaction using IP Address on or about July 31, 2015. The same IP Address was used on the same day to login into the KAT Facebook,” the complaint reads.

In addition to the arrest in Poland, the court also granted the seizure of a bank account associated with KickassTorrents, as well as several of the site’s domain names.

Commenting on the announcement, Assistant Attorney General Caldwell said that KickassTorrents helped to distribute over $1 billion in pirated files.

“Vaulin is charged with running today’s most visited illegal file-sharing website, responsible for unlawfully distributing well over $1 billion of copyrighted materials.”

“In an effort to evade law enforcement, Vaulin allegedly relied on servers located in countries around the world and moved his domains due to repeated seizures and civil lawsuits. His arrest in Poland, however, demonstrates again that cybercriminals can run, but they cannot hide from justice.”

KAT’s .com and .tv domains are expected to be seized soon by Verisign. For the main Kat.cr domain, as well as several others, seizure warrants will be sent to the respective authorities under the MLAT treaty.

At the time of writing the main domain name Kat.cr has trouble loading, but various proxies still appear to work. KAT’s status page doesn’t list any issues, but we assume that this will be updated shortly.

TorrentFreak has reached out to the KAT team for a comment on the news and what it means for the site’s future, but we have yet to hear back.

Breaking story; in-depth updates will follow.


How to Use AWS CloudFormation to Automate Your AWS WAF Configuration with Example Rules and Match Conditions

Post Syndicated from Ben Potter original https://blogs.aws.amazon.com/security/post/Tx3NYSJHO8RK22S/How-to-Use-AWS-CloudFormation-to-Automate-Your-AWS-WAF-Configuration-with-Exampl

AWS WAF is a web application firewall that integrates closely with Amazon CloudFront (AWS’s content delivery network [CDN]). AWS WAF gives you control to allow or block traffic to your web applications by, for example, creating custom rules that block common attack patterns.

We recently announced AWS CloudFormation support for all current features of AWS WAF. This enables you to leverage CloudFormation templates to configure, customize, and test AWS WAF settings across all your web applications. Using CloudFormation templates can help you reduce the time required to configure AWS WAF. In this blog post, I will show you how to use CloudFormation to automate your AWS WAF configuration with example rules and match conditions.

AWS WAF overview

If you are not familiar with AWS WAF configurations, let me try to catch you up quickly. AWS WAF consists of three main components: a web access control list (web ACL), rules, and filters (also known as match sets). A web ACL is associated with a given CloudFront distribution. Each web ACL is a collection of one or more rules, and each rule can have one or more match conditions, which are composed of one or more filters. The filters inspect components of the request (such as its headers or URI) to check for matches against the conditions.
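Concretely, these three components map onto three CloudFormation resource types. A minimal sketch of how they reference one another (the resource names and the sample IP range here are illustrative, not taken from the template this post describes):

```json
{
  "Resources": {
    "ExampleIPSet": {
      "Type": "AWS::WAF::IPSet",
      "Properties": {
        "Name": "Example match condition (IP filters)",
        "IPSetDescriptors": [
          { "Type": "IPV4", "Value": "192.0.2.0/24" }
        ]
      }
    },
    "ExampleRule": {
      "Type": "AWS::WAF::Rule",
      "Properties": {
        "Name": "ExampleIPRule",
        "MetricName": "ExampleIPRule",
        "Predicates": [
          { "DataId": { "Ref": "ExampleIPSet" }, "Negated": false, "Type": "IPMatch" }
        ]
      }
    },
    "ExampleWebACL": {
      "Type": "AWS::WAF::WebACL",
      "Properties": {
        "Name": "ExampleWebACL",
        "MetricName": "ExampleWebACL",
        "DefaultAction": { "Type": "ALLOW" },
        "Rules": [
          { "Action": { "Type": "BLOCK" }, "Priority": 1, "RuleId": { "Ref": "ExampleRule" } }
        ]
      }
    }
  }
}
```

The web ACL’s `DefaultAction` applies to requests that match no rule; each rule attached to the ACL carries its own action (`ALLOW`, `BLOCK`, or `COUNT`).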

Solution overview

The solution in this blog post uses AWS CloudFormation in an automated fashion to provision, update, and optionally delete the components that form the AWS WAF solution. The CloudFormation template will deploy the following rules and conditions as part of this solution:

  • A manual IP rule that contains an empty IP match set that must be updated manually with IP addresses to be blocked.
  • An auto IP rule that contains an empty IP match condition for optionally implementing an automated AWS Lambda function, such as is shown in How to Import IP Address Reputation Lists to Automatically Update AWS WAF IP Blacklists and How to Use AWS WAF to Block IP Addresses That Generate Bad Requests.
  • A SQL injection rule and condition to match SQL injection-like patterns in URI, query string, and body.
  • A cross-site scripting rule and condition to match XSS-like patterns in the URI and query string.
  • A size-constraint rule and condition to match requests with a URI or query string of 8,192 bytes or more, which may help mitigate buffer overflow-style attacks.
  • ByteHeader rules and conditions (split into two sets) to match user agents that include spiders for non–English-speaking countries that are commonly blocked in a robots.txt file, such as sogou, baidu, and etaospider, and tools that you might choose to monitor use of, such as wget and cURL. Note that the WordPress user agent is included because it is used commonly by compromised systems in reflective attacks against non–WordPress sites.
  • ByteUri rules and conditions (split into two sets) to match request strings containing install, update.php, wp-config.php, and internal functions including $password, $user_id, and $session.
  • A whitelist IP condition (empty) is included and added as an exception to the ByteURIRule2 rule as an example of how to block unwanted user agents, unless they match a list of known good IP addresses.
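To show how a rule and its match condition fit together, here is a sketch of a SQL injection condition similar in spirit to the one in the list above (the resource names and exact tuples are illustrative; the actual template may differ):

```json
{
  "SqlInjCondition1": {
    "Type": "AWS::WAF::SqlInjectionMatchSet",
    "Properties": {
      "Name": "SQL injection match condition",
      "SqlInjectionMatchTuples": [
        { "FieldToMatch": { "Type": "URI" }, "TextTransformation": "URL_DECODE" },
        { "FieldToMatch": { "Type": "QUERY_STRING" }, "TextTransformation": "URL_DECODE" },
        { "FieldToMatch": { "Type": "BODY" }, "TextTransformation": "URL_DECODE" }
      ]
    }
  },
  "SqlInjRule1": {
    "Type": "AWS::WAF::Rule",
    "Properties": {
      "Name": "SQLiRule",
      "MetricName": "SQLiRule",
      "Predicates": [
        { "DataId": { "Ref": "SqlInjCondition1" }, "Negated": false, "Type": "SqlInjectionMatch" }
      ]
    }
  }
}
```

The `TextTransformation` is applied before matching, so URL-encoded injection attempts are decoded before inspection.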

All example rules configured by the template as part of the solution will count requests that match the rules for you to test with your web application. This template makes use of CloudFormation to provide a modular, manageable method of creating and updating nested stacks. A nested stack aligns with CloudFormation best practices to separate common components for reuse and ensure you do not reach the template body size limit, which is currently 51,200 bytes. All rules and conditions in this CloudFormation template are referenced with a resource reference number internal to the stack for each resource (for example, ByteBodyCondition1), so you can easily duplicate and extend each component. As with any example CloudFormation template, you can edit and reuse the template to suit your needs.
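Nesting is done with the `AWS::CloudFormation::Stack` resource type: the parent template points at child templates stored in S3 and passes parameters down. A sketch of one such child reference (the bucket URL and parameter name are placeholders):

```json
{
  "WAFRuleStack1": {
    "Type": "AWS::CloudFormation::Stack",
    "Properties": {
      "TemplateURL": "https://s3.amazonaws.com/your-bucket/waf-rules-1.template",
      "Parameters": { "WebACLName": { "Ref": "WebACLName" } }
    }
  }
}
```

Each child template stays well under the body size limit, and a shared rule set can be reused across multiple parent stacks.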

The following architecture diagram shows the overview of this solution, which consists of a single web ACL and multiple rules with match conditions:

Descriptions of key details in the preceding diagram are as follows:

  1. Requests are resolved by DNS to CloudFront, configured with Web ACL to filter all requests.
  2. AWS WAF Web ACL evaluates each request with configured rules containing conditions.
  3. If a request matches a block condition, AWS WAF returns an HTTP 403 (Forbidden) error to the client. If a request matches a count rule, the request is served.
  4. The origin configured in CloudFront serves allowed or counted requests.

Deploying the solution


The following deployment steps assume that you already have a CloudFront distribution that you use to deliver content for your web applications. If you do not already have a CloudFront distribution, see Creating or Updating a Web Distribution Using the CloudFront Console. This solution also uses CloudFormation to simplify the provisioning process. For more information, see What is AWS CloudFormation?

Step 1: Create the example configuration CloudFormation stack

  1. To start the wizard that creates a CloudFormation stack, choose the link for the region in which you want to create AWS resources.
  2. If you are not already signed in to the AWS Management Console, sign in when prompted.
  3. On the Select Template page, choose Next.
  4. On the Specify Details page, specify the following values:

  • Stack name – You can use the default name AWSWafSample, or you can change the name. The stack name cannot contain spaces and must be unique within your AWS account.
  • WAF Web ACL Name – Specify a name for the web ACL that CloudFormation will create. The name that you specify is also used as a prefix for the match conditions and rules that CloudFormation will create, so you can easily find all of the related objects.
  • Action For All Rules – Specify the action for all rules. The default of COUNT will pass all requests and monitor, and BLOCK will block all requests that match.
  • White List CIDR – Specify a single IP range that will be allowed to bypass all rules in CIDR notation. Note that only /8, /16, /24, and /32 are accepted. For a single IP you would enter x.x.x.x/32, for example.
  • Max Size of URI – Select from the list an acceptable size limit for the URI of a request.
  • Max Size of Query String – Select from the list an acceptable size limit for the query string of a request.
  5. Choose Next.
  6. (Optional) On the Options page, enter tags and advanced settings, or leave the fields blank. Choose Next.
  7. On the Review page, review the configuration and then choose Create. CloudFormation then creates the AWS WAF resources.
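If you prefer the command line to the console wizard, the same stack can be created with the AWS CLI (the stack name, template URL, and parameter key here are placeholders; check the template for its actual parameter names):

```
aws cloudformation create-stack \
  --stack-name AWSWafSample \
  --template-url https://s3.amazonaws.com/your-bucket/aws-waf-sample.template \
  --parameters ParameterKey=WebACLName,ParameterValue=MyWebACL

# Poll until the stack reports CREATE_COMPLETE
aws cloudformation describe-stacks --stack-name AWSWafSample \
  --query "Stacks[0].StackStatus" --output text
```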

Step 2: Update your CloudFront distribution settings

After CloudFormation creates the AWS WAF stack, associate the CloudFront distribution with the new AWS WAF web ACL.

To associate your CloudFront distribution with AWS WAF:

  1. Open the CloudFront console.
  2. In the top pane of the console, select the distribution for which you want AWS WAF to monitor requests. (If you do not already have a distribution, see Getting Started with CloudFront.)
  3. In the Distribution Settings pane, choose the General tab, and then choose Edit.
  4. In the AWS WAF Web ACL list, choose the web ACL that CloudFormation created for you in Step 1.
  5. Choose Yes, Edit to save your changes.

Step 3: (Optional) Delete your CloudFormation stack

If you want to delete the CloudFormation stack created in the previous steps (including example rules and match conditions):

  1. Open the CloudFormation console.
  2. Select the check box for the stack; the default name is AWSWafSample.
  3. Choose Delete Stack from the Actions drop-down menu.
  4. Choose Yes, Delete to confirm.
  5. To track the progress of the stack deletion, select the check box for the stack, and choose the Events tab in the bottom pane.

Testing the solution

After creating the example CloudFormation stack (Step 1) and associating the AWS WAF web ACL with a CloudFront distribution (Step 2), you can monitor the web requests and determine if the rules require modification to suit your web application.

In the AWS WAF console, you can view a sample of the requests that CloudFront has forwarded to AWS WAF for inspection. For each sampled request, you can view detailed information about the request, such as the originating IP address and the headers included in the request. You can also view which rule the request matched, and whether the rule is configured to allow or block requests.

To view a sample of the web requests that CloudFront has forwarded to AWS WAF:

  1. Sign in to the AWS WAF console.
  2. In the navigation pane, click the name of the web ACL for which you want to view requests.
  3. In the right pane, choose the Requests tab. The Sampled requests table displays the following values for each request:

  • Source IP – Either the IP address that the request originated from or, if the viewer used an HTTP proxy or a load balancer to send the request, the IP address of the proxy or load balancer. 

  • URI – The part of a URL that identifies a resource (for example, /images/daily-ad.jpg). 

  • Matches rule – The first rule in the web ACL for which the web request matched all of the match conditions. If a web request does not match all of the conditions in any rule in the web ACL, the value of Matches rule is Default. Note that when a web request matches all of the conditions in a rule and the action for that rule is Count, AWS WAF continues inspecting the web request based on subsequent rules in the web ACL. In this case, a web request could appear twice in the list of sampled requests: once for the rule that has an action of Count, and again for a subsequent rule or the default action.
  • Action – Whether the action for the corresponding rule is Allow, Block, or Count.
  • Time – The time when AWS WAF received the request from CloudFront.

  4. To refresh the list of sampled requests, choose Get new samples.
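The sampled-request fields above lend themselves to quick offline triage. Here is a hedged sketch that tallies samples by matched rule and action; the dictionary keys mirror the table above, not the exact keys of the AWS WAF API response, and note how a COUNT match shows up a second time under the Default action:

```python
from collections import Counter

def tally_samples(samples):
    """Count sampled requests by (matched rule, action) pair."""
    return Counter((s["rule"], s["action"]) for s in samples)

samples = [
    {"source_ip": "198.51.100.4", "uri": "/images/daily-ad.jpg",
     "rule": "ManualIPBlockRule", "action": "COUNT"},
    {"source_ip": "198.51.100.4", "uri": "/images/daily-ad.jpg",
     "rule": "Default", "action": "ALLOW"},
    {"source_ip": "203.0.113.9", "uri": "/login",
     "rule": "SizeRestrictionRule", "action": "COUNT"},
]
print(tally_samples(samples))
```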

You may also want to analyze your CloudFront or web application log files for bots, scrapers, or generally unwanted behavior, and modify the rules and match conditions to block them. For further information about CloudFront logs, see Access Logs.
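CloudFront access logs are tab-separated W3C files whose #Fields header names each column, which makes spotting noisy clients straightforward. A minimal sketch (the log lines below are synthetic; check the #Fields line in your own logs, since column order can vary by log version):

```python
from collections import Counter

def top_client_ips(log_lines, field="c-ip"):
    """Count requests per client IP in a CloudFront access log."""
    columns, counts = [], Counter()
    for line in log_lines:
        if line.startswith("#Fields:"):
            columns = line.split()[1:]      # column names follow "#Fields:"
        elif line.startswith("#") or not line.strip():
            continue                        # skip other comments and blanks
        elif columns:
            values = dict(zip(columns, line.rstrip("\n").split("\t")))
            counts[values.get(field, "?")] += 1
    return counts

log = [
    "#Version: 1.0",
    "#Fields: date time c-ip cs-method cs-uri-stem sc-status",
    "2016-08-23\t10:00:01\t198.51.100.4\tGET\t/\t200",
    "2016-08-23\t10:00:02\t198.51.100.4\tGET\t/login\t403",
    "2016-08-23\t10:00:03\t203.0.113.9\tGET\t/\t200",
]
print(top_client_ips(log).most_common(1))  # [('198.51.100.4', 2)]
```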

Finally, to enforce blocking of malicious requests for all rules:

  1. Open the CloudFormation console.
  2. Select the check box for the master stack. The default name is AWSWafSample.
  3. Choose Update Stack from the Actions drop-down menu.
  4. Choose Use Current Template and Next.
  5. Choose BLOCK for Action For All Rules.
  6. Accept changes and choose Next.
  7. To track the progress of the stack update, select the check box for the stack, and choose the Events tab in the bottom pane.
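The same switch to blocking can be scripted through CloudFormation's UpdateStack. In this sketch, the parameter key name ActionForAllRules is an assumption based on the console label (confirm it against the template), and the boto3 call in the comment needs AWS credentials; the helper itself is pure and reuses previous values for every other parameter:

```python
def block_mode_parameters(parameter_keys, action_key="ActionForAllRules"):
    """Build an UpdateStack Parameters list that flips one parameter to BLOCK
    and keeps every other parameter at its previous value."""
    params = []
    for key in parameter_keys:
        if key == action_key:
            params.append({"ParameterKey": key, "ParameterValue": "BLOCK"})
        else:
            params.append({"ParameterKey": key, "UsePreviousValue": True})
    return params

# With boto3 (assumed; requires credentials) this would feed:
#   cfn.update_stack(StackName="AWSWafSample", UsePreviousTemplate=True,
#                    Parameters=block_mode_parameters(existing_keys))
print(block_mode_parameters(["ActionForAllRules", "WhiteListCIDR"]))
```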

A zipped version of the CloudFormation templates for the example stack and other AWS WAF example solutions is available in the aws-waf-sample GitHub repository.


This blog post has shown you how to use CloudFormation to automate the configuration of a basic set of rules and match conditions to get started with AWS WAF. If you would like to see more sample rule sets for a specific platform or application, or if you have a comment about this blog post, submit a comment in the “Comments” section below. If you have questions about this blog post, please start a new thread on the AWS WAF forum.

– Ben

Automater – IP & URL OSINT Tool For Analysis

Post Syndicated from Darknet original http://feedproxy.google.com/~r/darknethackers/~3/_-OKcJophfU/

Automater is a URL/domain, IP address, and MD5 hash OSINT tool aimed at making the analysis process easier for intrusion analysts. Given a target (URL, IP, or hash) or a file full of targets, Automater will return relevant results from sources such as the following: IPvoid.com, Robtex.com, Fortiguard.com, unshorten.me, Urlvoid.com,…

Read the full post at darknet.org.uk