Tag Archives: forum

Now Use AWS IAM to Delete a Service-Linked Role When You No Longer Require an AWS Service to Perform Actions on Your Behalf

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/now-use-aws-iam-to-delete-a-service-linked-role-when-you-no-longer-require-an-aws-service-to-perform-actions-on-your-behalf/

Earlier this year, AWS Identity and Access Management (IAM) introduced service-linked roles, which provide you an easy and secure way to delegate permissions to AWS services. Each service-linked role delegates permissions to an AWS service, which is called its linked service. Service-linked roles help with monitoring and auditing requirements by providing a transparent way to understand all actions performed on your behalf because AWS CloudTrail logs all actions performed by the linked service using service-linked roles. For information about which services support service-linked roles, see AWS Services That Work with IAM. Over time, more AWS services will support service-linked roles.

Today, IAM added support for the deletion of service-linked roles through the IAM console and the IAM API/CLI. This means you now can revoke permissions from the linked service to create and manage AWS resources in your account. When you delete a service-linked role, the linked service no longer has the permissions to perform actions on your behalf. To ensure your AWS services continue to function as expected when you delete a service-linked role, IAM validates that you no longer have resources that require the service-linked role to function properly. This prevents you from inadvertently revoking permissions required by an AWS service to manage your existing AWS resources and helps you maintain your resources in a consistent state. If there are any resources in your account that require the service-linked role, you will receive an error when you attempt to delete the service-linked role, and the service-linked role will remain in your account. If you do not have any resources that require the service-linked role, you can delete the service-linked role and IAM will remove the service-linked role from your account.

In this blog post, I show how to delete a service-linked role by using the IAM console. To learn more about how to delete service-linked roles by using the IAM API/CLI, see the DeleteServiceLinkedRole API documentation.
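If you prefer the API route, here is a minimal boto3 (Python) sketch of that workflow: it calls DeleteServiceLinkedRole and then polls GetServiceLinkedRoleDeletionStatus until the deletion task completes. The role name shown is the Amazon Redshift example used later in this post; substitute the service-linked role you want to delete.

import time
import boto3

iam = boto3.client("iam")

# Request deletion of the service-linked role; IAM returns a deletion task ID.
response = iam.delete_service_linked_role(RoleName="AWSServiceRoleForRedshift")
task_id = response["DeletionTaskId"]

# Poll the deletion task until it succeeds or fails.
while True:
    status = iam.get_service_linked_role_deletion_status(DeletionTaskId=task_id)
    if status["Status"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

print("Deletion status:", status["Status"])
if status["Status"] == "FAILED":
    # The Reason field lists the resources that still require the role.
    print(status.get("Reason"))

If the task fails, delete the resources listed in the reason details and run the deletion again, which mirrors the console behavior described below.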

Note: The IAM console does not currently support service-linked role deletion for Amazon Lex, but you can delete your service-linked role by using the Amazon Lex console. To learn more, see Service Permissions.

How to delete a service-linked role by using the IAM console

If you no longer need to use an AWS service that uses a service-linked role, you can remove permissions from that service by deleting the service-linked role through the IAM console. To delete a service-linked role, you must have permissions for the iam:DeleteServiceLinkedRole action. For example, the following IAM policy grants the permission to delete service-linked roles used by Amazon Redshift. To learn more about working with IAM policies, see Working with Policies.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowDeletionOfServiceLinkedRolesForRedshift",
            "Effect": "Allow",
            "Action": ["iam:DeleteServiceLinkedRole"],
            "Resource": ["arn:aws:iam::*:role/aws-service-role/redshift.amazonaws.com/AWSServiceRoleForRedshift*"]
        }
    ]
}

To delete a service-linked role by using the IAM console:

  1. Navigate to the IAM console and choose Roles from the navigation pane.

Screenshot of the Roles page in the IAM console

  2. Choose the service-linked role you want to delete and then choose Delete role. In this example, I choose the AWSServiceRoleForRedshift service-linked role.

Screenshot of the AWSServiceRoleForRedshift service-linked role

  3. A dialog box asks you to confirm that you want to delete the service-linked role you have chosen. In the Last activity column, you can see when the linked service last used the service-linked role to perform an action on your behalf. If you want to continue, choose Yes, delete to delete the service-linked role.

Screenshot of the "Delete role" window

  4. IAM then checks whether you have any resources that require the service-linked role you are trying to delete. While IAM checks, you will see the status message, Deletion in progress, below the role name.

Screenshot showing "Deletion in progress"

  5. If no resources require the service-linked role, IAM deletes the role from your account and displays a success message on the console.

Screenshot of the success message

  6. If there are AWS resources that require the service-linked role you are trying to delete, you will see the status message, Deletion failed, below the role name.

Screenshot showing the "Deletion failed" status message

  7. If you choose View details, you will see a message that explains the deletion failed because there are resources that use the service-linked role.

Screenshot showing details about why the role deletion failed

  8. Choose View Resources to view the Amazon Resource Names (ARNs) of the first five resources that require the service-linked role. You can delete the service-linked role only after you delete all resources that require it. In this example, only one resource requires the service-linked role.

Conclusion

Service-linked roles make it easier for you to delegate permissions to AWS services to create and manage AWS resources on your behalf and to understand all actions the service will perform on your behalf. If you no longer need to use an AWS service that uses a service-linked role, you can remove permissions from that service by deleting the service-linked role through the IAM console. However, before you delete a service-linked role, you must delete all the resources associated with that role to ensure that your resources remain in a consistent state.

If you have any questions, submit a comment in the “Comments” section below. If you need help working with service-linked roles, start a new thread on the IAM forum or contact AWS Support.

– Ujjwal

Greater Transparency into Actions AWS Services Perform on Your Behalf by Using AWS CloudTrail

Post Syndicated from Ujjwal Pugalia original https://aws.amazon.com/blogs/security/get-greater-transparency-into-actions-aws-services-perform-on-your-behalf-by-using-aws-cloudtrail/

To make managing your AWS account easier, some AWS services perform actions on your behalf, including the creation and management of AWS resources. For example, AWS Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring. To make these AWS actions more transparent, AWS adds an AWS Identity and Access Management (IAM) service-linked role to your account for each linked service you use. Service-linked roles let you view all actions an AWS service performs on your behalf by using AWS CloudTrail logs. This helps you monitor and audit the actions AWS services perform on your behalf. No additional actions are required from you, and you can continue using AWS services the way you do today.

To learn more about which AWS services use service-linked roles and log actions on your behalf to CloudTrail, see AWS Services That Work with IAM. Over time, more AWS services will support service-linked roles. For more information about service-linked roles, see Role Terms and Concepts.

In this blog post, I demonstrate how to view CloudTrail logs so that you can more easily monitor and audit AWS services performing actions on your behalf. First, I show how AWS creates a service-linked role in your account automatically when you configure an AWS service that supports service-linked roles. Next, I show how you can view the policies of a service-linked role that grants an AWS service permission to perform actions on your behalf. Finally, I use the configured AWS service to perform an action and show you how the action appears in your CloudTrail logs.

How AWS creates a service-linked role in your account automatically

I will use Amazon Lex as the AWS service that performs actions on your behalf for this post. You can use Amazon Lex to create chatbots that allow for highly engaging conversational experiences through voice and text. You also can use chatbots on mobile devices, web browsers, and popular chat platform channels such as Slack. Amazon Lex uses Amazon Polly on your behalf to synthesize speech that sounds like a human voice.

Amazon Lex uses two IAM service-linked roles:

  • AWSServiceRoleForLexBots — Amazon Lex uses this service-linked role to invoke Amazon Polly to synthesize speech responses for your chatbot.
  • AWSServiceRoleForLexChannels — Amazon Lex uses this service-linked role to post text to your chatbot when managing channels such as Slack.

You don’t need to create either of these roles manually. When you create your first chatbot using the Amazon Lex console, Amazon Lex creates the AWSServiceRoleForLexBots role for you. When you first associate a chatbot with a messaging channel, Amazon Lex creates the AWSServiceRoleForLexChannels role in your account.

1. Start configuring the AWS service that supports service-linked roles

Navigate to the Amazon Lex console, and choose Get Started to navigate to the Create your Lex bot page. For this example, I choose a sample chatbot called OrderFlowers. To learn how to create a custom chatbot, see Create a Custom Amazon Lex Bot.

Screenshot of making the choice to create an OrderFlowers chatbot

2. Complete the configuration for the AWS service

When you scroll down, you will see the settings for the OrderFlowers chatbot. Notice the field for the IAM role with the value, AWSServiceRoleForLexBots. This service-linked role is “Automatically created on your behalf.” After you have entered all details, choose Create to build your sample chatbot.

Screenshot of the automatically created service-linked role

AWS has created the AWSServiceRoleForLexBots service-linked role in your account. I will return to using the chatbot later in this post when I discuss how Amazon Lex performs actions on your behalf and how CloudTrail logs these actions. First, I will show how you can view the permissions for the AWSServiceRoleForLexBots service-linked role by using the IAM console.

How to view actions in the IAM console that AWS services perform on your behalf

When you configure an AWS service that supports service-linked roles, AWS creates a service-linked role in your account automatically. You can view the service-linked role by using the IAM console.

1. View the AWSServiceRoleForLexBots service-linked role on the IAM console

Go to the IAM console, and choose AWSServiceRoleForLexBots on the Roles page. You can confirm that this role is a service-linked role by viewing the Trusted entities column.

Screenshot of the service-linked role

2. View the trusted entities that can assume the AWSServiceRoleForLexBots service-linked role

Choose the Trust relationships tab on the AWSServiceRoleForLexBots role page. You can view the trusted entities that can assume the AWSServiceRoleForLexBots service-linked role to perform actions on your behalf. In this example, the trusted entity is lex.amazonaws.com.

Screenshot of the trusted entities that can assume the service-linked role

3. View the policy attached to the AWSServiceRoleForLexBots service-linked role

Choose AmazonLexBotPolicy on the Permissions tab to view the policy attached to the AWSServiceRoleForLexBots service-linked role. You can view the policy summary to see that AmazonLexBotPolicy grants permission to Amazon Lex to use Amazon Polly.

Screenshot showing that AmazonLexBotPolicy grants permission to Amazon Lex to use Amazon Polly

4. View the actions that the service-linked role grants permissions to use

Choose Polly to view the action, SynthesizeSpeech, that the AmazonLexBotPolicy grants permission to Amazon Lex to perform on your behalf. Amazon Lex uses this permission to synthesize speech responses for your chatbot. I show later in this post how you can monitor this SynthesizeSpeech action in your CloudTrail logs.

Screenshot showing the action, SynthesizeSpeech, that the AmazonLexBotPolicy grants permission to Amazon Lex to perform on your behalf
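If you want to retrieve the same details programmatically rather than through the console, the following boto3 sketch (my own illustration, not part of the original walkthrough) prints the role's trust policy and the documents of its attached managed policies.

import json
import boto3

iam = boto3.client("iam")
role_name = "AWSServiceRoleForLexBots"

# The trust policy shows which service principal (lex.amazonaws.com) can assume the role.
role = iam.get_role(RoleName=role_name)["Role"]
print(json.dumps(role["AssumeRolePolicyDocument"], indent=2))

# List the managed policies attached to the service-linked role and print each document.
for attached in iam.list_attached_role_policies(RoleName=role_name)["AttachedPolicies"]:
    policy = iam.get_policy(PolicyArn=attached["PolicyArn"])["Policy"]
    version = iam.get_policy_version(
        PolicyArn=attached["PolicyArn"], VersionId=policy["DefaultVersionId"]
    )
    print(attached["PolicyName"])
    print(json.dumps(version["PolicyVersion"]["Document"], indent=2))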

Now that I know the trusted entity and the policy attached to the service-linked role, let’s go back to the chatbot I created earlier and see how CloudTrail logs the actions that Amazon Lex performs on my behalf.

How to use CloudTrail to view actions that AWS services perform on your behalf

As discussed already, I created an OrderFlowers chatbot in the Amazon Lex console. I will use the chatbot and show how the AWSServiceRoleForLexBots service-linked role helps me track actions in CloudTrail. First, though, I must have an active CloudTrail trail that stores its logs in an Amazon S3 bucket. I will use a trail called TestTrail and an S3 bucket called account-ids-slr.
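If you do not have a trail yet, a minimal boto3 sketch such as the following can create one with the names used in this post. It assumes the account-ids-slr bucket already exists and already has the bucket policy that CloudTrail requires before it can deliver logs.

import boto3

cloudtrail = boto3.client("cloudtrail")

# Create the trail and point it at the existing S3 bucket.
# The bucket must already carry a bucket policy that allows CloudTrail to write to it.
cloudtrail.create_trail(Name="TestTrail", S3BucketName="account-ids-slr")

# A trail does not record events until logging is started explicitly.
cloudtrail.start_logging(Name="TestTrail")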

1. Use the Amazon Lex chatbot via the Amazon Lex console

In Step 2 in the first section of this post, when I chose Create, Amazon Lex built the OrderFlowers chatbot. After the chatbot was built, the right pane showed that a Test Bot was created. Now, I choose the microphone symbol in the right pane and provide voice input to test the OrderFlowers chatbot. In this example, I tell the chatbot, “I would like to order some flowers.” The bot replies to me by asking, “What type of flowers would you like to order?”

Screenshot of voice input to test the OrderFlowers chatbot

When the chatbot replies using voice, Amazon Lex uses Amazon Polly to synthesize speech from text to voice. Amazon Lex assumes the AWSServiceRoleForLexBots service-linked role to perform the SynthesizeSpeech action.

2. Check CloudTrail to view actions performed on your behalf

Now that I have created the chatbot, let’s see which actions were logged in CloudTrail. Choose CloudTrail from the Services drop-down menu to reach the CloudTrail console. Choose Trails and choose the S3 bucket in which you are storing your CloudTrail logs.

Screenshot of the TestTrail trail

In the S3 bucket, you will find log entries for the SynthesizeSpeech event. This means that CloudTrail logged the action when Amazon Lex assumed the AWSServiceRoleForLexBots service-linked role to invoke Amazon Polly to synthesize speech responses for your chatbot. You can monitor and audit this invocation, and it provides you with transparency into Amazon Polly’s SynthesizeSpeech action that Amazon Lex invoked on your behalf. The applicable CloudTrail log section follows and I have emphasized the key lines.

{
   "eventVersion":"1.05",
   "userIdentity":{
      "type":"AssumedRole",
      "principalId":"{principal-id}:OrderFlowers",
      "arn":"arn:aws:sts::{account-id}:assumed-role/AWSServiceRoleForLexBots/OrderFlowers",
      "accountId":"{account-id}",
      "accessKeyId":"{access-key-id}",
      "sessionContext":{
         "attributes":{
            "mfaAuthenticated":"false",
            "creationDate":"2017-09-17T17:30:05Z"
         },
         "sessionIssuer":{
            "type":"Role",
            "principalId":"{principal-id}",
            "arn":"arn:aws:iam::{account-id}:role/aws-service-role/lex.amazonaws.com/AWSServiceRoleForLexBots",
            "accountId":"{account-id}",
            "userName":"AWSServiceRoleForLexBots"
         }
      },
      "invokedBy":"lex.amazonaws.com"
   },
   "eventTime":"2017-09-17T17:30:05Z",
   "eventSource":"polly.amazonaws.com",
   "eventName":"SynthesizeSpeech",
   "awsRegion":"us-east-1",
   "sourceIPAddress":"lex.amazonaws.com",
   "userAgent":"lex.amazonaws.com",
   "requestParameters":{
      "outputFormat":"mp3",
      "textType":"text",
      "voiceId":"Salli",
      "text":"**********"
   },
   "responseElements":{
      "requestCharacters":45,
      "contentType":"audio/mpeg"
   },
   "requestID":"{request-id}",
   "eventID":"{event-id}",
   "eventType":"AwsApiCall",
   "recipientAccountId":"{account-id}"
}
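As an alternative to downloading and unzipping the log files from S3, you can query recent management events directly through the CloudTrail LookupEvents API. The following boto3 sketch filters the event history for SynthesizeSpeech calls; each matching record contains the same JSON shown above.

import json
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent SynthesizeSpeech events from the CloudTrail event history.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "SynthesizeSpeech"}],
    MaxResults=10,
)

for event in events["Events"]:
    record = json.loads(event["CloudTrailEvent"])  # the full JSON event record
    print(event["EventTime"], record["userIdentity"]["arn"])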

Conclusion

Service-linked roles make it easier for you to track and view actions that linked AWS services perform on your behalf by using CloudTrail. When an AWS service supports service-linked roles to enable this additional logging, you will see a service-linked role added to your account.

If you have comments about this post, submit a comment in the “Comments” section below. If you have questions about working with service-linked roles, start a new thread on the IAM forum or contact AWS Support.

– Ujjwal

How to Query Personally Identifiable Information with Amazon Macie

Post Syndicated from Chad Woolf original https://aws.amazon.com/blogs/security/how-to-query-personally-identifiable-information-with-amazon-macie/

Amazon Macie logo

In August 2017 at the AWS Summit New York, AWS launched a new security and compliance service called Amazon Macie. Macie uses machine learning to automatically discover, classify, and protect sensitive data in AWS. In this blog post, I demonstrate how you can use Macie to help enable compliance with applicable regulations, starting with data retention.

How to query retained PII with Macie

Data retention and mandatory data deletion are common topics across compliance frameworks, so knowing what is stored and how long it has been or needs to be stored is of critical importance. For example, you can use Macie for Payment Card Industry Data Security Standard (PCI DSS) 3.2, requirement 3, “Protect stored cardholder data,” which mandates a “quarterly process for identifying and securely deleting stored cardholder data that exceeds defined retention.” You also can use Macie for ISO 27017 requirement 12.3.1, which calls for “retention periods for backup data.” In each of these cases, you can use Macie’s built-in queries to identify the age of data in your Amazon S3 buckets and to help meet your compliance needs.

To get started with Macie and run your first queries of personally identifiable information (PII) and sensitive data, follow the initial setup as described in the launch post on the AWS Blog. After you have set up Macie, walk through the following steps to start running queries. Start by focusing on the S3 buckets that you want to inventory so that you capture important compliance-related activity and data.

To start running Macie queries:

  1. In the AWS Management Console, launch the Macie console (you can type macie to find the console).
  2. Click Dashboard in the navigation pane. This shows you an overview of the risk level and data classification type of all inventoried S3 buckets, categorized by date and type.
    Screenshot of "Dashboard" in the navigation pane
  3. Choose S3 objects by PII priority. This dashboard lets you sort by PII priority and PII types.
    Screenshot of "S3 objects by PII priority"
  4. In this case, I want to find information about credit card numbers. I choose the magnifying glass for the type cc_number (note that PII types can be used for custom queries). This view shows the events where PII classified data has been uploaded to S3. When I scroll down, I see the individual files that have been identified.
    Screenshot showing the events where PII classified data has been uploaded to S3
  5. Before looking at the files, I want to continue to build the query by only showing items with high priority. To do so, I choose the row called Object PII Priority and then the magnifying glass icon next to High.
    Screenshot of refining the query for high priority events
  6. To view the results matching these queries, I scroll down and choose any file listed. This shows vital information such as creation date, location, and object access control list (ACL).
  7. What I am most interested in for this case is the Object PII details line, which helps me understand more about what was found in the file. In this case, I see name and credit card information, which is what caused the high priority. Scrolling up again, I also see that the query fields have updated as I interacted with the UI.
    Screenshot showing "Object PII details"

Let’s say that I want to get an alert every time Macie finds new data matching this query. This alert can be used to automate response actions by using AWS Lambda and Amazon CloudWatch Events.
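As a rough sketch of what that automation could look like, the following Lambda handler simply forwards a matched Macie alert to an Amazon SNS topic. The event handling and the MACIE_ALERT_TOPIC_ARN environment variable are my own assumptions for illustration; you would attach this function to a CloudWatch Events rule that matches your Macie alerts.

import json
import os

import boto3

sns = boto3.client("sns")

def handler(event, context):
    # CloudWatch Events delivers the matched alert payload under the "detail" key.
    detail = event.get("detail", event)

    # Forward the alert to an SNS topic; "MACIE_ALERT_TOPIC_ARN" is a hypothetical
    # environment variable name used only for this sketch.
    sns.publish(
        TopicArn=os.environ["MACIE_ALERT_TOPIC_ARN"],
        Subject="Amazon Macie alert",
        Message=json.dumps(detail, default=str),
    )
    return {"forwarded": True}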

  1. I choose the left green icon called Save query as alert.
    Screenshot of "Save query as alert" button
  2. I can customize the alert and change things like category or severity to fit my needs based on the alert data.
  3. Another way to find the information I am looking for is to run custom queries. To start using custom queries, I choose Research in the navigation pane.
    1. To learn more about custom Macie queries and what you can do on the Research tab, see Using the Macie Research Tab.
  4. I change the type of query I want to run from CloudTrail data to S3 objects in the drop-down list menu.
    Screenshot of choosing "S3 objects" from the drop-down list menu
  5. Because I want PII data, I start typing in the query box, which has an autocomplete feature. I choose the pii_types: query. I can now type the data I want to look for. In this case, I want to see all files matching the credit card filter so I type cc_number and press Enter. The query box now says, pii_types:cc_number. I press Enter again to enable autocomplete, and then I type AND pii_types:email to require both a credit card number and email address in a single object.
    The query looks for all files matching the credit card filter ("cc_number")
  6. I choose the magnifying glass to search and Macie shows me all S3 objects that are tagged as PII of type Credit Cards. I can further specify that I only want to see PII of type Credit Card that are classified as High priority by adding AND and pii_impact:high to the query.
    Screenshot showing narrowing the query results further

As before, I can save this new query as an alert by clicking Save query as alert, which will be triggered by data matching the query going forward.

Advanced tip

Try the following advanced queries using Lucene query syntax and save the queries as alerts in Macie.

  • Use a regular-expression based query to search for a minimum of 10 credit card numbers and 10 email addresses in a single object:
    • pii_explain.cc_number:/([1-9][0-9]|[0-9]{3,}) distinct Credit Card Numbers.*/ AND pii_explain.email:/([1-9][0-9]|[0-9]{3,}) distinct Email Addresses.*/
  • Search for objects containing at least one credit card, name, and email address that have an object policy enabling global access (searching for S3 AllUsers or AuthenticatedUsers permissions):
    • (object_acl.Grants.Grantee.URI:"http\://acs.amazonaws.com/groups/global/AllUsers" OR object_acl.Grants.Grantee.URI:"http\://acs.amazonaws.com/groups/global/AuthenticatedUsers") AND (pii_types.cc_number AND pii_types.email AND pii_types.name)

These are two ways to identify and be alerted about PII by using Macie. In a similar way, you can create custom alerts for various AWS CloudTrail events by choosing a different data set on which to run the queries. In the examples in this post, I identified credit cards stored in plain text (all data in this post is example data only), determined how long they had been stored in S3 by viewing the result details, and set up alerts to notify or trigger actions on new sensitive data being stored. With queries like these, you can build a reliable data validation program.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about how to use Macie, start a new thread on the Macie forum or contact AWS Support.

-Chad

Kodi ‘Trademark Troll’ Has Interesting Views on Co-Opting Other People’s Work

Post Syndicated from Andy original https://torrentfreak.com/kodi-trademark-troll-has-interesting-views-on-co-opting-other-peoples-work-170917/

The Kodi team, operating under the XBMC Foundation, announced last week that a third-party had registered the Kodi trademark in Canada and was using it for their own purposes.

That person was Geoff Gavora, who had previously been in communication with the Kodi team, expressing how important the software was to his sales.

“We had hoped, given the positive nature of his past emails, that perhaps he was doing this for the benefit of the Foundation. We learned, unfortunately, that this was not the case,” XBMC Foundation President Nathan Betzen said.

According to the Kodi team, Gavora began delisting Amazon ads placed by companies selling Kodi-enabled products, based on infringement of Gavora’s trademark rights.

“[O]nly Gavora’s hardware can be sold, unless those companies pay him a fee to stay on the store,” Betzen explained.

Predictably, Gavora’s move is being viewed as highly controversial, not least since he’s effectively claiming licensing rights in Canada over what should be a free and open source piece of software. TF obtained one of the notices Amazon sent to a seller of a Kodi-enabled device in Canada, following a complaint from Gavora.

Take down Kodi from Amazon, or pay Gavora

So who is Geoff Gavora and what makes him tick? Thanks to a 2016 interview with Ali Salman of the Rapid Growth Podcast, we have a lot of information from the horse’s mouth.

It all began in 2011, when Gavora began jailbreaking Apple TVs, loading them with XBMC, and selling them to friends.

“I did it as a joke, for beer money from my friends,” Gavora told Salman.

“I’d do it for $25 to $50 and word of mouth spread that I was doing this so we could load on this media center to watch content and online streams from it.”

Intro to the interview with Ali Salman

Soon, however, word of mouth caused the business to grow wings, Gavora claims.

“So they started telling people and I start telling people it’s $50, and then I got so busy so I start telling people it’s $75. I’m getting too busy with my work and with this. And it got to the point where I was making more jailbreaking these Apple TVs than I was at my career, and I wasn’t very happy at my career at that time.”

Jailbreaking was supposed to be a side thing to tide Gavora over until another job came along, but he had a problem – he didn’t come from a technical background. Nevertheless, what Gavora did have was a background in marketing and with a decent knowledge of how to succeed in customer service, he majored on that front.

Gavora had come to learn that while people wanted his devices, they weren’t very good at operating XBMC (Kodi’s former name) which he’d loaded onto them. With this in mind, he began offering web support and phone support via a toll-free line.

“I started receiving calls from New York, Dallas, and then Australia, Hong Kong. Everyone around the world was calling me and saying ‘we hear there’s some kid in Calgary, some young child, who’s offering tech support for the Apple TV’,” Gavora said.

But with things apparently going well, a wrench was soon thrown into the works when Apple released the third variant of its Apple TV and Gavora was unable to jailbreak it. This prompted him to market his own Linux-based set-top device, and his business, Raw-Media, grew from there.

While it seems likely that so-called ‘Raw Boxes’ were doing reasonably well with consumers, what was the secret of their success? Podcast host Salman asked Gavora for his ‘networking party 10-second pitch’, and the Canadian was happy to oblige.

“I get this all the time actually. I basically tell people that I sell a box that gives them free TV and movies,” he said.

This was met with laughter from the host, to which Gavora added, “That’s sort of the three-second pitch and everyone’s like ‘Oh, tell me more’.”

“Who doesn’t like free TV, come on?” Salman responded. “Yeah exactly,” Gavora said.

The image below, taken from a January 2016 YouTube unboxing video, shows one of the products sold by Gavora’s company.

Raw-Media Kodi Box packaging (note Kodi logo)

Bearing in mind the offer of free movies and TV, the tagline on the box, “Stop paying for things you don’t want to watch, watch more free tv!” initially looks quite provocative. That being said, both the device and Kodi are perfectly capable of playing plenty of legal content from free sources, so there’s no problem there.

What is surprising, however, is that the unboxing video shows the device being booted up, apparently already loaded with infamous third-party Kodi addons including PrimeWire, Genesis, Icefilms, and Navi-X.

The unboxing video showing the Kodi setup

Given that Gavora has registered the Kodi trademark in Canada and prints the official logo on his packaging, this runs counter to the official Kodi team’s aggressive stance towards boxes ready-configured with what they categorize as banned addons. Matters are compounded when one visits the product support site.

As seen in the image below, Raw-Media devices are delivered with a printed card in the packaging informing people where to get the after-sales services Gavora says he built his business upon. The cards advise people to visit No-Issue.ca, a site setup to offer text and video-based support to set-top box buyers.

No-Issue.ca (which is hosted on the same server as raw-media.ca and claimed officially as a sister site here) now redirects to No-Issue.is, as per a 2016 announcement. It has a fairly bland forum but the connected tutorial videos, found on No Issue’s YouTube channel, offer a lot more spice.

Registered under Gavora’s online nickname Gombeek (which is also used on the official Kodi forums), the channel is full of videos detailing how to install and use a wide range of addons.

The No-issue YouTube Channel tutorials

But while supplying tutorial videos is one thing, providing the actual software addons is another. Surprisingly, No-Issue does that too. Filed away under the URL http://solved.no-issue.is/ is a Kodi repository which distributes a wide range of addons, including many that specialize in infringing content, according to the Kodi team.

The No-Issue repository

A source familiar with Raw-Media’s devices informs TF that they’re no longer delivered with addons installed. However, tools hosted on No-Issue.is automate the installation process for the customer, with unlisted YouTube Videos (1,2) providing the instructions.

XBMC Foundation President Nathan Betzen says that situation isn’t ideal.

“If that really is his repo it is disappointing to see that Gavora is charging a fee or outright preventing the sale of boxes with Kodi installed that do not include infringing add-ons, while at the same time he is distributing boxes himself that do include the infringing add-ons like this,” Betzen told TF.

While the legality of this type of service is yet to be properly tested in Canada and may yet emerge as entirely permissible under local law, Gavora himself previously described his business as operating in a gray area.

“If I could go back in time four years, I would’ve been more aggressive in the beginning because there was a lot of uncertainty being in a gray market business about how far I could push it,” he said.

“I really shouldn’t say it’s a gray market because everything I do is completely above board, I just felt it was more gray market so I was a bit scared,” he added.

But, legality aside (which will be determined in due course through various cases 1,2), the situation is still problematic when it comes to the Kodi trademark.

The official Kodi team indicate they don’t want to be associated with any kind of questionable addon or even tutorials for the same. Nevertheless, several of the addons installed by No-Issue (including PrimeWire, cCloud TV, Genesis, Icefilms, MoviesHD, MuchMovies and Navi-X, to name a few), are present on the Kodi team’s official ban list.

The fact remains, however, that Gavora successfully registered the trademark in Canada (one month later it was transferred to a brand new company at the same address), and Kodi now have no control over the situation in the country, short of a settlement or some kind of legal action.

Kodi matters aside, though, we get more insight into Gavora’s attitudes towards intellectual property after learning that he studied gemology and jewelry at school. He’s a long-standing member of jewelry discussion forum Ganoskin.com (his profile links to Gavora.com, a domain Gavora owns, as per information supplied by Amazon).

Things get particularly topical in a 2006 thread titled “When your work gets ripped”. The original poster asked how people feel when their jewelry work gets copied and Gavora made his opinions known.

“I think that what most people forget to remember is that when a piece from Tiffany’s or Cartier is ripped off or copied they don’t usually just copy the work, they will stamp it with their name as well,” Gavora said.

“This is, in fact, fraud and they are deceiving clients into believing they are purchasing genuine Tiffany’s or Cartier pieces. The client is in fact more interested in purchasing from an artist than they are the piece. Laying claim to designs (unless a symbol or name is involved) is outrageous.”

Unless that ‘design’ is called Kodi, of course, then it’s possible to claim it as your own through an administrative process and begin demanding licensing fees from the public. That being said, Gavora does seem to flip back and forth a little, later suggesting that being copied is sometimes ok.

“If someone copies your design and produces it under their own name, I think one should be honored and revel in the fact that your design is successful and has caused others to imitate it and grow from it,” he wrote.

“I look forward to the day I see one of my original designs copied, that is the day I will know my design is a success.”

From their public statements, this opinion isn’t shared by the Kodi team in respect of their product. Despite the Kodi name, software and logo being all their own work, they now find themselves having to claw back rights in Canada, in order to keep the product free in the region. For now, however, that seems like a difficult task.

TorrentFreak wrote to Gavora and asked him why he felt the need to register the Kodi trademark, but we received no response. That means we didn’t get the chance to ask him why he’s taking down Amazon listings for other people’s devices, or about something else that came up in the podcast.

“My biggest weakness, I guess, is that I’m too ethical about how I do my business,” he said, referring to how he deals with customers.

Only time will tell how that philosophy will affect Gavora’s attitudes to trademarks and people’s desire not to be charged for using free, open source software.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

UK Copyright Trolls Cite Hopeless Case to Make People Pay Up

Post Syndicated from Andy original https://torrentfreak.com/uk-copyright-trolls-cite-hopeless-case-to-make-people-pay-up-170916/

Our coverage of Golden Eye International dates back more than five years. Much like similar companies in the copyright troll niche, the outfit monitors BitTorrent swarms, collects IP addresses, and then heads off to court to obtain alleged pirates’ identities.

From there it sends letters threatening legal action, unless recipients pay a ‘fine’ of hundreds of pounds to settle an alleged porn piracy case. While some people pay up, others refuse to do so on the basis they are innocent, the ISP bill payer, or simply to have their day in court. Needless to say, a full-on court battle on the merits is never on the agenda.

Having gone quiet for an extended period of time, it was assumed that Golden Eye had outrun its usefulness as a ‘fine’ collection outfit. Just lately, however, there are signs that the company is having another go at reviving old cases against people who previously refused to pay.

A post on Slyck forums, which runs a support thread for people targeted by trolls, reveals the strategy.

“I dealt with these Monkeys last year. I spent 5 weeks practically arguing with them. They claim they have to prove it based on the balance of probability’s [sic]. I argue that they actually have to prove it was me,” ‘Matt’ wrote in August.

“It wasn’t me, and despite giving them reasonable doubt it wasn’t me. (I’m Gay… why would I be downloading straight porn?) They still persuaded it, trying to dismiss anything that cast any doubt on their claim. The emails finished how I figured they would…. They were going to send court documentation. It never arrived.”

After months of silence, at the end of August this year ‘Matt’ says GoldenEye got in touch again, suggesting that a conclusion to another copyright case might encourage him to cough up. He says that Golden Eye contacted him saying that someone settled out of court with TCYK, another copyright troll, for £1,000.

“My thoughts…Idiots and doubt it,” ‘Matt’ said. “Honestly, I almost cried I thought I had got rid of these trolls and they are back for round two.”

This wasn’t an isolated case. Another recipient of a Golden Eye threat also revealed getting contacted by the company, also with fresh pressure to pay.

“You may be interested to know that a solicitor, acting on behalf of Robert Kemble in a claim similar to ours but brought by TCYK LLC, entered into an agreement to settle the court case by paying £1,000,” Golden Eye told the individual.

“In view of the agreement reached in the Kemble case, we would invite you to reconsider your position as to whether you would like to reach settlement with us. We would point out, that, despite the terms of settlement in the Kemble case, we remain prepared to stand by our original offer of settlement with you, that is payment of £500.00.”

After last corresponding with Golden Eye in January following repeated denials, new contact from the company would be worrying for anyone. It certainly affected this person negatively.

“I am now at a loss and don’t know what more I can do. I do not want to settle this, but also I cannot afford a solicitor. Any further advice would be gratefully appreciated as [i’m] now having panic attacks,” the person wrote.

After citing the Robert Kemble case, one might think that Golden Eye would be good enough to explain the full situation. They didn’t – so let’s help them a little bit in that respect, to help their targets make an informed decision.

Robert Kemble was a customer of Sky Broadband. TCYK, in conjunction with UK-based Hatton and Berkeley, sent a letter to Kemble in July 2015 asking him to pay a ‘fine’ for alleged Internet piracy of the Robert Redford movie The Company You Keep, way back in April 2013.

So far, so ordinary – but here’s the big deal.

Unlike the people being re-targeted by Golden Eye this time around, Kemble admitted in writing that infringement had been going on via his account.

In a response, Kemble told TCYK that he was shocked to receive their letter but after speaking to people in his household, had discovered that a child had been downloading films. He didn’t say that the Redford film was among them but he apologized to the companies all the same. Clearly, that wasn’t going to be enough.

In August 2015, TCYK wrote back to Kemble, effectively holding him responsible for other people’s actions while demanding a settlement of £600 to be paid to third-party company, Ranger Bay Limited.

“The child who is responsible for the infringement should sign the undertakings in our letter to you. Please when replying specify clearly on the undertakings the child’s full name and age,” the company later wrote. Nice.

What took place next was a round of letter tennis between Kemble’s solicitor and those acting for TCYK, with the latter insisting that Kemble had already admitted infringement (or authorizing the same) and demanding around £2000 to settle the case at this later stage.

With no settlement forthcoming, TCYK demanded £5,000 in the small claims court.

“The Defendant has admitted that his internet address has been used to infringe the Claimant’s copyright whereby, through the Defendant’s licencees’ use of the Defendant’s internet address, he acquired the Work and then communicated the Work in a digital form via the internet to the public without the license or consent of the Claimant,” the TCYK claim form reads.

TorrentFreak understands that the court process that followed didn’t center on the merits of the infringement case, but procedural matters over how the case was handled. On this front, Kemble failed in his efforts to have the case – which was heard almost a year ago – decided in his favor.

Now, according to Golden Eye at least, Kemble has settled with TCYK for £1000, which is just £300 more than their final pre-court offer. Hardly sounds like good value for money.

The main point, though, is that this case wouldn’t have gotten anywhere near a court if Kemble hadn’t admitted liability of sorts in the early stages. This is a freak case in all respects and has no bearing on anyone’s individual case, especially those who haven’t admitted liability.

So, for people getting re-hounded by Golden Eye now, remember the Golden Rule. If you’re innocent, by all means tell them, and stick to your guns. But, at your peril tell them anything else on top, or risk having it used against you.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The Pirate Bay Website Runs a Cryptocurrency Miner

Post Syndicated from Ernesto original https://torrentfreak.com/the-pirate-bay-website-runs-a-cryptocurrency-miner-170916/

Four years ago many popular torrent sites added an option to donate via Bitcoin. The Pirate Bay was one of the first to jump on board and still lists its address on the website.

While there’s nothing wrong with using Bitcoin as a donation tool, adding a Javascript cryptocurrency miner to a site is of a totally different order.

A few hours ago many Pirate Bay users began noticing that their CPU usage increased dramatically when they browsed certain Pirate Bay pages. Upon closer inspection, this spike appears to have been caused by a cryptocurrency miner embedded on the site.

The code in question is tucked away in the site’s footer and uses a miner provided by Coinhive. This service offers site owners the option to convert the CPU power of users into Monero coins.

The miner does indeed appear to increase CPU usage quite a bit. It is throttled at different rates (we’ve seen both 0.6 and 0.8) but the increase in resources is immediately noticeable.

The miner is not enabled site-wide. When we checked, it appeared in the search results and category listings, but not on the homepage or individual torrent pages.

There has been no official comment from the site operators on the issue (update, see below), but many users have complained about it. In the official site forums, TPB supermoderator Sid is clearly not in agreement with the site’s latest addition.

“That really is serious, so hopefully we can get some action on it quickly. And perhaps get some attention for the uploading and commenting bugs while they’re at it,” Sid writes.

Like many others, he also points out that blocking or disabling Javascript can stop the automatic mining. This can be done via browser settings or through script blocker addons such as NoScript and ScriptBlock. Alternatively, people can block the miner URL with an ad-blocker.

Whether the miner is a new and permanent tool, or perhaps triggered by an advertiser, is unknown at this point. When we hear more, this article will be updated accordingly.

Update: We were told that the miner is being tested for a short period as a new way to generate revenue. This could eventually replace the ads on the site. More info may be revealed later.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

BREIN Tracks Down and Settles With “Libra Release Team”

Post Syndicated from Ernesto original https://torrentfreak.com/brein-tracks-down-and-settles-with-libra-release-team-170916/

Dutch anti-piracy outfit BREIN has been very active in recent years, targeting uploaders on various sharing sites and services.

This week the anti-piracy group announced yet another victory against a group of frequent copyright infringers in the Netherlands.

BREIN successfully tracked down and settled with two key members of the “Libra Release Team” (LRT), which is estimated to consist of eight to ten people in total.

LRT is best known in the Netherlands for repackaging English movie and TV releases with Dutch subtitles. These were then shared on torrent sites and Usenet forums.

According to court papers, the files in question were uploaded to place2home.org and place2home.net. However, they often spread out over other sites as well. In total, the release team has published nearly 800 titles.

BREIN tracked down the founder of LRT, who had already stopped uploading, and obtained an ex-parte court order against a more recent uploader. Both have settled with the anti-piracy group for a total of 8,000 euros, an amount that takes their financial situations into account.

The uploader was further summoned to stop his activities, effective immediately. If he does not, an ex-parte court order requires him to pay an additional penalty of €2,000 per day, up to a maximum of €50,000.

The court papers don’t mention how the members were uncovered, but it is likely that they left traces to their real identities online, which is often the case. The group also recruited new members publicly, using Skype and Gmail as contact addresses.

It’s unclear whether the settlements mean the end of the Libra Release Team. While the targeted persons are unlikely to pick up their old habit, some of the others may still continue, perhaps under a new name.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS IAM Policy Summaries Now Help You Identify Errors and Correct Permissions in Your IAM Policies

Post Syndicated from Joy Chatterjee original https://aws.amazon.com/blogs/security/iam-policy-summaries-now-help-you-identify-errors-and-correct-permissions-in-your-iam-policies/

In March, we made it easier to view and understand the permissions in your AWS Identity and Access Management (IAM) policies by using IAM policy summaries. Today, we updated policy summaries to help you identify and correct errors in your IAM policies. When you set permissions using IAM policies, for each action you specify, you must match that action to supported resources or conditions. Now, you will see a warning if these policy elements (Actions, Resources, and Conditions) defined in your IAM policy do not match.

When working with policies, you may find that although the policy has valid JSON syntax, it does not grant or deny the desired permissions because the Action element does not have an applicable Resource element or Condition element defined in the policy. For example, you may want to create a policy that allows users to view a specific Amazon EC2 instance. To do this, you create a policy that specifies ec2:DescribeInstances for the Action element and the Amazon Resource Name (ARN) of the instance for the Resource element. When testing this policy, you find AWS denies this access because ec2:DescribeInstances does not support resource-level permissions and requires access to list all instances. Therefore, to grant this action, you need to specify a wildcard (*) in the Resource element of your policy in order for the policy to function correctly.
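As a concrete illustration of that correction, the following boto3 sketch creates a managed policy in which ec2:DescribeInstances is paired with a wildcard Resource element; the policy name is only an example.

import json
import boto3

iam = boto3.client("iam")

# ec2:DescribeInstances does not support resource-level permissions,
# so the Resource element must be a wildcard for the action to be granted.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingAllInstances",
            "Effect": "Allow",
            "Action": ["ec2:DescribeInstances"],
            "Resource": ["*"],
        }
    ],
}

iam.create_policy(
    PolicyName="ExampleDescribeInstancesPolicy",  # example name for this sketch
    PolicyDocument=json.dumps(policy_document),
)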

To help you identify and correct permissions, you will now see a warning in a policy summary if the policy has either of the following:

  • An action that does not support the resource specified in a policy.
  • An action that does not support the condition specified in a policy.

In this blog post, I walk through two examples of how you can use policy summaries to help identify and correct these types of errors in your IAM policies.

How to use IAM policy summaries to debug your policies

Example 1: An action does not support the resource specified in a policy

Let’s say a human resources (HR) representative, Casey, needs access to the personnel files stored in HR’s Amazon S3 bucket. To do this, I create the following policy to grant all actions that begin with s3:List. In addition, I grant access to s3:GetObject in the Action element of the policy. To ensure that Casey has access only to a specific bucket and not others, I specify the bucket ARN in the Resource element of the policy.

Note: This policy does not grant the desired permissions.

This policy does not work. Do not copy.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ThisPolicyDoesNotGrantAllListandGetActions",
            "Effect": "Allow",
            "Action": ["s3:List*",
                       "s3:GetObject"],
            "Resource": ["arn:aws:s3:::HumanResources"]
        }
    ]
}

After I create the policy, HRBucketPermissions, I select this policy from the Policies page to view the policy summary. From here, I check to see if there are any warnings or typos in the policy. I see a warning at the top of the policy detail page because the policy does not grant some permissions specified in the policy, which is caused by a mismatch among the actions, resources, or conditions.

Screenshot showing the warning at the top of the policy

To view more details about the warning, I choose Show remaining so that I can understand why the permissions do not appear in the policy summary. As shown in the following screenshot, I see that the policy grants no access to the services that are not named in it, which is expected. However, next to S3, I see a warning that one or more S3 actions do not have an applicable resource.

Screenshot showing that one or more S3 actions do not have an applicable resource

To understand why the specific actions do not have a supported resource, I choose S3 from the list of services and choose Show remaining. I type List in the filter to understand why some of the list actions are not granted by the policy. As shown in the following screenshot, I see these warnings:

  • This action does not support resource-level permissions. This means the action does not support resource-level permissions and requires a wildcard (*) in the Resource element of the policy.
  • This action does not have an applicable resource. This means the action supports resource-level permissions, but not the resource type defined in the policy. In this example, I specified an S3 bucket for an action that supports only an S3 object resource type.

From these warnings, I see that s3:ListAllMyBuckets, s3:ListMultipartUploadParts, s3:ListObjects, and s3:GetObject do not support an S3 bucket resource type, which results in Casey not being granted those permissions. To correct the policy, I choose Edit policy and update the policy with three statements based on the resources that the S3 actions support. Because Casey needs access to view and read all of the objects in the HumanResources bucket, I add a wildcard (*) for the S3 object path in the Resource ARN.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TheseActionsSupportBucketResourceType",
            "Effect": "Allow",
            "Action": ["s3:ListBucket",
                       "s3:ListBucketByTags",
                       "s3:ListBucketMultipartUploads",
                       "s3:ListBucketVersions"],
            "Resource": ["arn:aws:s3:::HumanResources"]
        },{
            "Sid": "TheseActionsRequireAllResources",
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets",
                       "s3:ListMultipartUploadParts",
                       "s3:ListObjects"],
            "Resource": [ "*"]
        },{
            "Sid": "TheseActionsRequireSupportsObjectResourceType",
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::HumanResources/*"]
        }
    ]
}

After I make these changes, I see the updated policy summary and see that warnings are no longer displayed.

Screenshot of the updated policy summary that no longer shows warnings

In the previous example, I showed how to identify and correct permissions errors that include actions that do not support a specified resource. In the next example, I show how to use policy summaries to identify and correct a policy that includes actions that do not support a specified condition.

Example 2: An action does not support the condition specified in a policy

For this example, let’s assume Bob is a project manager who requires view and read access to all the code builds for his team. To grant him this access, I create the following JSON policy that specifies all list and read actions to AWS CodeBuild and defines a condition to limit access to resources in the us-west-2 Region in which Bob’s team develops.

This policy does not work. Do not copy. 
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ListReadAccesstoCodeServices",
            "Effect": "Allow",
            "Action": [
                "codebuild:List*",
                "codebuild:BatchGet*"
            ],
            "Resource": ["*"], 
             "Condition": {
                "StringEquals": {
                    "ec2:Region": "us-west-2"
                }
            }
        }
    ]	
}

After I create the policy, PMCodeBuildAccess, I select this policy from the Policies page to view the policy summary in the IAM console. From here, I check to see if the policy has any warnings or typos. I see an error at the top of the policy detail page because the policy does not grant any permissions.

Screenshot with an error showing the policy does not grant any permissions

To view more details about the error, I choose Show remaining to understand why no permissions result from the policy. I see this warning: One or more conditions do not have an applicable action. This means that the condition is not supported by any of the actions defined in the policy.

From the warning message (see preceding screenshot), I realize that ec2:Region is not a supported condition for any actions in CodeBuild. To correct the policy, I separate the list actions that do not support resource-level permissions into a separate Statement element and specify * as the resource. For the remaining CodeBuild actions that support resource-level permissions, I use the ARN to specify the us-west-2 Region in the project resource type.

CORRECT POLICY 
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TheseActionsSupportAllResources",
            "Effect": "Allow",
            "Action": [
                "codebuild:ListBuilds",
                "codebuild:ListProjects",
                "codebuild:ListRepositories",
                "codebuild:ListCuratedEnvironmentImages",
                "codebuild:ListConnectedOAuthAccounts"
            ],
            "Resource": ["*"] 
        }, {
            "Sid": "TheseActionsSupportAResource",
            "Effect": "Allow",
            "Action": [
                "codebuild:ListBuildsForProject",
                "codebuild:BatchGet*"
            ],
            "Resource": ["arn:aws:codebuild:us-west-2:123456789012:project/*"] 
        }

    ]	
}

After I make the changes, I view the updated policy summary and see that no warnings are displayed.

Screenshot showing the updated policy summary with no warnings

When I choose CodeBuild from the list of services, I also see that for the actions that support resource-level permissions, the access is limited to the us-west-2 Region.

Screenshot showing that for the actions that support resource-level permissions, the access is limited to the us-west-2 Region.

Conclusion

Policy summaries make it easier to view and understand the permissions and resources in your IAM policies by displaying the permissions granted by the policies. As I’ve demonstrated in this post, you can also use policy summaries to help you identify and correct your IAM policies. To understand the types of warnings that policy summaries support, you can visit Troubleshoot IAM Policies. To view policy summaries in your AWS account, sign in to the IAM console and navigate to any policy on the Policies page of the IAM console or the Permissions tab on a user’s page.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or suggestions for this solution, start a new thread on the IAM forum or contact AWS Support.

– Joy

timeShift(GrafanaBuzz, 1w) Issue 13

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/09/15/timeshiftgrafanabuzz-1w-issue-13/

It’s been a busy week here at Grafana Labs – Grafana 4.5 is now available! We’ve made a lot of enhancements and added new features in this release, so be sure to check out the release blog post to see the full changelog. The GrafanaCon EU CFP is officially open, so please don’t forget to submit your topic. We’re looking for technical and non-technical talks of all sizes.


Latest Release

Grafana v4.5 is available for download. The new Grafana 4.5 release includes major improvements to the query editors for Prometheus, Elasticsearch and MySQL.
View the changelog.

Download Grafana 4.5 Now


From the Blogosphere

Percona Live Europe Featured Talks: Visualize Your Data with Grafana Featuring Daniel Lee: The folks from Percona sat down with Grafana Labs Software Developer Daniel Lee to discuss his upcoming talk at PerconaLive Europe 2017, Dublin, and how data can drive better decision making for your business. Get your tickets now, and use code: SeeMeSpeakPLE17 for 10% off!

Register Now

Performance monitoring with ELK / Grafana: This article walks you through setting up the ELK stack to monitor webpage load time, but switches out Kibana for Grafana so you can visualize data from other sources right next to this performance data.

ESXi Lab Series: Aaron created a video mini-series about implementing both offensive and defensive security in an ESXi Lab environment. Parts four and five focus on monitoring with Grafana, but you’ll probably want to start with part one.

Raspberry Pi Monitoring with Grafana: We’ve been excited to see more and more articles about Grafana from Raspberry Pi users. This article helps you install and configure Grafana, and also touches on what monitoring is and why it’s important.


Grafana Plugins

This week we were busy putting the finishing touches on the new release, but we do have an update to the Gnocchi data source plugin to announce, and a new annotation plugin that works with any data source. Install or update plugins on an on-prem instance using the Grafana-cli, or with one click on Hosted Grafana.
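
If you manage an on-prem instance from the command line, installing a plugin is a one-liner; here is a minimal sketch in which the plugin ID is a placeholder that you replace with the ID shown on the plugin’s page at grafana.com:

$ grafana-cli plugins install <plugin-id>   # install the plugin by its ID
$ sudo systemctl restart grafana-server     # restart the grafana-server service so the plugin loads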

NEW PLUGIN

Simple Annotations – Frustrated with using a data source that doesn’t support annotations? This is a simple annotation plugin for Grafana that works with any data source!

Install Now

UPDATED PLUGIN

Gnocchi Data Source – The latest release adds a reaggregation feature. Gnocchi can pre-compute aggregations of time series (for example, the mean every 10 minutes for a year), and the plugin now lets you (re)aggregate those time series, since the stored time series have already been aggregated. A big shout out to sileht for adding new features to the Gnocchi plugin.

Update Now


GrafanaCon EU Call for Papers is Open

Have a big idea to share? A shorter talk or a demo you’d like to show off? We’re looking for technical and non-technical talks of all sizes.

I’d Like to Speak at GrafanaCon


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Awesome – really looking forward to seeing updates as you get to 1.0!

We Need Your Help

We’re conducting an experiment and need your help. Do you have a graph that you love because the data is beautiful or because the graph provides interesting information? Please get in touch. Tweet or send us an email with a screenshot, and we’ll tell you about the experiment.

Be Part of the Experiment


Grafana Labs is Hiring!

We are passionate about open source software and thrive on tackling complex challenges to build the future. We ship code from every corner of the globe and love working with the community. If this sounds exciting, you’re in luck – WE’RE HIRING!

Check out our Open Positions


What do you think?

We’re always interested in how we can improve our weekly roundups. Submit a comment on this article below, or post something at our community forum. Help us make these roundups better and better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

Demonoid Hopes to Return to Its Former Glory

Post Syndicated from Ernesto original https://torrentfreak.com/demonoid-hopes-to-return-to-its-former-glory-170910/

Demonoid has been around for well over a decade but the site is not really known for having a stable presence.

Quite the opposite, the torrent tracker has a ‘habit’ of going offline for weeks or even months on end, only to reappear as if nothing ever happened.

Earlier this year the site made another one of its trademark comebacks and it has been sailing relatively smoothly since then. Interestingly, the site is once again under the wing of a familiar face, its original founder Deimos.

Deimos decided to take the lead again after some internal struggles. “I gave control to the wrong guys while the problems started, but it’s time to control stuff again,” Deimos told us earlier.

Since the return a few months back, the site’s main focus has been on rebuilding the community and improving the site. Some may have already noticed the new logo, but more changes are coming, both on the front and backend.

“The backend development is going a bit slow, it’s a big change that will allow the server to run off a bunch of small servers all over the world,” Deimos informs TorrentFreak.

“For the frontend, we’re working on new features including a karma system, integrated forums, buddy list, etc. That part is faster to build once you have everything in the back working,” he adds.

Demonoid’s new logo

Deimos has been on and off the site a few times, but he and a few others most recently returned to get it back on track and increase its popularity. While the site has around eight million registered users, many of these have moved elsewhere in recent years.

“I want to see the community we had back. Don’t know if it’s possible but that’s my aim,” Deimos says, admitting that he may not stay on forever.

Many torrent sites have come and gone in recent years, but Demonoid is still here today. Looking back, Demonoid has come a long way. What many people don’t know is that it was originally a place to share demo tapes of metal bands. Hence the name DEMOnoid.

“It originally started as a modified PHP based forum that allowed posting of .torrent files. At some point, we started using a full torrent indexing script written in PHP that included a tracker, and started building the first version of the indexing site it is today,” Deimos says.

The site required users to have an invite to sign up, making it a semi-private tracker. This wasn’t done to encourage people to maintain a certain ratio, as some other trackers do, but mostly to keep unsavory characters away.

“The invitation system was implemented to keep spammers, trolls and the like out,” Deimos says. “Originally it was due to some very problematic people who happened to have a death metal band, back in the DEMOnoid days.

“We try to keep it open as often as possible but when we start to get these kinds of issues, we close it,” he adds.

In recent years, the site has had quite a few setbacks, but Deimos doesn’t want to dwell on these in public. Instead, he prefers to focus on the future. While torrent sites are no longer at the center of media distribution, there will always be a place for dedicated sharing communities.

Whether Demonoid will ever return to its former glory is a big unknown for now, but Deimos is sure to do his best.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

WordPress Reports Surge in ‘Piracy’ Takedown Notices, Rejects 78%

Post Syndicated from Ernesto original https://torrentfreak.com/wordpress-reports-surge-in-piracy-takedown-notices-rejects-78-170909/

Automattic, the company behind the popular WordPress.com blogging platform, receives thousands of takedown requests from rightsholders.

A few days ago the company published its latest transparency report, showing that it had processed 9,273 requests during the first half of 2017.

This is more than double the number it received during the same period last year. Looking more closely at the numbers, we see that this jump is solely due to an increase in incomplete and abusive requests.

Of all the DMCA notices received, only 22% resulted in the takedown of allegedly infringing content. This translates to 2,040 legitimate requests, which is less than the 2,342 Automattic received during the same period last year.

This logically means that the number of abusive and incomplete DMCA notices has skyrocketed. And indeed, in its most recent report, 78% of all requests were rejected due to missing information or plain abuse. That’s much more than the year before when 42% were rejected.

Automattic’s transparency report (first half of 2017)

WordPress prides itself on carefully reviewing the content of each and every takedown notice, to protect its users. This means checking whether a takedown request is properly formatted but also reviewing the legitimacy of the claims.

“We also may decline to remove content if a notice is abusive. ‘Abusive’ notices may be formally complete, but are directed at fair use of content, material that isn’t copyrightable, or content the complaining party misrepresents ownership of a copyright,” Automattic notes.

During the first half of 2017, a total of 649 takedown requests were categorized as abuse. Some of the most blatant examples go into the “Hall of Shame,” such as a recent case where the Canadian city of Abbotsford tried to censor a parody of its logo, which replaced a pine tree with a turd.

While some abuse cases sound trivial they can have a real impact on website operators, as examples outside of WordPress show. Most recently the operator of Oro Jackson, a community dedicated to the anime series “One Piece,” was targeted with several dubious DMCA requests.

The takedown notices were sent by the German company Comeso and were forwarded through their hosting company Linode. The notices urged the operator to remove various forum threads because they included words or phrases such as “G’day” and “Reveries of the Moonlight,” not actual infringing content.

G’day

Fearing legal repercussions, the operator saw no other option than to censor these seemingly harmless discussions (starting a thread with “G’day”!!), until there’s a final decision on the counter-notice. They remain offline today.

It’s understandable that hosting companies have to be strict sometimes, as reviewing copyright claims is not their core business. However, incidents like these show how valuable the skeptical review process of Automattic is.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The Weather Station and the eclipse

Post Syndicated from Richard Hayler original https://www.raspberrypi.org/blog/weather-station-eclipse/

As everyone knows, one of the problems with the weather is that it can be difficult to predict a long time in advance. In the UK we’ve had stormy conditions for weeks but, of course, now that I’ve finished my lightning detector, everything has calmed down. If you’re planning to make scientific measurements of a particular phenomenon, patience is often required.

Oracle Weather Station

Wake STEM ECH get ready to safely observe the eclipse

In the path of the eclipse

Fortunately, this wasn’t a problem for Mr Burgess and his students at Wake STEM Early College High School in Raleigh, North Carolina, USA. They knew exactly when the event they were interested in studying was going to occur: they were going to use their Raspberry Pi Oracle Weather Station to monitor the progress of the 2017 solar eclipse.

Wake STEM EC HS on Twitter

Through the @Celestron telescope #Eclipse2017 @WCPSS via @stemburgess

Measuring the temperature drop

The Raspberry Pi Oracle Weather Stations are always active and recording data, so all the students needed to do was check that everything was connected and working. That left them free to enjoy the eclipse, and take some amazing pictures like the one above.

You can see from the data how the changes in temperature lag behind the solar events – this makes sense, as it takes a while for the air to cool down. When the sun starts to return, the temperature rise continues on its pre-eclipse trajectory.

Oracle Weather Station

Weather station data 21st Aug: the yellow bars mark the start and end of the eclipse, the red bar marks the maximum sun coverage.

Reading Mr Burgess’ description, I’m feeling rather jealous. Being in the path of the Eclipse sounds amazing: “In North Carolina we experienced 93% coverage, so a lot of sunlight was still shining, but the landscape took on an eerie look. And there was a cool wind like you’d experience at dusk, not at 2:30 pm on a hot summer day. I was amazed at the significant drop in temperature that occurred in a small time frame.”

Temperature drop during Eclipse Oracle Weather Station.

Close up of data showing temperature drop as recorded by the Raspberry Pi Oracle Weather Station. The yellow bars mark the start and end of the eclipse, the red bar marks the maximum sun coverage.

Weather Station in the classroom

“I’ve been preparing for the solar eclipse for almost two years, with the weather station arriving early last school year. I did not think about temperature data until I read about citizen scientists on a NASA website,” explains Mr Burgess, who is now in his second year of working with the Raspberry Pi Oracle Weather Station. Around 120 ninth-grade students (ages 14-15) have been involved with the project so far. “I’ve found that students who don’t have a strong interest in meteorology find it interesting to look at real data and figure out trends.”

Wake STEM EC Raspberry Pi Oracle Weather Station installation

Wake STEM EC Raspberry Pi Oracle Weather Station installation

As many schools have discovered, Mr Burgess found that the biggest challenge with the Weather Station project “was finding a suitable place to install the weather station in a place that could get power and Ethernet“. To help with this problem, we’ve recently added two new guides to help with installing the wind sensors outside and using WiFi to connect the kit to the Internet.

Raspberry Pi Oracle Weather Station

If you want to keep up to date with all the latest Raspberry Pi Oracle Weather Station activities undertaken by our network of schools around the world, make sure you regularly check our weather station forum. Meanwhile, everyone at Wake STEM ECH is already starting to plan for their next eclipse on Monday, April 8, 2024. I wonder if they’d like some help with their Weather Station?

The post The Weather Station and the eclipse appeared first on Raspberry Pi.

New Techniques in Fake Reviews

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/new_techniques_.html

Research paper: “Automated Crowdturfing Attacks and Defenses in Online Review Systems.”

Abstract: Malicious crowdsourcing forums are gaining traction as sources of spreading misinformation online, but are limited by the costs of hiring and managing human workers. In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect.

Using Yelp reviews as an example platform, we show how a two phased review generation and customization attack can produce reviews that are indistinguishable by state-of-the-art statistical detectors. We conduct a survey-based user study to show these reviews not only evade human detection, but also score high on “usefulness” metrics by users. Finally, we develop novel automated defenses against these attacks, by leveraging the lossy transformation introduced by the RNN training and generation cycle. We consider countermeasures against our mechanisms, show that they produce unattractive cost-benefit tradeoffs for attackers, and that they can be further curtailed by simple constraints imposed by online service providers.

Amazon QuickSight Now Supports Search, Filter Groups, and Amazon S3 Analytics Connector

Post Syndicated from Luis Wang original https://aws.amazon.com/blogs/big-data/amazon-quicksight-now-supports-search-filter-groups-and-amazon-s3-analytics-connector/

Today, I’m excited to share information about some new features in Amazon QuickSight. First, you can now search for datasets, analyses, and dashboards in Amazon QuickSight using the unified search box, making it faster and easier to find and access your data. Next, you can now create filter groups with multiple filter conditions that are evaluated together using the OR operation. Finally, you can now use the built-in Amazon S3 analytics connector to visualize your S3 storage access patterns across multiple S3 buckets and configurations within a single Amazon QuickSight dashboard to optimize for cost.

Search

You can now easily and quickly find and access your datasets, analyses, and dashboards using the unified search box in Amazon QuickSight. Type in what you’re looking for and you get a list of all matches in a unified view. From there, you can take actions such as creating an analysis from a dataset, modifying a dataset, or accessing an analysis or dashboard.

Filter groups

Filters are one of the most important features in Amazon QuickSight. Before this release, you could create multiple filters that were evaluated using the AND operation. With this release, you can now create multiple filters that are evaluated using the OR operation. This provides you with the flexibility to apply more complex filters to your data and visualizations. For example, you can create a chart that shows customers who have spent less than $100 OR made three or more purchases.

Amazon S3 analytics connector

In July, AWS introduced the ability to analyze and visualize your Amazon S3 storage access patterns using Amazon QuickSight in one click from the S3 console. Today, AWS released a dedicated S3 analytics connector in Amazon QuickSight. This connector allows you to import S3 analytics data for different buckets and configurations into a single Amazon QuickSight dataset. With this dataset, you can then create analyses and dashboards that track all of your S3 usage patterns in a single view.

Learn more

To learn more about these capabilities and start using them in your dashboards, see the Amazon QuickSight User Guide.

Stay engaged

If you have questions or suggestions, you can post them on the Amazon QuickSight discussion forum.

Not an Amazon QuickSight user?

To get started for FREE, see quicksight.aws.

 

timeShift(GrafanaBuzz, 1w) Issue 11

Post Syndicated from Blogs on Grafana Labs Blog original https://grafana.com/blog/2017/09/01/timeshiftgrafanabuzz-1w-issue-11/

September is here and summer is officially drawing to a close, but the Grafana team has stayed busy. We’re prepping for the upcoming Grafana 4.5 release, have some new and updated plugins to share, and would like to thank two contributors for fixing a non-obvious bug. Also, the CFP for GrafanaCon EU is open, and we’d like you to speak!


GrafanaCon EU CFP is Open

Have a big idea to share? Have a shorter talk or a demo you’d like to show off?
We’re looking for 40-minute detailed talks, 20-minute general talks and 10-minute lightning talks. We have a perfect slot for any type of content.

I’d Like to Speak at GrafanaCon

Grafana Labs is Hiring!

Do you believe in open source software? Build the future with us, and ship code.

Check out our open positions

From the Blogosphere

Zabbix, Grafana and Python, a Match Made in Heaven: David’s article, published earlier this year, hits on some great points about open source software and how you don’t have to spend much (or any) money to get valuable monitoring for your infrastructure.

The Business of Democratizing Metrics: Our friends over at Packet stopped by the office recently to sit down and chat with the Grafana Labs co-founders. They discussed how Grafana started, how monitoring has evolved, and democratizing metrics.

Visualizing CloudWatch with Grafana: Yuzo put together an article outlining his first experience adding a CloudWatch data source in Grafana, importing his first dashboard, then comparing the graphs between Grafana and CloudWatch.

Monitoring Linux performance with Grafana: Jim wanted to monitor his CentOS home router to get network traffic and disk usage stats, but wanted to try something different than his previous cacti monitoring. This walkthrough shows how he set things up to collect, store and visualize the data.

Visualizing Jenkins Pipeline Results in Grafana: Piotr provides a walkthrough of his setup and configuration to view Jenkins build results for his continuous delivery environment in Grafana.


Grafana Plugins

This week we’ve added a plugin for the new time series database Sidewinder, and updates to the Carpet Plot graph panel. If you haven’t installed a plugin, it’s easy. For on-premises installations, the Grafana-cli will do the work for you. If you’re using Hosted Grafana, you can install any plugin with one click.
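
For on-prem instances, updating from the command line is just as quick; a minimal sketch (plugin IDs again come from each plugin’s page at grafana.com):

$ grafana-cli plugins update-all            # update every installed plugin
$ grafana-cli plugins update <plugin-id>    # or update a single plugin by its ID
$ sudo systemctl restart grafana-server     # restart so the updates take effect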

NEW PLUGIN

Sidewinder Data Source – This is a data source plugin for the new Sidewinder database. Sidewinder is an open source, fast time series database designed for real-time analytics. It can be used for a variety of use cases that need storage of metrics data like APM and IoT.

Install Now

UPDATED PLUGIN

Carpet Plot Panel – This plugin received an update, which includes the following features and fixes:

  • New aggregate functions: Min, Max, First, Last
  • Possibility to invert color scheme
  • Possibility to change X axis label format
  • Possibility to hide X and Y axis labels

Update Now


This week’s MVC (Most Valuable Contributor)

This week we want to thank two contributors who worked together to fix a non-obvious bug in the new MySQL data source (a bug with sorting values in the legend).

robinsonjj
Thank you Joe, for tackling this issue and submitting a PR with an initial fix.

pdoan017
pdoan017 took robinsonjj’s contribution and added a new PR to retain the order in which keys are added.

Thank you both for taking the time to both troubleshoot and fix the issue. Much appreciated!


Tweet of the Week

We scour Twitter each week to find an interesting/beautiful dashboard and show it off! #monitoringLove

Nice! Combining different panel types on a dashboard can add more context to your data – Looks like a very functional dashboard.


What do you think?

Let us know how we’re doing! Submit a comment on this article below, or post something at our community forum. Help us make these roundups better and better!

Follow us on Twitter, like us on Facebook, and join the Grafana Labs community.

How to Configure an LDAPS Endpoint for Simple AD

Post Syndicated from Cameron Worrell original https://aws.amazon.com/blogs/security/how-to-configure-an-ldaps-endpoint-for-simple-ad/

Simple AD, which is powered by Samba  4, supports basic Active Directory (AD) authentication features such as users, groups, and the ability to join domains. Simple AD also includes an integrated Lightweight Directory Access Protocol (LDAP) server. LDAP is a standard application protocol for the access and management of directory information. You can use the BIND operation from Simple AD to authenticate LDAP client sessions. This makes LDAP a common choice for centralized authentication and authorization for services such as Secure Shell (SSH), client-based virtual private networks (VPNs), and many other applications. Authentication, the process of confirming the identity of a principal, typically involves the transmission of highly sensitive information such as user names and passwords. To protect this information in transit over untrusted networks, companies often require encryption as part of their information security strategy.

In this blog post, we show you how to configure an LDAPS (LDAP over SSL/TLS) encrypted endpoint for Simple AD so that you can extend Simple AD over untrusted networks. Our solution uses Elastic Load Balancing (ELB) to send decrypted LDAP traffic to HAProxy running on Amazon EC2, which then sends the traffic to Simple AD. ELB offers integrated certificate management, SSL/TLS termination, and the ability to use a scalable EC2 backend to process decrypted traffic. ELB also tightly integrates with Amazon Route 53, enabling you to use a custom domain for the LDAPS endpoint. The solution needs the intermediate HAProxy layer because ELB can direct traffic only to EC2 instances. To simplify testing and deployment, we have provided an AWS CloudFormation template to provision the ELB and HAProxy layers.

This post assumes that you have an understanding of concepts such as Amazon Virtual Private Cloud (VPC) and its components, including subnets, routing, Internet and network address translation (NAT) gateways, DNS, and security groups. You should also be familiar with launching EC2 instances and logging in to them with SSH. If needed, you should familiarize yourself with these concepts and review the solution overview and prerequisites in the next section before proceeding with the deployment.

Note: This solution is intended for use by clients requiring an LDAPS endpoint only. If your requirements extend beyond this, you should consider accessing the Simple AD servers directly or by using AWS Directory Service for Microsoft AD.

Solution overview

The following diagram and description illustrate and explain the Simple AD LDAPS environment. The CloudFormation template creates the items designated by the bracket (an internal ELB load balancer and two HAProxy nodes configured in an Auto Scaling group).

Diagram of the Simple AD LDAPS environment

Here is how the solution works, as shown in the preceding numbered diagram:

  1. The LDAP client sends an LDAPS request to ELB on TCP port 636.
  2. ELB terminates the SSL/TLS session and decrypts the traffic using a certificate. ELB sends the decrypted LDAP traffic to the EC2 instances running HAProxy on TCP port 389.
  3. The HAProxy servers forward the LDAP request to the Simple AD servers listening on TCP port 389 in a fixed Auto Scaling group configuration (see the minimal HAProxy sketch after the note below).
  4. The Simple AD servers send an LDAP response through the HAProxy layer to ELB. ELB encrypts the response and sends it to the client.

Note: Amazon VPC prevents a third party from intercepting traffic within the VPC. Because of this, the VPC protects the decrypted traffic between ELB and HAProxy and between HAProxy and Simple AD. The ELB encryption provides an additional layer of security for client connections and protects traffic coming from hosts outside the VPC.
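
To make the HAProxy layer concrete, here is a minimal, illustrative haproxy.cfg fragment in the spirit of what the CloudFormation template configures; the backend IP addresses are placeholders for your two Simple AD DNS addresses, and the template (not this sketch) generates the configuration that is actually deployed:

defaults
    mode tcp
    timeout connect 5s
    timeout client 1m
    timeout server 1m

frontend ldap_in
    bind *:389                              # decrypted LDAP traffic arriving from ELB
    default_backend simple_ad

backend simple_ad
    balance roundrobin
    server simplead1 10.0.1.10:389 check    # placeholder: primary Simple AD IP
    server simplead2 10.0.2.10:389 check    # placeholder: secondary Simple AD IP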

Prerequisites

  1. Our approach requires an Amazon VPC with two public and two private subnets. The previous diagram illustrates the environment’s VPC requirements. If you do not yet have these components in place, follow these guidelines for setting up a sample environment:
    1. Identify a region that supports Simple AD, ELB, and NAT gateways. The NAT gateways are used with an Internet gateway to allow the HAProxy instances to access the internet to perform their required configuration. You also need to identify the two Availability Zones in that region for use by Simple AD. You will supply these Availability Zones as parameters to the CloudFormation template later in this process.
    2. Create or choose an Amazon VPC in the region you chose. In order to use Route 53 to resolve the LDAPS endpoint, make sure you enable DNS support within your VPC. Create an Internet gateway and attach it to the VPC, which will be used by the NAT gateways to access the internet.
    3. Create a route table with a default route to the Internet gateway. Create two NAT gateways, one per Availability Zone in your public subnets to provide additional resiliency across the Availability Zones. Together, the routing table, the NAT gateways, and the Internet gateway enable the HAProxy instances to access the internet.
    4. Create two private routing tables, one per Availability Zone. Create two private subnets, one per Availability Zone. The dual routing tables and subnets allow for a higher level of redundancy. Add each subnet to the routing table in the same Availability Zone. Add a default route in each routing table to the NAT gateway in the same Availability Zone. The Simple AD servers use subnets that you create.
    5. The LDAP service requires a DNS domain that resolves within your VPC and from your LDAP clients. If you do not have an existing DNS domain, follow the steps to create a private hosted zone and associate it with your VPC (a CLI sketch follows this list). To avoid encryption protocol errors, you must ensure that the DNS domain name is consistent across your Route 53 zone and in the SSL/TLS certificate (see Step 2 in the “Solution deployment” section).
  2. Make sure you have completed the Simple AD Prerequisites.
  3. We will use a self-signed certificate for ELB to perform SSL/TLS decryption. You can use a certificate issued by your preferred certificate authority or a certificate issued by AWS Certificate Manager (ACM).
    Note: To prevent unauthorized connections directly to your Simple AD servers, you can modify the Simple AD security group on port 389 to block traffic from locations outside of the Simple AD VPC. You can find the security group in the EC2 console by creating a search filter for your Simple AD directory ID. It is also important to allow the Simple AD servers to communicate with each other, as described in the Simple AD Prerequisites.
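
If you script your environment setup, a hedged AWS CLI sketch of two of these prerequisites follows; the VPC ID, Region, and caller reference are placeholders:

# Confirm that DNS support is enabled on the VPC.
$ aws ec2 describe-vpc-attribute --vpc-id vpc-0123456789abcdef0 --attribute enableDnsSupport

# Create a private hosted zone for the LDAP domain and associate it with the VPC.
$ aws route53 create-hosted-zone --name corp.example.com \
    --caller-reference ldaps-example-1 \
    --vpc VPCRegion=us-east-1,VPCId=vpc-0123456789abcdef0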

Solution deployment

This solution includes five main parts:

  1. Create a Simple AD directory.
  2. Create a certificate.
  3. Create the ELB and HAProxy layers by using the supplied CloudFormation template.
  4. Create a Route 53 record.
  5. Test LDAPS access using an Amazon Linux client.

1. Create a Simple AD directory

With the prerequisites completed, you will create a Simple AD directory in your private VPC subnets:

  1. In the Directory Service console navigation pane, choose Directories and then choose Set up directory.
  2. Choose Simple AD.
    Screenshot of choosing "Simple AD"
  3. Provide the following information:
    • Directory DNS – The fully qualified domain name (FQDN) of the directory, such as corp.example.com. You will use the FQDN as part of the testing procedure.
    • NetBIOS name – The short name for the directory, such as CORP.
    • Administrator password – The password for the directory administrator. The directory creation process creates an administrator account with the user name Administrator and this password. Do not lose this password because it is nonrecoverable. You also need this password for testing LDAPS access in a later step.
    • Description – An optional description for the directory.
    • Directory Size – The size of the directory.
      Screenshot of the directory details to provide
  4. Provide the following information in the VPC Details section, and then choose Next Step:
    • VPC – Specify the VPC in which to install the directory.
    • Subnets – Choose two private subnets for the directory servers. The two subnets must be in different Availability Zones. Make a note of the VPC and subnet IDs for use as CloudFormation input parameters. In the following example, the Availability Zones are us-east-1a and us-east-1c.
      Screenshot of the VPC details to provide
  5. Review the directory information and make any necessary changes. When the information is correct, choose Create Simple AD.

It takes several minutes to create the directory. From the AWS Directory Service console, refresh the screen periodically and wait until the directory Status value changes to Active before continuing. Choose your Simple AD directory and note the two IP addresses in the DNS address section. You will enter them when you run the CloudFormation template later.
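
If you prefer to create and monitor the directory from the AWS CLI, a hedged equivalent of the preceding console steps looks roughly like this; the password, VPC ID, subnet IDs, and directory ID are placeholders:

$ aws ds create-directory --name corp.example.com --short-name CORP \
    --password 'Your-Str0ng-Passw0rd' --size Small \
    --vpc-settings VpcId=vpc-0123456789abcdef0,SubnetIds=subnet-aaaa1111,subnet-bbbb2222

# Poll until Stage is Active, and note the two DnsIpAddrs for later use.
$ aws ds describe-directories --directory-ids d-1234567890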

Note: Full administration of your Simple AD implementation is out of scope for this blog post. See the documentation to add users, groups, or instances to your directory. Also see the previous blog post, How to Manage Identities in Simple AD Directories.

2. Create a certificate

In the previous step, you created the Simple AD directory. Next, you will generate a self-signed SSL/TLS certificate using OpenSSL. You will use the certificate with ELB to secure the LDAPS endpoint. OpenSSL is a standard, open source library that supports a wide range of cryptographic functions, including the creation and signing of x509 certificates. You then import the certificate into ACM, which is integrated with ELB.

  1. You must have a system with OpenSSL installed to complete this step. If you do not have OpenSSL, you can install it on Amazon Linux by running the command, sudo yum install openssl. If you do not have access to an Amazon Linux instance, you can create one with SSH access enabled to proceed with this step. Run the command, openssl version, at the command line to see if you already have OpenSSL installed.
    $ openssl version
    OpenSSL 1.0.1k-fips 8 Jan 2015

  2. Create a private key using the openssl genrsa command.
    $ openssl genrsa 2048 > privatekey.pem
    Generating RSA private key, 2048 bit long modulus
    ......................................................................................................................................................................+++
    ..........................+++
    e is 65537 (0x10001)

  3. Generate a certificate signing request (CSR) using the openssl req command. Provide the requested information for each field. The Common Name is the FQDN for your LDAPS endpoint (for example, ldap.corp.example.com). The Common Name must use the domain name you will later register in Route 53. You will encounter certificate errors if the names do not match.
    $ openssl req -new -key privatekey.pem -out server.csr
    You are about to be asked to enter information that will be incorporated into your certificate request.

  4. Use the openssl x509 command to sign the certificate. The following example uses the private key from the previous step (privatekey.pem) and the signing request (server.csr) to create a public certificate named server.crt that is valid for 365 days. This certificate must be updated within 365 days to avoid disruption of LDAPS functionality.
    $ openssl x509 -req -sha256 -days 365 -in server.csr -signkey privatekey.pem -out server.crt
    Signature ok
    subject=/C=XX/L=Default City/O=Default Company Ltd/CN=ldap.corp.example.com
    Getting Private key

  5. You should see three files: privatekey.pem, server.crt, and server.csr.
    $ ls
    privatekey.pem server.crt server.csr

    Restrict access to the private key.

    $ chmod 600 privatekey.pem

    Keep the private key and public certificate for later use. You can discard the signing request because you are using a self-signed certificate and not using a Certificate Authority. Always store the private key in a secure location and avoid adding it to your source code.

  6. In the ACM console, choose Import a certificate.
  7. Using your favorite Linux text editor, paste the contents of your server.crt file in the Certificate body box.
  8. Using your favorite Linux text editor, paste the contents of your privatekey.pem file in the Certificate private key box. For a self-signed certificate, you can leave the Certificate chain box blank.
  9. Choose Review and import. Confirm the information and choose Import. (A CLI sketch of the same import follows this list.)
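
If you prefer to import the certificate from the command line, a hedged equivalent of steps 6 through 9 follows; it assumes the files from step 5 are in the current directory:

$ aws acm import-certificate \
    --certificate fileb://server.crt \
    --private-key fileb://privatekey.pem

The command returns a CertificateArn value; keep it handy because you supply it later as the LDAPSCertificateARN CloudFormation parameter.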

3. Create the ELB and HAProxy layers by using the supplied CloudFormation template

Now that you have created your Simple AD directory and SSL/TLS certificate, you are ready to use the CloudFormation template to create the ELB and HAProxy layers.

  1. Load the supplied CloudFormation template to deploy an internal ELB and two HAProxy EC2 instances into a fixed Auto Scaling group. After you load the template, provide the following input parameters. Note: You can find the parameters relating to your Simple AD from the directory details page by choosing your Simple AD in the Directory Service console.
    • HAProxyInstanceSize – The EC2 instance size for HAProxy servers. The default size is t2.micro and can scale up for large Simple AD environments.
    • MyKeyPair – The SSH key pair for EC2 instances. If you do not have an existing key pair, you must create one.
    • VPCId – The target VPC for this solution. Must be the VPC where you deployed Simple AD and is available in your Simple AD directory details page.
    • SubnetId1 – The Simple AD primary subnet. This information is available in your Simple AD directory details page.
    • SubnetId2 – The Simple AD secondary subnet. This information is available in your Simple AD directory details page.
    • MyTrustedNetwork – Trusted network Classless Inter-Domain Routing (CIDR) to allow connections to the LDAPS endpoint. For example, use the VPC CIDR to allow clients in the VPC to connect.
    • SimpleADPriIP – The primary Simple AD Server IP. This information is available in your Simple AD directory details page.
    • SimpleADSecIP – The secondary Simple AD Server IP. This information is available in your Simple AD directory details page.
    • LDAPSCertificateARN – The Amazon Resource Name (ARN) for the SSL certificate. This information is available in the ACM console.
  2. Enter the input parameters and choose Next.
  3. On the Options page, accept the defaults and choose Next.
  4. On the Review page, confirm the details and choose Create. The stack will be created in approximately 5 minutes. (A CLI sketch for launching the same stack follows this list.)
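
If you would rather launch the stack from the AWS CLI, a hedged sketch follows; the stack name, template file name, and parameter values are placeholders, and you add the remaining ParameterKey/ParameterValue pairs from the preceding table in the same way:

$ aws cloudformation create-stack --stack-name simplead-ldaps \
    --template-body file://simplead-ldaps-template.yaml \
    --parameters ParameterKey=VPCId,ParameterValue=vpc-0123456789abcdef0 \
                 ParameterKey=SubnetId1,ParameterValue=subnet-aaaa1111 \
                 ParameterKey=SubnetId2,ParameterValue=subnet-bbbb2222 \
                 ParameterKey=LDAPSCertificateARN,ParameterValue=arn:aws:acm:us-east-1:123456789012:certificate/example-arn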

4. Create a Route 53 record

The next step is to create a Route 53 record in your private hosted zone so that clients can resolve your LDAPS endpoint.

  1. If you do not have an existing DNS domain for use with LDAP, create a private hosted zone and associate it with your VPC. The hosted zone name should be consistent with your Simple AD (for example, corp.example.com).
  2. When the CloudFormation stack is in CREATE_COMPLETE status, locate the value of the LDAPSURL on the Outputs tab of the stack. Copy this value for use in the next step.
  3. On the Route 53 console, choose Hosted Zones and then choose the zone you used for the Common Name box for your self-signed certificate. Choose Create Record Set and enter the following information:
    1. Name – The label of the record (such as ldap).
    2. Type – Leave as A – IPv4 address.
    3. Alias – Choose Yes.
    4. Alias Target – Paste the value of the LDAPSURL on the Outputs tab of the stack.
  4. Leave the defaults for Routing Policy and Evaluate Target Health, and choose Create.
    Screenshot of finishing the creation of the Route 53 record

5. Test LDAPS access using an Amazon Linux client

At this point, you have configured your LDAPS endpoint and now you can test it from an Amazon Linux client.

  1. Create an Amazon Linux instance with SSH access enabled to test the solution. Launch the instance into one of the public subnets in your VPC. Make sure the IP assigned to the instance is in the trusted IP range you specified in the CloudFormation parameter MyTrustedNetwork in Step 3.b.
  2. SSH into the instance and complete the following steps to verify access.
    1. Install the openldap-clients package and any required dependencies:
      sudo yum install -y openldap-clients
    2. Add the server.crt file to the /etc/openldap/certs/ directory so that the LDAPS client will trust your SSL/TLS certificate. You can copy the file using Secure Copy (SCP) or create it using a text editor.
    3. Edit the /etc/openldap/ldap.conf file and define the environment variables BASE, URI, and TLS_CACERT.
      • The value for BASE should match the configuration of the Simple AD directory name.
      • The value for URI should match your DNS alias.
      • The value for TLS_CACERT is the path to your public certificate.

Here is an example of the contents of the file.

BASE dc=corp,dc=example,dc=com
URI ldaps://ldap.corp.example.com
TLS_CACERT /etc/openldap/certs/server.crt

To test the solution, query the directory through the LDAPS endpoint, as shown in the following command. Replace corp.example.com with your domain name and use the Administrator password that you configured with the Simple AD directory.

$ ldapsearch -D "administrator@corp.example.com" -W sAMAccountName=Administrator

You should see a response similar to the following response, which provides the directory information in LDAP Data Interchange Format (LDIF) for the administrator distinguished name (DN) from your Simple AD LDAP server.

# extended LDIF
#
# LDAPv3
# base <dc=corp,dc=example,dc=com> (default) with scope subtree
# filter: sAMAccountName=Administrator
# requesting: ALL
#

# Administrator, Users, corp.example.com
dn: CN=Administrator,CN=Users,DC=corp,DC=example,DC=com
objectClass: top
objectClass: person
objectClass: organizationalPerson
objectClass: user
description: Built-in account for administering the computer/domain
instanceType: 4
whenCreated: 20170721123204.0Z
uSNCreated: 3223
name: Administrator
objectGUID:: l3h0HIiKO0a/ShL4yVK/vw==
userAccountControl: 512
…

You can now use the LDAPS endpoint for directory operations and authentication within your environment. If you would like to learn more about how to interact with your LDAPS endpoint within a Linux environment, here are a few resources to get started:

Troubleshooting

If you receive an error such as the following error when issuing the ldapsearch command, there are a few things you can do to help identify issues.

ldap_sasl_bind(SIMPLE): Can't contact LDAP server (-1)
  • You might be able to obtain additional error details by adding the -d1 debug flag to the ldapsearch command in the previous section.
    $ ldapsearch -D "administrator@corp.example.com" -W sAMAccountName=Administrator -d1

  • Verify that the parameters in ldap.conf match your configured LDAPS URI endpoint and that all parameters can be resolved by DNS. You can use the following dig command, substituting your configured endpoint DNS name.
    $ dig ldap.corp.example.com

  • Confirm that the client instance from which you are connecting is in the CIDR range of the CloudFormation parameter, MyTrustedNetwork.
  • Confirm that the path to your public SSL/TLS certificate configured in ldap.conf as TLS_CACERT is correct. You configured this in Step 5.b.3. You can check your SSL/TLS connection with the following command, substituting your configured endpoint DNS name for the string after -connect.
    $ echo -n | openssl s_client -connect ldap.corp.example.com:636

  • Verify that your HAProxy instances have the status InService in the EC2 console: Choose Load Balancers under Load Balancing in the navigation pane, highlight your LDAPS load balancer, and then choose the Instances tab.

Conclusion

You can use ELB and HAProxy to provide an LDAPS endpoint for Simple AD and securely transport sensitive authentication information over untrusted networks. You can explore using LDAPS to authenticate SSH users or integrate with other software solutions that support LDAP authentication. This solution’s CloudFormation template is available on GitHub.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, start a new thread on the Directory Service forum.

– Cameron and Jeff