Tag Archives: ip address

Swedish Internet Users Face New Wave of Piracy Cash Demands

Post Syndicated from Andy original https://torrentfreak.com/swedish-internet-users-face-new-wave-of-piracy-cash-demands-170225/

Last year, mass ‘copyright-trolling’ hit Sweden for the first time. An organization calling itself Spridningskollen (Distribution Check) claimed its new initiative would save the entertainment industries and educate the masses.

Predictably there was a huge backlash, both among the public and in the media, something which eventually led the group to discontinue its operations in the country. Now, however, a new wave of trolling is about to hit the country.

Swedish publication Breakit.se reports that a major new offensive is about to begin, with Danish law firm Njord and movie company Zentropa at the helm.

The companies are targeting the subscribers of several ISPs, including Telia, Tele2 and Bredbandsbolaget, the provider that will shortly begin blocking The Pirate Bay. It’s not clear how many people will be targeted but Breakit says that many thousands of IP addresses cover 42 pages of court documents.

Bredbandsbolaget confirmed that a court order exists and it will be forced to hand over the personal details of its subscribers.

“The first time we received such a request, we appealed because we do not think that the privacy-related sacrifice is proportionate to the crimes that were allegedly committed. Unfortunately we lost and must now follow the court order,” a spokesperson said.

It appears the trolls are taking extreme measures to ensure that ISPs comply. Some Swedish ISPs have a policy of deleting IP address logs but earlier this week a court ordered Telia to preserve data or face a $22,000 fine.

Jeppe Brogaard Clausen of the Njord law firm says that after identifying the subscribers he wants to “enter into non-aggressive dialogue” with them. But while this might sound like a friendly approach, the ultimate aim will be to extract money. It’s also worth considering who is behind this operation.

The BitTorrent tracking in the case was carried out by MaverickEye, a German-based company that continually turns up in similar cases all over Europe and the United States. The company and its operator Patrick Achache are part of the notorious Guardaley trolling operation.

Also of interest is the involvement of UK-based Copyright Management Services Ltd, whose sole director is none other than Patrick Achache himself. The company is based at the same London address as fellow copyright trolling partner Hatton and Berkeley, which previously sent cash settlement demands to Internet users in the UK.

In addition to two Zentropa titles, the movies involved in the Swedish action are CELL, IT, London Has Fallen, Mechanic: Resurrection, Criminal and September of Shiraz. All have featured in previous Guardaley cases in the United States.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Get ‘Back to my Pi’ from anywhere with VNC Connect

Post Syndicated from Alex Bate original https://www.raspberrypi.org/blog/get-back-to-my-pi-from-anywhere-with-vnc-connect/

In today’s guest blog, Andy Clark, Engineering Manager at RealVNC, introduces VNC Connect: a brand-new, and free, version of VNC that makes it simple to connect securely to your Raspberry Pi from anywhere in the world.

Since September 2016, every version of Raspbian has come with the built-in ability to remotely access and control your Raspberry Pi’s screen from another computer, using a technology called VNC. As the original inventors of this technology, RealVNC were happy to partner with Raspberry Pi to provide the community with the latest and most secure version of VNC for free.

We’re always looking to improve things, and one criticism of VNC technology over the years has been its steep learning curve. In particular, you need a bit of networking knowledge in order to connect to a Pi on the same network, and a heck of a lot to get a connection working across the internet!

This is why we developed VNC Connect, a brand-new version of VNC that allows you not only to make direct connections within your own networks, but also to make secure cloud-brokered connections back to your computer from anywhere in the world, with no specialist networking knowledge needed.

I’m delighted to announce that VNC Connect is available for Raspberry Pi, and from today is included in the Raspbian repositories. What’s more, we’ve added some extra features and functionality tailored to the Raspberry Pi community, and it’s all still free for non-commercial and educational use.

‘Back to my Pi’ and direct connections

The main change in VNC Connect is the ability to connect back to your Raspberry Pi from anywhere in the world, from a wide range of devices, without any complex port forwarding or IP addressing configuration. Our cloud service brokers a secure, end-to-end encrypted connection back to your Pi, letting you take control simply and securely from wherever you happen to be.

RealVNC

While this convenience is great for a lot of our standard home users, it’s not enough for the demands of the Raspberry Pi community! The Raspberry Pi is a great educational platform, and gets used in inventive and non-standard ways all the time. So on the Raspberry Pi, you can still make direct TCP connections the way you’ve always done with VNC. This way, you can have complete control over your project and learn all about IP networking if you want, or you can choose the simplicity of a cloud-brokered connection if that’s what you need.
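As a rough illustration (the address below is made up, and the exact commands depend on your setup), a direct connection needs nothing more than the Pi’s address on the local network:

# On the Raspberry Pi: enable the VNC server under Interfacing Options
sudo raspi-config

# Still on the Pi: find its address on the local network
hostname -I    # e.g. 192.168.1.42

# On another machine with the RealVNC viewer installed:
vncviewer 192.168.1.42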

Simpler connection management

Choosing the computer to connect to using VNC has historically been a fiddly process, requiring you to remember IP addresses or hostnames, or use a separate application to keep track of things. With VNC Connect we’ve introduced a new VNC Viewer with a built-in address book and enhanced UI, making it much simpler and quicker to manage your devices and connections. You now have the option of securely saving passwords for frequently used connections, and you can synchronise your entries with other VNC Viewers, making it easier to access your Raspberry Pi from other computers, tablets, or mobile devices.

RealVNC

Direct capture performance improvements

We’ve been working hard to make improvements to the experimental ‘direct capture’ feature of VNC Connect that’s unique to the Raspberry Pi. This feature allows you to see and control applications that render directly to the screen, like Minecraft, omxplayer, or even the terminal. You should find that performance of VNC in direct capture mode has improved, and is much more usable for interactive tasks.

RealVNC

Getting VNC Connect

VNC Connect is available in the Raspbian repositories from today, so running the following commands at a terminal will install it:

sudo apt-get update

sudo apt-get install realvnc-vnc-server realvnc-vnc-viewer

If you’re already running VNC Server or VNC Viewer, the same commands will install the update; then you’ll need to restart it to use the latest version.
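On recent Raspbian images the RealVNC server runs as a systemd service, so a restart along the following lines should pick up the new version (the unit name here is an assumption and may differ on your system); a reboot achieves the same thing:

sudo systemctl restart vncserver-x11-serviced
# or simply
sudo reboot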

There’s more information about getting set up on the RealVNC Raspberry Pi page. If you want to take advantage of the cloud connectivity, you’ll need to sign up for a RealVNC account, and you can do that here too.

Come and see us!

We’ve loved working with the Raspberry Pi Foundation and the community over the past few years, and making VNC Connect available for free on the Raspberry Pi is just the next phase of our ongoing relationship.

We’d love to get your feedback on Twitter, in the forums, or in the comments below. We’ll be at the Raspberry Pi Big Birthday Weekend again this year on 4-5 March in Cambridge, so please come and say hi and let us know how you use VNC Connect!

The post Get ‘Back to my Pi’ from anywhere with VNC Connect appeared first on Raspberry Pi.

CloudFlare Puts Pirate Sites on New IP Addresses, Avoids Cogent Blockade

Post Syndicated from Ernesto original https://torrentfreak.com/cloudflare-puts-pirate-sites-on-new-ip-addresses-avoids-cogent-blockade-170215/

Last week the news broke that Cogent, which operates one of the largest Internet backbone networks, blackholed IP-addresses that were linked to several notorious sites including The Pirate Bay.

As a result of this action, people from all over the world were unable to get to their favorite download or streaming portals.

The blocking intervention is quite controversial, not least because the IP-addresses in question don’t belong to the sites themselves, but to the popular CDN provider CloudFlare.

While CloudFlare hasn’t publicly commented on the issue yet, it now appears to have taken countermeasures. A little while ago the company moved The Pirate Bay and many other sites such as Primewire, Popcorn-Time.se, and Torrentz.cd to a new set of IP-addresses.

As of yesterday, the sites in question have been assigned the IP-addresses 104.31.16.3 and 104.31.17.3, still grouped together. Most, if not all, of the sites are blocked by court order in the UK, so this is presumably done to prevent ISP overblocking of ‘regular’ CloudFlare subscribers.

TPB accessible on the new CloudFlare IP-address
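For anyone who wants to check this sort of change themselves, a plain DNS lookup shows which CloudFlare addresses a domain currently resolves to (the domain below is just one example; at the time of writing the output should include the two addresses mentioned above):

dig +short thepiratebay.org A
# 104.31.16.3
# 104.31.17.3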

Since Cogent hasn’t blackholed the new addresses yet, the sites are freely accessible on its network once again. At the same time, the old CloudFlare IP-addresses remain blocked.

Old CloudFlare IP-addresses remain blocked

TorrentFreak spoke to the operator of one of the sites involved who said that he made no changes on his end. CloudFlare didn’t alert the site owner about the issue either.

We contacted CloudFlare yesterday asking for a comment on the situation, but the company could not give an official response at the time.

It seems likely that the change of IP-addresses is an intentional response from CloudFlare to bypass the blocking. The company has a reputation for fighting overreach and keeping its subscribers online, so this would be fitting.

The next question that comes to mind is whether Cogent will respond, and if so, how? Or has the underlying issue perhaps been resolved in another way?

If the original blockade was meant to block one or more of the sites involved, will Cogent also block the new IP-addresses? Only time will tell.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

How to Enable Multi-Factor Authentication for AWS Services by Using AWS Microsoft AD and On-Premises Credentials

Post Syndicated from Peter Pereira original https://aws.amazon.com/blogs/security/how-to-enable-multi-factor-authentication-for-amazon-workspaces-and-amazon-quicksight-by-using-microsoft-ad-and-on-premises-credentials/

You can now enable multi-factor authentication (MFA) for users of AWS services such as Amazon WorkSpaces and Amazon QuickSight who sign in with their on-premises credentials, by using your AWS Directory Service for Microsoft Active Directory (Enterprise Edition) directory, also known as AWS Microsoft AD. MFA adds an extra layer of protection to a user name and password (the first “factor”) by requiring users to enter an authentication code (the second factor), which has been provided by your virtual or hardware MFA solution. These factors together provide additional security by preventing access to AWS services unless users supply a valid MFA code.

To enable MFA for AWS services such as Amazon WorkSpaces and QuickSight, a key requirement is an MFA solution that is a Remote Authentication Dial-In User Service (RADIUS) server or a plugin to a RADIUS server already implemented in your on-premises infrastructure. RADIUS is an industry-standard client/server protocol that provides authentication, authorization, and accounting management to enable users to connect to network services. The RADIUS server connects to your on-premises AD to authenticate and authorize users. For the purposes of this blog post, I will use “RADIUS/MFA” to refer to your on-premises RADIUS and MFA authentication solution.

In this blog post, I show how to enable MFA for your Amazon WorkSpaces users in two steps: 1) Configure your RADIUS/MFA server to accept Microsoft AD requests, and 2) configure your Microsoft AD directory to enable MFA.

Getting started

The solution in this blog post assumes that you already have the following components running:

  1. An active Microsoft AD directory
  2. An on-premises AD
  3. A trust relationship between your Microsoft AD and on-premises AD directories

To learn more about how to set up Microsoft AD and create trust relationships to enable Amazon WorkSpaces users to use AD on-premises credentials, see Now Available: Simplified Configuration of Trust Relationship in the AWS Directory Service Console.

Solution overview

The following network diagram shows the components you must have running to enable RADIUS/MFA for Amazon WorkSpaces. The left side in the diagram (covered in Step 1 below) represents your corporate data center with your on-premises AD connected to your RADIUS/MFA infrastructure that will provide the RADIUS user authentication. The right side (covered in Step 2 below) shows your Microsoft AD directory in the AWS Cloud connected to your on-premises AD via trust relationship, and the Amazon WorkSpaces joined to your Microsoft AD directory that will require the MFA code when you configure your environment by following Step 1 and Step 2.
Network diagram

Step 1 – Configure your RADIUS/MFA server to accept Microsoft AD requests

The following steps show you how to configure your RADIUS/MFA server to accept requests from your Microsoft AD directory.

  1. Obtain the Microsoft AD domain controller (DC) IP addresses to configure your RADIUS/MFA server:
    1. Open the AWS Management Console, choose Directory Service, and then choose your Microsoft AD Directory ID link.
      Screenshot of choosing Microsoft AD Directory ID link
    2. On the Directory details page, you will see the two DC IP addresses for your Microsoft AD directory (shown in the following screenshot as DNS Address). Your Microsoft AD DCs are the RADIUS clients to your RADIUS/MFA server.
      Screenshot of the two DC IP addresses for your Microsoft AD directory
  2. Configure your RADIUS/MFA server to add the RADIUS clients. If your RADIUS/MFA server supports DNS addresses, you will need to create only one RADIUS client configuration. Otherwise, you must create one RADIUS client configuration for each Microsoft AD DC, using the DC IP addresses you obtained in Step 1:
    1. Open your RADIUS client configuration screen in your RADIUS/MFA solution.
    2. Create one RADIUS client configuration for each Microsoft AD DC. The following are the common parameters (your RADIUS/MFA server may vary):
      • Address (DNS or IP): Type the DNS address of your Microsoft AD directory or the IP address of your Microsoft AD DC you obtained in Step 1.
      • Port number: You might need to configure the port number of your RADIUS/MFA server on which your RADIUS/MFA server accepts RADIUS client connections. The standard RADIUS port is 1812.
      • Shared secret: Type or generate a shared secret that will be used by the RADIUS/MFA server to connect with RADIUS clients.
      • Protocol: You might need to configure the authentication protocol between the Microsoft AD DCs and the RADIUS/MFA server. Supported protocols are PAP, CHAP, MS-CHAPv1, and MS-CHAPv2. MS-CHAPv2 is recommended because it provides the strongest security of the supported options.
      • Application name: This may be optional in some RADIUS/MFA servers and usually identifies the application in messages or reports.
    3. Configure your on-premises network to allow inbound traffic from the RADIUS clients (Microsoft AD DCs IP addresses) to your RADIUS/MFA server port, defined in Step 1.
    4. Add a rule to the Amazon security group of your Microsoft AD directory to allow inbound traffic from the RADIUS/MFA server IP address and port number defined previously.

Step 2 – Configure your Microsoft AD directory to enable MFA

The final step is to configure your Microsoft AD directory to enable MFA. When you enable MFA, Amazon WorkSpaces that are enabled in your Microsoft AD directory will require the user to enter an MFA code along with their user name and password.

To enable MFA in your Microsoft AD directory:

  1. Open the AWS Management Console, choose Directory Service, and then choose your Microsoft AD Directory ID link.
  2. Choose the Multi-Factor authentication tab and you will see what the following screenshot shows.
    Screenshot of Multi-Factor authentication tab
  3. Enter the following values to configure your RADIUS/MFA server to connect to your Microsoft AD directory:
    • Enable Multi-Factor Authentication: Select this check box to enable MFA configuration input settings fields.
    • RADIUS server IP address(es): Enter the IP addresses of your RADIUS/MFA server. You can enter multiple IP addresses, if you have more than one RADIUS/MFA server, by separating them with a comma (for example, 192.0.0.0, 192.0.0.12). Alternatively, you can use a DNS name for your RADIUS server when using AWS CLI.
    • Port: Enter the port number of your RADIUS/MFA server that you set in Step 1B.
    • Shared secret code: Enter the same shared secret you created in your RADIUS/MFA server in Step 1B.
    • Confirm shared secret code: Reenter your shared secret code.
    • Protocol: Select the authentication protocol between the Microsoft AD DCs and the RADIUS/MFA server. Supported protocols are PAP, CHAP, MS-CHAPv1, and MS-CHAPv2. I recommend MS-CHAPv2 because it provides the strongest security of the supported options.
    • Server timeout (in seconds): Enter the amount of time to wait for the RADIUS/MFA server to respond to authentication requests. If the RADIUS/MFA server does not respond in time, authentication will be retried (see Max retries). This value must be from 1 to 20.
    • Max retries: Specify the number of times that communication with the RADIUS/MFA server is attempted before failing. This must be a value from 0 to 10.
  4. Choose Update directory to update the RADIUS/MFA settings for your directory. The update process will take less than two minutes to complete. When the RADIUS/MFA Status changes to Completed, Amazon WorkSpaces will automatically prompt users to enter their user name and password from the on-premises AD, as well as an MFA code at next sign-in.
    1. If you receive a Failed status after choosing the Update directory button, check the following three most common errors (if you make a change to the configuration, choose Update to apply the changes):
      1. A mismatch between the shared key provided in the RADIUS/MFA server and Microsoft AD configurations.
      2. Network connectivity issues between your Microsoft AD and RADIUS/MFA server, because the on-premises network infrastructure or Amazon security groups are not properly set.
      3. The authentication protocol configured in Microsoft AD does not match or is not supported by the RADIUS/MFA server.
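If you prefer to script Step 2 rather than click through the console, the same RADIUS settings can be applied via the Directory Service API. The following AWS CLI sketch uses placeholder values for the directory ID, RADIUS server address, and shared secret; check the parameter names against the current API reference before relying on it:

aws ds enable-radius --directory-id d-1234567890 --region us-east-1 --radius-settings '{
    "RadiusServers": ["192.0.0.0"],
    "RadiusPort": 1812,
    "RadiusTimeout": 5,
    "RadiusRetries": 3,
    "SharedSecret": "<your-shared-secret>",
    "AuthenticationProtocol": "MS-CHAPv2",
    "DisplayLabel": "RADIUS-MFA",
    "UseSameUsername": true
  }'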

Summary

In this blog post, I provided a solution overview and walked through the two main steps to provide an extra layer of protection for Amazon WorkSpaces by enabling RADIUS/MFA by using Microsoft AD. Because users will be required to provide an MFA code (and have a virtual or hardware MFA device) immediately after you complete the configuration in Step 2, be sure you test this implementation in a test/development environment before deploying it in production.

You can also configure the MFA settings for Microsoft AD using the Directory Service APIs. To learn more about AWS Directory Service, see the AWS Directory Service home page. If you have questions, please post them on the Directory Service forum.

– Peter

Russia Orders Public Tracker to Block Itself, Site Refuses

Post Syndicated from Andy original https://torrentfreak.com/russia-orders-public-tracker-block-site-refuses-170211/

While torrents will work without them, trackers are valuable tools for finding other BitTorrent peers with the same content. They’re vital for those who have DHT and PEX disabled in their clients.

Trackers are fairly thin on the ground, so to help fill that gap, zer0day was born in early 2016. The tracker can be used by anyone, but it had a spurt of growth when ETRG (ExtraTorrent’s release group) began using it.

As previously reported, the site had a bit of a bumpy ride in its early days but since August last year has been operating smoothly and without complaint. However, that all changed recently when the tracker was contacted by Russian telecoms watchdog Rozcomnadzor.

“We send you a notification on violation of exclusive rights to objects of copyright and (or) related rights (except photographic works and works obtained by processes analogous to photography), published on the website in the information and telecommunication network Internet tracker.zer0day.to,” Rozcomnadzor’s email reads.

“According to the article 15.2 of the Federal Law No. 149-FZ of July 27, 2006 ‘On Information, Information Technologies and on Protection of Information’, the access to the illegal published content is to be restricted within three (3) working days on receipt of this notice.”

Rozcomnadzor follows up by stating clearly that if the site doesn’t remove the “illegal published content”, it will have the tracker blocked by Russian ISPs.

Of course, stand-alone trackers do not carry any content, illicit or otherwise. They do not even have direct links to content. A tracker simply maps a hash value to the IP addresses of peers that may be sharing the corresponding content. Even if a tracker ceases to exist, the content will continue to be shared, so targeting a tracker achieves very little.
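To illustrate the point (the tracker URL and values below are invented), all a BitTorrent client ever sends to a tracker is an announce request containing an info hash and its own connection details, and all it receives back is a bencoded list of peer addresses:

# A client announcing itself to an HTTP tracker -- illustrative values only
curl "http://tracker.example.org/announce?info_hash=%12%34%56...&peer_id=-XX0001-abcdefghijkl&port=6881&uploaded=0&downloaded=0&left=1048576&event=started"
# Typical compact response: a bencoded dictionary along the lines of
# d8:intervali1800e5:peers6:......e   (six bytes per peer: packed IP and port)

No file names and no file data are exchanged; just hashes and addresses.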

It seems, however, that Rozcomnadzor either doesn’t a) understand or b) particularly care. In an attachment, the watchdog references a decision of the Moscow City Court dated November 28, 2016.

The movie ‘Viking’ listed in the complaint has been at the center of several other blocking actions in Russia, so it’s no surprise to see it listed here. However, zer0day isn’t hosting the movie and the URL cited by the watchdog is part of the tracker’s main announce URL and doesn’t link to the content either.

Furthermore, zer0day’s admin informs TorrentFreak that he can’t comply since the tracker doesn’t even have the feature to kill a hash.

“I won’t be removing anything. The tracker is compiled so that it doesn’t have a hash blacklist,” he says.

The only solution would be to block the whole site. Zer0day won’t be doing that but the Moscow City Court could if it processes multiple unresolved complaints about the tracker. But, as mentioned earlier, that won’t do a thing to stop people sharing Viking, as anyone with DHT and PEX enabled in their torrent client will carry on as usual.

Since the last time we spoke with zer0day, things have continued much as before with the tracker, albeit with some developments in the pipeline.

“It is tracking a modest 1 million torrents with around 3 to 4 million peers,” its admin says.

“Also, I started working on a simple online torrent editor which will be available on the tracker’s website. The script is quite simple but I’m so busy that I don’t know when it will be finished.”

Zer0day can be found here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

BitTorrent Expert Report Slams Movie Piracy Evidence

Post Syndicated from Ernesto original https://torrentfreak.com/bittorrent-expert-report-slams-movie-piracy-evidence-170210/

In recent years many people have accused so-called ‘copyright trolls’ of using dubious tactics and shoddy evidence to extract cash settlements from alleged movie pirates.

As the most active copyright litigant in the United States, adult entertainment outfit Malibu Media has been subjected to these allegations as well.

The company, widely known for its popular “X-Art” brand, has gone after thousands of alleged offenders in recent years, earning millions of dollars in the process. While many of its targets eventually pay up, now and then the company faces fierce resistance.

This is also true in the case Malibu launched against the Californian Internet subscriber behind the IP-address 76.126.99.126. This defendant has put up quite a fight in recent months and invested some healthy resources into it.

A few days ago, the defendant’s lawyer submitted a motion (pdf) for summary judgment, pointing out several flaws in the rightsholder’s complaint. While this kind of pushback is not new, the John Doe backed his arguments up with a very detailed expert report.

The 74-page report provides an overview of the weaknesses in Malibu’s claims and the company’s evidence. It was put together by Bradley Witteman, an outside expert who previously worked as Senior Director Product Management at BitTorrent Inc.

Along with other aspects of the case, Malibu’s file-sharing evidence was carefully inspected. Like many other rightsholders, the adult company teamed up with the German outfit Excipio, which collects data through its custom monitoring technology.

According to Witteman’s expert analysis, the output of this torrent tracking system is unreliable.

One of the major complaints is that the tracking system only takes 16k blocks from the target IP addresses, not the entire file. This means Malibu can’t prove that the defendant actually downloaded a full copy of the infringing work. In addition, a proper hash comparison can’t be carried out to verify the contents of the file.

From the expert report
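To put that in concrete terms, BitTorrent integrity checks operate on whole pieces (commonly 256 KB to several MB), not on 16 KB blocks, so a captured block cannot by itself be verified against the hashes stored in the .torrent file. A rough sketch of what a proper check involves, assuming a 256 KB piece size and made-up file names:

# Extract the first complete 256 KB piece of a downloaded file and hash it
dd if=downloaded.file bs=262144 skip=0 count=1 2>/dev/null | sha1sum
# The resulting SHA-1 digest is compared with the corresponding 20-byte entry
# in the .torrent "pieces" field; a lone 16 KB block is too small to reproduce
# any of those digests.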

That’s only part of the problem, as Mr. Witteman lists a range of possible issues in his conclusions, arguing that the reliability of the system can’t be guaranteed.

  • Human error when IPP enters information from Malibu Media into the Excipio system.
  • Mr. Patzer stated that the Excipio system does not know if the user has a complete copy of the material.
  • The Excipio system only takes 16k blocks from the target IP addresses.
  • There has not been any description of the chain of custody of the IPP verification affidavits nor that the process is valid and secure.
  • IP address false positives can occur in the system.
  • The user’s access point could have been incorrectly secured.
  • The user’s computer or network interface may have been compromised and is being used as a conduit for another user’s traffic.
  • VPN software could produce an inaccurate IP address of a swarm member.
  • The fuzzy name search of file names as described by Mr. Patzer could not have identified the file kh4k52qr.125.mp4 as the content “Romp at the Ranch.”
  • Proprietary BitTorrent Client may or may not be properly implemented.
  • The claim of “zero bugs” is suspect when one of the stated components has had over 431 bugs, 65 of them currently unresolved.
  • Zero duration data transfer times on two different files.
  • The lack of any available academic paper on, or security audit of, the software system in question.

In addition to the technical evidence, the expert report also sums up a wide range of other flaws.

Many files differ from the ones deposited at the Copyright Office, for example, and the X-Art videos themselves don’t display a proper copyright notice. On top of that, Malibu also made no effort to protect its content with DRM.

Based on the expert review the John Doe asks the court to rule in his favor. Malibu is not a regular rightsholder, the lawyer argues, but an outfit that’s trying to generate profits through unreliable copyright infringement accusations.

“The only conclusion one can draw is that Malibu does not operate like a normal studio – make films and charge for them. Instead Malibu makes a large chunk of its money using unreliable bittorrent monitoring software which only collects a deminimus amount of data,” the Doe’s lawyer writes.

Stepping it up a notch, the lawyer likens Malibu’s operation to Prenda Law, whose principals were recently indicted and charged with conspiracy to commit fraud, money laundering, and perjury by the US Government.

“Malibu is no different than ‘Prenda Law’ in form and function. They cleverly exploit the fact that most people will settle for 5-10K when sued despite the fact that the system used to ‘capture’ their IP address is neither robust nor valid,” the motion reads.

Whether the court will agree has yet to be seen, but it’s clear that the expert report can be used as a new weapon to combat these and other copyright infringement claims.

Of course, one has to keep in mind that there are always two sides to a story.

At the same time the John Doe submitted his motion, Malibu moved ahead with a motion (pdf) for sanctions and a default judgment. The adult entertainment outfit argues that the defendant destroyed evidence on hard drives, concealed information, and committed perjury on several occasions.

To be continued…

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

UK Piracy Alerts: The First Look Inside the Warning System

Post Syndicated from Andy original https://torrentfreak.com/uk-piracy-alerts-the-first-look-inside-the-warning-system-170210/

In January it was revealed that UK ISPs and the movie and music industries had finally reached their years-long goal of sending infringement notices to pirating subscribers.

The alerts, which are claimed to be educational in nature, are part of the larger Creative Content UK (CCUK) initiative, which includes PR campaigns targeted at the public and the classroom.

Until now, no one has published details of the actual alerts in public, but thanks to a cooperative member of the UK public, TorrentFreak has the lowdown. The system we’ll show below relates to Sky, so other ISPs may operate slightly differently.

The initial warning email from Sky

The email above has been redacted to protect the identity of our tipster. The blacked-out areas contain his name, the date in DD/MM/YY format, an alleged time of infringement in the HH:MM format, and a seven-digit reference code for the shared content, which is the TV show Westworld.

There is also a pair of links, one to sign into the subscriber’s Sky account (presumably this ensures the person signing in is the account holder) and a link to the ‘Get it Right Information Portal’. The first page before hitting that site looks like this.

What is Creative Content UK?

Once on the GetItRight site, the user is informed that his or her account has been used to breach copyright and that further information is available on the following pages.

There’s a report coming up

Following the links, the alleged infringer is presented with a page which provides a lot more detail. The CIR ID shown below is the same as the seven-digit code on Sky’s website. The date and time are the same, although in different formats.

The all-important IP address is listed alongside details of the software used to share the content. Also included are the filename and filesize of the infringing content and the copyright owner that made the complaint.

The infringement data

Interestingly, the system’s ability to track repeat infringers is evident at the bottom of the screenshot, where the “Total Instances Logged This Period” count can be seen.

Since the purpose of the campaign is to “educate” infringers, we asked our tipster a little about his habits, his impressions of the system, and how this warning will affect his future behavior.

“I was expecting [a warning] sooner or later as a heavy BitTorrent user. I’m sharing everything from movies, TV shows to games, but this email was about watching a TV show on Popcorn Time,” he revealed.

“This surprised me because I don’t use Popcorn Time very often and yet after approximately 10 minutes of usage I got an email the very next day. Isn’t that funny?”

So in this case, the warning was not only accurate but was also delivered to the correct person, rather than merely the person who pays the bill. We asked our tipster if he was aware of the GetItRight campaign before receiving this warning and whether it would achieve its aims.

“Yes, I have read articles on TorrentFreak. Only what I have read on TorrentFreak,” he said.

“I don’t think [the warnings] will work, at least not on a big scale. Maybe they will educate some people who did it by mistake or did it just once but for someone like me there is no hope. But at least the campaign is not aggressive.”

Interestingly, the education factor in this particular case appears to have somewhat backfired. Our tipster said that thanks to news coverage of the warnings, he knew immediately that there would be no consequences for receiving one. That put his mind at rest.

However, he did indicate that he may change his habits after receiving the warning, particularly given Sky’s claim it will ask subscribers to remove file-sharing software if they’re caught multiple times.

“[The threat to remove software] upsets me as a long-term Sky customer. But I won’t comply, I will either subscribe to another ISP provider or start using VPNs,” he said.

“I might stop using Popcorn Time as I wasn’t using it too often anyway, but I will keep using BitTorrent,” he added. Of course, Popcorn Time has BitTorrent under the hood, so both can trigger warnings.

Received a warning from a UK ISP? Contact TF in complete confidence.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Secure Amazon EMR with Encryption

Post Syndicated from Sai Sriparasa original https://aws.amazon.com/blogs/big-data/secure-amazon-emr-with-encryption/

In the last few years, there has been a rapid rise in enterprises adopting the Apache Hadoop ecosystem for critical workloads that process sensitive or highly confidential data. Due to the critical nature of these workloads, enterprises implement organization- or industry-wide policies, as well as regulatory and compliance policies. Such policy requirements are designed to protect sensitive data from unauthorized access.

A common requirement within such policies is encrypting data at rest and in flight. Amazon EMR uses “security configurations” to make it easy to specify the encryption keys and certificates, ranging from AWS Key Management Service to supplying your own custom encryption materials provider.

You create a security configuration that specifies encryption settings and then use the configuration when you create a cluster. This makes it easy to build the security configuration one time and use it for any number of clusters.

o_Amazon_EMR_Encryption_1

In this post, I go through the process of setting up the encryption of data at multiple levels using security configurations with EMR. Before I dive deep into encryption, here are the different phases where data needs to be encrypted.

Data at rest

  • Data residing on Amazon S3—S3 client-side encryption with EMR
  • Data residing on disk—the Amazon EC2 instance store volumes (except boot volumes) and the attached Amazon EBS volumes of cluster instances are encrypted using Linux Unified Key Setup (LUKS)

Data in transit

  • Data in transit from EMR to S3, or vice versa—S3 client side encryption with EMR
  • Data in transit between nodes in a cluster—in-transit encryption via Secure Sockets Layer (SSL) for MapReduce and Simple Authentication and Security Layer (SASL) for Spark shuffle encryption
  • Data being spilled to disk or cached during a shuffle phase—Spark shuffle encryption or LUKS encryption

Encryption walkthrough

For this post, you create a security configuration that implements encryption in transit and at rest. To achieve this, you create the following resources:

  • KMS keys for LUKS encryption and S3 client-side encryption for data exiting EMR to S3
  • SSL certificates to be used for MapReduce shuffle encryption
  • The environment into which the EMR cluster is launched. For this post, you launch EMR in private subnets and set up an S3 VPC endpoint to get the data from S3.
  • An EMR security configuration

All of the scripts and code snippets used for this walkthrough are available on the aws-blog-emrencryption GitHub repo.

Generate KMS keys

For this walkthrough, you use AWS KMS, a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data and disks.

You generate two KMS master keys, one for S3 client-side encryption to encrypt data going out of EMR and the other for LUKS encryption to encrypt the local disks. The Hadoop MapReduce framework uses HDFS, and Spark uses the local file system on each slave instance for intermediate data throughout a workload; data can be spilled to disk when it overflows memory.

To generate the keys, use the kms.json AWS CloudFormation script.  As part of this script, provide an alias name, or display name, for the keys. An alias must be in the “alias/aliasname” format, and can only contain alphanumeric characters, an underscore, or a dash.

o_Amazon_EMR_Encryption_2

After you finish generating the keys, the ARNs are available as part of the outputs.

o_Amazon_EMR_Encryption_3
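If you’d rather not use CloudFormation, the equivalent can be done with two AWS CLI calls per key. The alias below is just an example, and the key ID comes from the create-key output:

aws kms create-key --description "EMR S3 client-side encryption key" --region us-east-1
aws kms create-alias --alias-name alias/emr-s3-cse --target-key-id <key-id-from-create-key-output> --region us-east-1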

Generate SSL certificates

The SSL certificates allow the encryption of the MapReduce shuffle using HTTPS while the data is in transit between nodes.

o_Amazon_EMR_Encryption_4

For this walkthrough, use OpenSSL to generate a self-signed X.509 certificate with a 2048-bit RSA private key that allows access to the issuer’s EMR cluster instances. This prompts you to provide subject information to generate the certificates.

Use the cert-create.sh script to generate SSL certificates that are compressed into a zip file. Upload the zipped certificates to S3 and keep a note of the S3 prefix. You use this S3 prefix when you build your security configuration.
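For reference, the heart of such a script looks roughly like the following; the subject fields and file names here are placeholders, so adapt them to your own environment:

# Generate a self-signed certificate and 2048-bit RSA private key
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout privateKey.pem -out certificateChain.pem \
  -subj "/C=US/ST=Washington/L=Seattle/O=MyOrg/CN=*.ec2.internal"

# Bundle the trusted certificates alongside the chain and the key
cp certificateChain.pem trustedCertificates.pem
zip -r -X my-certs.zip certificateChain.pem privateKey.pem trustedCertificates.pem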

Important

This example is a proof-of-concept demonstration only. Using self-signed certificates is not recommended and presents a potential security risk. For production systems, use a trusted certification authority (CA) to issue certificates.

To implement certificates from custom providers, use the TLSArtifacts provider interface.

Build the environment

For this walkthrough, launch an EMR cluster into a private subnet. If you already have a VPC and would like to launch this cluster into a public subnet, skip this section and jump to the Create a Security Configuration section.

To launch the cluster into a private subnet, the environment must include the following resources:

  • VPC
  • Private subnet
  • Public subnet
  • Bastion
  • Managed NAT gateway
  • S3 VPC endpoint

As the EMR cluster is launched into a private subnet, you need a bastion or a jump server to SSH onto the cluster. After the cluster is running, you need access to the Internet to request the data keys from KMS. Private subnets do not have access to the Internet directly, so route this traffic via the managed NAT gateway. Use an S3 VPC endpoint to provide a highly reliable and a secure connection to S3.

o_Amazon_EMR_Encryption_5

In the CloudFormation console, create a new stack for this environment and use the environment.json CloudFormation template to deploy it.

As part of the parameters, pick an instance family for the bastion and an EC2 key pair to be used to SSH onto the bastion. Provide an appropriate stack name and add the appropriate tags. For example, the following screenshot is the review step for a stack that I created.

o_Amazon_EMR_Encryption_6

After creating the environment stack, look at the Outputs tab and make a note of the VPC ID, bastion, and private subnet IDs, as you will use them when you launch the EMR cluster resources.

o_Amazon_EMR_Encryption_7

Create a security configuration

The final step before launching the secure EMR cluster is to create a security configuration. For this walkthrough, create a security configuration with S3 client-side encryption using EMR, and LUKS encryption for local volumes using the KMS keys created earlier. You also use the SSL certificates generated and uploaded to S3 earlier for encrypting the MapReduce shuffle.

o_Amazon_EMR_Encryption_8
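A security configuration of this shape can also be created from the command line. The following is a sketch, assuming the KMS key ARNs and the S3 location of the zipped certificates from the earlier steps; verify the field names against the current EMR security configuration schema before relying on it:

cat > secconfig.json <<'EOF'
{
  "EncryptionConfiguration": {
    "EnableInTransitEncryption": true,
    "EnableAtRestEncryption": true,
    "AtRestEncryptionConfiguration": {
      "S3EncryptionConfiguration": {
        "EncryptionMode": "CSE-KMS",
        "AwsKmsKey": "<s3-cse-kms-key-arn>"
      },
      "LocalDiskEncryptionConfiguration": {
        "EncryptionKeyProviderType": "AwsKms",
        "AwsKmsKey": "<luks-kms-key-arn>"
      }
    },
    "InTransitEncryptionConfiguration": {
      "TLSCertificateConfiguration": {
        "CertificateProviderType": "PEM",
        "S3Object": "s3://<your-bucket>/<prefix>/my-certs.zip"
      }
    }
  }
}
EOF

aws emr create-security-configuration --name emr-encryption-walkthrough --security-configuration file://secconfig.json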

Launch an EMR cluster

Now, you can launch an EMR cluster in the private subnet. First, verify that the service role being used for EMR has access to the AmazonElasticMapReduceRole managed service policy. The default service role is EMR_DefaultRole. For more information, see Configuring User Permissions Using IAM Roles.

From the Build the environment section, you have the VPC ID and the subnet ID for the private subnet into which the EMR cluster should be launched. Select those values for the Network and EC2 Subnet fields. In the next step, provide a name and tags for the cluster.

o_Amazon_EMR_Encryption_9

The last step is to select the private key, assign the security configuration that was created in the Create a security configuration section, and choose Create Cluster.

o_Amazon_EMR_Encryption_10
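If you script your clusters instead of using the console, the same launch maps onto a single create-cluster call; the release label, applications, instance types, key pair, and subnet ID below are placeholders:

aws emr create-cluster \
  --name "encrypted-cluster" \
  --release-label emr-5.4.0 \
  --applications Name=Hadoop Name=Spark Name=Hive \
  --instance-type m4.large --instance-count 3 \
  --ec2-attributes KeyName=<your-key-pair>,SubnetId=<private-subnet-id> \
  --security-configuration emr-encryption-walkthrough \
  --use-default-roles \
  --region us-east-1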

Now that you have the environment and the cluster up and running, you can get onto the master node to run scripts. You need the IP address, which you can retrieve from the EMR console page. Choose Hardware, Master Instance group and note the private IP address of the master node.

o_Amazon_EMR_Encryption_11

As the master node is in a private subnet, SSH onto the bastion instance first and then jump from the bastion instance to the master node. For information about how to SSH onto the bastion and then to the Hadoop master, open the ssh-commands.txt file. For more information about how to get onto the bastion, see the Securely Connect to Linux Instances Running in a Private Amazon VPC post.

After you are on the master node, bring your own Hive or Spark scripts. For testing purposes, the GitHub /code directory includes the test.py PySpark and test.q Hive scripts.

Summary

As part of this post, I’ve identified the different phases where data needs to be encrypted and walked through how data in each phase can be encrypted. Then, I described a step-by-step process to achieve all the encryption prerequisites, such as building the KMS keys, building SSL certificates, and launching the EMR cluster with a strong security configuration. As part of this walkthrough, you also secured the data by launching your cluster in a private subnet within a VPC, and used a bastion instance for access to the EMR cluster.

If you have questions or suggestions, please comment below.


About the Author

Sai Sriparasa is a Big Data Consultant for AWS Professional Services. He works with our customers to provide strategic & tactical big data solutions with an emphasis on automation, operations & security on AWS. In his spare time, he follows sports and current affairs.

Managing Secrets for Amazon ECS Applications Using Parameter Store and IAM Roles for Tasks

Post Syndicated from Chris Barclay original https://aws.amazon.com/blogs/compute/managing-secrets-for-amazon-ecs-applications-using-parameter-store-and-iam-roles-for-tasks/

Thanks to my colleague Stas Vonholsky for a great blog post on managing secrets with Amazon ECS applications.

—–

As containerized applications and microservice-oriented architectures become more popular, managing secrets, such as a password to access an application database, becomes more challenging and critical.

Some examples of the challenges include:

  • Support for various access patterns across container environments such as dev, test, and prod
  • Isolated access to secrets on a container/application level rather than at the host level
  • Multiple decoupled services with their own needs for access, both as services and as clients of other services

This post focuses on newly released features that support further improvements to secret management for containerized applications running on Amazon ECS. My colleague, Matthew McClean, also published an excellent post on the AWS Security Blog, How to Manage Secrets for Amazon EC2 Container Service–Based Applications by Using Amazon S3 and Docker, which discusses some of the limitations of passing and storing secrets with container parameter variables.

Most secret management tools provide the following functionality:

  • Highly secured storage system
  • Central management capabilities
  • Secure authorization and authentication mechanisms
  • Integration with key management and encryption providers
  • Secure introduction mechanisms for access
  • Auditing
  • Secret rotation and revocation

Amazon EC2 Systems Manager Parameter Store

Parameter Store is a feature of Amazon EC2 Systems Manager. It provides a centralized, encrypted store for sensitive information and has many advantages when combined with other capabilities of Systems Manager, such as Run Command and State Manager. The service is fully managed, highly available, and highly secured.

Because Parameter Store is accessible using the Systems Manager API, AWS CLI, and AWS SDKs, you can also use it as a generic secret management store. Secrets can be easily rotated and revoked. Parameter Store is integrated with AWS KMS so that specific parameters can be encrypted at rest with the default or custom KMS key. Importing KMS keys enables you to use your own keys to encrypt sensitive data.

Access to Parameter Store is enabled by IAM policies and supports resource level permissions for access. An IAM policy that grants permissions to specific parameters or a namespace can be used to limit access to these parameters. CloudTrail logs, if enabled for the service, record any attempt to access a parameter.

While Amazon S3 has many of the above features and can also be used to implement a central secret store, Parameter Store has the following added advantages:

  • Easy creation of namespaces to support different stages of the application lifecycle.
  • KMS integration that abstracts parameter encryption from the application while requiring the instance or container to have access to the KMS key and for the decryption to take place locally in memory.
  • Stored history about parameter changes.
  • A service that can be controlled separately from S3, which is likely used for many other applications.
  • A configuration data store, reducing overhead from implementing multiple systems.
  • No usage costs.

Note: At the time of publication, Systems Manager doesn’t support VPC private endpoint functionality. To enforce stricter access to a Parameter Store endpoint from a private VPC, use a NAT gateway with a set Elastic IP address together with IAM policy conditions that restrict parameter access to a limited set of IP addresses.

IAM roles for tasks

With IAM roles for Amazon ECS tasks, you can specify an IAM role to be used by the containers in a task. Applications interacting with AWS services must sign their API requests with AWS credentials. This feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances.

Instead of creating and distributing your AWS credentials to the containers or using the EC2 instance role, you can associate an IAM role with an ECS task definition or the RunTask API operation. For more information, see IAM Roles for Tasks.

You can use IAM roles for tasks to securely introduce and authenticate the application or container with the centralized Parameter Store. Access to the secret manager should include features such as:

  • Limited TTL for credentials used
  • Granular authorization policies
  • An ID to track the requests in the logs of the central secret manager
  • Integration support with the scheduler that could map between the container or task deployed and the relevant access privileges

IAM roles for tasks support this use case well, as the role credentials can be accessed only from within the container for which the role is defined. The role exposes temporary credentials and these are rotated automatically. Granular IAM policies are supported with optional conditions about source instances, source IP addresses, time of day, and other options.

The source IAM role can be identified in the CloudTrail logs based on a unique Amazon Resource Name and the access permissions can be revoked immediately at any time with the IAM API or console. As Parameter Store supports resource level permissions, a policy can be created to restrict access to specific keys and namespaces.

Dynamic environment association

In many cases, the container image does not change when moving between environments, which supports immutable deployments and ensures that the results are reproducible. What does change is the configuration: in this context, specifically the secrets. For example, a database and its password might be different in the staging and production environments. There’s still the question of how you point the application to retrieve the correct secret. Should it retrieve prod.app1.secret, test.app1.secret, or something else?

One option is to pass the environment type as an environment variable to the container. The application then concatenates the environment type (prod, test, etc.) with the relative key path and retrieves the relevant secret. In most cases, this leads to a number of separate ECS task definitions.
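As a minimal sketch of the environment-variable approach (the variable and parameter names here are hypothetical), the container’s entrypoint could assemble the key path at startup:

#!/bin/bash
# ENVIRONMENT_TYPE is passed in through the task definition, e.g. "prod" or "test"
PARAMETER_NAME="${ENVIRONMENT_TYPE}.app1.db-pass"

# Retrieve and decrypt only the secret that belongs to this environment
DB_PASS=$(aws ssm get-parameters --names "$PARAMETER_NAME" --with-decryption \
  --query "Parameters[0].Value" --output text --region us-east-1)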

When you describe the task definition in a CloudFormation template, you could base the entry in the IAM role that provides access to Parameter Store, KMS key, and environment property on a single CloudFormation parameter, such as “environment type.” This approach could support a single task definition type that is based on a generic CloudFormation template.

Walkthrough: Securely access Parameter Store resources with IAM roles for tasks

This walkthrough is configured for the Northern Virginia region (us-east-1). I recommend using the same region.

Step 1: Create the keys and parameters

First, create the following KMS keys with the default security policy to be used to encrypt various parameters:

  • prod-app1 – used to encrypt any secrets for app1.
  • license-key – used to encrypt license-related secrets.
aws kms create-key --description prod-app1 --region us-east-1
aws kms create-key --description license-code --region us-east-1

Note the KeyId property in the output of both commands. You use it throughout the walkthrough to identify the KMS keys.

The following commands create three parameters in Parameter Store:

  • prod.app1.db-pass (encrypted with the prod-app1 KMS key)
  • general.license-code (encrypted with the license-key KMS key)
  • prod.app2.user-name (stored as a standard string without encryption)
aws ssm put-parameter --name prod.app1.db-pass --value "AAAAAAAAAAA" --type SecureString --key-id "<key-id-for-prod-app1-key>" --region us-east-1
aws ssm put-parameter --name general.license-code --value "CCCCCCCCCCC" --type SecureString --key-id "<key-id-for-license-code-key>" --region us-east-1
aws ssm put-parameter --name prod.app2.user-name --value "BBBBBBBBBBB" --type String --region us-east-1

Step 2: Create the IAM role and policies

Now, create a role and an IAM policy to be associated with the ECS task that you create later on.
The trust policy for the IAM role needs to allow the ecs-tasks entity to assume the role.

{
   "Version": "2012-10-17",
   "Statement": [
     {
       "Sid": "",
       "Effect": "Allow",
       "Principal": {
         "Service": "ecs-tasks.amazonaws.com"
       },
       "Action": "sts:AssumeRole"
     }
   ]
 }

Save the above policy as a file in the local directory with the name ecs-tasks-trust-policy.json.

aws iam create-role --role-name prod-app1 --assume-role-policy-document file://ecs-tasks-trust-policy.json

The following policy is attached to the role and later associated with the app1 container. Access is granted to the prod.app1.* namespace parameters, the encryption key required to decrypt the prod.app1.db-pass parameter and the license code parameter. The namespace resource permission structure is useful for building various hierarchies (based on environments, applications, etc.).

Make sure to replace <key-id-for-prod-app1-key> with the key ID for the relevant KMS key and <account-id> with your account ID in the following policy.

{
     "Version": "2012-10-17",
     "Statement": [
         {
             "Effect": "Allow",
             "Action": [
                 "ssm:DescribeParameters"
             ],
             "Resource": "*"
         },
         {
             "Sid": "Stmt1482841904000",
             "Effect": "Allow",
             "Action": [
                 "ssm:GetParameters"
             ],
             "Resource": [
                 "arn:aws:ssm:us-east-1:<account-id>:parameter/prod.app1.*",
                 "arn:aws:ssm:us-east-1:<account-id>:parameter/general.license-code"
             ]
         },
         {
             "Sid": "Stmt1482841948000",
             "Effect": "Allow",
             "Action": [
                 "kms:Decrypt"
             ],
             "Resource": [
                 "arn:aws:kms:us-east-1:<account-id>:key/<key-id-for-prod-app1-key>"
             ]
         }
     ]
 }

Save the above policy as a file in the local directory with the name app1-secret-access.json:

aws iam create-policy --policy-name prod-app1 --policy-document file://app1-secret-access.json

Replace <account-id> with your account ID in the following command:

aws iam attach-role-policy --role-name prod-app1 --policy-arn "arn:aws:iam::<account-id>:policy/prod-app1"

Step 3: Add the testing script to an S3 bucket

Create a file with the script below, name it access-test.sh and add it to an S3 bucket in your account. Make sure the object is publicly accessible and note down the object link, for example https://s3-eu-west-1.amazonaws.com/my-new-blog-bucket/access-test.sh

#!/bin/bash
#This is simple bash script that is used to test access to the EC2 Parameter store.
# Install the AWS CLI
apt-get -y install python2.7 curl
curl -O https://bootstrap.pypa.io/get-pip.py
python2.7 get-pip.py
pip install awscli
# Getting region
EC2_AVAIL_ZONE=`curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone`
EC2_REGION="`echo \"$EC2_AVAIL_ZONE\" | sed -e 's:\([0-9][0-9]*\)[a-z]*\$:\\1:'`"
# Trying to retrieve parameters from the EC2 Parameter Store
APP1_WITH_ENCRYPTION=`aws ssm get-parameters --names prod.app1.db-pass --with-decryption --region $EC2_REGION --output text 2>&1`
APP1_WITHOUT_ENCRYPTION=`aws ssm get-parameters --names prod.app1.db-pass --no-with-decryption --region $EC2_REGION --output text 2>&1`
LICENSE_WITH_ENCRYPTION=`aws ssm get-parameters --names general.license-code --with-decryption --region $EC2_REGION --output text 2>&1`
LICENSE_WITHOUT_ENCRYPTION=`aws ssm get-parameters --names general.license-code --no-with-decryption --region $EC2_REGION --output text 2>&1`
APP2_WITHOUT_ENCRYPTION=`aws ssm get-parameters --names prod.app2.user-name --no-with-decryption --region $EC2_REGION --output text 2>&1`
# The nginx server is started after the script is invoked, preparing folder for HTML.
if [ ! -d /usr/share/nginx/html/ ]; then
mkdir -p /usr/share/nginx/html/;
fi
chmod 755 /usr/share/nginx/html/

# Creating an HTML file to be accessed at http://<public-instance-DNS-name>/ecs.html
cat > /usr/share/nginx/html/ecs.html <<EOF
<!DOCTYPE html>
<html>
<head>
<title>App1</title>
<style>
body {padding: 20px;margin: 0 auto;font-family: Tahoma, Verdana, Arial, sans-serif;}
code {white-space: pre-wrap;}
result {background: hsl(220, 80%, 90%);}
</style>
</head>
<body>
<h1>Hi there!</h1>
<p style="padding-bottom: 0.8cm;">Following are the results of different access attempts as experienced by "App1".</p>

<p><b>Access to prod.app1.db-pass:</b><br/>
<pre><code>aws ssm get-parameters --names prod.app1.db-pass --with-decryption</code><br/>
<code><result>$APP1_WITH_ENCRYPTION</result></code><br/>
<code>aws ssm get-parameters --names prod.app1.db-pass --no-with-decryption</code><br/>
<code><result>$APP1_WITHOUT_ENCRYPTION</result></code></pre><br/>
</p>

<p><b>Access to general.license-code:</b><br/>
<pre><code>aws ssm get-parameters --names general.license-code --with-decryption</code><br/>
<code><result>$LICENSE_WITH_ENCRYPTION</result></code><br/>
<code>aws ssm get-parameters --names general.license-code --no-with-decryption</code><br/>
<code><result>$LICENSE_WITHOUT_ENCRYPTION</result></code></pre><br/>
</p>

<p><b>Access to prod.app2.user-name:</b><br/>
<pre><code>aws ssm get-parameters --names prod.app2.user-name --no-with-decryption</code><br/>
<code><result>$APP2_WITHOUT_ENCRYPTION</result></code><br/>
</p>

<p><em>Thanks for visiting</em></p>
</body>
</html>
EOF

Step 4: Create a test cluster

I recommend creating a new ECS test cluster with the latest ECS AMI and ECS agent on the instance. Use the following field values:

  • Cluster name: access-test
  • EC2 instance type: t2.micro
  • Number of instances: 1
  • Key pair: No EC2 key pair is required, unless you’d like to SSH to the instance and explore the running container.
  • VPC: Choose the default VPC. If unsure, you can find the VPC ID with the IP range 172.31.0.0/16 in the Amazon VPC console.
  • Subnets: Pick a subnet in the default VPC.
  • Security group: Create a new security group with CIDR block 0.0.0.0/0 and port 80 for inbound access.

Leave other fields with the default settings.

Create a simple task definition that relies on the public NGINX container and the role that you created for app1. Specify the properties such as the available container resources and port mappings. Note that the command option is used to download and invoke a test script that installs the AWS CLI in the container, runs a number of get-parameters commands, and creates an HTML file with the results.

Replace <account-id> with your account ID and <your-S3-URI> with a link to the S3 object created in Step 3 in the following commands:

aws ecs register-task-definition --family access-test --task-role-arn "arn:aws:iam::<account-id>:role/prod-app1" --container-definitions name="access-test",image="nginx",portMappings="[{containerPort=80,hostPort=80,protocol=tcp}]",readonlyRootFilesystem=false,cpu=512,memory=490,essential=true,entryPoint="sh,-c",command="\"/bin/sh -c \\\"apt-get update ; apt-get -y install curl ; curl -O <your-S3-URI> ; chmod +x access-test.sh ; ./access-test.sh ; nginx -g 'daemon off;'\\\"\"" --region us-east-1

aws ecs run-task --cluster access-test --task-definition access-test --count 1 --region us-east-1

Verifying access

After the task is in a running state, check the public DNS name of the instance and navigate to the following page:

http://<ec2-instance-public-DNS-name>/ecs.html

You should see the results of running different access tests from the container after a short duration.

If the test results don’t appear immediately, wait a few seconds and refresh the page.
Make sure that inbound traffic for port 80 is allowed on the security group attached to the instance.
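
You can also check the page from the command line; for example, with curl (using the same DNS name placeholder):

curl -s http://<ec2-instance-public-DNS-name>/ecs.html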

The results you see in the static results HTML page should be the same as running the following commands from the container.

prod.app1.db-pass

aws ssm get-parameters --names prod.app1.db-pass --with-decryption --region us-east-1
aws ssm get-parameters --names prod.app1.db-pass --no-with-decryption --region us-east-1

Both commands should work, as the policy provides access to both the parameter and the required KMS key.

general.license-code

aws ssm get-parameters --names general.license-code --no-with-decryption --region us-east-1
aws ssm get-parameters --names general.license-code --with-decryption --region us-east-1

Only the first command with the “no-with-decryption” parameter should work. The policy allows access to the parameter in Parameter Store but there’s no access to the KMS key. The second command should fail with an access denied error.

prod.app2.user-name

aws ssm get-parameters --names prod.app2.user-name --no-with-decryption --region us-east-1

The command should fail with an access denied error, as there are no permissions associated with the namespace for prod.app2.
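
If you want to confirm which role the container is actually using, the ECS agent exposes the task credentials through a local metadata endpoint. As a rough sketch, from a shell inside the container (for example, via docker exec on the container instance):

curl -s 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
aws sts get-caller-identity --region us-east-1

The first command returns the temporary credentials for the prod-app1 task role, and the second shows the assumed-role identity that the AWS CLI calls in the test script run under.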

Finishing up

Remember to delete all resources (such as the KMS keys and EC2 instance), so that you don’t incur charges.
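
As a rough sketch, the cleanup can be done from the CLI as well. Stop the running task and terminate the container instance first, then remove the remaining resources; repeat the delete-parameter command for each parameter you created, and replace <account-id> and <key-id> with your own values:

aws ecs delete-cluster --cluster access-test
aws ssm delete-parameter --name prod.app1.db-pass
aws iam detach-role-policy --role-name prod-app1 --policy-arn "arn:aws:iam::<account-id>:policy/prod-app1"
aws iam delete-policy --policy-arn "arn:aws:iam::<account-id>:policy/prod-app1"
aws kms schedule-key-deletion --key-id <key-id> --pending-window-in-days 7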

Conclusion

Central secret management is an important aspect of securing containerized environments. By using Parameter Store and task IAM roles, customers can create a central secret management store and a well-integrated access layer that allows applications to access only the secrets they need, restricts access on a per-container basis, and further encrypts secrets with custom KMS keys.

Whether the secret management layer is implemented with Parameter Store, Amazon S3, Amazon DynamoDB, or a solution such as Vault or KeyWhiz, it’s a vital part of the process of managing and accessing secrets.

Finnish Government Investigates as Tens of Thousands Face Piracy ‘Fines’

Post Syndicated from Andy original https://torrentfreak.com/finnish-government-investigates-as-tens-of-thousands-face-piracy-fines-170126/

So-called copyright trolling is a plague sweeping across the world and there seems to be very little anyone can do to stop it. Claiming that their rights have been infringed, copyright holders head to court to demand the identities of subscribers behind IP addresses and from there they begin their threats.

One of the more recent countries to be hit with the phenomenon is Finland, where the practice seriously got underway in 2014 and escalated in 2015.

It’s now emerged that tens of thousands of citizens are likely to be caught up in a new dragnet following their alleged sharing of movies and TV shows.

A copy of a letter recently sent to an Internet subscriber and obtained by Helsingin Sanomat reveals a demand for 2,200 euros relating to the downloading of a TV show. But this single letter is just the tip of the iceberg.

Last year a local court dealt with around 200 cases that concluded with copyright holders being granted permission to obtain the identities of between hundreds and thousands of individuals said to have infringed their rights.

HS estimates that as many as 60,000 people could be in line to receive cash demands similar to the one detailed above. They come from Hedman Partners, the Helsinki law firm that’s been involved in copyright trolling cases in Finland for the past couple of years.

Based on a 2,200 euro settlement, the cash involved is potentially enormous. For every hundred cases settled, the law firm reportedly pockets 130,000 euros for “monitoring costs”, with 90,000 euros going to the rightsholders.

Due to the scale of the problem, complaints from letter recipients are now being reported to various local authorities. After receiving dozens of complaints from bewildered Internet account holders, police were forced to issue a statement last Friday.

“Based on the information reported, there is no reason to suspect a crime. The mere lack of clarity associated with the invoice-based letter, for example, does not prove the crime of fraud or an intention to deliberately deceive,” said Detective Chief Inspector Taija Kostamo.

Kostamo also noted that since the dispute is one based in civil law, the police will not be getting involved in any investigation.

“The police are not the competent authority to solve this issue,” he said, adding that citizens should take steps to secure their Wi-Fi networks to avoid third-party intrusions.

With the police backing away from any involvement, expectations have now fallen on the government to tackle the problem. Thankfully for those involved, the Ministry of Education and Culture appears to be taking the matter seriously and has promised an investigation.

“It is not intended that our legislation should be used for milking [the public],” said ‎Government Counsellor Anna Vuopala.

“It seems that it is appropriate for the Ministry to convene the parties involved in order to find out whether the law is being complied with in all respects,” she said.

Local copyright law obliges ISPs to hand over account holders’ names if copyrighted content has been shared without permission to a “significant degree.” There is now some debate over whether the sharing of a movie or TV show meets that threshold.

With a meeting planned for February, the issue has now attracted the attention of parliament. HS reports that various Members of Parliament are looking into the matter to clarify the position and look at what can be done to deal with the problems raised.

The situation emerging in Finland is a prime example of what happens when large numbers of people are targeted at once. While a few hundred cases might fly somewhat under the radar and fade away relatively quickly, tens of thousands aren’t going to be brushed under the carpet. Trolling has now become a national issue, with all of the consequences that will entail.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

AWS Web Application Firewall (WAF) for Application Load Balancers

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/aws-web-application-firewall-waf-for-application-load-balancers/

I’m still catching up on a couple of launches that we made late last year!

Today’s post covers two services that I’ve written about in the past — AWS Web Application Firewall (WAF) and AWS Application Load Balancer:

AWS Web Application Firewall (WAF) – Helps to protect your web applications from common application-layer exploits that can affect availability or consume excessive resources. As you can see in my post (New – AWS WAF), WAF allows you to use access control lists (ACLs), rules, and conditions that define acceptable or unacceptable requests or IP addresses. You can selectively allow or deny access to specific parts of your web application and you can also guard against various SQL injection attacks. We launched WAF with support for Amazon CloudFront.

AWS Application Load Balancer (ALB) – This load balancing option for the Elastic Load Balancing service runs at the application layer. It allows you to define routing rules that are based on content that can span multiple containers or EC2 instances. Application Load Balancers support HTTP/2 and WebSocket, and give you additional visibility into the health of the target containers and instances (to learn more, read New – AWS Application Load Balancer).

Better Together
Late last year (I told you I am still catching up), we announced that WAF can now help to protect applications that are running behind an Application Load Balancer. You can set this up pretty quickly and you can protect both internal and external applications and web services.

I already have three EC2 instances behind an ALB:

I simply create a Web ACL in the same region and associate it with the ALB. I begin by naming the Web ACL. I also instruct WAF to publish to a designated CloudWatch metric:

Then I add any desired conditions to my Web ACL:

For example, I can easily set up several SQL injection filters for the query string:

After I create the filter I use it to create a rule:

And then I use the rule to block requests that match the condition:

To pull it all together I review my settings and then create the Web ACL:

Seconds after I click on Confirm and create, the new rule is active and WAF is protecting the application behind my ALB:

And that’s all it takes to use WAF to protect the EC2 instances and containers that are running behind an Application Load Balancer!
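
If you manage your infrastructure from the command line, the same association can be made with the regional WAF API. Here’s a minimal sketch (the Web ACL ID and load balancer ARN are placeholders for your own values):

aws waf-regional associate-web-acl --web-acl-id <web-acl-id> --resource-arn <application-load-balancer-arn> --region us-east-1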

Learn More
To learn more about how to use WAF and ALB together, plan to attend the Secure Your Web Applications Using AWS WAF and Application Load Balancer webinar at 10 AM PT on January 26th.

You may also find the Secure Your Web Application With AWS WAF and Amazon CloudFront presentation from re:Invent to be of interest.

Jeff;

China Bans Unauthorized VPN Services in Internet Crackdown

Post Syndicated from Andy original https://torrentfreak.com/china-ban-unauthorized-vpn-services-in-internet-crackdown-170123/

While the Internet is considered by many to be the greatest invention of modern time, to others it presents a disruptive influence that needs to be controlled.

Among developed nations nowhere is this more obvious than in China, where the government seeks to limit what citizens can experience online. Using technology such as filters and an army of personnel, people are routinely barred from visiting certain websites and engaging in activity deemed as undermining the state.

Of course, a cat-and-mouse game is continuously underway, with citizens regularly trying to punch through the country’s so-called ‘Great Firewall’ using various techniques, services, and encryption technologies. Now, however, even that is under threat.

In an announcement yesterday from China’s Ministry of Industry and Information Technology, the government explained that due to Internet technologies and services expanding in a “disorderly” fashion, regulation is needed to restore order.

“In recent years, as advances in information technology networks, cloud computing, big data and other applications have flourished, China’s Internet network access services market is facing many development opportunities. However, signs of disorderly development show the urgent need for regulation norms,” MIIT said.

In order to “standardize” the market and “strengthen network information security management,” the government says it is embarking on a “nationwide Internet network access services clean-up.” It will begin immediately and continue until March 31, 2018, with several aims.

All Internet services such as data centers, ISPs, CDNs and much-valued censorship-busting VPNs, will need to have pre-approval from the government to operate. Operating such a service without a corresponding telecommunications business license will constitute an offense.

“Internet data centers, ISP and CDN enterprises shall not privately build communication transmission facilities, and shall not use the network infrastructure and IP addresses, bandwidth and other network access resources…without the corresponding telecommunications business license,” the notice reads.

It will also be an offense to possess a business license but then operate outside its scope, such as by exceeding its regional boundaries or by operating other Internet services not permitted by the license. Internet entities are also forbidden to sub-lease to other unlicensed entities.

In the notice, VPNs and similar technologies have a section all to themselves and are framed as “cross-border issues.”

“Without the approval of the telecommunications administrations, entities can not create their own or leased line (including a Virtual Private Network) and other channels to carry out cross-border business activities,” it reads.

The notice, published yesterday, renders most VPN providers in China illegal, SCMP reports.

Only time will tell what effect the ban will have in the real world, but in the short-term there is bound to be some disruption as entities seek to license their services or scurry away underground.

As always, however, the Internet will perceive censorship as damage, and it’s inevitable that the most determined of netizens will find a way to access content outside China (such as Google, Facebook, YouTube and Twitter), no matter how strict the rules.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

The command-line, for cybersec

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/01/the-command-line-for-cybersec.html

On Twitter I made the mistake of asking people about command-line basics for cybersec professionals. I got a lot of useful responses, which I summarize in this long (5k words) post. It’s mostly driven by the tools I use, with a bit of input from the tweets I got in response to my query.

bash

By command-line this document really means bash.

There are many types of command-line shells. Windows has two, ‘cmd.exe’ and ‘PowerShell’. Unix started with the Bourne shell ‘sh’, and there have been many variations of this over the years, ‘csh’, ‘ksh’, ‘zsh’, ‘tcsh’, etc. When GNU rewrote Unix user-mode software independently, they called their shell “Bourne Again Shell” or “bash” (cue “JSON Bourne” shell jokes here).

Bash is the default shell for Linux and macOS. It’s also available on Windows, as part of their special “Windows Subsystem for Linux”. The windows version of ‘bash’ has become my most used shell.

For Linux IoT devices, BusyBox is the most popular shell. It’s easy to learn, as it includes feature-reduced versions of popular commands.

man

‘Man’ is the command you should not run if you want help for a command.

Man pages are designed to drive away newbies. They are only useful if you are already mostly an expert with the command you want help on. Man pages list all possible features of a program, but do not highlight examples of the most common features, or the most common way to use the commands.

Take ‘sed’ as an example. It’s used most commonly to do a search-and-replace in files, like so:

$ sed ‘s/rob/dave/’ foo.txt

This usage is so common that many non-geeks know of it. Yet, if you type ‘man sed’ to figure out how to do a search and replace, you’ll get nearly incomprehensible gibberish, and no example of this most common usage.

I point this out because most guides on using the shell recommend ‘man’ pages to get help. This is wrong, it’ll just endlessly frustrate you. Instead, google the commands you need help on, or better yet, search StackExchange for answers.

You might try asking questions, like on Twitter or forum sites, but this requires a strategy. If you ask a basic question, self-important dickholes will respond by telling you to “rtfm” or “read the fucking manual”. A better strategy is to exploit their dickhole nature, such as saying “too bad command xxx cannot do yyy”. Helpful people will gladly explain why you are wrong, carefully explaining how xxx does yyy.

If you must use ‘man’, use the ‘apropos’ command to find the right man page. Sometimes multiple things in the system have the same or similar names, leading you to the wrong page.

apt-get install yum

Using the command-line means accessing that huge open-source ecosystem. Most of the things in this guide do not already exist on the system. You have to either compile them from source, or install them via a package manager. Linux distros ship with a small footprint, but have a massive database of precompiled software “packages” in the cloud somewhere. Use the “package manager” to install the software from the cloud.

On Debian-derived systems (like Ubuntu, Kali, Raspbian), type “apt-get install masscan” to install “masscan” (as an example). Use “apt-cache search scan” to find a bunch of scanners you might want to install.

On RedHat systems, use “yum” instead. On BSD, use the “ports” system, which you can also get working for macOS.

If no pre-compiled package exists for a program, then you’ll have to download the source code and compile it. There’s about an 80% chance this will work easily, following the instructions. There is a 20% chance you’ll experience “dependency hell”, for example, needing to install two mutually incompatible versions of Python.

Bash is a scripting language

Don’t forget that shells are really scripting languages. The bit that executes a single command is just a degenerate use of the scripting language. For example, you can do a traditional for loop like:

$ for i in $(seq 1 9); do echo $i; done

In this way, ‘bash’ is no different than any other scripting language, like Perl, Python, NodeJS, PHP CLI, etc. That’s why a lot of stuff on the system actually exists as short ‘bash’ programs, aka. shell scripts.

Few want to write bash scripts, but you are expected to be able to read them, either to tweak existing scripts on the system, or to read StackExchange help.

File system commands

The macOS “Finder” or Windows “File Explorer” are just graphical shells that help you find files, open, and save them. The first commands you learn are for the same functionality on the command-line: pwd, cd, ls, touch, rm, rmdir, mkdir, chmod, chown, find, ln, mount.

The command “rm -rf /” removes everything starting from the root directory. This will also follow mounted server directories, deleting files on the server. I point this out to give an appreciation of the raw power you have over the system from the command-line, and how easily you can disrupt things.

Of particular interest is the “mount” command. Desktop versions of Linux typically mount USB flash drives automatically, but on servers, you need to do it manually, e.g.:

$ mkdir ~/foobar
$ mount /dev/sdb ~/foobar

You’ll also use the ‘mount’ command to connect to file servers, using the “cifs” package if they are Windows file servers:

# apt-get install cifs-utils
# mkdir /mnt/vids
# mount -t cifs -o username=robert,password=foobar123  //192.168.1.11/videos /mnt/vids

Linux system commands

The next commands you’ll learn are for sysadmining the Linux system: ps, top, who, history, last, df, du, kill, killall, lsof, lsmod, uname, id, shutdown, and so on.

The first thing hackers do when hacking into a system is run “uname” (to figure out what version of the OS is running) and “id” (to figure out which account they’ve acquired, like “root” or some other user).

The Linux system command I use most is “dmesg” (or ‘tail -f /var/log/dmesg’) which shows you the raw system messages. For example, when I plug in USB drives to a server, I look in ‘dmesg’ to find out which device was added so that I can mount it. I don’t know if this is the best way, it’s just the way I do it (servers don’t automount USB drives like desktops do).

Networking commands

The permanent state of the network (what gets configured on the next bootup) is configured in text files somewhere. But there are a wealth of commands you’ll use to view the current state of networking, make temporary changes, and diagnose problems.

The ‘ifconfig’ command has long been used to view the current TCP/IP configuration and make temporary changes. Learning how TCP/IP works means playing a lot with ‘ifconfig’. Use “ifconfig -a” for even more verbose information.

Use the “route” command to see if you are sending packets to the right router.

Use ‘arp’ command to make sure you can reach the local router.

Use ‘traceroute’ to make sure packets are following the correct route to their destination. You should learn the nifty trick it’s based on (TTLs). You should also play with the TCP, UDP, and ICMP options.

Use ‘ping’ to see if you can reach the target across the Internet. Usefully measures the latency in milliseconds, and congestion (via packet loss). For example, ping NetFlix throughout the day, and notice how the ping latency increases substantially during “prime time” viewing hours.

Use ‘dig’ to make sure DNS resolution is working right. (Some use ‘nslookup’ instead). Dig is useful because it’s the raw universal DNS tool – every time they add some new standard feature to DNS, they add that feature into ‘dig’ as well.
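
A few concrete invocations of the tools above, the way I’d typically run them (the host name is just an example):

$ route -n
$ arp -a
$ traceroute -n www.google.com
$ ping -c 4 www.google.com
$ dig +short www.google.com A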

The ‘netstat -tualn’ command views the current TCP/IP connections and which ports are listening. I forget what the various options “tualn” mean, only it’s the output I always want to see, rather than the raw “netstat” command by itself.

You’ll want to use ‘ethtool -K’ to turn off checksum and segmentation offloading. These are features that break packet-captures sometimes.

There is this new fangled ‘ip’ system for Linux networking, replacing many of the above commands, but as an old timer, I haven’t looked into that.

Some other tools for diagnosing local network issues are ‘tcpdump’, ‘nmap’, and ‘netcat’. These are described in more detail below.

ssh

In general, you’ll remotely log into a system in order to use the command-line. We use ‘ssh’ for that. It uses a protocol similar to SSL in order to encrypt the connection. There are two ways to use ‘ssh’ to login, with a password or with a client-side certificate.

When using SSH with a password, you type “ssh username@servername”. The remote system will then prompt you for a password for that account.

When using client-side certificates, use “ssh-keygen” to generate a key, then either copy the public-key of the client to the server manually, or use “ssh-copy-id” to copy it using the password method above.
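
A minimal example of that workflow, with placeholder user and host names:

$ ssh-keygen -t ed25519
$ ssh-copy-id rgraham@scanner2.erratasec.com
$ ssh rgraham@scanner2.erratasec.com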

How this works is basic application of public-key cryptography. When logging in with a password, you get a copy of the server’s public-key the first time you login, and if it ever changes, you get a nasty warning that somebody may be attempting a man in the middle attack.

$ ssh rgraham@scanner2.erratasec.com
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!

When using client-side certificates, the server trusts your public-key. This is similar to how client-side certificates work in SSL VPNs.

You can use SSH for things other than loging into a remote shell. You can script ‘ssh’ to run commands remotely on a system in a local shell script. You can use ‘scp’ (SSH copy) to transfer files to and from a remote system. You can do tricks with SSH to create tunnels, which is popular way to bypass the restrictive rules of your local firewall nazi.
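
For example (host names and ports are placeholders), copying a file and setting up a local port-forward tunnel look like this:

$ scp results.tar.gz rgraham@scanner2.erratasec.com:/tmp/
$ ssh -L 8080:intranet-server:80 rgraham@jumpbox.example.com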

openssl

This is your general cryptography toolkit, doing everything from simple encryption, to public-key certificate signing, to establishing SSL connections.

It is extraordinarily user hostile, with terrible inconsistency among options. You can only figure out how to do things by looking up examples on the net, such as on StackExchange. There are competing SSL libraries with their own command-line tools, like GnuTLS and Mozilla NSS that you might find easier to use.

The fundamental use of the ‘openssl’ tool is to create public-keys, “certificate requests”, and self-signed certificates. All the web-site certificates I’ve ever obtained have been created using the openssl command-line tool to create CSRs.

You should practice using the ‘openssl’ tool to encrypt files, sign files, and to check signatures.

You can use openssl just like PGP for encrypted emails/messages, but following the “S/MIME” standard rather than PGP standard. You might consider learning the ‘pgp’ command-line tools, or the open-source ‘gpg’ or ‘gpg2’ tools as well.

You should learn how to use the “openssl s_client” feature to establish SSL connections, as well as the “openssl s_server” feature to create an SSL proxy for a server that doesn’t otherwise support SSL.
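
A couple of sketches of the most common invocations (file and host names are placeholders):

$ openssl req -new -newkey rsa:2048 -nodes -keyout www.example.com.key -out www.example.com.csr
$ openssl s_client -connect www.google.com:443 -servername www.google.com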

Learning all the ways of using the ‘openssl’ tool to do useful things will go a long way in teaching somebody about crypto and cybersecurity. I can imagine an entire class consisting of nothing but learning ‘openssl’.

netcat (nc, socat, cyptocat, ncat)

A lot of Internet protocols are based on text. That means you can create a raw TCP connection to the service and interact with them using your keyboard. The classic tool for doing this is known as “netcat”, abbreviated “nc”. For example, connect to Google’s web server at port 80 and type the HTTP HEAD command followed by a blank line (hit [return] twice):

$ nc www.google.com 80
HEAD / HTTP/1.0

HTTP/1.0 200 OK
Date: Tue, 17 Jan 2017 01:53:28 GMT
Expires: -1
Cache-Control: private, max-age=0
Content-Type: text/html; charset=ISO-8859-1
P3P: CP=”This is not a P3P policy! See https://www.google.com/support/accounts/answer/151657?hl=en for more info.”
Server: gws
X-XSS-Protection: 1; mode=block
X-Frame-Options: SAMEORIGIN
Set-Cookie: NID=95=o7GT1uJCWTPhaPAefs4CcqF7h7Yd7HEqPdAJncZfWfDSnNfliWuSj3XfS5GJXGt67-QJ9nc8xFsydZKufBHLj-K242C3_Vak9Uz1TmtZwT-1zVVBhP8limZI55uXHuPrejAxyTxSCgR6MQ; expires=Wed, 19-Jul-2017 01:53:28 GMT; path=/; domain=.google.com; HttpOnly
Accept-Ranges: none
Vary: Accept-Encoding

Another classic example is to connect to port 25 on a mail server to send email, spoofing the “MAIL FROM” address.

There are several versions of ‘netcat’ that work over SSL as well. My favorite is ‘ncat’, which comes with ‘nmap’, as it’s actively maintained. In theory, “openssl s_client” should also work this way.

nmap

At some point, you’ll need to port scan. The standard program for this is ‘nmap’, and it’s the best. The classic way of using it is something like:

# nmap -A scanme.nmap.org

The ‘-A’ option means to enable all the interesting features like OS detection, version detection, and basic scripts on the most common ports that a server might have open. It takes a while to run. The “scanme.nmap.org” is a good site to practice on.

Nmap is more than just a port scanner. It has a rich scripting system for probing more deeply into a system than just a port, and to gather more information useful for attacks. The scripting system essentially contains some attacks, such as password guessing.
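
For example, running a single harmless script against a few ports looks like this (same practice target as above):

$ nmap -sV --script=banner -p 22,80,443 scanme.nmap.org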

Scanning the Internet, finding services identified by ‘nmap’ scripts, and interacting with them with tools like ‘ncat’ will teach you a lot about how the Internet works.

BTW, if ‘nmap’ is too slow, use ‘masscan’ instead. It’s a lot faster, though has much more limited functionality.

Packet sniffing with tcpdump and tshark

All Internet traffic consists of packets going between IP addresses. You can capture those packets and view them using “packet sniffers”. The most important packet-sniffer is “Wireshark”, a GUI. For the command-line, there is ‘tcpdump’ and ‘tshark’.

You can run tcpdump on the command-line to watch packets go in/out of the local computer. This performs a quick “decode” of packets as they are captured. It’ll reverse-lookup IP addresses into DNS names, which means its buffers can overflow, dropping new packets while it’s waiting for DNS name responses for previous packets (which can be disabled with -n):

# tcpdump -p -i eth0

A common task is to create a round-robin set of files, saving the last 100 files of 1-gig each. Older files are overwritten. Thus, when an attack happens, you can stop capture, go back in time, and view the contents of the network traffic using something like Wireshark:

# tcpdump -p -i eth0 -s65535 -C 1000 -W 100 -w cap

Instead of capturing everything, you’ll often set “BPF” filters to narrow down to traffic from a specific target, or a specific port.
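
For example, to capture only HTTPS traffic to or from a single host, skip DNS lookups, and write the result to a file (the IP address is a placeholder):

# tcpdump -n -p -i eth0 -w target.pcap 'tcp port 443 and host 192.168.1.11'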

The above examples use the -p option to capture traffic destined to the local computer. Sometimes you may want to look at all traffic going to other machines on the local network. You’ll need to figure out how to tap into wires, or setup “monitor” ports on switches for this to work.

A more advanced command-line program is ‘tshark’. It can apply much more complex filters. It can also be used to extract the values of specific fields and dump them to a text file.
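
A sketch of what that looks like, pulling the source address and requested URL out of a capture file:

$ tshark -r cap.pcap -Y http.request -T fields -e ip.src -e http.host -e http.request.uri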

Base64/hexdump/xxd/od

These are some rather trivial commands, but you should know them.

The ‘base64’ command encodes binary data in text. The text can then be passed around, such as in email messages. Base64 encoding is often automatic in the output from programs like openssl and PGP.

In many cases, you’ll need to view a hex dump of some binary data. There are many programs to do this, such as hexdump, xxd, od, and more.

grep

Grep searches for a pattern within a file. More important, it searches for a regular expression (regex) in a file. The fu of Unix is that a lot of stuff is stored in text files, and you use grep with regex patterns to extract the stuff stored in those files.

The power of this tool really depends on your mastery of regexes. You should master enough that you can understand StackExchange posts that explain almost what you want to do, and then tweak them to make them work.

Grep, by default, shows only the matching lines. In many cases, you only want the part that matches. To do that, use the -o option. (This is not available on all versions of grep).

You’ll probably want the better, “extended” regular expressions, so use the -E option.

You’ll often want “case-insensitive” options (matching both upper and lower case), so use the -i option.

For example, to extract all MAC address from a text file, you might do something like the following. This extracts all strings that are twelve hex digits.

$ grep -Eio '[0-9A-F]{12}' foo.txt

Text processing

Grep is just the first of the various “text processing filters”. Other useful ones include ‘sed’, ‘cut’, ‘sort’, and ‘uniq’.

You’ll be an expert at piping the output of one to the input of the next. You’ll use “sort | uniq” as god (Dennis Ritchie) intended and not the heresy of “sort -u”.

You might want to master ‘awk’. It’s a new programming language, but once you master it, it’ll be easier than other mechanisms.

You’ll end up using ‘wc’ (word-count) a lot. All it does is count the number of lines, words, characters in a file, but you’ll find yourself wanting to do this a lot.
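
A typical pipeline chaining several of these together, pulling the most frequent client addresses out of a web server log (the log file name and field position are assumptions):

$ cut -d' ' -f1 access.log | sort | uniq -c | sort -rn | head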

csvkit and jq

You get data in CSV format and JSON format a lot. The tools ‘csvkit’ and ‘jq’ respectively help you deal with those formats, to convert these files into other formats, stick the data in databases, and so forth.

It’ll be easier to extract data using these tools that understand these text formats than trying to write ‘awk’ commands or ‘grep’ regexes.
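
For example, assuming a file of JSON objects (one per line) with an ‘ip’ field, and a CSV file with ‘ip’ and ‘port’ columns, extraction looks like this:

$ jq -r '.ip' hosts.json | sort | uniq -c | sort -rn | head
$ csvcut -c ip,port results.csv | csvlook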

strings

Most files are binary with a few readable ASCII strings. You use the program ‘strings’ to extract those strings.

This one simple trick sounds stupid, but it’s more powerful than you’d think. For example, I knew that a program probably contained a hard-coded password. I then blindly grabbed all the strings in the program’s binary file and sent them to a password cracker to see if they could decrypt something. And indeed, one of the 100,000 strings in the file worked, thus finding the hard-coded password.

tail -f

So ‘tail’ is just a standard Linux tool for looking at the end of files. If you want to keep checking the end of a live file that’s constantly growing, then use “tail -f”. It’ll sit there waiting for something new to be added to the end of the file, then print it out. I do this a lot, so I thought it’d be worth mentioning.

tar -xvfz, gzip, xz, 7z

In prehistorical times (like the 1980s), Unix was backed up to tape drives. The tar command could be used to combine a bunch of files into a single “archive” to be sent to the tape drive, hence “tape archive” or “tar”.

These days, a lot of stuff you download will be in tar format (ending in .tar). You’ll need to learn how to extract it:

$ tar -xvf something.tar

Nobody knows what the “xvf” options mean anymore, but these letters must be specified in that order. I’m joking here, but only a little: somebody did a survey once and found that virtually nobody knows how to use ‘tar’ other than the canned formulas such as this.

Along with combining files into an archive you also need to compress them. In prehistoric Unix, the “compress” command would be used, which would replace a file with a compressed version ending in ‘.z’. This was found to be encumbered by patents, so everyone switched to ‘gzip’ instead, which replaces a file with a new one ending with ‘.gz’.

$ ls foo.txt*
foo.txt
$ gzip foo.txt
$ ls foo.txt*
foo.txt.gz

Combined with tar, you get files with either the “.tar.gz” extension, or simply “.tgz”. You can untar and uncompress at the same time:

$ tar -xvfz something.tar.gz

Gzip is always good enough, but nerds gonna nerd and want to compress with slightly better compression programs. They’ll have extensions like “.bz2”, “.7z”, “.xz”, and so on. There are a ton of them. Some of them are supported directly by the ‘tar’ program:

$ tar -xvfj something.tar.bz2

Then there is the “zip/unzip” program, which supports Windows .zip file format. To create compressed archives these days, I don’t bother with tar, but just use the ZIP format. For example, this will recursively descend a directory, adding all files to a ZIP file that can easily be extracted under Windows:

$ zip -r test.zip ./test/

dd

I should include this under the system tools at the top, but it’s interesting for a number of purposes. The usage is simply to copy one file to another, the in-file to the out-file.

$ dd if=foo.txt of=foo2.txt

But that’s not interesting. What’s interesting is using it to write to “devices”. The disk drives in your system also exist as raw devices under the /dev directory.

For example, if you want to create a boot USB drive for your Raspberry Pi:

# dd if=rpi-ubuntu.img of=/dev/sdb

Or, you might want to hard erase an entire hard drive by overwriting random data:

# dd if=/dev/urandom of=/dev/sdc

Or, you might want to image a drive on the system, for later forensics, without stumbling on things like open files.

# dd if=/dev/sda of=/media/Lexar/infected.img

The ‘dd’ program has some additional options, like block size and so forth, that you’ll want to pay attention to.

screen and tmux

You log in remotely and start some long running tool. Unfortunately, if you log out, all the processes you started will be killed. If you want it to keep running, then you need a tool to do this.

I use ‘screen’. Before I start a long running port scan, I run the “screen” command. Then, I type [ctrl-a][ctrl-d] to disconnect from that screen, leaving it running in the background.

Then later, I type “screen -r” to reconnect to it. If there is more than one screen session, using ‘-r’ by itself will list them all. Use “-r pid” to reattach to the proper one. If you can’t, then use “-D pid” or “-D -RR pid” to force the other session to detach from whoever is using it.

Tmux is an alternative to screen that many use. It’s cool for also having lots of terminal screens open at once.

curl and wget

Sometimes you want to download files from websites without opening a browser. The ‘curl’ and ‘wget’ programs do that easily. Wget is the traditional way of doing this, but curl is a bit more flexible. I use curl for everything these days, except mirroring a website, in which case I just do “wget -m website”.

The thing that makes ‘curl’ so powerful is that it’s really designed as a tool for poking and prodding all the various features of HTTP. That it’s also useful for downloading files is a happy coincidence. When playing with a target website, curl will allow you do lots of complex things, which you can then script via bash. For example, hackers often write their cross-site scripting/forgeries in bash scripts using curl.
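
A few examples of the kind of HTTP poking curl makes easy (the URL, headers, and cookie values are placeholders):

$ curl -s -o /dev/null -w '%{http_code}\n' https://www.example.com/admin
$ curl -s -X POST -H 'Content-Type: application/json' -d '{"user":"test"}' https://www.example.com/api/login
$ curl -s -A 'Mozilla/5.0' -b 'session=abc123' https://www.example.com/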

node/php/python/perl/ruby/lua

As mentioned above, bash is its own programming language. But it’s weird, and annoying. So sometimes you want a real programming language. Here are some useful ones.

Yes, PHP is a language that runs in a web server for creating web pages. But if you know the language well, it’s also a fine command-line language for doing stuff.

Yes, JavaScript is a language that runs in the web browser. But if you know it well, it’s also a great language for doing stuff, especially with the “nodejs” version.

Then there are other good command line languages, like the Python, Ruby, Lua, and the venerable Perl.

What makes all these great is the large library support. Somebody has already written a library that nearly does what you want that can be made to work with a little bit of extra code of your own.

My general impression is that Python and NodeJS have the largest libraries likely to have what you want, but you should pick whichever language you like best, whichever makes you most productive. For me, that’s NodeJS, because of the great Visual Code IDE/debugger.

iptables, iptables-save

I shouldn’t include this in the list. Iptables isn’t a command-line tool as such. The tool is the built-in firewalling/NAT features within the Linux kernel. Iptables is just the command to configure it.

Firewalling is an important part of cybersecurity. Everyone should have some experience playing with a Linux system doing basic firewalling tasks: basic rules, NATting, and transparent proxying for mitm attacks.

Use ‘iptables-save’ in order to persistently save your changes.
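
A few representative commands of the sort you should be comfortable with (interface names and ports are placeholders):

# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
# iptables-save > /etc/iptables/rules.v4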

MySQL

Similar to ‘iptables’, ‘mysql’ isn’t a tool in its own right, but a way of accessing a database maintained by another process on the system.

Filters acting on text files only goes so far. Sometimes you need to dump it into a database, and make queries on that database.

There is also the offensive skill needed to learn how targets store things in a database, and how attackers get the data.

Hackers often publish raw SQL data they’ve stolen in their hacks (like the Ashley Madison dump). Being able to stick those dumps into your own database is quite useful. Hint: disable transaction logging while importing mass data.

If you don’t like SQL, you might consider NoSQL tools like Elasticsearch, MongoDB, and Redis that can similarly be useful for arranging and searching data. You’ll probably have to learn some JSON tools for formatting the data.

Reverse engineering tools

A cybersecurity specialty is “reverse engineering”. Some want to reverse engineer the target software being hacked, to understand vulnerabilities. This is needed for commercial software and device firmware where the source code is hidden. Others use these tools to analyze viruses/malware.

The ‘file’ command uses heuristics to discover the type of a file.

There’s a whole skillset for analyzing PDF and Microsoft Office documents. I play with pdf-parser. There’s a long list at this website:
https://zeltser.com/analyzing-malicious-documents/

There’s a whole skillset for analyzing executables. Binwalk is especially useful for analyzing firmware images.

Qemu is a useful virtual machine. It can emulate full systems, such as an IoT device based on the MIPS processor. Like some other tools mentioned here, it’s more a full subsystem than a simple command-line tool.

On a live system, you can use ‘strace’ to view what system calls a process is making. Use ‘lsof’ to view which files and network connections a process is making.

Password crackers

A common cybersecurity specialty is “password cracking”. There’s two kinds: online and offline password crackers.

Typical online password crackers are ‘hydra’ and ‘medusa’. They can take files containing common passwords and attempt to log on to various protocols remotely, like HTTP, SMB, FTP, Telnet, and so on. I used ‘hydra’ recently in order to find the default/backdoor passwords to many IoT devices I’ve bought recently in my test lab.
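
A rough sketch of pointing hydra at a Telnet service on a lab device (the address and password list are placeholders):

$ hydra -l root -P default-passwords.txt telnet://192.168.1.50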

Online password crackers must open TCP connections to the target, and try to logon. This limits their speed. They also may be stymied by systems that lock accounts, or introduce delays, after too many bad password attempts.

Typical offline password crackers are ‘hashcat’ and ‘jtr’ (John the Ripper). They work off of stolen encrypted passwords. They can attempt billions of passwords-per-second, because there’s no network interaction, nothing slowing them down.

Understanding offline password crackers means getting an appreciation for the exponential difficulty of the problem. A sufficiently long and complex encrypted password is uncrackable. Instead of brute-force attempts at all possible combinations, we must use tricks, like mutating the top million most common passwords.

I use hashcat because of the great GPU support, but John is also a great program.
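
A typical offline run against a file of stolen hashes, using a word list plus mutation rules (this assumes NTLM hashes; the file names are placeholders):

$ hashcat -m 1000 -a 0 stolen-hashes.txt rockyou.txt -r rules/best64.rule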

WiFi hacking

A common specialty in cybersecurity is WiFi hacking. The difficulty in WiFi hacking is getting the right WiFi hardware that supports the features (monitor mode, packet injection), then the right drivers installed in your operating system. That’s why I use Kali rather than some generic Linux distribution, because it’s got the right drivers installed.

The ‘aircrack-ng’ suite is the best for doing basic hacking, such as packet injection. When the parents are letting the iPad babysit their kid with a loud movie at the otherwise quiet coffeeshop, use ‘aircrack-ng’ to deauth the kid.

The ‘reaver’ tool is useful for hacking into sites that leave WPS wide open and misconfigured.
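
The usual aircrack-ng workflow, assuming an adapter that supports monitor mode (the interface name and BSSID are placeholders):

# airmon-ng start wlan0
# airodump-ng wlan0mon
# aireplay-ng --deauth 10 -a AA:BB:CC:DD:EE:FF wlan0mon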

Remote exploitation

A common specialty in cybersecurity is pentesting.

Nmap, curl, and netcat (described above) above are useful tools for this.

Some useful DNS tools are ‘dig’ (described above), dnsrecon/dnsenum/fierce that try to enumerate and guess as many names as possible within a domain. These tools all have unique features, but also have a lot of overlap.

Nikto is a basic tool for probing for common vulnerabilities, out-of-date software, and so on. It’s not really a vulnerability scanner like Nessus used by defenders, but more of a tool for attack.

SQLmap is a popular tool for probing for SQL injection weaknesses.

Then there is ‘msfconsole’. It has some attack features. This is humor – it has all the attack features. Metasploit is the most popular tool for running remote attacks against targets, exploiting vulnerabilities.

Text editor

Finally, there is the decision of text editor. I use ‘vi’ variants. Others like ‘nano’ and variants. There’s no wrong answer as to which editor to use, unless that answer is ‘emacs’.

Conclusion

Obviously, not every cybersecurity professional will be familiar with every tool in this list. If you don’t do reverse-engineering, then you won’t use reverse-engineering tools.

On the other hand, regardless of your specialty, you need to know basic crypto concepts, so you should know something like the ‘openssl’ tool. You need to know basic networking, so things like ‘nmap’ and ‘tcpdump’. You need to be comfortable processing large dumps of data, manipulating it with any tool available. You shouldn’t be frightened by a little sysadmin work.

The above list is therefore a useful starting point for cybersecurity professionals. Of course, those new to the industry won’t have much familiarity with them. But it’s fair to say that I’ve used everything listed above at least once in the last year, and the year before that, and the year before that. I spend a lot of time on StackExchange and Google searching the exact options I need, so I’m not an expert, but I am familiar with the basic use of all these things.

London Has Fallen Copyright Trolls Test Norway After US Retreat

Post Syndicated from Andy original https://torrentfreak.com/london-has-fallen-copyright-trolls-test-norway-after-us-retreat-170120/

While the overall volume of lawsuits continues to fall, copyright trolling is still a live and viable business model in the United States. However, things don’t always go smoothly.

After demanding payments from alleged pirates for some time, last November it was reported that LHF Productions, the company behind the action movie London Has Fallen, was having difficulty with a spirited defendant in one of its cases.

In communications with LHF’s legal team, James Collins and his lawyer J. Christopher Lynch systematically took apart LHF’s claims, threatening to expose their foreign representatives, the notorious Guardaley, MaverickEye and Crystal Bay organizations, and their “fictitious witnesses.”

But just as LHF Productions were dismissing that case, new opportunities were opening up thousands of miles away. According to reports coming out of Norway this week, letters are now being sent out to locals accusing them of downloading London Has Fallen using Popcorn Time and other BitTorrent-based systems.

In common with similar claims elsewhere, the law firm involved (Denmark-based Njord Law) is demanding a cash payment to make a supposed lawsuit go away.

A copy of the letter obtained by Tek.no reveals that 2,700 NOK (around US$320) can make the case disappear. Failure to comply, on the other hand, could result in a court case and damages of around $12,000, the company warns.

Like the UK, where the Citizens Advice Bureau has taken an interest in the activities of copyright trolls, in Norway The Consumer Council (Forbrukerrådet) has also been commenting this week.

“This is a very funny way of working, we think. An IP address is not an indicator that can be used to determine that someone has done something illegal. At least not the specific person – so this would not hold up in court,” their technical director explained.

“First, we wondered if this was to do with fraud, then if the letters were part of a campaign by licensees to inform users that it is illegal to download movies,” he added.

While that was obviously not the case, even the local organization representing the rights of the major US movie studios was quick to distance itself from the activities of the trolls. Willy Johansen, chairman of Norwegian organization Rights Alliance, said the demands have nothing to do with them and his group had already refused to work with the law firm.

“Njord says they represent producer companies directly in the United States. We have told them clearly that in Norway we do not want to go against consumers in this way,” Johansen said.

So what should recipients of these letters do? According to the Consumer Council, the answer is to dispute the claim. Torgeir Waterhouse of Internet interest group ICT Norway suggests going a step further.

“They claim to have a case, but they have not – at best they have identified the correct broadband subscription at the time the movie was downloaded. I strongly recommend that everyone who receives this letter does not pay,” he told Side3.no.

“We want the Norwegian Data Protection Authority to look at this. One thing is the collection of information, but another thing is that we know nothing about the processing of the information and if it can be presented as evidence in a trial.”

While it is clearly scary for people to receive these kinds of letters, it is only because recipients cave in and pay that the business model keeps rolling. Whether in the US, Europe, or elsewhere, trolls like Guardaley will continue until the money dries up – or someone in authority stops them.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Introducing the AWS IoT Button Enterprise Program

Post Syndicated from Tara Walker original https://aws.amazon.com/blogs/aws/introducing-the-aws-iot-button-enterprise-program/

The AWS IoT Button first made its appearance on the IoT scene in October of 2015 at AWS re:Invent with the introduction of the AWS IoT service.  That year all re:Invent attendees received the AWS IoT Button providing them the opportunity to get hands-on with AWS IoT.  Since that time, the AWS IoT Button has been made broadly available to anyone interested in the clickable IoT device.

During this past AWS re:Invent 2016 conference, the AWS IoT button was launched into the enterprise with the AWS IoT Button Enterprise Program.  This program is intended to help businesses to offer new services or improve existing products at the click of a physical button.  With the AWS IoT Button Enterprise Program, enterprises can use a programmable AWS IoT Button to increase customer engagement, expand applications and offer new innovations to customers by simplifying the user experience.  By harnessing the power of IoT, businesses can respond to customer demand for their products and services in real-time while providing a direct line of communication for customers, all via a simple device.


AWS IoT Button Enterprise Program

Let’s discuss how the new AWS IoT Button Enterprise Program works.  Businesses start by placing a bulk order of the AWS IoT buttons and provide a custom label for the branding of the buttons.  Amazon manufactures the buttons and pre-provisions the IoT button devices by giving each a certificate and unique private key to grant access to AWS IoT and ensure secure communication with the AWS cloud.  This allows for easier configuration and helps customers more easily get started with the programming of the IoT button device.

Businesses then design and build their IoT solution with the button devices and create companion applications for the devices.  The AWS IoT Button Enterprise Program provides businesses some complimentary assistance directly from AWS to ensure a successful deployment.  The deployed devices then would only need to be configured with Wi-Fi at user locations in order to function.


For enterprises, there are several use cases that would benefit from the implementation of an IoT button solution. Here are some ideas:

  • Reordering services or custom products such as pizza or medical supplies
  • Requesting a callback from a customer service agent
  • Retail operations such as a call for assistance button in stores or restaurants
  • Inventory systems for capturing products amounts for inventory
  • Healthcare applications such as alert or notification systems for the disabled or elderly
  • Interface with Smart Home systems to turn devices on and off such as turning off outside lights or opening the garage door
  • Guest check-in/check-out systems


AWS IoT Button

At the heart of the AWS IoT Button Enterprise Program is the AWS IoT Button.  The AWS IoT button is a 2.4GHz Wi-Fi device with WPA2-PSK enabled that has three click types: Single click, Double click, and Long press.  Note that a Long press click type is sent if the button is pressed for 1.5 seconds or longer.  The IoT button has a small LED light with color patterns for the status of the IoT button.  A blinking white light signifies that the IoT button is connecting to Wi-Fi and getting an IP address, while a blinking blue light signifies that the button is in wireless access point (AP) mode.  The data payload that is sent from the device when pressed contains the device serial number, the battery voltage, and the click type.
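
As a reference, the JSON payload published by the button looks roughly like this (the serial number and voltage shown are placeholder values):

{
    "serialNumber": "G030XXXXXXXXXXXX",
    "batteryVoltage": "1543mV",
    "clickType": "SINGLE"
}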

Currently, there are 3 ways to get started building your AWS IoT button solution.  The first option is to use the AWS IoT Button companion mobile app.  The mobile app will create the required AWS IoT resources, including the creation of the TLS 1.2 certificates, and create an AWS IoT rule tied to AWS Lambda.  Additionally, it will enable the IoT button device via AWS IoT to be an event source that invokes a new AWS Lambda function of your choosing from the Lambda blueprints.  You can download the aforementioned mobile apps for Android and iOS below.


The second option is to use the AWS Lambda Blueprint Wizard as an easy way to start using your AWS IoT Button. Like the mobile app, the wizard will create the required AWS IoT resources for you and add an event source to your button that invokes a new Lambda function.

The third option is to follow the step by step tutorial in the AWS IoT getting started guide and leverage the AWS IoT console to create these resources manually.

Once you have configured your IoT button successfully and created a simple one-click solution using one of the aforementioned getting started guides, you should be ready to start building your own custom IoT button solution.   Using a click of a button, your business will be able to build new services for customers, offer new features for existing services, and automate business processes to operate more efficiently.

The basic technical flow of an AWS IoT button solution is as follows:

  • A button is clicked and secure connection is established with AWS IoT with TLS 1.2
  • The button data payload is sent to AWS IoT Device Gateway
  • The rules engine evaluates received messages (JSON) published into AWS IoT and performs actions or triggers AWS services based on defined business rules (see the example rule below).
  • The triggered AWS service executes or the action is performed
  • The device state can be read, stored and set with Device Shadows
  • Mobile and Web Apps can receive and update data based upon action
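
For example, a rule that matches only single clicks from any button could use a SQL statement along these lines (a sketch; it assumes the default iotbutton/+ topic that AWS IoT buttons publish to):

SELECT * FROM 'iotbutton/+' WHERE clickType = 'SINGLE'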

Now that you have general knowledge about the AWS IoT button, we should jump into a technical walk-through of building an AWS IoT button solution.


AWS IoT Button Solution Walkthrough

We will dive more deeply into building an AWS IoT Button solution with a quick example of a use case for providing one-click customer service options for a business.

To get started, I will go to the AWS IoT console, register my IoT button as a Thing and create a Thing type.  In the console, I select the Registry and then Things options in console menu.

The name of my IoT thing in this example will be TEW-AWSIoTButton.  If you desire to categorize the IoT things, you can create a Thing type and assign a type to similar IoT ‘things’.  I will categorize my IoT thing, TEW-AWSIoTButton, as an IoTButton thing type with a One-click-device attribute key and select Create thing button.

After my AWS IoT button device, TEW-AWSIoTButton, is registered in the Thing Registry, the next step is to acquire the required X.509 certificate and keys.  I will have AWS IoT generate the certificate for this device, but the service also allows you to use your own certificates.  Authenticating the connection with the X.509 certificates helps to protect the data exchange between your device and AWS IoT service.

When the certificates are generated with AWS IoT, it is important that you download and save all of the files created since the public and private keys will not be available after you leave the download page. Additionally, do not forget to download the root CA for AWS IoT from the link provided on the page with your generated certificates.

The newly created certificate will be inactive, therefore, it is vital that you activate the certificate prior to use.  AWS IoT uses the TLS protocol to authenticate the certificates using the TLS protocol’s client authentication mode.  The certificates enable asymmetric keys to be used with devices, and AWS IoT service will request and validate the certificate’s status and the AWS account against a registry of certificates.  The service will challenge for proof of ownership of the private key corresponding to the public key contained in the certificate.  The final step in securing the AWS IoT connection to my IoT button is to create and/or attach an IAM policy for authorization.

I will choose the Attach a policy button and then select the Create a Policy option in order to build a specific policy for my IoT button.  In the Name field of the new IoT policy, I will enter IoTButtonPolicy as the name of this new policy. Since the AWS IoT Button device only supports button presses, our AWS IoT button policy only needs to grant publish permissions.  For this reason, this policy will only allow the iot:Publish action.

 

For the Resource ARN of the IoT policy, AWS IoT buttons typically follow the format pattern arn:aws:iot:TheRegion:AWSAccountNumber:topic/iotbutton/ButtonSerialNumber.  This means that the Resource ARN for this IoT button policy follows that pattern, with my own region, account number, and button serial number substituted in.

I should note that if you are creating an IoT policy for a device that is not an AWS IoT button, the Resource ARN format pattern would instead be: arn:aws:iot:TheRegion:AWSAccountNumber:topic/YourTopic/OptionalSubTopic

The created policy for our AWS IoT Button, IoTButtonPolicy, looks as follows:
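
The console screenshot of the finished policy is not reproduced here. As a minimal sketch, an equivalent policy document created with the SDK might look like this (the region, account number and serial number in the ARN are placeholders):

// Sketch: an IoT policy that only allows publishing to this button's own topic.
var AWS = require("aws-sdk");
var iot = new AWS.Iot({region: "us-east-1"}); // placeholder region

var policyDocument = {
    Version: "2012-10-17",
    Statement: [{
        Effect: "Allow",
        Action: "iot:Publish",
        // Placeholder ARN: substitute your region, account number and button serial number.
        Resource: "arn:aws:iot:us-east-1:123456789012:topic/iotbutton/G030XXXXXXXXXXXX"
    }]
};

iot.createPolicy({
    policyName: "IoTButtonPolicy",
    policyDocument: JSON.stringify(policyDocument)
}).promise()
.then(function (data) { console.log("Created policy:", data.policyArn); })
.catch(function (err) { console.error(err); });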

The next step is to return to the AWS IoT console dashboard and select the Security and then Certificates menu options.  I will choose the certificate created in the aforementioned steps.

Then, on the selected certificate page, I will open the Actions dropdown in the top right corner.  In order to add the IoTButtonPolicy IoT policy to the certificate, I will click the Attach policy option.

 

I will repeat all of the steps mentioned above but this time I will add the TEW-AWSIoTButton thing by selecting the Attach thing option.
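
Both attachments can also be made programmatically. A sketch, assuming the certificate ARN returned by the earlier certificate step (the ARN shown is a placeholder):

// Sketch: attach the IoT policy and the thing to the device certificate.
var AWS = require("aws-sdk");
var iot = new AWS.Iot({region: "us-east-1"}); // placeholder region

// Placeholder: use the certificateArn returned by createKeysAndCertificate.
var certificateArn = "arn:aws:iot:us-east-1:123456789012:cert/abc123";

iot.attachPrincipalPolicy({policyName: "IoTButtonPolicy", principal: certificateArn}).promise()
.then(function () {
    return iot.attachThingPrincipal({thingName: "TEW-AWSIoTButton", principal: certificateArn}).promise();
})
.then(function () { console.log("Policy and thing attached to the certificate"); })
.catch(function (err) { console.error(err); });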

All that is left is to add the certificate and private key to the physical AWS IoT button and connect the AWS IoT Button to Wi-Fi in order to make the IoT button fully functional.

Important to note: for businesses that have signed up to participate in the AWS IoT Button Enterprise Program, all of the aforementioned steps (button logo branding, AWS IoT thing creation, certificate and key creation, and adding certificates to the buttons) are completed for them by Amazon and AWS.  Again, this is to help make it easier for enterprises to hit the ground running in the development of their desired AWS IoT button solution.

Now, going back to the AWS IoT button used in our example, I will connect the button to Wi-Fi by holding the button until the LED blinks blue; this means that the device has gone into wireless access point (AP) mode.

In order to provide internet connectivity to the IoT button and start configuring the device’s connection to AWS IoT, I will connect to the button’s Wi-Fi network which should start with Button ConfigureMe. The first time the connection is made to the button’s Wi-Fi, a password will be required.  Enter the last 8 characters of the device serial number shown on the back of the physical AWS IoT button device.

The AWS IoT button is now configured and ready for us to build a system around it. The next step is to add the actions that will be performed when the IoT button is pressed.  This brings us to the AWS IoT rules engine, which is used to analyze the IoT device data payload coming from the MQTT topic stream and/or Device Shadow, and to trigger AWS service actions.  We will set up rules to perform different actions when different types of button presses are detected.

Our AWS IoT button solution will be a simple one: we will set up two AWS IoT rules to respond to the IoT button being clicked and the button's payload being sent to AWS IoT.  In our scenario, a single button click represents a customer's request for help from a fictional organization's customer service agent.  A double click, in contrast, represents a request for a text message containing the customer's fictional current account status.

The first AWS IoT rule created will receive the IoT button payload and connect directly to Amazon SNS to send an email only if the rule condition is fulfilled that the button click type is SINGLE. The second AWS IoT rule created will invoke a Lambda function that will send a text message containing customer account status only if the rule condition is fulfilled that the button click type is DOUBLE.

In order to create the AWS IoT rule that will send an email to subscribers of an SNS topic requesting a customer service agent's help, we will go to Amazon SNS and create an SNS topic.

I will create an email subscription to the topic with the fictional subscribed customer service email, which in this case is just my email address.  Of course, this could be several customer service representatives that are subscribed to the topic in order to receive emails for customer assistance requests.
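
As a sketch, the same topic and email subscription could be created with the SDK (the topic name, region and email address are placeholders):

// Sketch: create the SNS topic and subscribe a customer service email address.
var AWS = require("aws-sdk");
var sns = new AWS.SNS({region: "eu-west-1"}); // placeholder region

sns.createTopic({Name: "aws-iot-button-topic-email"}).promise() // placeholder topic name
.then(function (data) {
    return sns.subscribe({
        TopicArn: data.TopicArn,
        Protocol: "email",
        Endpoint: "customer-service@example.com" // placeholder address; the recipient must confirm the subscription
    }).promise();
})
.then(function () { console.log("Email subscription requested"); })
.catch(function (err) { console.error(err); });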

Now returning to the AWS IoT console, I will select the Rules menu and choose the Create rule option. I first provide a name and description for the rule.

Next, I select the SQL version to be used by the AWS IoT rules engine.  I select the latest SQL version; if I did not set a version, the default version of 2015-10-08 would be used. The rules engine uses a SQL-like syntax with statements containing SELECT, FROM, and WHERE clauses.  I want to return a literal string for the message, which is not part of the IoT button data payload.  I also want to return the button serial number as accountnum, an alias that likewise does not exist in the payload.  Since the latest version, 2016-03-23, supports literal objects, I will be able to send a custom payload to Amazon SNS.

With the rule created, all that is left is to add a rule action to perform when the rule is matched.  As I mentioned above, an email should be sent to customer service representatives when this rule is triggered by a single IoT button press.  Therefore, my rule action will be Send a message as an SNS push notification, targeting the SNS topic I created to send an email to our fictional customer service reps (in this case, me). Remember that an IAM role is required to grant access to SNS resources; if you are using the console, you have the option to create a new role or update an existing role with the correct permissions.  Also, since I am sending a custom message and pushing to SNS, I select the Message format type RAW.
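
Put together, the rule built in the console is roughly equivalent to the following SDK sketch; the SQL returns an illustrative literal message plus the serial number aliased as accountnum, and the SNS action uses the RAW message format (the rule name, message text and ARNs are placeholders):

// Sketch: the single-click rule, expressed with the SDK instead of the console.
var AWS = require("aws-sdk");
var iot = new AWS.Iot({region: "us-east-1"}); // placeholder region

iot.createTopicRule({
    ruleName: "IoTButtonSingleClickEmail", // placeholder rule name
    topicRulePayload: {
        awsIotSqlVersion: "2016-03-23",
        // A literal message plus the serial number aliased as accountnum,
        // matched only for single presses.
        sql: "SELECT 'A customer has requested assistance' AS message, " +
             "serialNumber AS accountnum " +
             "FROM 'iotbutton/+' WHERE clickType = 'SINGLE'",
        actions: [{
            sns: {
                targetArn: "arn:aws:sns:eu-west-1:123456789012:aws-iot-button-topic-email", // placeholder topic ARN
                roleArn: "arn:aws:iam::123456789012:role/iot-sns-publish-role",             // placeholder role ARN
                messageFormat: "RAW"
            }
        }]
    }
}).promise()
.then(function () { console.log("Rule created"); })
.catch(function (err) { console.error(err); });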

Our rule has been created; now all that is left is to test that an email is successfully sent when the AWS IoT button is pressed once, and therefore the data payload has a click type of SINGLE.

A single press of our AWS IoT Button, and the custom message is published to the SNS topic; the email shown below was then sent to the subscribed customer service agents' email addresses, in this example to my email address.

 

Next, we will create the AWS IoT rule that sends a text via Lambda and an SNS topic for the scenario in which a customer requests that their account status be sent when the IoT button is pressed twice.  We will start by creating an AWS IoT rule with an AWS Lambda action.  To create this IoT rule, we first need to create a Lambda function and an SNS topic with an SMS (text) subscription.

First, I will go to the Amazon SNS console and create an SNS topic. After the topic is created, I will create an SNS text (SMS) subscription for the topic and add the phone number that will receive the text messages. I will then copy the SNS topic ARN for use in my Lambda function. Please note that I am creating this SNS topic in a different region than the previously created topic, in order to use a region that supports sending SMS via SNS. In the Lambda function, I will need to ensure the correct region for the SNS topic is used by passing the region as a parameter to the constructor of the SNS object. The created SNS topic, aws-iot-button-topic-text, is shown below.
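
As a sketch, the text topic and SMS subscription could be created like this; the region matches the one used in the Lambda function below, and the phone number is a placeholder:

// Sketch: create the SMS topic in a region that supports sending SMS via SNS.
var AWS = require("aws-sdk");
var sns = new AWS.SNS({region: "us-east-1"}); // same region as used in the Lambda function

sns.createTopic({Name: "aws-iot-button-topic-text"}).promise()
.then(function (data) {
    console.log("Topic ARN for the Lambda function:", data.TopicArn);
    return sns.subscribe({
        TopicArn: data.TopicArn,
        Protocol: "sms",
        Endpoint: "+15555550100" // placeholder phone number
    }).promise();
})
.then(function () { console.log("SMS subscription created"); })
.catch(function (err) { console.error(err); });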

 

We now go to the AWS Lambda console and create a Lambda function with an AWS IoT trigger, setting the IoT Type to IoT Button and entering the Device Serial Number found on the back of our AWS IoT Button. There is no need to generate a certificate and keys in this step because the AWS IoT button is already configured with certificates and keys for secure communication with AWS IoT.

The next step is to create the Lambda function, IoTNotifyByText, with the following code that will receive the IoT button data payload and create a message to publish to Amazon SNS.

'use strict';

console.log('Loading function');
var AWS = require("aws-sdk");
var sns = new AWS.SNS({region: 'us-east-1'});

exports.handler = (event, context, callback) => {
    // Serialize the incoming IoT button payload for logging
    var iotPayload = JSON.stringify(event, null, 2);

    // Create a text message from the IoT payload
    var snsMessage = "Attention: Customer Info for Account #: " + event.accountnum + " Account Status: In Good Standing " +
    "Balance is: 1234.56";
    
    // Log payload and SNS message string to the console and for CloudWatch Logs 
    console.log("Received AWS IoT payload:", iotPayload);
    console.log("Message to send: " + snsMessage);
    
    // Create params for SNS publish using SNS Topic created for AWS IoT button
    // Populate the parameters for the publish operation using required JSON format
    // - Message : message text 
    // - TopicArn : the ARN of the Amazon SNS topic  
    var params = {
        Message: snsMessage,
        TopicArn: "arn:aws:sns:us-east-1:xxxxxxxxxxxx:aws-iot-button-topic-text"
     };
     
     sns.publish(params, context.done);
};

All that is left for us to do is to alter the AWS IoT rule that was automatically created when we created the Lambda function with an AWS IoT trigger. Therefore, we will go to the AWS IoT console and select the Rules menu option. We will find and select the IoT button rule created by Lambda, which usually has a name suffixed with the IoT button's device serial number.

 

Once the rule is selected, we will choose the Edit option beside the Rule query statement section.

We change the SELECT statement to return the serial number as accountnum and click the Update button to save the changes to the AWS IoT rule.
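
The edited query is not shown as a screenshot here. As a sketch, the same change could be made with the SDK by replacing the rule's payload while keeping the Lambda action (the rule name, serial number and function ARN are placeholders):

// Sketch: update the auto-created rule so the payload carries accountnum for double clicks.
var AWS = require("aws-sdk");
var iot = new AWS.Iot({region: "us-east-1"}); // placeholder region

iot.replaceTopicRule({
    ruleName: "iotbutton_G030XXXXXXXXXXXX", // placeholder: the rule created for the button
    topicRulePayload: {
        awsIotSqlVersion: "2016-03-23",
        sql: "SELECT serialNumber AS accountnum " +
             "FROM 'iotbutton/G030XXXXXXXXXXXX' WHERE clickType = 'DOUBLE'", // placeholder serial number
        actions: [{
            lambda: {functionArn: "arn:aws:lambda:us-east-1:123456789012:function:IoTNotifyByText"} // placeholder ARN
        }]
    }
}).promise()
.then(function () { console.log("Rule updated"); })
.catch(function (err) { console.error(err); });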

Time to test. I click the IoT button twice and wait for the green LED light to appear, confirming that a successful connection was made and a message was published to AWS IoT. After a few seconds, a text message with the fictitious customer account information is received on my phone.

 

This was a simple example of how a business could leverage the AWS IoT Button in order to build business solutions for its customers.  With the new AWS IoT Button Enterprise Program, which helps businesses obtain the quantities of AWS IoT buttons needed and provides AWS IoT service pre-provisioning and deployment support, businesses can now easily get started building their own customized IoT solutions.

Available Now

The original 1st generation AWS IoT button is currently available on Amazon.com, and the 2nd generation AWS IoT button will be generally available in February.  The main difference between the IoT buttons is the amount of battery life and the number of clicks available per button.  Please note that right now, if you purchase the original AWS IoT button, you will receive $20 in AWS credits when you register.

Businesses can sign up today for the AWS IoT Button Enterprise Program currently in Limited Preview. This program is designed to enable businesses to expand their existing applications or build new IoT capabilities with the cloud and a click of an IoT button device.  You can read more about the AWS IoT button and learn more about building solutions with a programmable IoT button on the AWS IoT Button product page.  You can also dive deeper into the AWS IoT service by visiting the AWS IoT developer guide, the AWS IoT Device SDK documentation, and/or the AWS Internet of Things Blog.

 

Tara

Landmark Movie Streaming Trial Gets Underway in Sweden

Post Syndicated from Andy original https://torrentfreak.com/landmark-movie-streaming-trial-gets-underway-in-sweden-170116/

Founded half a decade ago, Swefilmer grew to become Sweden’s most popular movie and TV show streaming site. Together with Dreamfilm, another site operating in the same niche, Swefilmer is said to have accounted for 25% of all web TV viewing in Sweden.

In the summer of 2015, local man Ola Johansson revealed that he’d been raided by the police under suspicion of being involved in running the site. In March 2015, a Turkish national was arrested in Germany on a secret European arrest warrant. The now 26-year-old was accused of receiving donations from users and setting up Swefilmer’s deals with advertisers.

In a subsequent indictment filed at the Varberg District Court, the men were accused of copyright infringement offenses relating to the unlawful distribution of more than 1,400 movies. However, just hours after the trial got underway last June, it was suspended, when a lawyer for one of the men asked to wait for an important EU copyright case to run its course.

That case, between Dutch blog GeenStijl.nl and Playboy, had seen a Dutch court ask the EU Court of Justice to rule whether unauthorized links to copyrighted content could be seen as a ‘communication to the public’ under Article 3(1) of the Copyright Directive, and whether those links facilitated copyright infringement.

Last September, the European Court of Justice ruled that it is usually acceptable to link to copyrighted content without permission when people are not aware content is infringing and when they do so on a non-profit basis. In commercial cases, the rules are much more strict.

The Swefilmer site

In light of that ruling, the pair return to the Varberg District Court today, accused of making more than $1.5m from their activities between November 2013 and June 2015.

While Swedish prosecutions against sites such as The Pirate Bay have made global headlines, the case against Swefilmer is the first of its kind against a stream-links portal. Prosecutor Anna Ginner and Rights Alliance lawyer Henrik Pontén believe they have the evidence needed to take down the pair.

“Swefilmer is a typical example of how a piracy operation looks today: fully commercial, well organized and great efforts expended to conceal itself. This applies particularly to the principal of the site,” Pontén told IDG.

According to Ginner, the pair ran an extensive operation and generated revenues from a number of advertising companies. They did not act alone, but the duo were the ones identified by, among other things, their IP addresses.

The 26-year-old, who was arrested in Germany, was allegedly the money man who dealt with the advertisers. In addition to copyright infringement offenses, he stands accused of money laundering.

According to IDG, he will plead not guilty. His lawyer gave no hints why but suggested the reasons will become evident during the trial.

The younger man, who previously self-identified as Ola Johansson, is accused of being the day-to-day operator of the site, which included uploading movies to other sites that Swefilmer linked to. He is said to have received a modest sum for his work, around $3,800.

“I think what’s interesting for the Swedish court is that this case has such clear elements of organized crime compared to what we have seen before,” Anna Ginner concludes.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Piracy Notices? There Shouldn’t Be Many UK Torrent Users Left to Warn

Post Syndicated from Andy original https://torrentfreak.com/piracy-notices-there-shouldnt-be-many-uk-torrent-users-left-to-warn-170115/

Later this month in partnership with the Creative Content UK (CCUK) initiative, four major ISPs will begin sending warning notices to subscribers whose connections are being used to pirate content.

BT, Sky, TalkTalk and Virgin Media are all involved in the scheme, which will be educational in tone and designed to encourage users towards legitimate services. The BBC obtained a copy of the email due to be sent out, and it’s very inoffensive.

“Get it Right is a government-backed campaign acting for copyright owners who think their content’s been shared without their permission,” the notice reads.

“It looks like someone has been using your broadband to share copyrighted material (that means things like music, films, sport or books). And as your broadband provider, we have to let you know when this happens.”

The notice then recommends where people can obtain tips to ensure that the unlawful sharing doesn’t happen again. Since the scheme will target mainly BitTorrent users, it’s likely that one of the tips will be to stop using torrents to obtain content. However, that in itself should be an eyebrow-raising statement in the UK.

For the past several years, UK Internet service providers – including all of the ones due to send out piracy notices this month – have been blocking all of the major torrent sites on the orders of the High Court. The Pirate Bay, KickassTorrents (and all their variants), every site in the top 10 most-visited torrent list and hundreds more, are all blocked at the ISP level in the UK.

By any normal means, no significant public torrent sites can be accessed by any subscriber from any major UK ISP and it’s been that way for a long time. Yet here we are in 2017 preparing to send up to 2.5 million warning notices a year to UK BitTorrent users. Something doesn’t add up.

According to various industry reports, there are around six million Internet pirates in the UK, which give or take is around 10% of the population. If we presume that a few years ago the majority were using BitTorrent, they could have conceivably received a couple of notices each per year.

However, if site-blocking is as effective as the music and movie industries claim it to be, then these days we should be looking at a massive decrease in the number of UK BitTorrent users. After all, if users can’t access the sites then they can’t download the .torrent files or magnet links they offer. If users can’t get those, then no downloads can take place.

While this is probably true for some former torrent users, it is obvious that massive site blocking efforts are being evaded on an industrial scale. With that in mind, the warning notices will still go out in large numbers but only to people who are savvy enough to circumvent a blockade but don’t take any other precautions as far as torrent transfers are concerned.

For others, who already turned to VPNs to give them access to blocked torrent sites, the battle is already over. They will never see a warning notice from their ISP and sites will remain available for as long as they stay online.

There’s also another category of users who migrated away from torrents to streaming sites. Users began to notice web-based streaming platforms in their millions when The Pirate Bay was first blocked several years ago, and they have only gained in popularity since. Like VPN users, people who frequent these sites will never see an ISP piracy notice.

Finally, there are those users who don’t understand torrents or web-based streaming but still use the latter on a daily basis via modified Kodi setups. These boxes or sticks utilize online streaming platforms so their users’ activities cannot be tracked. They too will receive no warnings. The same can be said about users who download from online hosting sites, such as Uploaded and Rapidgator.

So, if we trim this down, we’re looking at an educational notice scheme that will mainly target UK pirates who are somehow able to circumvent High Court blockades but do not conceal their IP addresses. How many of these semi-determined pirates exist is unclear but many are likely to receive ‘educational’ notices in the coming months.

Interestingly, the majority of these users will already be well aware that file-sharing copyrighted content is illegal, since when they’ve tried to access torrent sites in recent years they’ve all received a “blocked” message which mentions copyright infringement and the High Court.

When it comes to the crunch, this notice scheme has come several years too late. Technology has again outrun the mitigation measures available, and notices are now only useful as part of a basket of measures.

That being said, no one in the UK will have their Internet disconnected or throttled for receiving a notice. That’s a marked improvement over what was being proposed six years ago as part of the Digital Economy Act. Furthermore, the notices appear to be both polite and considered. On that basis, consumers should have little to complain about.

And, if some people do migrate to services like Netflix and Spotify, that will only be a good thing. Just don’t expect them to give up pirating altogether since not only are pirates the industry’s best customers, site blockades clearly don’t work.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Pirate Bay Offered to Help Catch Criminals But Copyright Got in the Way

Post Syndicated from Andy original https://torrentfreak.com/pirate-bay-offered-to-help-catch-criminals-but-copyright-got-in-the-way-170109/

If The Pirate Bay manages to navigate the stormy waters of the Internet for another couple of years, it will have spent an unprecedented decade-and-a-half thumbing its nose at the authorities. Of course, that has come at a price.

The authorities’ interest in The Pirate Bay remains at a high and, given the chance, police in some countries would happily take down the world’s most prominent copyright scofflaw. However, painting the site as having no respect for any law would be doing it a disservice. In fact, at one point it even offered to work with the police.

The revelations follow the publication of a shocking article by Aftonbladet (Swedish) which details how, over an extended period, its reporters monitored dozens of people sharing images of child abuse online. The publication even met up with some of its targets and conducted interviews in person.

One of the people to comment on the extraordinary piece is Tobias Andersson, an early spokesperson of free-sharing advocacy group Piratbyrån (Pirate Bureau) and The Pirate Bay. Interestingly, Andersson reveals how The Pirate Bay offered to help police catch these kinds of offenders many years ago.

“A ‘fun’ thing about my time at the Pirate Bureau and The Pirate Bay was when the National Police, during the middle of the trial against us, called and wanted to consult about [abuse images] and TPB,” Andersson says.

The former site spokesperson, who also had more recent responsibility at The Promo Bay project, says he went to meet the police where he spoke with an officer and a technician. They had a specific request – to implement a filter to stop certain content appearing on the site.

“They wanted us to block certain [abuse-related] keywords,” Andersson explains.

Of course, keyword filters are notoriously weak and easily circumvented. So, instead, Andersson suggested another route the authorities might take which, due to the very public nature of torrent sharing (especially more than a decade ago when people were less privacy-conscious), might make actual perpetrators more easy to catch.

“I told [the police] how they could see the IP addresses in a [BitTorrent] client belonging to those who were sharing the content,” Andersson explains.

“I showed them how to start a torrent at 0.1kb/s download to be able to see the client list but without sharing anything. Which is not really rocket science,” the TPB and Piratbyrån veteran informs TorrentFreak.

Somewhat disappointingly, however, the police were unresponsive.

“They were not at all interested,” he says.

“Our skilled moderators [on The Pirate Bay] routinely deleted everything that could be suspected to be child porn, but still people tried to post it again and again. I wanted to explain to the police that we could easily identify most of stuff being posted but they were totally uninterested.”

Meanwhile, however, Hollywood and the recording industries were working with Swedish police on a highly expensive and complex technical case to bring down The Pirate Bay on copyright grounds. Sadly, it was to be further copyright-related demands that would bring negotiations on catching more serious offenders to an end.

“Because we refused to censor [The Pirate Bay’s] search to remove, for example, a crappy Stanley Kubrick movie, our ‘cooperation’ with the police ended there. Too bad, because we could have easily provided them with lists [of offenders] like those Aftonbladet reported today,” Andersson concludes.

Today’s revelations mark the second time The Pirate Bay has been shown to work with authorities to trap serious criminals. In 2013, the site provided evidence to TorrentFreak which showed notorious copyright troll outfit Prenda Law uploaded “honey-pot” torrents to the site. The principals of that organization are now facing charges of extortion and fraud.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.