
TVAddons Suffers Big Setback as Court Completely Overturns Earlier Ruling

Post Syndicated from Andy original https://torrentfreak.com/tvaddons-suffers-big-setback-as-court-completely-overturns-earlier-ruling-180221/

On June 2, 2017 a group of Canadian telecoms giants including Bell Canada, Bell ExpressVu, Bell Media, Videotron, Groupe TVA, Rogers Communications and Rogers Media, filed a complaint in Federal Court against Montreal resident, Adam Lackman.

Better known as the man behind Kodi addon repository TVAddons, Lackman was painted as a serial infringer in the complaint. The telecoms companies said that, without gaining permission from rightsholders, Lackman communicated copyrighted TV shows including Game of Thrones, Prison Break, The Big Bang Theory, America’s Got Talent, Keeping Up With The Kardashians and dozens more, by developing, hosting, distributing and promoting infringing Kodi add-ons.

To limit the harm allegedly caused by TVAddons, the complaint demanded interim, interlocutory, and permanent injunctions restraining Lackman from developing, promoting or distributing any of the allegedly infringing add-ons or software. On top, the plaintiffs requested punitive and exemplary damages, plus costs.

On June 9, 2017 the Federal Court handed down a time-limited interim injunction against Lackman ex parte, without Lackman being able to mount a defense. Bailiffs took control of TVAddons’ domains but the most controversial move was the granting of an Anton Piller order, a civil search warrant which granted the plaintiffs no-notice permission to enter Lackman’s premises to secure evidence before it could be tampered with.

The order was executed June 12, 2017, with Lackman’s home subjected to a lengthy search during which the Canadian was reportedly refused his right to remain silent. Non-cooperation with an Anton Piller order can amount to a contempt of court, he was told.

With the situation seemingly spinning out of Lackman’s control, unexpected support came from the Honourable B. Richard Bell during a subsequent June 29, 2017 Federal Court hearing to consider the execution of the Anton Piller order.

The Judge said that Lackman had been subjected to a search “without any of the protections normally afforded to litigants in such circumstances” and took exception to the fact that the plaintiffs had ordered Lackman to spill the beans on other individuals in the Kodi addon community. He described this as a hunt for further evidence, rather than the evidence-preservation exercise it should have been.

Justice Bell concluded by ruling that while the prima facie case against Lackman may have appeared strong before the judge who heard the matter ex parte, the subsequent adversarial hearing undermined it, to the point that it no longer met the threshold.

As a result of these failings, Judge Bell vacated the Anton Piller order and dismissed the application for interlocutory injunction.

While this was an early victory for Lackman and TVAddons, the plaintiffs appealed, and the matter was heard by a three-judge panel on November 29, 2017. The panel’s decision, signed by Justice Yves de Montigny, was handed down Tuesday and it effectively turns the earlier ruling upside down.

The appeal had two matters to consider: whether Justice Bell made errors when he vacated the Anton Piller order, and whether he made errors when he dismissed the application for an interlocutory injunction. In short, the panel found that he did.

In a 27-page ruling, the first key issue concerns Justice Bell’s understanding of the nature of both Lackman and TVAddons.

The telecoms companies complained that the Judge got it wrong when he characterized Lackman as a software developer who came up with add-ons that permit users to access material “that is for the most part not infringing on the rights” of the telecoms companies.

The companies also challenged the Judge’s finding that the infringing add-ons offered by the site represented “just over 1%” of all the add-ons developed by Lackman.

“I agree with the [telecoms companies] that the Judge misapprehended the evidence and made palpable and overriding errors in his assessment of the strength of the appellants’ case,” Justice Yves de Montigny writes in the ruling.

“Nowhere did the appellants actually state that only a tiny proportion of the add-ons found on the respondent’s website are infringing add-ons.”

The confusion appears to have arisen from the fact that while TVAddons offered 1,500 add-ons in total, the heavily discussed ‘featured’ addon category on the site contained just 22 add-ons, 16 of which were considered to be infringing according to the original complaint. So, it was 16 add-ons out of 22 being discussed, not 16 add-ons out of a possible 1,500.
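
To put numbers on the two readings: 16 out of 1,500 add-ons is roughly 1.1%, the source of the “just over 1%” figure, whereas 16 out of the 22 featured add-ons is roughly 73%.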

“[Justice Bell] therefore clearly misapprehended the evidence in this regard by concluding that just over 1% of the add-ons were purportedly infringing,” the appeals Judge adds.

After gaining traction with Justice Bell in the previous hearing, Lackman’s assertion that his add-ons were akin to a “mini Google” was fiercely contested by the telecoms companies. It also fell flat before the appeal panel.

Justice de Montigny says that Justice Bell “had been swayed” when Lackman’s expert replicated the discovery of infringing content using Google, but that he had failed to grasp the important differences between a general search engine and a dedicated Kodi add-on.

“While Google is an indiscriminate search engine that returns results based on relevance, as determined by an algorithm, infringing add-ons target predetermined infringing content in a manner that is user-friendly and reliable,” the Judge writes.

“The fact that a search result using an add-on can be replicated with Google is of little consequence. The content will always be found using Google or any other Internet search engine because they search the entire universe of all publicly available information. Using addons, however, takes one to the infringing content much more directly, effortlessly and safely.”

With this in mind, Justice de Montigny says there is a “strong prima facie case” that Lackman, by hosting and distributing infringing add-ons, made the telecoms companies’ content available to the public “at a time of their choosing”, thereby infringing paragraph 2.4(1.1) and section 27 of the Copyright Act.

On TVAddons itself, the Judge said that the platform is “clearly designed” to facilitate access to infringing material since it targets “those who want to circumvent the legal means of watching television programs and the related costs.”

Turning to Lackman, the Judge said he could not claim to have no knowledge of the infringing content delivered by the add-ons distributed on his site, since they were purposefully curated prior to distribution.

“The respondent cannot credibly assert that his participation is content neutral and that he was not negligent in failing to investigate, since at a minimum he selects and organizes the add-ons that find their way onto his website,” the Judge notes.

In a further setback, the Judge draws clear parallels with another case before the Canadian courts involving pre-loaded ‘pirate’ set-top boxes. Justice de Montigny says that TVAddons itself bears “many similarities” with those devices that are already subjected to an interlocutory injunction in Canada.

“The service offered by the respondent through the TVAddons website is no different from the service offered through the set-top boxes. The means through which access is provided to infringing content is different (one relied on hardware while the other relied on a website), but they both provided unauthorized access to copyrighted material without authorization of the copyright owners,” the Judge finds.

Continuing, the Judge makes some pointed remarks concerning the execution of the Anton Piller order. In short, he found little wrong with the way things went ahead and also contradicted some of the claims and beliefs circulated in the earlier hearing.

Citing the affidavit of an independent solicitor who monitored the order’s execution, the Judge said that the order was explained to Lackman in plain language and he was informed of his right to remain silent. He was also told that he could refuse to answer questions other than those specified in the order.

The Judge said that Lackman was allowed to have counsel present, “with whom he consulted throughout the execution of the order.” There was nothing, the Judge said, that amounted to the “interrogation” alluded to in the earlier hearing.

Justice de Montigny also criticized Justice Bell for failing to take into account that Lackman “attempted to conceal crucial evidence and lied to the independent supervising solicitor regarding the whereabouts of that evidence.”

Much was previously made of Lackman apparently being forced to hand over personal details of third-parties associated directly or indirectly with TVAddons. The Judge clarifies what happened in his ruling.

“A list of names was put to the respondent by the plaintiffs’ solicitors, but it was apparently done to expedite the questioning process. In any event, the respondent did not provide material information on the majority of the aliases put to him,” the Judge reveals.

But while not handing over evidence on third-parties will paint Lackman in a better light with concerned elements of the add-on community, the Judge was quick to bring up the Canadian’s history and criticized Justice Bell for not taking it into account when he vacated the Anton Piller order.

“[T]he respondent admitted that he was involved in piracy of satellite television signals when he was younger, and there is evidence that he was involved in the configuration and sale of ‘jailbroken’ Apple TV set-top boxes,” Justice de Montigny writes.

“When juxtaposed to the respondent’s attempt to conceal relevant evidence during the execution of the Anton Piller order, that contextual evidence adds credence to the appellants’ concern that the evidence could disappear without a comprehensive order.”

Dismissing Justice Bell’s findings as “fatally flawed”, Justice de Montigny allowed the appeal of the telecoms companies, set aside the order of June 29, 2017, declared the Anton Piller order and interim injunctions legal, and granted an interlocutory injunction to remain valid until the conclusion of the case in Federal Court. The telecoms companies were also awarded costs of CAD$50,000.

It’s worth noting that despite all the detail provided up to now, the case hasn’t yet reached the stage where the Court has tested any of the claims put forward by the telecoms companies. Everything reported to date is pre-trial and has been taken at face value.

TorrentFreak spoke with Adam Lackman but since he hadn’t yet had the opportunity to discuss the matter with his lawyers, he declined to comment further on the record. There is a statement on the TVAddons website which gives his position on the story so far.


Copyright Trolls Target Up to 22,000 Norwegians for Movie Piracy

Post Syndicated from Andy original https://torrentfreak.com/copyright-trolls-target-up-to-22000-norwegians-for-movie-piracy-180220/

Last January it was revealed that after things had become tricky in the US, the copyright trolls behind the action movie London Has Fallen were testing out the Norwegian market.

Reports emerged of letters being sent out to local Internet users by Danish law firm Njord Law, each demanding a cash payment of 2,700 NOK (around US$345). Failure to comply, the company claimed, could result in a court case and damages of around $12,000.

The move caused outrage locally, with consumer advice groups advising people not to pay and even major anti-piracy groups distancing themselves from the action. However, in May 2017 it appeared that progress had been made in stopping the advance of the trolls when another Njord Law case running since 2015 hit the rocks.

The law firm previously sent a request to the Oslo District Court on behalf of entertainment company Scanbox asking ISP Telenor to hand over subscribers’ details. In May 2016, Scanbox won its case and Telenor was ordered to hand over the information.

On appeal, however, the tables were turned when it was decided that evidence supplied by the law firm failed to show that sharing carried out by subscribers was substantial.

Undeterred, Njord Law took the case all the way to the Supreme Court. The company lost when a panel of judges found that the evidence presented against Telenor’s customers wasn’t good enough to prove infringement beyond a certain threshold. But Njord Law still wasn’t done.

More than six months on, the ruling from the Supreme Court only seems to have provided the company with a template. If the law firm could show that the scale of sharing exceeds the threshold set by Norway’s highest court, then disclosure could be obtained. That appears to be the case now.

In a ruling handed down by the Oslo District Court in January, it’s revealed that Njord Law and its partners handed over evidence which shows 23,375 IP addresses engaged in varying amounts of infringing behavior over an extended period. The ISP they have targeted is being kept secret by the court but is believed to be Telenor.

Using information supplied by German anti-piracy outfit MaverickEye (which is involved in numerous copyright troll cases globally), Njord Law set out to show that the conduct of the alleged pirates had been exceptional for a variety of reasons, categorizing them variously (but non-exclusively) as follows:

– IP addresses involved in BitTorrent swarm sizes greater than 10,000 peers/pirates
– IP addresses that have shared at least two of the plaintiffs’ movies
– IP addresses making available the plaintiffs’ movies on at least two individual days
– IP addresses that made available at least ten movies in total
– IP addresses that made available different movies on at least ten individual days
– IP addresses that made available movies from businesses and public institutions

While rejecting some categories, the court was satisfied that 21,804 IP addresses of the 23,375 IP addresses presented by Njord Law met or exceeded the criteria for disclosure. It’s still not clear how many of these IP addresses identify unique subscribers but many thousands are expected.

“For these users, it has been established that the gravity, extent, and harm of the infringement are so great that consideration for the rights holder’s interests in accessing information identifying the [allegedly infringing] subscribers is greater than the consideration of the subscribers’,” the court writes in its ruling.

“Users’ confidence that their private use of the Internet is protected from public access is a generally important factor, but not in this case where illegal file sharing has been proven. Nor has there been any information stating that the offenders in the case are children or anything else which implies that disclosure of information about the holder of the subscriber should be problematic.”

While the ISP (Telenor) will now have to spend time and resources disclosing its subscribers’ personal details to the law firm, it will be compensated for its efforts. The Oslo District Court has ordered Njord Law to pay costs of NOK 907,414 (US$115,822) plus NOK 125 (US$16.00) for every IP address and associated details it receives.

The decision can be appealed but when contacted by Norwegian publication Nettavisen, Telenor declined to comment on the case.

There is now the question of what Njord Law will do with the identities it obtains. It seems very likely that it will ask for a sum of money to make a potential lawsuit go away, but if a subscriber refuses to pay, it will still need to take that individual to court to extract payment.

This raises the challenge of proving that the subscriber is the actual infringer when it could be anyone in a household. But that battle will have to wait until another day.

The full decision of the Oslo District Court can be found here (Norwegian)


Running ActiveMQ in a Hybrid Cloud Environment with Amazon MQ

Post Syndicated from Tara Van Unen original https://aws.amazon.com/blogs/compute/running-activemq-in-a-hybrid-cloud-environment-with-amazon-mq/

This post courtesy of Greg Share, AWS Solutions Architect

Many organizations, particularly enterprises, rely on message brokers to connect and coordinate different systems. Message brokers enable distributed applications to communicate with one another, serving as the technological backbone for their IT environment, and ultimately their business services. Applications depend on messaging to work.

In many cases, those organizations have started to build new or “lift and shift” applications to AWS. In other cases, some applications, such as mainframe systems, are too costly to migrate. In these scenarios, those on-premises applications still need to interact with cloud-based components.

Amazon MQ is a managed message broker service for ActiveMQ that enables organizations to send messages between applications in the cloud and on-premises to enable hybrid environments and application modernization. For example, you can invoke AWS Lambda from queues and topics managed by Amazon MQ brokers to integrate legacy systems with serverless architectures. ActiveMQ is an open-source message broker written in Java that is packaged with clients in multiple languages, the Java Message Service (JMS) client being one example.

This post shows how you can use Amazon MQ to integrate on-premises and cloud environments using the network of brokers feature of ActiveMQ. It provides configuration parameters for a one-way connection (duplex="false") for the flow of messages from an on-premises ActiveMQ message broker to Amazon MQ.

ActiveMQ and the network of brokers

First, look at queues within ActiveMQ and then at the network of brokers as a mechanism to distribute messages.

The network of brokers behaves differently from models such as physical networks. The key consideration is that the production (sending) of a message is disconnected from the consumption of that message. Think of the delivery of a parcel: the parcel is sent by the supplier (producer) to the end customer (consumer). The path it took to get there is of little concern to the customer, as long as the parcel arrives.

The same logic can be applied to the network of brokers. Here’s how the flow works, building from a simple message on a queue up to a network of brokers. Before you look at setting up a hybrid connection, I discuss how a broker processes messages in a simple scenario.

When a message is sent from a producer to a queue on a broker, the following steps occur:

  1. A message is sent to a queue from the producer.
  2. The broker persists this in its store or journal.
  3. At this point, an acknowledgement (ACK) is sent to the producer from the broker.

When a consumer looks to consume the message from that same queue, the following steps occur:

  1. The message listener (consumer) calls the broker, which creates a subscription to the queue.
  2. Messages are fetched from the message store and sent to the consumer.
  3. The consumer acknowledges that the message has been received before processing it.
  4. Upon receiving the ACK, the broker sets the message as having been consumed. By default, this deletes it from the queue.
    • You can set the consumer to ACK after processing instead, either by setting up transaction management or by handling it manually using Session.CLIENT_ACKNOWLEDGE (see the sketch below).
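
To make these two flows concrete, here is a minimal sketch using the ActiveMQ JMS client with CLIENT_ACKNOWLEDGE. The broker URL, queue name, and credentials are placeholders, and it assumes the activemq-client library is on the classpath.

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class QueueRoundTrip {
    public static void main(String[] args) throws JMSException {
        // Placeholder endpoint and credentials -- substitute your broker's values.
        ConnectionFactory factory =
            new ActiveMQConnectionFactory("ssl://your-broker-endpoint:61617");
        Connection connection = factory.createConnection("username", "password");
        connection.start();

        Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
        Queue queue = session.createQueue("TestingQ");

        // Producer: send() returns once the broker has persisted the message
        // and returned its ACK (steps 1-3 above).
        MessageProducer producer = session.createProducer(queue);
        producer.send(session.createTextMessage("hello"));

        // Consumer: receive() fetches the message from the broker's store.
        // With CLIENT_ACKNOWLEDGE, the broker marks the message as consumed
        // (and by default deletes it from the queue) only after acknowledge().
        MessageConsumer consumer = session.createConsumer(queue);
        TextMessage message = (TextMessage) consumer.receive(5000);
        if (message != null) {
            System.out.println("Received: " + message.getText());
            message.acknowledge();
        }
        connection.close();
    }
}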

Static propagation

I now introduce the concept of static propagation with the network of brokers as the mechanism for message transfer from on-premises brokers to Amazon MQ. Static propagation refers to message propagation that occurs in the absence of subscription information. In this case, the objective is to transfer messages arriving at your selected on-premises broker to the Amazon MQ broker for consumption within the cloud environment.

After you configure static propagation with a network of brokers, the following occurs:

  1. The on-premises broker receives a message from a producer for a specific queue.
  2. The on-premises broker sends (statically propagates) the message to the Amazon MQ broker.
  3. The Amazon MQ broker sends an acknowledgement to the on-premises broker, which marks the message as having been consumed.
  4. Amazon MQ holds the message in its queue ready for consumption.
  5. A consumer connects to the Amazon MQ broker, subscribes to the queue in which the message resides, and receives the message.
  6. The Amazon MQ broker marks the message as having been consumed.

Getting started

The first step is creating an Amazon MQ broker.

  1. Sign in to the Amazon MQ console and launch a new Amazon MQ broker.
  2. Name your broker and choose Next step.
  3. For Broker instance type, choose your instance size:
    mq.t2.micro
    mq.m4.large
  4. For Deployment mode, choose one of the following:
    Single-instance broker for development and test implementations (recommended)
    Active/standby broker for high availability in production environments
  5. Scroll down and enter your user name and password.
  6. Expand Advanced Settings.
  7. For VPC, Subnet, and Security Group, pick the values for the resources in which your broker will reside.
  8. For Public Accessibility, choose Yes, as connectivity is internet-based. Another option would be to use private connectivity between your on-premises network and the VPC, an example being an AWS Direct Connect or VPN connection. In that case, you could set Public Accessibility to No.
  9. For Maintenance, leave the default value, No preference.
  10. Choose Create Broker. Wait several minutes for the broker to be created.

After creation is complete, you see your broker listed.

For connectivity to work, you must configure the security group where Amazon MQ resides. For this post, I focus on the OpenWire protocol.

For OpenWire connectivity, allow port 61617 access for Amazon MQ from your on-premises ActiveMQ broker source IP address (a sample security group rule follows the endpoint list below). For alternate protocols, see the Amazon MQ broker configuration information for the ports required:

OpenWire – ssl://xxxxxxx.xxx.com:61617
AMQP – amqp+ssl://xxxxxxx.xxx.com:5671
STOMP – stomp+ssl://xxxxxxx.xxx.com:61614
MQTT – mqtt+ssl://xxxxxxx.xxx.com:8883
WSS – wss://xxxxxxx.xxx.com:61619
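
As a sketch of that security group change, assuming the AWS CLI and placeholder values for the security group ID and your broker’s public source IP:

$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 61617 \
    --cidr 203.0.113.25/32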

Configuring the network of brokers

Configuring the network of brokers with static propagation occurs on the on-premises broker by applying changes to the following file:
<activemq install directory>/conf/activemq.xml

Network connector

This is the first configuration item required to enable a network of brokers. It is only required on the on-premises broker, which initiates and creates the connection with Amazon MQ. This connection, after it’s established, enables the flow of messages in either direction between the on-premises broker and Amazon MQ. The focus of this post is the uni-directional flow of messages from the on-premises broker to Amazon MQ.

The default activemq.xml file does not include the network connector configuration. Add this with the networkConnector element. In this scenario, edit the on-premises broker activemq.xml file to include the following information between <systemUsage> and <transportConnectors>:

<networkConnectors>
    <networkConnector
        name="Q:source broker name->target broker name"
        duplex="false"
        uri="static:(ssl://your-aws-mq-endpoint:61617)"
        userName="username"
        password="password"
        networkTTL="2"
        dynamicOnly="false">
        <staticallyIncludedDestinations>
            <queue physicalName="queuename"/>
        </staticallyIncludedDestinations>
        <excludedDestinations>
            <queue physicalName=">"/>
        </excludedDestinations>
    </networkConnector>
</networkConnectors>

The following attributes are the most important elements when configuring your on-premises broker.

  • name – Name of the network bridge. In this case, it specifies two things:
    • That this connection relates to an ActiveMQ queue (Q) as opposed to a topic (T), for reference purposes.
    • The source broker and target broker.
  • duplex – Setting this to false ensures that messages traverse uni-directionally from the on-premises broker to Amazon MQ.
  • uri – Specifies the remote endpoint to which to connect for message transfer. In this case, it is an OpenWire endpoint on your Amazon MQ broker. This information could be obtained from the Amazon MQ console or via the API.
  • username and password – The same username and password configured when creating the Amazon MQ broker, and used to access the Amazon MQ ActiveMQ console.
  • networkTTL – Number of brokers in the network through which messages and subscriptions can pass. Leave this setting at the current value, if it is already included in your broker connection.
  • staticallyIncludedDestinations > queue physicalName – The destination ActiveMQ queue for which messages are destined. This is the queue that is propagated from the on-premises broker to the Amazon MQ broker for message consumption.

After the network connector is configured, you must restart the ActiveMQ service on the on-premises broker for the changes to be applied.
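
For example, with a standard tarball installation you could restart via the bundled script shown below; the path is a placeholder, and your installation may use a service manager such as systemd instead.

$ <activemq install directory>/bin/activemq restart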

Verify the configuration

There are a number of places within the ActiveMQ console of your on-premises and Amazon MQ brokers to browse to verify that the configuration is correct and the connection has been established.

On-premises broker

Launch the ActiveMQ console of your on-premises broker and navigate to Network. You should see an active network bridge similar to the following:

This identifies that the connection between your on-premises broker and your Amazon MQ broker is up and running.

Now navigate to Connections and scroll to the bottom of the page. Under the Network Connectors subsection, you should see a connector labeled with the name: value that you provided within the activemq.xml configuration file. You should see an entry similar to:

Amazon MQ broker

Launch the ActiveMQ console of your Amazon MQ broker and navigate to Connections. Scroll to the Connections openwire subsection and you should see a connection specified that references the name: value that you provided within the activemq.xml configuration file. You should see an entry similar to:

If you configured the uri: for AMQP, STOMP, MQTT, or WSS as opposed to OpenWire, you would see this connection under the corresponding section of the Connections page.

Testing your message flow

The setup described outlines a way for messages produced on premises to be propagated to the cloud for consumption in the cloud. This section provides steps on verifying the message flow.

Verify that the queue has been created

After you specify a queue name (TestingQ in this example) as staticallyIncludedDestinations > queue physicalName: and your ActiveMQ service starts, you see the following on your on-premises ActiveMQ console Queues page.

As you can see, no messages have been sent but you have one consumer listed. If you then choose Active Consumers under the Views column, you see Active Consumers for TestingQ.

This is telling you that your Amazon MQ broker is a consumer of your on-premises broker for the testing queue.

Produce and send a message to the on-premises broker

Now, produce a message on an on-premises producer and send it to a queue named TestingQ on your on-premises broker. If you navigate back to the Queues page of your on-premises ActiveMQ console, you see that the Messages Enqueued and Messages Dequeued counts for your TestingQ queue have changed:

What this means is that the message originating from the on-premises producer has traversed the on-premises broker and propagated immediately to the Amazon MQ broker. At this point, the message is no longer available for consumption from the on-premises broker.

If you access the ActiveMQ console of your Amazon MQ broker and navigate to the Queues page, you see the following for the TestingQ queue:

This means that the message originally sent to your on-premises broker has traversed the unidirectional network bridge of the network of brokers, and is ready to be consumed from your Amazon MQ broker. The indicator is the Number of Pending Messages column.

Consume the message from an Amazon MQ broker

Connect to the Amazon MQ TestingQ queue from a consumer within the AWS Cloud environment for message consumption. Log on to the ActiveMQ console of your Amazon MQ broker and navigate to the Queues page:

As you can see, the Number of Pending Messages column figure has changed to 0 as that message has been consumed.

This diagram outlines the message lifecycle from the on-premises producer to the on-premises broker, traversing the hybrid connection between the on-premises broker and Amazon MQ, and finally consumption within the AWS Cloud.

Conclusion

This post focused on an ActiveMQ-specific scenario for transferring messages within an ActiveMQ queue from an on-premises broker to Amazon MQ.

For other on-premises brokers, such as IBM MQ, another approach would be to run an ActiveMQ broker on-premises and use JMS bridging to IBM MQ, while using the approach in this post to forward to Amazon MQ. Yet another approach would be to use Apache Camel for more sophisticated routing.

I hope that you have found this example of hybrid messaging between an on-premises environment and the AWS Cloud to be useful. Many customers are already using on-premises ActiveMQ brokers, and this is a great use case to enable hybrid cloud scenarios.

To learn more, see the Amazon MQ website and Developer Guide. You can try Amazon MQ for free with the AWS Free Tier, which includes up to 750 hours of a single-instance mq.t2.micro broker and up to 1 GB of storage per month for one year.


Canadian Pirate Site Blocks Could Spread to VPNs, Professor Warns

Post Syndicated from Ernesto original https://torrentfreak.com/canadian-pirate-site-blocks-could-spread-to-vpns-professor-warns-180219/

ISP blocking has become a prime measure for the entertainment industry to target pirate sites on the Internet.

In recent years sites have been blocked throughout Europe, in Asia, and even Down Under.

Last month, a coalition of Canadian companies called on the local telecom regulator CRTC to establish a local pirate site blocking program, which would be the first of its kind in North America.

The Canadian deal is backed by both copyright holders and major players in the Telco industry, such as Bell and Rogers, which also have media companies of their own. Instead of court-ordered blockades, they call for a mutually agreed deal where ISPs will block pirate sites.

The plan has triggered a fair amount of opposition. Tens of thousands of people have protested against the proposal and several experts are warning against the negative consequences it may have.

One of the most vocal opponents is University of Ottawa law professor Michael Geist. In a series of articles, professor Geist highlighted several problems, including potential overblocking.

The Fairplay Canada coalition downplays overblocking, according to Geist. It says the measures will only affect sites that are blatantly, overwhelmingly or structurally engaged in piracy, which appears to be a high standard.

However, the same coalition uses a report from MUSO as its primary evidence. This report draws on a list of 23,000 pirate sites, which may not all be blatant enough to meet the blocking standard.

For example, professor Geist notes that it includes a site dedicated to user-generated subtitles as well as sites that offer stream ripping tools which can be used for legal purposes.

“Stream ripping is a concern for the music industry, but these technologies (which are also found in readily available software programs from a local BestBuy) also have considerable non-infringing uses, such as for downloading Creative Commons licensed videos also found on video sites,” Geist writes.

If the coalition tried to have all these sites blocked, the scope would be much larger than currently portrayed. Conversely, if only a few of the sites were blocked, then the evidence used to put these blocks in place would have been exaggerated.

“In other words, either the scope of block list coverage is far broader than the coalition admits or its piracy evidence is inflated by including sites that do not meet its piracy standard,” Geist notes.

Perhaps most concerning is the slippery slope that the blocking efforts could create. Professor Geist fears that after the standard piracy sites are dealt with, related targets may be next.

This includes VPN services. While this may sound far-fetched to some, several members of the coalition, such as Bell and Rogers, have already criticized VPNs in the past since these allow people to watch geo-blocked content.

“Once the list of piracy sites (whatever the standard) is addressed, it is very likely that the Bell coalition will turn its attention to other sites and services such as virtual private networks (VPNs).

“This is not mere speculation. Rather, it is taking Bell and its allies at their word on how they believe certain services and sites constitute theft,” Geist adds.

The issue may be even more relevant in this case, since the same VPNs can also be used to circumvent pirate site blockades.

“Further, since the response to site blocking from some Internet users will surely involve increased use of VPNs to evade the blocks, the attempt to characterize VPNs as services engaged in piracy will only increase,” Geist adds.

Potential overblocking is just one of the many issues with the current proposal, according to the law professor. Geist previously highlighted that current copyright law already provides sufficient remedies to deal with piracy and that piracy isn’t that much of a problem in Canada in the first place.

The CRTC has yet to issue its review of the proposal but now that the cat is out of the bag, rightsholders and ISPs are likely to keep pushing for blockades, one way or the other.


Flight Sim Company Embeds Malware to Steal Pirates’ Passwords

Post Syndicated from Andy original https://torrentfreak.com/flight-sim-company-embeds-malware-to-steal-pirates-passwords-180219/

Anti-piracy systems and DRM come in all shapes and sizes, none of them particularly popular, but one deployed by flight sim company FlightSimLabs is likely to go down in history as one of the most outrageous.

It all started yesterday on Reddit when Flight Sim user ‘crankyrecursion’ reported a little extra something in his download of FlightSimLabs’ A320X module.

“Using file ‘FSLabs_A320X_P3D_v2.0.1.231.exe’ there seems to be a file called ‘test.exe’ included,” crankyrecursion wrote.

“This .exe file is from http://securityxploded.com and is touted as a ‘Chrome Password Dump’ tool, which seems to work – particularly as the installer would typically run with Administrative rights (UAC prompts) on Windows Vista and above. Can anyone shed light on why this tool is included in a supposedly trusted installer?”

The existence of a Chrome password dumping tool is certainly cause for alarm, especially if the software had been obtained from a less-than-official source, such as a torrent or similar site, given the potential for third-party pollution.

However, with the possibility of a nefarious third-party dumping something nasty in a pirate release still lurking on the horizon, things took an unexpected turn. FlightSimLabs chief Lefteris Kalamaras made a statement basically admitting that his company was behind the malware installation.

“We were made aware there is a Reddit thread started tonight regarding our latest installer and how a tool is included in it, that indiscriminately dumps Chrome passwords. That is not correct information – in fact, the Reddit thread was posted by a person who is not our customer and has somehow obtained our installer without purchasing,” Kalamaras wrote.

“[T]here are no tools used to reveal any sensitive information of any customer who has legitimately purchased our products. We all realize that you put a lot of trust in our products and this would be contrary to what we believe.

“There is a specific method used against specific serial numbers that have been identified as pirate copies and have been making the rounds on ThePirateBay, RuTracker and other such malicious sites,” he added.

In a nutshell, FlightSimLabs installed a password dumper onto ALL users’ machines, whether they were pirates or not, but then only activated the password-stealing module when it determined that specific ‘pirate’ serial numbers had been used which matched those on FlightSimLabs’ servers.

“Test.exe is part of the DRM and is only targeted against specific pirate copies of copyrighted software obtained illegally. That program is only extracted temporarily and is never under any circumstances used in legitimate copies of the product,” Kalamaras added.

That didn’t impress Luke Gorman, who published an analysis slamming the flight sim company for knowingly installing password-stealing malware on users’ machines, even those belonging to people who purchased the title legitimately.

Password stealer in action (credit: Luke Gorman)

Making matters even worse, the FlightSimLabs chief went on to say that information being obtained from pirates’ machines in this manner is likely to be used in court or other legal processes.

“This method has already successfully provided information that we’re going to use in our ongoing legal battles against such criminals,” Kalamaras revealed.

While it remains to be seen how the extracted passwords and usernames will be used elsewhere, it appears that FlightSimLabs has had a change of heart. With immediate effect, the company is pointing customers to a new installer that doesn’t include code for stealing their most sensitive data.

“I want to reiterate and reaffirm that we as a company and as flight simmers would never do anything to knowingly violate the trust that you have placed in us by not only buying our products but supporting them and FlightSimLabs,” Kalamaras said in an update.

“While the majority of our customers understand that the fight against piracy is a difficult and ongoing battle that sometimes requires drastic measures, we realize that a few of you were uncomfortable with this particular method which might be considered to be a bit heavy handed on our part. It is for this reason we have uploaded an updated installer that does not include the DRM check file in question.”

To be continued…


Epic Games Uses Private Investigators to Locate Cheaters

Post Syndicated from Ernesto original https://torrentfreak.com/epic-games-uses-private-investigators-to-locate-cheaters-180218/

Last fall, Epic Games released Fortnite’s free-to-play “Battle Royale” game mode for the PC and other platforms, generating massive interest among gamers.

This also included thousands of cheaters, many of whom were subsequently banned. Epic Games then went a step further by taking several cheaters to court for copyright infringement.

In the months that have passed several cases have been settled with undisclosed terms, but it appears that not all defendants are easy to track down. In at least two cases, Epic had to retain the services of private investigators to locate their targets.

In a case filed in North Carolina, the games company was unable to serve the defendant (now identified as B.B.), so it called in the help of Klatt Investigations, with success.

“[A]fter having previously engaged two other process servers that were unable to locate and successfully serve B.B., Epic engaged Klatt Investigations, a Canadian firm that provides various services related to the private service of process in civil matters.

“In this case, we engaged Klatt Investigations to locate and effect service of process by personal service on Defendant,” Epic informs the court.

As Epic Games didn’t know the age of the defendant beforehand, it chose to approach the person as a minor, which turned out to be a wise choice. The alleged cheater indeed appears to be a minor, so both the defendant and the defendant’s mother were served.

Based on this new information, Epic Games asked the court to redact any court documents that reveal personal information of the defendant, which includes his or her full name.

Epic’s request to seal

This is not the first time Epic Games has used a private investigator to locate a defendant. It hired S&H Investigative Services in another widely reported case, where the defendant also turned out to be a minor.

In that case, the mother of the alleged cheater wrote a letter to the court in her son’s defense, but after that, things went quiet.

This lack of response prompted Epic Games to ask the court to enter a default in this case, which means that the defendant risks a default judgment for copyright infringement.

Epic’s declaration for the motion to seal the personal details of minor B.B. is available here (pdf). The request to enter a default in the separate C.R. case can be found here (pdf).


Court Orders Spanish ISPs to Block Pirate Sites For Hollywood

Post Syndicated from Andy original https://torrentfreak.com/court-orders-spanish-isps-to-block-pirate-sites-for-hollywood-180216/

Determined to reduce levels of piracy globally, Hollywood has become one of the main proponents of site-blocking on the planet. To date there have been multiple lawsuits in far-flung jurisdictions, with Europe one of the primary targets.

Following complaints from Disney, 20th Century Fox, Paramount, Sony, Universal and Warner, Spain has become one of the latest targets. According to the studios, a pair of sites – HDFull.tv and Repelis.tv – infringe their copyrights on a grand scale and need to be slowed down by preventing users from accessing them.

HDFull is a platform that provides movies and TV shows in both Spanish and English. Almost 60% of its traffic comes from Spain and, after a huge surge in visitors last July, it’s now the 337th most popular site in the country according to Alexa. Visitors from Mexico, Argentina, the United States and Chile make up the rest of its audience.

Repelis.tv is a similar streaming portal specializing in movies, mainly in Spanish. A third of the site’s visitors hail from Mexico, with the remainder coming from Argentina, Colombia, Spain and Chile. In common with HDFull, Repelis has been building its visitor numbers quickly since 2017.

The studios demanding more blocks

With a ruling in hand from the European Court of Justice which determined that sites can be blocked on copyright infringement grounds, the studios asked the courts to issue an injunction against several local ISPs including Telefónica, Vodafone, Orange and Xfera. In an order handed down this week, Barcelona Commercial Court No. 6 sided with the studios and ordered the ISPs to begin blocking the sites.

“They damage the legitimate rights of those who own the films and series, which these pages illegally display and with which they profit illegally through the advertising revenues they generate,” a statement from the Spanish Federation of Cinematographic Distributors (FEDECINE) reads.

FEDECINE General director Estela Artacho said that changes in local law have helped to provide the studios with a new way to protect audiovisual content released in Spain.

“Thanks to the latest reform of the Civil Procedure Law, we have in this jurisdiction a new way to exercise different possibilities to protect our commercial film offering,” Artacho said.

“Those of us who are part of this industry work to make culture accessible and offer the best cinematographic experience in the best possible conditions, guaranteeing the continuity of the sector.”

The development was also welcomed by Stan McCoy, president of the Motion Picture Association’s EMEA division, which represents the plaintiffs in the case.

“We have just taken a welcome step which we consider crucial to face the problem of piracy in Spain,” McCoy said.

“These actions are necessary to maintain the sustainability of the creative community both in Spain and throughout Europe. We want to ensure that consumers enjoy the entertainment offer in a safe and secure environment.”

After gaining experience from blockades and subsequent circumvention in other regions, the studios seem better prepared to tackle fallout in Spain. In addition to blocking primary domains, the ruling handed down by the court this week also obliges ISPs to block any other domain, subdomain or IP address whose purpose is to facilitate access to the blocked platforms.

News of Spain’s ‘pirate’ blocks come on the heels of fresh developments in Germany, where this week a court ordered ISP Vodafone to block KinoX, one of the country’s most popular streaming portals.


FOSS Project Spotlight: LinuxBoot (Linux Journal)

Post Syndicated from jake original https://lwn.net/Articles/747380/rss

Linux Journal takes a look at the newly announced LinuxBoot project. LWN covered a related talk back in November. “Modern firmware generally consists of two main parts: hardware initialization (early stages) and OS loading (late stages). These parts may be divided further depending on the implementation, but the overall flow is similar across boot firmware. The late stages have gained many capabilities over the years and often have an environment with drivers, utilities, a shell, a graphical menu (sometimes with 3D animations) and much more. Runtime components may remain resident and active after firmware exits. Firmware, which used to fit in an 8 KiB ROM, now contains an OS used to boot another OS and doesn’t always stop running after the OS boots. LinuxBoot replaces the late stages with a Linux kernel and initramfs, which are used to load and execute the next stage, whatever it may be and wherever it may come from. The Linux kernel included in LinuxBoot is called the ‘boot kernel’ to distinguish it from the ‘target kernel’ that is to be booted and may be something other than Linux.”

How to Patch Linux Workloads on AWS

Post Syndicated from Koen van Blijderveen original https://aws.amazon.com/blogs/security/how-to-patch-linux-workloads-on-aws/

Most malware tries to compromise your systems by using a known vulnerability that the operating system maker has already patched. As best practices to help prevent malware from affecting your systems, you should apply all operating system patches and actively monitor your systems for missing patches.

In this blog post, I show you how to patch Linux workloads using AWS Systems Manager. To accomplish this, I will show you how to use the AWS Command Line Interface (AWS CLI) to:

  1. Launch an Amazon EC2 instance for use with Systems Manager.
  2. Configure Systems Manager to patch your Amazon EC2 Linux instances.

In two previous blog posts (Part 1 and Part 2), I showed how to use the AWS Management Console to perform the necessary steps to patch, inspect, and protect Microsoft Windows workloads. You can implement those same processes for your Linux instances running in AWS by changing the instance tags and types shown in the previous blog posts.

Because most Linux system administrators are more familiar with using a command line, I show how to patch Linux workloads by using the AWS CLI in this blog post. The steps to use the Amazon EBS Snapshot Scheduler and Amazon Inspector are identical for both Microsoft Windows and Linux.

What you should know first

To follow along with the solution in this post, you need one or more Amazon EC2 instances. You may use existing instances or create new instances. For this post, I assume you are using an Amazon EC2 instance running Amazon Linux, launched from an Amazon Machine Image (AMI).

Systems Manager is a collection of capabilities that helps you automate management tasks for AWS-hosted instances on Amazon EC2 and your on-premises servers. In this post, I use Systems Manager for two purposes: to run remote commands and apply operating system patches. To learn about the full capabilities of Systems Manager, see What Is AWS Systems Manager?

As of Amazon Linux 2017.09, the AMI comes preinstalled with the Systems Manager agent. Systems Manager Patch Manager also supports Red Hat and Ubuntu. To install the agent on these Linux distributions or an older version of Amazon Linux, see Installing and Configuring SSM Agent on Linux Instances.

If you are not familiar with how to launch an Amazon EC2 instance, see Launching an Instance. I also assume you launched or will launch your instance in a private subnet. You must make sure that the Amazon EC2 instance can connect to the internet using a network address translation (NAT) instance or NAT gateway to communicate with Systems Manager. The following diagram shows how you should structure your VPC.

Diagram showing how to structure your VPC
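
If your private subnet does not yet have a NAT path, a sketch of the commands involved follows; the subnet, allocation, and route table IDs are placeholders, the NAT gateway must be created in a public subnet, and the route is added to the private subnet’s route table.

$ aws ec2 allocate-address --domain vpc
$ aws ec2 create-nat-gateway --subnet-id subnet-0123456789abcdef0 --allocation-id eipalloc-0123456789abcdef0
$ aws ec2 create-route --route-table-id rtb-0123456789abcdef0 --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-0123456789abcdef0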

Later in this post, you will assign tasks to a maintenance window to patch your instances with Systems Manager. To do this, the IAM user you are using for this post must have the iam:PassRole permission. This permission allows the IAM user assigning tasks to pass their own IAM permissions to the AWS service. In this example, when you assign a task to a maintenance window, IAM passes your credentials to Systems Manager. You also should authorize your IAM user to use Amazon EC2 and Systems Manager. As mentioned before, you will be using the AWS CLI for most of the steps in this blog post. Our documentation shows you how to get started with the AWS CLI. Make sure you have the AWS CLI installed and configured with an AWS access key and secret access key that belong to an IAM user that has the following AWS managed policies attached: AmazonEC2FullAccess and AmazonSSMFullAccess.
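
If you haven’t configured the AWS CLI yet, the interactive configure command is one quick way to do so (the key values shown are AWS’s documented example placeholders):

$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json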

Step 1: Launch an Amazon EC2 Linux instance

In this section, I show you how to launch an Amazon EC2 instance so that you can use Systems Manager with the instance. This step requires you to do three things:

  1. Create an IAM role for Systems Manager before launching your Amazon EC2 instance.
  2. Launch your Amazon EC2 instance with Amazon EBS and the IAM role for Systems Manager.
  3. Add tags to the instances so that you can add your instances to a Systems Manager maintenance window based on tags.

A. Create an IAM role for Systems Manager

Before launching an Amazon EC2 instance, I recommend that you first create an IAM role for Systems Manager, which you will use to update the Amazon EC2 instance. AWS already provides a preconfigured policy, AmazonEC2RoleforSSM, that you can use for the new role.

  1. Create a JSON file named trustpolicy-ec2ssm.json that contains the following trust policy. This policy describes which principal (an entity that can take action on an AWS resource) is allowed to assume the role we are going to create. In this example, the principal is the Amazon EC2 service.
    {
      "Version": "2012-10-17",
      "Statement": {
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole"
      }
    }

  2. Use the following command to create a role named EC2SSM, to which you will attach the AWS managed policy AmazonEC2RoleforSSM in the next step. If the command is successful, it generates JSON-based output that describes the role and its parameters.
    $ aws iam create-role --role-name EC2SSM --assume-role-policy-document file://trustpolicy-ec2ssm.json

  3. Use the following command to attach the AWS managed IAM policy (AmazonEC2RoleforSSM) to your newly created role.
    $ aws iam attach-role-policy --role-name EC2SSM --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforSSM

  4. Use the following commands to create the IAM instance profile and add the role to the instance profile. The instance profile is needed to attach the role we created earlier to your Amazon EC2 instance.
    $ aws iam create-instance-profile --instance-profile-name EC2SSM-IP
    $ aws iam add-role-to-instance-profile --instance-profile-name EC2SSM-IP --role-name EC2SSM

B. Launch your Amazon EC2 instance

To follow along, you need an Amazon EC2 instance that is running Amazon Linux. You can use any existing instance you may have or create a new instance.

Depending on whether you are launching a new instance or using an existing one, do one of the following:

  1. Use the following command to launch a new Amazon EC2 instance using an Amazon Linux AMI available in the US East (N. Virginia) Region (also known as us-east-1). Replace YourKeyPair and YourSubnetId with your information. For more information about creating a key pair, see the create-key-pair documentation. Write down the InstanceId that is in the output because you will need it later in this post.
    $ aws ec2 run-instances --image-id ami-cb9ec1b1 --instance-type t2.micro --key-name YourKeyPair --subnet-id YourSubnetId --iam-instance-profile Name=EC2SSM-IP

  2. If you are using an existing Amazon EC2 instance, you can use the following command to attach the instance profile you created earlier to your instance.
    $ aws ec2 associate-iam-instance-profile --instance-id YourInstanceId --iam-instance-profile Name=EC2SSM-IP

C. Add tags

The final step of configuring your Amazon EC2 instances is to add tags. You will use these tags to configure Systems Manager in Step 2 of this post. For this example, I add a tag named Patch Group and set the value to Linux Servers. I could have other groups of Amazon EC2 instances that I treat differently by having the same tag name but a different tag value. For example, I might have a collection of other servers with the tag name Patch Group with a value of Web Servers.

  • Use the following command to add the Patch Group tag to your Amazon EC2 instance.
    $ aws ec2 create-tags --resources YourInstanceId --tags Key="Patch Group",Value="Linux Servers"

Note: You must wait a few minutes until the Amazon EC2 instance is available before you can proceed to the next section. To make sure your Amazon EC2 instance is online and ready, you can use the following AWS CLI command:

$ aws ec2 describe-instance-status --instance-ids YourInstanceId

At this point, you now have at least one Amazon EC2 instance you can use to configure Systems Manager.

Step 2: Configure Systems Manager

In this section, I show you how to configure and use Systems Manager to apply operating system patches to your Amazon EC2 instances, and how to manage patch compliance.

To start, I provide some background information about Systems Manager. Then, I cover how to:

  1. Create the Systems Manager IAM role so that Systems Manager is able to perform patch operations.
  2. Create a Systems Manager patch baseline and associate it with your instance to define which patches Systems Manager should apply.
  3. Define a maintenance window to make sure Systems Manager patches your instance when you tell it to.
  4. Monitor patch compliance to verify the patch state of your instances.

You must meet two prerequisites to use Systems Manager to apply operating system patches. First, you must attach the IAM role you created in the previous section, EC2SSM, to your Amazon EC2 instance. Second, you must install the Systems Manager agent on your Amazon EC2 instance. If you have used a recent Amazon Linux AMI, Amazon has already installed the Systems Manager agent on your Amazon EC2 instance. You can confirm this by logging in to an Amazon EC2 instance and checking the Systems Manager agent log files that are located at /var/log/amazon/ssm/.

To install the Systems Manager agent on an instance that does not have the agent preinstalled or if you want to use the Systems Manager agent on your on-premises servers, see Installing and Configuring the Systems Manager Agent on Linux Instances. If you forgot to attach the newly created role when launching your Amazon EC2 instance or if you want to attach the role to already running Amazon EC2 instances, see Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI or use the AWS Management Console.

A. Create the Systems Manager IAM role

For a maintenance window to be able to run any tasks, you must create a new role for Systems Manager. This role is a different kind of role than the one you created earlier: this role will be used by Systems Manager instead of Amazon EC2. Earlier, you created the role, EC2SSM, with the policy, AmazonEC2RoleforSSM, which allowed the Systems Manager agent on your instance to communicate with Systems Manager. In this section, you need a new role with the policy, AmazonSSMMaintenanceWindowRole, so that the Systems Manager service can execute commands on your instance.

To create the new IAM role for Systems Manager:

  1. Create a JSON file named trustpolicy-maintenancewindowrole.json that contains the following trust policy. This policy describes which principal is allowed to assume the role you are going to create. This trust policy allows not only Amazon EC2 to assume this role, but also Systems Manager.
    {
       "Version":"2012-10-17",
       "Statement":[
          {
             "Sid":"",
             "Effect":"Allow",
             "Principal":{
                "Service":[
                   "ec2.amazonaws.com",
                   "ssm.amazonaws.com"
               ]
             },
             "Action":"sts:AssumeRole"
          }
       ]
    }

  2. Use the following command to create a role named MaintenanceWindowRole that has the AWS managed policy, AmazonSSMMaintenanceWindowRole, attached to it. If the command is successful, it generates JSON output that describes the role and its parameters.
    $ aws iam create-role --role-name MaintenanceWindowRole --assume-role-policy-document file://trustpolicy-maintenancewindowrole.json

  3. Use the following command to attach the AWS managed IAM policy, AmazonSSMMaintenanceWindowRole, to your newly created role.
    $ aws iam attach-role-policy --role-name MaintenanceWindowRole --policy-arn arn:aws:iam::aws:policy/service-role/AmazonSSMMaintenanceWindowRole
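
Optionally, you can confirm the attachment by listing the policies on the role. This assumes the MaintenanceWindowRole name used above:

$ aws iam list-attached-role-policies --role-name MaintenanceWindowRole

The output should list AmazonSSMMaintenanceWindowRole as the attached policy.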

B. Create a Systems Manager patch baseline and associate it with your instance

Next, you will create a Systems Manager patch baseline and associate it with your Amazon EC2 instance. A patch baseline defines which patches Systems Manager should apply to your instance. Before you can associate the patch baseline with your instance, though, you must determine if Systems Manager recognizes your Amazon EC2 instance. Use the following command to list all instances managed by Systems Manager. The --filters option ensures you look only for your newly created Amazon EC2 instance.

$ aws ssm describe-instance-information --filters Key=InstanceIds,Values=YourInstanceId

{
    "InstanceInformationList": [
        {
            "IsLatestVersion": true,
            "ComputerName": "ip-10-50-2-245",
            "PingStatus": "Online",
            "InstanceId": "YourInstanceId",
            "IPAddress": "10.50.2.245",
            "ResourceType": "EC2Instance",
            "AgentVersion": "2.2.120.0",
            "PlatformVersion": "2017.09",
            "PlatformName": "Amazon Linux AMI",
            "PlatformType": "Linux",
            "LastPingDateTime": 1515759143.826
        }
    ]
}

If your instance is missing from the list, verify that:

  1. Your instance is running.
  2. You attached the Systems Manager IAM role, EC2SSM.
  3. You deployed a NAT gateway in your public subnet to ensure your VPC reflects the diagram shown earlier in this post so that the Systems Manager agent can connect to the Systems Manager internet endpoint.
  4. The Systems Manager agent logs don’t include any unaddressed errors.

Now that you have checked that Systems Manager can manage your Amazon EC2 instance, it is time to create a patch baseline. With a patch baseline, you define which patches are approved to be installed on all Amazon EC2 instances associated with the patch baseline. The Patch Group resource tag you defined earlier will determine to which patch group an instance belongs. If you do not specifically define a patch baseline, the default AWS-managed patch baseline is used.

To create a patch baseline:

  1. Use the following command to create a patch baseline named AmazonLinuxServers. With approval rules, you can determine the approved patches that will be included in your patch baseline. In this example, you add all Critical severity patches to the patch baseline as soon as they are released, by setting the Auto approval delay to 0 days. By setting the Auto approval delay to 2 days, you add to this patch baseline the Important, Medium, and Low severity patches two days after they are released.
    $ aws ssm create-patch-baseline --name "AmazonLinuxServers" --description "Baseline containing all updates for Amazon Linux" --operating-system AMAZON_LINUX --approval-rules "PatchRules=[{PatchFilterGroup={PatchFilters=[{Values=[Critical],Key=SEVERITY}]},ApproveAfterDays=0,ComplianceLevel=CRITICAL},{PatchFilterGroup={PatchFilters=[{Values=[Important,Medium,Low],Key=SEVERITY}]},ApproveAfterDays=2,ComplianceLevel=HIGH}]"
    
    {
        "BaselineId": "YourBaselineId"
    }

  2. Use the following command to register the patch baseline you created with your instance. To do so, you use the Patch Group tag that you added to your Amazon EC2 instance.
    $ aws ssm register-patch-baseline-for-patch-group --baseline-id YourPatchBaselineId --patch-group "Linux Servers"
    
    {
        "PatchGroup": "Linux Servers",
        "BaselineId": "YourBaselineId"
    }
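
Optionally, you can verify both steps from the CLI. The first command below returns the details of the baseline you created (assuming the BaselineId returned earlier), and the second shows which baseline each patch group is mapped to:

$ aws ssm get-patch-baseline --baseline-id YourBaselineId

$ aws ssm describe-patch-groups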

C. Define a maintenance window

Now that you have successfully set up a role, created a patch baseline, and registered your Amazon EC2 instance with your patch baseline, you will define a maintenance window so that you can control when your Amazon EC2 instances will receive patches. By creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

To define a maintenance window:

  1. Use the following command to define a maintenance window. In this example command, the maintenance window will start every Saturday at 10:00 P.M. UTC. It will have a duration of 4 hours and will not start any new tasks 1 hour before the end of the maintenance window.
    $ aws ssm create-maintenance-window --name SaturdayNight --schedule "cron(0 0 22 ? * SAT *)" --duration 4 --cutoff 1 --allow-unassociated-targets
    
    {
        "WindowId": "YourMaintenanceWindowId"
    }

For more information about defining a cron-based schedule for maintenance windows, see Cron and Rate Expressions for Maintenance Windows.

  2. After defining the maintenance window, you must register the Amazon EC2 instance with the maintenance window so that Systems Manager knows which Amazon EC2 instance it should patch in this maintenance window. You can register the instance by using the same Patch Group tag you used to associate the Amazon EC2 instance with the AWS-provided patch baseline, as shown in the following command.
    $ aws ssm register-target-with-maintenance-window --window-id YourMaintenanceWindowId --resource-type INSTANCE --targets "Key=tag:Patch Group,Values=Linux Servers"
    
    {
        "WindowTargetId": "YourWindowTargetId"
    }

  3. Assign a task to the maintenance window that will install the operating system patches on your Amazon EC2 instance. The command, shown after the following option descriptions, uses these options.
    1. name is the name of your task and is optional. I named mine Patching.
    2. task-arn is the name of the task document you want to run.
    3. max-concurrency allows you to specify how many of your Amazon EC2 instances Systems Manager should patch at the same time. max-errors determines when Systems Manager should abort the task. For patching, this number should not be too low, because you do not want your entire patch task to stop on all instances if one instance fails. You can set this, for example, to 20%.
    4. service-role-arn is the Amazon Resource Name (ARN) of the AmazonSSMMaintenanceWindowRole role you created earlier in this blog post.
    5. task-invocation-parameters defines the parameters that are specific to the AWS-RunPatchBaseline task document and tells Systems Manager that you want to install patches with a timeout of 600 seconds (10 minutes).
      $ aws ssm register-task-with-maintenance-window --name "Patching" --window-id "YourMaintenanceWindowId" --targets "Key=WindowTargetIds,Values=YourWindowTargetId" --task-arn AWS-RunPatchBaseline --service-role-arn "arn:aws:iam::123456789012:role/MaintenanceWindowRole" --task-type "RUN_COMMAND" --task-invocation-parameters "RunCommand={Comment=,TimeoutSeconds=600,Parameters={SnapshotId=[''],Operation=[Install]}}" --max-concurrency "500" --max-errors "20%"
      
      {
          "WindowTaskId": "YourWindowTaskId"
      }

Now, you must wait for the maintenance window to run at least once according to the schedule you defined earlier. After the maintenance window has run, you can check the status of any maintenance tasks Systems Manager has performed by using the following command.

$ aws ssm describe-maintenance-window-executions --window-id "YourMaintenanceWindowId"

{
    "WindowExecutions": [
        {
            "Status": "SUCCESS",
            "WindowId": "YourMaintenanceWindowId",
            "WindowExecutionId": "b594984b-430e-4ffa-a44c-a2e171de9dd3",
            "EndTime": 1515766467.487,
            "StartTime": 1515766457.691
        }
    ]
}
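
To drill down into a specific run, you can list the tasks Systems Manager executed within it. A minimal example, assuming the WindowExecutionId value from the previous output:

$ aws ssm describe-maintenance-window-execution-tasks --window-execution-id YourWindowExecutionId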

D. Monitor patch compliance

You also can see the overall patch compliance of all Amazon EC2 instances using the following command in the AWS CLI.

$ aws ssm list-compliance-summaries

This command returns JSON output showing, for each compliance category, the number of instances that are compliant and the number that are not.
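
If you want per-instance detail rather than a fleet-wide summary, you can also query the patch state of a single instance. This example assumes the same YourInstanceId placeholder used throughout this post:

$ aws ssm describe-instance-patch-states --instance-ids YourInstanceId

The output includes counts of installed, missing, and failed patches for that instance.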

You also can see overall patch compliance by choosing Compliance under Insights in the navigation pane of the Systems Manager console. You will see a visual representation of how many Amazon EC2 instances are up to date, how many Amazon EC2 instances are noncompliant, and how many Amazon EC2 instances are compliant in relation to the earlier defined patch baseline.

Screenshot of the Compliance page of the Systems Manager console

In this section, you have set everything up for patch management on your instance. Now you know how to patch your Amazon EC2 instance in a controlled manner and how to check if your Amazon EC2 instance is compliant with the patch baseline you have defined. Of course, I recommend that you apply these steps to all Amazon EC2 instances you manage.

Summary

In this blog post, I showed how to use Systems Manager to create a patch baseline and maintenance window to keep your Amazon EC2 Linux instances up to date with the latest security patches. Remember that by creating multiple maintenance windows and assigning them to different patch groups, you can make sure your Amazon EC2 instances do not all reboot at the same time.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing any part of this solution, start a new thread on the Amazon EC2 forum or contact AWS Support.

– Koen

‘Pirate’ Kodi Addon Devs & Distributors Told to Cease-and-Desist

Post Syndicated from Andy original https://torrentfreak.com/pirate-kodi-addon-devs-distributors-told-to-cease-and-desist-180214/

Last November, following a year of upheaval for third-party addon creators and distributors, yet more turmoil hit the community in the form of threats from the world’s most powerful anti-piracy coalition – the Alliance for Creativity and Entertainment (ACE).

Comprising 30 companies including the studios of the MPAA, Amazon, Netflix, CBS, HBO, BBC, Sky, Bell Canada, Hulu, Lionsgate, Foxtel, Village Roadshow, and many more, ACE warned several developers to shut down – or else.

The letter: shut down – or else

Now it appears that ACE is on the warpath again, this time targeting a broader range of individuals involved in the Kodi addon scene, from developers and distributors to those involved in the production of how-to videos on YouTube.

The first report of action came from TVAddons, who noted that the lead developer at the Noobs and Nerds repository had been targeted with a cease-and-desist notice, adding that people from the site had been “visited at their homes.”

As seen in the image below, the Noobs and Nerds website is currently down. The site’s Twitter account has also been disabled.

Noobs and Nerds – gone

While TVAddons couldn’t precisely confirm the source of the threat, information gathered from individuals involved in the addon scene all points to the involvement of ACE.

In particular, a man known online as Teverz, who develops his own builds, runs a repo, and creates Kodi-themed YouTube videos, confirmed that ACE had been in touch.

An apparently unconcerned Teverz….

“I am not a dev so they really don’t scare me lmao,” he added.

Teverz claims to be from Canada and it appears that others in the country are also facing cease and desist notices. An individual known as Doggmatic, who also identifies as Canadian and has Kodi builds under his belt, says he too was targeted.

Another target in Canada

Doggmatic, who appears to be part of the Illuminati repo, says he had someone call the people who sent the cease-and-desist but like Teverz, he doesn’t seem overly concerned, at least for now.

“I have a legal representative calling them. The letters they sent aren’t legal documents. No lawyer signed them and no law firm mentioned,” Doggmatic said.

But the threats don’t stop there. Blamo, the developer of the Neptune Rising addon accessible from the Blamo repo, also claims to have been threatened.

SpinzTV, who offers unofficial Kodi builds and an associated repository, is also under the spotlight. Unlike his Canadian counterparts, he has already thrown in the towel, according to a short announcement on Twitter.

For SpinzTV it’s all over…

TorrentFreak contacted the Alliance for Creativity and Entertainment, asking them if they could confirm the actions and provide any additional details. At the time of publication they had no information for us but we’ll update if and when that comes in.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Early Challenges: Making Critical Hires

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/early-challenges-making-critical-hires/

row of potential employee hires sitting waiting for an interview

In 2009, Google disclosed that they had 400 recruiters on staff working to hire nearly 10,000 people. Someday, that might be your challenge, but most companies in their early days are looking to hire a handful of people — the right people — each year. Assuming you are closer to startup stage than Google stage, let’s look at who you need to hire, when to hire them, where to find them (and how to help them find you), and how to get them to join your company.

Who Should Be Your First Hires

In later stage companies, the roles in the company have been well fleshed out, don’t change often, and each role can be segmented to focus on a specific area. A large company may have an entire department focused on just cubicle layout; at a smaller company you may not have a single person whose actual job encompasses all of facilities. At Backblaze, our CTO has a passion and knack for facilities and mostly led that charge. Also, the needs of a smaller company are quick to change. One of our first hires was a QA person, Sean, who ended up being 100% focused on data center infrastructure. In the early stage, things can shift quite a bit and you need people that are broadly capable, flexible, and most of all willing to pitch in where needed.

That said, there are times you may need an expert. At a previous company we hired Jon, a PhD in Bayesian statistics, because we needed algorithmic analysis for spam fighting. However, even that person was not only able and willing to do the math, but also code, and to not only focus on Bayesian statistics but explore a plethora of spam fighting options.

When To Hire

If you’ve raised a lot of cash and are willing to burn it with mistakes, you can guess at all the roles you might need and start hiring for them. No judgement: that’s a reasonable strategy if you’re cash-rich and time-poor.

If your cash is limited, try to see what you and your team are already doing and then hire people to take those jobs. It may sound counterintuitive, but if you’re already doing it presumably it needs to be done, you have a good sense of the type of skills required to do it, and you can bring someone on-board and get them up to speed quickly. That then frees you up to focus on tasks that can’t be done by someone else. At Backblaze, I ran marketing internally for years before hiring a VP of Marketing, making it easier for me to know what we needed. Once I was hiring, my primary goal was to find someone I could trust to take that role completely off of me so I could focus solely on my CEO duties.

Where To Find the Right People

Finding great people is always difficult, particularly when the skillsets you’re looking for are highly in-demand by larger companies with lots of cash and cachet. You, however, have one massive advantage: you need to hire 5 people, not 5,000.

People You Worked With

The absolute best people to hire are ones you’ve worked with before, whom you already know are good in a work situation. Consider your last job, the one before, and the one before that. A significant number of the people we recruited at Backblaze came from our previous startup MailFrontier. We knew what they could do and how they would fit into the culture, and they knew us and thus could quickly meld into the environment. If you didn’t have a previous job, consider people you went to school with or perhaps individuals with whom you’ve done projects previously.

People You Know

Hiring friends, family, and others can be risky, but should be considered. Sometimes a friend can be a “great buddy,” but is not able to do the job or isn’t a good fit for the organization. Having to let go of someone who is a friend or family member can be rough. Have the conversation up front with them about that possibility, so you have the ability to stay friends if the position doesn’t work out. Having said that, if you get along with someone as a friend, that’s one critical component of succeeding together at work. At Backblaze we’ve hired a number of people successfully that were friends of someone in the organization.

Friends Of People You Know

Your network is likely larger than you imagine. Your employees, investors, advisors, spouses, friends, and other folks all know people who might be a great fit for you. Make sure they know the roles you’re hiring for and ask them if they know anyone that would fit. Search LinkedIn for the titles you’re looking for and see who comes up; if they’re a 2nd degree connection, ask your connection for an introduction.

People You Know About

Sometimes the person you want isn’t someone anyone knows, but you may have read something they wrote, used a product they’ve built, or seen a video of a presentation they gave. Reach out. You may get a great hire: worst case, you’ll let them know they were appreciated, and make them aware of your organization.

Other Places to Find People

There are a million other places to find people, including job sites, community groups, Facebook/Twitter, GitHub, and more. Consider where the people you’re looking for are likely to congregate online and in person.

A Comment on Diversity

Hiring “People You Know” can often result in “Hiring People Like You” with the same workplace experiences, culture, background, and perceptions. Some studies have shown [1, 2, 3, 4] that homogeneous groups deliver faster, while heterogeneous groups are more creative. Also, “Hiring People Like You” often propagates the lack of women and minorities in tech and leadership positions in general. When looking for people you know, take care not to discount those who don’t share your cultural background.

Helping People To Find You

Reaching out proactively to people is the most direct way to find someone, but you want potential hires coming to you as well. To do this, they have to a) be aware of you, b) know you have a role they’re interested in, and c) think they would want to work there. Let’s tackle a) and b) first below.

Your Blog

I started writing our blog before we launched the product and talked about anything I found interesting related to our space. For several years now our team has owned the content on the blog and in 2017 over 1.5 million people read it. Each time we have a position open it’s published to the blog. If someone finds reading about backup and storage interesting, perhaps they’d want to dig in deeper from the inside. Many of the people we’ve recruited have mentioned reading the blog as either how they found us or as a factor in why they wanted to work here.
[BTW, this is Gleb’s 200th post on Backblaze’s blog. The first was in 2008. — Editor]

Your Email List

In addition to the emails our blog subscribers receive, we send regular emails to our customers, partners, and prospects. These are largely focused on content we think is directly useful or interesting for them. However, once every few months we include a small mention that we’re hiring, and the positions we’re looking for. Often a small blurb is all you need to capture people’s imaginations whether they might find the jobs interesting or can think of someone that might fit the bill.

Your Social Involvement

Whether it’s Twitter or Facebook, Hacker News or Slashdot, your potential hires are engaging in various communities. Being socially involved helps make people aware of you, reminds them of you when they’re considering a job, and paints a picture of what working with you and your company would be like. Adam was in a Reddit thread where we were discussing our Storage Pods, and that interaction was ultimately part of the reason he left Apple to come to Backblaze.

Convincing People To Join

Once you’ve found someone or they’ve found you, how do you convince them to join? They may be currently employed, have other offers, or have to relocate. Again, while the biggest companies have a number of advantages, you might have more unique advantages than you realize.

Why Should They Join You

Here are a set of items that you may be able to offer which larger organizations might not:

Role: Consider the strengths of the role. Perhaps it will have broader scope? More visibility at the executive level? No micromanagement? Ability to take risks? Option to create their own role?

Compensation: In addition to salary, will their options potentially be worth more since they’re getting in early? Can they trade-off salary for more options? Do they get option refreshes?

Benefits: In addition to healthcare, food, and 401(k) plans, are there unique benefits of your company? One company I knew took the entire team for a one-month working retreat abroad each year.

Location: Most people prefer to work close to home. If you’re located outside of the San Francisco Bay Area, you might be at a disadvantage for not being in the heart of tech. But if you find employees close to you you’ve got a huge advantage. Sometimes it’s micro; even in the Bay Area the difference of 5 miles can save 20 minutes each way every day. We located the Backblaze headquarters in San Mateo, a middle-ground that made it accessible to those coming from San Jose and San Francisco. We also chose a downtown location near a train, restaurants, and cafes: all to make it easier and more pleasant. Also, are you flexible in letting your employees work remotely? Our systems administrator Elliott is about to embark on a long-term cross-country journey working from an RV.

Environment: Open office, cubicle, cafe, work-from-home? Loud/quiet? Social or focused? 24×7 or work-life balance? Different environments appeal to different people.

Team: Who will they be working with? A company with 100,000 people might have 100 brilliant ones you’d want to work with, but ultimately we work with our core team. Who will your prospective hires be working with?

Market: Some people are passionate about gaming, others biotech, still others food. The market you’re targeting will get different people excited.

Product: Have an amazing product people love? Highlight that. If you’re lucky, your potential hire is already a fan.

Mission: Curing cancer, making people happy, and other company missions inspire people to strive to be part of the journey. Our mission is to make storing data astonishingly easy and low-cost. If you care about data, information, knowledge, and progress, our mission helps drive all of them.

Culture: I left this for last, but believe it’s the most important. What is the culture of your company? Finding people who want to work in the culture of your organization is critical. If they like the culture, they’ll fit and continue it. We’ve worked hard to build a culture that’s collaborative, friendly, supportive, and open; one in which people like coming to work. For example, the five founders started with (and still have) the same compensation and equity. That started a culture of “we’re all in this together.” Build a culture that will attract the people you want, and convey what the culture is.

Writing The Job Description

Most job descriptions focus on all the requirements the candidate must meet. While important to communicate, the job description should first sell the job. Why would the appropriate candidate want the job? Then share some of the requirements you think are critical. Remember that people read not just what you say but how you say it. Try to write in a way that conveys what it is like to actually be at the company. Ahin, our VP of Marketing, said the job description itself was one of the things that attracted him to the company.

Orchestrating Interviews

Much can be said about interviewing well. I’m just going to say this: make sure that everyone who is interviewing knows that their job is not only to evaluate the candidate, but give them a sense of the culture, and sell them on the company. At Backblaze, we often have one person interview core prospects solely for company/culture fit.

Onboarding

Hiring success shouldn’t be defined by finding and hiring the right person, but instead by the right person being successful and happy within the organization. Ensure someone (usually their manager) provides them guidance on what they should be concentrating on doing during their first day, first week, and thereafter. Giving new employees opportunities and guidance so that they can achieve early wins and feel socially integrated into the company does wonders for bringing people on board smoothly.

In Closing

Our Director of Production Systems, Chris, said to me the other day that he looks for companies where he can work on “interesting problems with nice people.” I’m hoping you’ll find your own version of that and find this post useful in looking for your early and critical hires.

Of course, I’d be remiss if I didn’t say, if you know of anyone looking for a place with “interesting problems with nice people,” Backblaze is hiring. 😉

The post Early Challenges: Making Critical Hires appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Pirate Site Blockades Enter Germany With Kinox.to as First Target

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-site-blockades-enter-germany-with-kinox-to-as-first-target-180213/

Website blocking has become one of the leading anti-piracy mechanisms of recent years.

It is particularly prevalent across Europe, where thousands of sites are blocked by ISPs following court orders.

This week, these blocking efforts also reached Germany. Following a provisional injunction issued by the federal court in Munich, Internet provider Vodafone must block access to the popular streaming portal Kinox.to.

The injunction was issued on behalf of the German film production and distribution company Constantin Film. The company complained that Kinox facilitates copyright infringement and cited a recent order from the European Court of Justice in its defense, Golem reports.

While these types of blockades are common in Europe, they’re a new sight in Germany. Vodafone users who attempt to access the Kinox site will now be welcomed with a blocking notification instead.

“This portal is temporarily unavailable due to a copyright claim,” it reads, translated from German.

Blocked

The Kinox streaming site has been a thorn in the side of German authorities and copyright holders for a long time. Last year, one of the site’s admins was detained in Kosovo after a three-year manhunt, but despite these and other actions, the site remains online.

With the blocking efforts, Constantin Film hopes to make it harder for people to access the site, although this measure is also limited.

For now, it seems to be a simple DNS blockade, which means that people can bypass it relatively easily by switching to a free alternative DNS provider such as Google DNS or OpenDNS.

And there are other workarounds as well, as operators of Kinox point out in a message on their homepage.

“Vodafone User: Use the public Google DNS server: 8.8.8.8, that goes the .TO domain again! Otherwise, a VPN or the free Tor Browser can be used!” they write.

While the measure may not be foolproof, the current order is certainly significant. Previously, German courts had denied similar blocking orders based on various arguments. This means that more blocking efforts may be on the horizon.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Hosting Provider Steadfast Maintains DMCA Safe Harbor Defense For Trial

Post Syndicated from Ernesto original https://torrentfreak.com/hosting-provider-steadfast-maintains-dmca-safe-harbor-defense-for-trial-180212/

Two years ago, adult entertainment publisher ALS Scan dragged several third-party Internet services to court.

The company targeted several companies including CDN provider CloudFlare and the Chicago-based hosting company Steadfast, accusing them of copyright infringement because they offered services to pirate sites.

The case against Steadfast is getting close to trial and to start with an advantage, ALS Scan recently asked the court for partial summary judgment, determining that the hosting company contributed to copyright infringement and that it has no safe harbor protection.

ALS argued that Steadfast refused to shut down the servers of the image sharing platform Imagebam.com, which was operated by its client Flixya. ALS Scan described the site as a repeat offender, as it had been targeted with dozens of DMCA notices, and accused Steadfast of turning a blind eye to the situation.

Steadfast, for its part, fiercely denied the allegations. The hosting provider admitted that it leased servers to Flixya for ten years but said that it forwarded all notices to its client. The hosting company could not address individual infringements, other than shutting down the entire site, which would have been disproportionate in their view.

A few days ago California District Court Judge George Wu ruled on the matter, denying ALS’s motion for summary judgment.

Both sides made sensible arguments on the contributory infringement issue, but it is by no means undisputed that the hosting provider ‘contributed’ to the infringing activities. The court, therefore, left this question open for the jury to determine at trial.

“Ultimately, both sides have raised triable issues of fact with respect to material contribution. As a result, the Court would deny Plaintiff’s Motion,” Judge Wu writes.

ALS also sought summary judgment on the DMCA safe harbor protection issue, but the court denied this request as well. While it’s clear that the hosting company never terminated a customer for repeat infringements, it’s not clear whether it was ever in a situation where it needed to.

The DMCA requires Internet services to implement a meaningful repeat infringer policy, but in this case, Steadfast’s client Imagebam reportedly had a takedown policy of its own, which complicates the issue.

“While the fact Steadfast has never terminated one of its own customers for infringement is potentially damaging to its ability to fit the safe harbor, Plaintiff has not established that Steadfast faced a situation requiring it to terminate one of its users,” Judge Wu writes.

“Even in the present case it is unclear that Steadfast needed to terminate Flixya’s account given Flixya itself had a policy that was arguably successful at removing infringing images from imagebam.com.”

Judge Wu adds that safe harbor defenses are generally left to the jury, and this is what he decided as well.

As a result, ALS’s entire motion for summary judgment is denied. This is good news for Steadfast, who will have their safe harbor defense available at the upcoming trial. However, they will likely celebrate this win with caution, as the jury will make the ultimate decision.

A copy of the court’s order is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Internet Security Threats at the Olympics

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/internet_securi.html

There are a lot:

The cybersecurity company McAfee recently uncovered a cyber operation, dubbed Operation GoldDragon, attacking South Korean organizations related to the Winter Olympics. McAfee believes the attack came from a nation state that speaks Korean, although it has no definitive proof that this is a North Korean operation. The victim organizations include ice hockey teams, ski suppliers, ski resorts, tourist organizations in Pyeongchang, and departments organizing the Pyeongchang Olympics.

Meanwhile, a Russia-linked cyber attack has already stolen and leaked documents from other Olympic organizations. The so-called Fancy Bear group, or APT28, began its operations in late 2017, according to Trend Micro and ThreatConnect, two private cybersecurity firms, eventually publishing documents in 2018 outlining the political tensions between IOC officials and World Anti-Doping Agency (WADA) officials who are policing Olympic athletes. It also released documents specifying exceptions to anti-doping regulations granted to specific athletes (for instance, one athlete was given an exception because of his asthma medication). The most recent Fancy Bear leak exposed details about a Canadian pole vaulter’s positive results for cocaine. This group has targeted WADA in the past, specifically during the 2016 Rio de Janeiro Olympics. Assuming the attribution is right, the action appears to be Russian retaliation for the punitive steps against Russia.

A senior analyst at McAfee warned that the Olympics may experience more cyber attacks before closing ceremonies. A researcher at ThreatConnect asserted that organizations like Fancy Bear have no reason to stop operations just because they’ve already stolen and released documents. Even the United States Department of Homeland Security has issued a notice to those traveling to South Korea to remind them to protect themselves against cyber risks.

One presumes the Olympics network is sufficiently protected against the more pedestrian DDoS attacks and the like, but who knows?

EDITED TO ADD: There was already one attack.

US Online Piracy Lawsuits Skyrocket in the New Year

Post Syndicated from Ernesto original https://torrentfreak.com/u-s-online-piracy-lawsuits-skyrocket-in-the-new-year-180211/

Since the turn of the last decade, numerous people have been sued for illegal file-sharing in US courts.

Initially, these lawsuits targeted hundreds or thousands of BitTorrent users per case, but this practice has been rooted out since. Now, most file-sharing cases target a single person, up to a dozen or two at most.

While there may be fewer defendants, there are still plenty of lawsuits filed every month. These generally come from a small group of companies, regularly referred to as “copyright trolls,” who are looking to settle with the alleged pirates.

According to Lex Machina, there were 1,019 file-sharing cases filed in the United States last year, which is an average of 85 per month. More than half of these came from adult entertainment outfit Malibu Media (X-Art), which alone was good for 550 lawsuits.

While those are decent numbers, they could easily be shattered this year. Data collected by TorrentFreak shows that during the first month of 2018, three copyright holders filed a total of 286 lawsuits against alleged pirates. That’s three times more than the monthly average for 2017.

As expected, Malibu Media takes the crown with 138 lawsuits, but not by a large margin. Strike 3 Holdings, which distributes its adult videos via the Blacked, Tushy, and Vixen websites, comes in second place with 133 cases.

Some Malibu Media cases

While Strike 3 Holdings is a relative newcomer, their cases follow a similar pattern. There are also clear links to Malibu Media, as one of the company’s former lawyers, Emilie Kennedy, now works as in-house counsel at Strike 3.

The only non-adult copyright holder that filed cases against alleged BitTorrent pirates was Bodyguard Productions. The company filed 15 cases against downloaders of The Hitman’s Bodyguard, totaling a few dozen defendants.

While these numbers are significant, it’s hard to predict whether the increase will persist. Lawsuits targeted at BitTorrent users often come in waves, and the same companies that flooded the courts with cases last month could easily take a break the next.

While copyright holders have every right to go after people who share their work without permission, these types of cases are not without controversy.

Several judges have used strong terms, including “harassment,” to describe some of the tactics that are used, and the IP-address evidence is not always trusted either.

That said, there’s no evidence that Malibu Media and others are done yet.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Sharing Secrets with AWS Lambda Using AWS Systems Manager Parameter Store

Post Syndicated from Chris Munns original https://aws.amazon.com/blogs/compute/sharing-secrets-with-aws-lambda-using-aws-systems-manager-parameter-store/

This post courtesy of Roberto Iturralde, Sr. Application Developer- AWS Professional Services

Application architects are faced with key decisions throughout the process of designing and implementing their systems. One decision common to nearly all solutions is how to manage the storage and access rights of application configuration. Shared configuration should be stored centrally and securely with each system component having access only to the properties that it needs for functioning.

With AWS Systems Manager Parameter Store, developers have access to central, secure, durable, and highly available storage for application configuration and secrets. Parameter Store also integrates with AWS Identity and Access Management (IAM), allowing fine-grained access control to individual parameters or branches of a hierarchical tree.

This post demonstrates how to create and access shared configurations in Parameter Store from AWS Lambda. Both encrypted and plaintext parameter values are stored with only the Lambda function having permissions to decrypt the secrets. You also use AWS X-Ray to profile the function.

Solution overview

This example is made up of the following components:

  • An AWS SAM template that defines:
    • A Lambda function and its permissions
    • An unencrypted Parameter Store parameter that the Lambda function loads
    • A KMS key that only the Lambda function can access. You use this key to create an encrypted parameter later.
  • Lambda function code in Python 3.6 that demonstrates how to load values from Parameter Store at function initialization for reuse across invocations.

Launch the AWS SAM template

To create the resources shown in this post, you can download the SAM template or choose the button to launch the stack. The template requires one parameter, an IAM user name, which is the name of the IAM user to be the admin of the KMS key that you create. In order to perform the steps listed in this post, this IAM user will need permissions to execute Lambda functions, create Parameter Store parameters, administer keys in KMS, and view the X-Ray console. If you have these privileges in your IAM user account, you can use your own account to complete the walkthrough. You cannot use the root user to administer the KMS keys.

SAM template resources

The following sections show the code for the resources defined in the template.
Lambda function

ParameterStoreBlogFunctionDev:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: 'ParameterStoreBlogFunctionDev'
      Description: 'Integrating lambda with Parameter Store'
      Handler: 'lambda_function.lambda_handler'
      Role: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
      CodeUri: './code'
      Environment:
        Variables:
          ENV: 'dev'
          APP_CONFIG_PATH: 'parameterStoreBlog'
          AWS_XRAY_TRACING_NAME: 'ParameterStoreBlogFunctionDev'
      Runtime: 'python3.6'
      Timeout: 5
      Tracing: 'Active'

  ParameterStoreBlogFunctionRoleDev:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          -
            Effect: Allow
            Principal:
              Service:
                - 'lambda.amazonaws.com'
            Action:
              - 'sts:AssumeRole'
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
      Policies:
        -
          PolicyName: 'ParameterStoreBlogDevParameterAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              -
                Effect: Allow
                Action:
                  - 'ssm:GetParameter*'
                Resource: !Sub 'arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/dev/parameterStoreBlog*'
        -
          PolicyName: 'ParameterStoreBlogDevXRayAccess'
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              -
                Effect: Allow
                Action:
                  - 'xray:PutTraceSegments'
                  - 'xray:PutTelemetryRecords'
                Resource: '*'

In this YAML code, you define a Lambda function named ParameterStoreBlogFunctionDev using the SAM AWS::Serverless::Function type. The environment variables for this function include the ENV (dev) and the APP_CONFIG_PATH where you find the configuration for this app in Parameter Store. X-Ray tracing is also enabled for profiling later.

The IAM role for this function extends the AWSLambdaBasicExecutionRole by adding IAM policies that grant the function permissions to write to X-Ray and get parameters from Parameter Store, limited to paths under /dev/parameterStoreBlog*.
Parameter Store parameter

SimpleParameter:
    Type: AWS::SSM::Parameter
    Properties:
      Name: '/dev/parameterStoreBlog/appConfig'
      Description: 'Sample dev config values for my app'
      Type: String
      Value: '{"key1": "value1","key2": "value2","key3": "value3"}'

This YAML code creates a plaintext string parameter in Parameter Store in a path that your Lambda function can access.
KMS encryption key

ParameterStoreBlogDevEncryptionKeyAlias:
    Type: AWS::KMS::Alias
    Properties:
      AliasName: 'alias/ParameterStoreBlogKeyDev'
      TargetKeyId: !Ref ParameterStoreBlogDevEncryptionKey

  ParameterStoreBlogDevEncryptionKey:
    Type: AWS::KMS::Key
    Properties:
      Description: 'Encryption key for secret config values for the Parameter Store blog post'
      Enabled: True
      EnableKeyRotation: False
      KeyPolicy:
        Version: '2012-10-17'
        Id: 'key-default-1'
        Statement:
          -
            Sid: 'Allow administration of the key & encryption of new values'
            Effect: Allow
            Principal:
              AWS:
                - !Sub 'arn:aws:iam::${AWS::AccountId}:user/${IAMUsername}'
            Action:
              - 'kms:Create*'
              - 'kms:Encrypt'
              - 'kms:Describe*'
              - 'kms:Enable*'
              - 'kms:List*'
              - 'kms:Put*'
              - 'kms:Update*'
              - 'kms:Revoke*'
              - 'kms:Disable*'
              - 'kms:Get*'
              - 'kms:Delete*'
              - 'kms:ScheduleKeyDeletion'
              - 'kms:CancelKeyDeletion'
            Resource: '*'
          -
            Sid: 'Allow use of the key'
            Effect: Allow
            Principal:
              AWS: !GetAtt ParameterStoreBlogFunctionRoleDev.Arn
            Action:
              - 'kms:Encrypt'
              - 'kms:Decrypt'
              - 'kms:ReEncrypt*'
              - 'kms:GenerateDataKey*'
              - 'kms:DescribeKey'
            Resource: '*'

This YAML code creates an encryption key with a key policy with two statements.

The first statement allows a given user (${IAMUsername}) to administer the key. Importantly, this includes the ability to encrypt values using this key and disable or delete this key, but does not allow the administrator to decrypt values that were encrypted with this key.

The second statement grants your Lambda function permission to encrypt and decrypt values using this key. The alias for this key in KMS is ParameterStoreBlogKeyDev, which is how you reference it later.

Lambda function

Here I walk you through the Lambda function code.

import os, traceback, json, configparser, boto3
from aws_xray_sdk.core import patch_all
patch_all()

# Initialize boto3 client at global scope for connection reuse
client = boto3.client('ssm')
env = os.environ['ENV']
app_config_path = os.environ['APP_CONFIG_PATH']
full_config_path = '/' + env + '/' + app_config_path
# Initialize app at global scope for reuse across invocations
app = None

class MyApp:
    def __init__(self, config):
        """
        Construct new MyApp with configuration
        :param config: application configuration
        """
        self.config = config

    def get_config(self):
        return self.config

def load_config(ssm_parameter_path):
    """
    Load configparser from config stored in SSM Parameter Store
    :param ssm_parameter_path: Path to app config in SSM Parameter Store
    :return: ConfigParser holding loaded config
    """
    configuration = configparser.ConfigParser()
    try:
        # Get all parameters for this app
        param_details = client.get_parameters_by_path(
            Path=ssm_parameter_path,
            Recursive=False,
            WithDecryption=True
        )

        # Loop through the returned parameters and populate the ConfigParser
        if 'Parameters' in param_details and len(param_details.get('Parameters')) > 0:
            for param in param_details.get('Parameters'):
                param_path_array = param.get('Name').split("/")
                section_position = len(param_path_array) - 1
                section_name = param_path_array[section_position]
                config_values = json.loads(param.get('Value'))
                config_dict = {section_name: config_values}
                print("Found configuration: " + str(config_dict))
                configuration.read_dict(config_dict)

    except:
        print("Encountered an error loading config from SSM.")
        traceback.print_exc()
    finally:
        return configuration

def lambda_handler(event, context):
    global app
    # Initialize app if it doesn't yet exist
    if app is None:
        print("Loading config and creating new MyApp...")
        config = load_config(full_config_path)
        app = MyApp(config)

    return "MyApp config is " + str(app.get_config()._sections)

Beneath the import statements, you import the patch_all function from the AWS X-Ray library, which you use to patch boto3 to create X-Ray segments for all your boto3 operations.

Next, you create a boto3 SSM client at the global scope for reuse across function invocations, following Lambda best practices. Using the function environment variables, you assemble the path where you expect to find your configuration in Parameter Store. The class MyApp is meant to serve as an example of an application that would need its configuration injected at construction. In this example, you create an instance of ConfigParser, a class in Python’s standard library for handling basic configurations, to give to MyApp.

The load_config function loads all the parameters from Parameter Store at the level immediately beneath the path provided in the Lambda function environment variables. Each parameter found is put into a new section in ConfigParser. The name of the section is the name of the parameter, less the base path. In this example, the full parameter name is /dev/parameterStoreBlog/appConfig, which is put in a section named appConfig.

Finally, the lambda_handler function initializes an instance of MyApp if it doesn’t already exist, constructing it with the loaded configuration from Parameter Store. Then it simply returns the currently loaded configuration in MyApp. The impact of this design is that the configuration is only loaded from Parameter Store the first time that the Lambda function execution environment is initialized. Subsequent invocations reuse the existing instance of MyApp, resulting in improved performance. You see this in the X-Ray traces later in this post. For more advanced use cases where configuration changes need to be received immediately, you could implement an expiry policy for your configuration entries or push notifications to your function.
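
For the expiry-based approach just mentioned, a simple time-to-live check is usually enough. The following is a minimal sketch rather than part of the original sample: the CACHE_TTL_SECONDS value and the get_app helper are assumptions, and it reuses the load_config function, the MyApp class, and the app global defined above.

import time

CACHE_TTL_SECONDS = 300  # hypothetical TTL; tune to your tolerance for stale config
loaded_at = None

def get_app(config_path):
    """Return the cached MyApp, reloading its config once the TTL expires."""
    global app, loaded_at
    if app is None or loaded_at is None or time.time() - loaded_at > CACHE_TTL_SECONDS:
        # Reload from Parameter Store; load_config and MyApp are defined above
        app = MyApp(load_config(config_path))
        loaded_at = time.time()
    return app

With a helper like this, lambda_handler would call get_app(full_config_path) instead of checking the app global directly, trading an occasional Parameter Store call for fresher configuration.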

To confirm that everything was created successfully, test the function in the Lambda console.

  1. Open the Lambda console.
  2. In the navigation pane, choose Functions.
  3. In the Functions pane, filter to ParameterStoreBlogFunctionDev to find the function created by the SAM template earlier. Open the function name to view its details.
  4. On the top right of the function detail page, choose Test. You may need to create a new test event. The input JSON doesn’t matter as this function ignores the input.

After running the test, you should see output similar to the following. This demonstrates that the function successfully fetched the unencrypted configuration from Parameter Store.

Create an encrypted parameter

You currently have a simple, unencrypted parameter and a Lambda function that can access it.

Next, you create an encrypted parameter that only your Lambda function has permission to use for decryption. This limits read access for this parameter to only this Lambda function.

To follow along with this section, deploy the SAM template for this post in your account and make your IAM user name the KMS key admin mentioned earlier.

  1. In the Systems Manager console, under Shared Resources, choose Parameter Store.
  2. Choose Create Parameter.
    • For Name, enter /dev/parameterStoreBlog/appSecrets.
    • For Type, select Secure String.
    • For KMS Key ID, choose alias/ParameterStoreBlogKeyDev, which is the key that your SAM template created.
    • For Value, enter {"secretKey": "secretValue"}.
    • Choose Create Parameter.
  3. If you now try to view the value of this parameter by choosing the name of the parameter in the parameters list and then choosing Show next to the Value field, you won’t see the value appear. This is because, even though you have permission to encrypt values using this KMS key, you do not have permissions to decrypt values.
  4. In the Lambda console, run another test of your function. You now also see the secret parameter that you created and its decrypted value.

If you do not see the new parameter in the Lambda output, this may be because the Lambda execution environment is still warm from the previous test. Because the parameters are loaded at Lambda startup, you need a fresh execution environment to refresh the values.

Adjust the function timeout to a different value in the Advanced Settings at the bottom of the Lambda Configuration tab. Choose Save and test to trigger the creation of a new Lambda execution environment.
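
As an aside, if you prefer the AWS CLI to the console for creating the encrypted parameter, the equivalent put-parameter call looks like the following. This assumes the parameter name, value, and key alias from the console steps above:

$ aws ssm put-parameter --name /dev/parameterStoreBlog/appSecrets --type SecureString --key-id alias/ParameterStoreBlogKeyDev --value '{"secretKey": "secretValue"}'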

Profiling the impact of querying Parameter Store using AWS X-Ray

By using the AWS X-Ray SDK to patch boto3 in your Lambda function code, each invocation of the function creates traces in X-Ray. In this example, you can use these traces to validate the performance impact of your design decision to only load configuration from Parameter Store on the first invocation of the function in a new execution environment.

From the Lambda function details page where you tested the function earlier, under the function name, choose Monitoring. Choose View traces in X-Ray.

This opens the X-Ray console in a new window filtered to your function. Be aware of the time range field next to the search bar if you don’t see any search results.
In this screenshot, I’ve invoked the Lambda function twice, one time 10.3 minutes ago with a response time of 1.1 seconds and again 9.8 minutes ago with a response time of 8 milliseconds.

Looking at the details of the longer running trace by clicking the trace ID, you can see that the Lambda function spent the first ~350 ms of the full 1.1 sec routing the request through Lambda and creating a new execution environment for this function, as this was the first invocation with this code. This is the portion of time before the initialization subsegment.

Next, it took 725 ms to initialize the function, which includes executing the code at the global scope (including creating the boto3 client). This is also a one-time cost for a fresh execution environment.

Finally, the function executed for 65 ms, of which 63.5 ms was the GetParametersByPath call to Parameter Store.

Looking at the trace for the second, much faster function invocation, you see that the majority of the 8 ms execution time was Lambda routing the request to the function and returning the response. Only 1 ms of the overall execution time was attributed to the execution of the function, which makes sense given that after the first invocation you’re simply returning the config stored in MyApp.

While the Traces screen allows you to view the details of individual traces, the X-Ray Service Map screen allows you to view aggregate performance data for all traced services over a period of time.

In the X-Ray console navigation pane, choose Service map. Selecting a service node shows the metrics for node-specific requests. Selecting an edge between two nodes shows the metrics for requests that traveled that connection. Again, be aware of the time range field next to the search bar if you don’t see any search results.

After invoking your Lambda function several more times by testing it from the Lambda console, you can view some aggregate performance metrics. Look at the following:

  • From the client perspective, requests to the Lambda service for the function are taking an average of 50 ms to respond. The function is generating ~1 trace per minute.
  • The function itself is responding in an average of 3 ms. In the following screenshot, I’ve clicked on this node, which reveals a latency histogram of the traced requests showing that over 95% of requests return in under 5 ms.
  • Parameter Store is responding to requests in an average of 64 ms, but note the much lower trace rate in the node. This is because you only fetch data from Parameter Store on the initialization of the Lambda execution environment.

Conclusion

Deduplication, encryption, and restricted access to shared configuration and secrets is a key component to any mature architecture. Serverless architectures designed using event-driven, on-demand, compute services like Lambda are no different.

In this post, I walked you through a sample application accessing unencrypted and encrypted values in Parameter Store. These values were created in a hierarchy by application environment and component name, with the permissions to decrypt secret values restricted to only the function needing access. The techniques used here can become the foundation of secure, robust configuration management in your enterprise serverless applications.

Pirate ‘Kodi’ Boxes & Infringing Streams Cost eBay Sellers Dearly

Post Syndicated from Andy original https://torrentfreak.com/pirate-kodi-boxes-infringing-streams-cost-ebay-sellers-dearly-180209/

Those on the lookout for ready-configured pirate set-top boxes can drift around the web looking at hundreds of options or head off to the places most people know best – eBay and Facebook.

Known for its ease of use and broad range of content, eBay is often the go-to place for sellers looking to offload less-than-legitimate stock. Along with Facebook, it’s become one of the easiest places online to find so-called Kodi boxes.

While the Kodi software itself is entirely legal, millions of people have their boxes configured for piracy purposes and eBay and Facebook provide a buying platform for those who don’t want to do the work themselves.

Sellers generally operate with impunity but, according to news from the Premier League and its anti-piracy partner, the Federation Against Copyright Theft (FACT), that’s not always the case.

FACT reports that a supplier of ISDs (Illicit Streaming Devices) that came pre-loaded for viewing top-tier football without permission has agreed to pay the Premier League thousands of pounds.

Nayanesh Patel from Harrow, Middlesex, is said to have sold Kodi-type boxes on eBay and Facebook but got caught in the act. As a result, he’s agreed to cough up £18,000, disable his website, remove all advertising, and cease future sales.

A second individual, who isn’t named, allegedly sold subscriptions to illegal streams of Premier League football via eBay. He too was tracked down and eventually agreed to pay £8,000 and cease all future stream sales.

“This case shows there are serious consequences for sellers of pre-loaded boxes and is a warning for anyone who thinks they might get away with this type of activity,” says Premier League Director of Legal Services, Kevin Plumb.

“The Premier League is currently engaged in a comprehensive copyright protection programme that includes targeting and taking action against sellers of pre-loaded devices, and any ISPs or hosts that facilitate the broadcast of pirated Premier League content.”

The number of individuals selling pirate set-top devices and IPTV-style subscription packages on eBay and social media has grown to epidemic proportions, so perhaps the biggest surprise is that there aren’t more cases like these. Importantly, however, these apparent settlement agreements are a step back from the criminal prosecutions we’ve seen in the past.

Previously, individuals under FACT’s spotlight have tended to be targeted by the police, with all the drawn-out misery that entails. While these cash settlements are fairly hefty, they appear to be in lieu of law enforcement involvement, not-inconsiderable solicitors’ bills, and potential jail sentences. For a few unlucky sellers, this could prove the more attractive option.


Migrating Your Amazon ECS Containers to AWS Fargate

Post Syndicated from Tiffany Jernigan original https://aws.amazon.com/blogs/compute/migrating-your-amazon-ecs-containers-to-aws-fargate/

AWS Fargate is a new technology that works with Amazon Elastic Container Service (ECS) to run containers without having to manage servers or clusters. What does this mean? With Fargate, you no longer need to provision or manage a single virtual machine; you can just create tasks and run them directly!

Fargate uses the same API actions as ECS, so you can use the ECS console, the AWS CLI, or the ECS CLI. I recommend running through the first-run experience for Fargate even if you’re familiar with ECS. It creates all of the one-time setup requirements, such as the necessary IAM roles. If you’re using a CLI, make sure to upgrade to the latest version.

In this blog, you will see how to migrate ECS containers from running on Amazon EC2 to Fargate.

Getting started

Note: Anything with code blocks is a change in the task definition file. Screen captures are from the console. Additionally, Fargate is currently available in the us-east-1 (N. Virginia) region.

Launch type

When you create tasks (grouping of containers) and clusters (grouping of tasks), you now have two launch type options: EC2 and Fargate. The default launch type, EC2, is ECS as you knew it before the announcement of Fargate. You need to specify Fargate as the launch type when running a Fargate task.

Even though Fargate abstracts away virtual machines, tasks still must be launched into a cluster. With Fargate, clusters are a logical infrastructure and permissions boundary that allow you to isolate and manage groups of tasks. ECS also supports heterogeneous clusters that are made up of tasks running on both EC2 and Fargate launch types.
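
For illustration, here is how specifying the launch type might look with boto3, the AWS SDK for Python; the cluster name, task definition family, subnet, and security group IDs below are placeholders, and the networkConfiguration block is explained in the Networking section that follows:

import boto3

ecs = boto3.client('ecs')

# Run a task on Fargate. launchType='EC2' (the default) would instead
# place the task on container instances in the cluster.
ecs.run_task(
    cluster='my-cluster',                  # placeholder
    taskDefinition='nginx-fargate',        # placeholder family name
    launchType='FARGATE',
    networkConfiguration={                 # required for Fargate tasks
        'awsvpcConfiguration': {
            'subnets': ['subnet-12345678'],     # placeholder
            'securityGroups': ['sg-12345678'],  # placeholder
            'assignPublicIp': 'ENABLED'
        }
    }
)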

The new, optional requiresCompatibilities parameter ensures that your task definition only passes validation if you include Fargate-compatible parameters. Tasks can be flagged as compatible with EC2, Fargate, or both.

"requiresCompatibilities": [
    "FARGATE"
]

Networking

"networkMode": "awsvpc"

In November, we announced the addition of task networking with the network mode awsvpc. By default, ECS uses the bridge network mode. Fargate requires using the awsvpc network mode.

In bridge mode, all of your tasks running on the same instance share the instance’s elastic network interface, which is a virtual network interface, IP address, and security groups.

The awsvpc mode provides this networking support to your tasks natively. You now get the same VPC networking and security controls at the task level that were previously only available with EC2 instances. Each task gets its own elastic network interface and IP address so that multiple applications or copies of a single application can run on the same port number without any conflicts.

The awsvpc mode also provides a separation of responsibility for tasks. You can get complete control of task placement within your own VPCs, subnets, and the security policies associated with them, even though the underlying infrastructure is managed by Fargate. Also, you can assign different security groups to each task, which gives you more fine-grained security. You can give an application only the permissions it needs.

"portMappings": [
    {
        "containerPort": "3000"
    }
 ]

What else has to change? First, you only specify a containerPort value, not a hostPort value, as there is no host to manage. Your container port is the port that you access on your elastic network interface IP address. Therefore, your container ports in a single task definition file need to be unique.

"environment": [
    {
        "name": "WORDPRESS_DB_HOST",
        "value": "127.0.0.1:3306"
    }
 ]

Additionally, links are not allowed as they are a property of the “bridge” network mode (and are now a legacy feature of Docker). Instead, containers share a network namespace and communicate with each other over the localhost interface. They can be referenced using the following:

localhost/127.0.0.1:<some_port_number>

CPU and memory

"memory": "1024",
 "cpu": "256"

"memory": "1gb",
 "cpu": ".25vcpu"

When launching a task with the EC2 launch type, task performance is influenced by the instance types that you select for your cluster combined with your task definition. If you pick larger instances, your applications make use of the extra resources if there is no contention.

With Fargate, there are no instance types to size against, so we created task-level resources. Task-level resources define the maximum amount of memory and cpu that your task can consume.

  • memory can be defined in MB with just the number, or in GB, for example, “1024” or “1gb”.
  • cpu can be defined in CPU units with just the number, or in vCPUs, for example, “256” or “.25vcpu”.
    • vCPUs are virtual CPUs. You can look at the memory and vCPUs for instance types to get an idea of what you may have used before.

The memory and CPU options available with Fargate are:

CPU              Memory
256 (.25 vCPU)   0.5GB, 1GB, 2GB
512 (.5 vCPU)    1GB, 2GB, 3GB, 4GB
1024 (1 vCPU)    2GB, 3GB, 4GB, 5GB, 6GB, 7GB, 8GB
2048 (2 vCPU)    Between 4GB and 16GB in 1GB increments
4096 (4 vCPU)    Between 8GB and 30GB in 1GB increments

IAM roles

Because Fargate uses awsvpc mode, you need an Amazon ECS service-linked IAM role named AWSServiceRoleForECS. It provides Fargate with the needed permissions, such as the permission to attach an elastic network interface to your task. After you create your service-linked IAM role, you can delete the remaining roles in your services.
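
If the role doesn’t exist yet in your account, one way to create it is through the IAM API. A sketch with boto3 (this is a one-time, account-level operation):

import boto3

iam = boto3.client('iam')

# Create the Amazon ECS service-linked role (AWSServiceRoleForECS).
# If the role already exists, the call raises an error, which you can
# catch and ignore.
iam.create_service_linked_role(AWSServiceName='ecs.amazonaws.com')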

"executionRoleArn": "arn:aws:iam::<your_account_id>:role/ecsTaskExecutionRole"

With the EC2 launch type, an instance role gives the ECS agent the ability to pull images, publish logs, talk to ECS, and so on. With Fargate, the task execution IAM role is only needed if you’re pulling from Amazon ECR or publishing data to Amazon CloudWatch Logs.

The Fargate first-run experience tutorial in the console automatically creates these roles for you.

Volumes

Fargate currently supports non-persistent, empty data volumes for containers. When you define your container, you no longer use the host field and only specify a name.
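
As a sketch, registering a Fargate task definition with such a volume might look like the following with boto3 (the family, container, and path names are illustrative):

import boto3

ecs = boto3.client('ecs')

# A non-persistent, empty volume: note it has only a name, with no
# 'host' field, because there is no host to reference.
ecs.register_task_definition(
    family='scratch-example',              # placeholder
    requiresCompatibilities=['FARGATE'],
    networkMode='awsvpc',
    cpu='256',
    memory='512',
    volumes=[{'name': 'scratch'}],
    containerDefinitions=[{
        'name': 'app',
        'image': 'nginx',
        'essential': True,
        'mountPoints': [{
            'sourceVolume': 'scratch',     # references the volume by name
            'containerPath': '/scratch'
        }]
    }]
)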

Load balancers

For awsvpc mode, and therefore for Fargate, use the IP target type instead of the instance target type. You set this on the target group in the Amazon EC2 console when creating a load balancer.
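
For illustration, creating such an IP-type target group with boto3 might look like this (the name and VPC ID are placeholders):

import boto3

elbv2 = boto3.client('elbv2')

# Register targets by IP address rather than by instance ID, as
# required for tasks using the awsvpc network mode.
elbv2.create_target_group(
    Name='fargate-targets',       # placeholder
    Protocol='HTTP',
    Port=80,
    VpcId='vpc-12345678',         # placeholder
    TargetType='ip'
)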

If you’re using a Classic Load Balancer, change it to an Application Load Balancer or a Network Load Balancer.

Tip: If you are using an Application Load Balancer, make sure that your tasks are launched in the same VPC and Availability Zones as your load balancer.

Let’s migrate a task definition!

Here is an example NGINX task definition. This type of task definition is what you’re used to if you created one before Fargate was announced. It’s what you would run now with the EC2 launch type.

{
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "nginx",
            "memory": "512",
            "cpu": "100",
            "essential": true,
            "portMappings": [
                {
                    "hostPort": "80",
                    "containerPort": "80",
                    "protocol": "tcp"
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ],
    "family": "nginx-ec2"
}

OK, so now what do you need to do to change it to run with the Fargate launch type?

  • Add FARGATE to requiresCompatibilities (not required, but a good safety check for your task definition).
  • Use awsvpc as the network mode.
  • Specify only the containerPort (with awsvpc, the hostPort value is the same).
  • Add a task executionRoleArn value to allow logging to CloudWatch.
  • Provide cpu and memory limits for the task.
{
    "requiresCompatibilities": [
        "FARGATE"
    ],
    "containerDefinitions": [
        {
            "name": "nginx",
            "image": "nginx",
            "memory": "512",
            "cpu": "100",
            "essential": true,
            "portMappings": [
                {
                    "containerPort": "80",
                    "protocol": "tcp"
                }
            ],
            "logConfiguration": {
                "logDriver": "awslogs",
                "options": {
                    "awslogs-group": "/ecs/",
                    "awslogs-region": "us-east-1",
                    "awslogs-stream-prefix": "ecs"
                }
            }
        }
    ],
    "networkMode": "awsvpc",
    "executionRoleArn": "arn:aws:iam::<your_account_id>:role/ecsTaskExecutionRole",
    "family": "nginx-fargate",
    "memory": "512",
    "cpu": "256"
}
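
If you save the migrated definition to a file (the filename here is illustrative), registering it with boto3 is straightforward, because the top-level JSON keys map directly to the API parameters:

import json
import boto3

# Load the migrated task definition and register it with ECS.
with open('nginx-fargate.json') as f:    # placeholder filename
    task_definition = json.load(f)

boto3.client('ecs').register_task_definition(**task_definition)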

Are there more examples?

Yep! Head to the AWS Samples GitHub repo. We have several sample task definitions you can try for both the EC2 and Fargate launch types. Contributions are very welcome too :).

 

tiffany jernigan
@tiffanyfayj

Cabinet of Secret Documents from Australia

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2018/02/cabinet_of_secr.html

This story of leaked Australian government secrets is unlike any other I’ve heard:

It begins at a second-hand shop in Canberra, where ex-government furniture is sold off cheaply.

The deals can be even cheaper when the items in question are two heavy filing cabinets to which no-one can find the keys.

They were purchased for small change and sat unopened for some months until the locks were attacked with a drill.

Inside was the trove of documents now known as The Cabinet Files.

The thousands of pages reveal the inner workings of five separate governments and span nearly a decade.

Nearly all the files are classified, some as “top secret” or “AUSTEO”, which means they are to be seen by Australian eyes only.

Yes, that really happened. The person who bought and opened the file cabinets contacted the Australian Broadcasting Corp, which is now publishing a bunch of it.

There’s lots of interesting (and embarrassing) stuff in the documents, although most of it is local politics. I am more interested in the government’s reaction to the incident: they’re pushing for a law making it illegal for the press to publish government secrets it received through unofficial channels.

“The one thing I would point out about the legislation that does concern me particularly is that classified information is an element of the offence,” he said.

“That is to say, if you’ve got a filing cabinet that is full of classified information … that means all the Crown has to prove if they’re prosecuting you is that it is classified – nothing else.

“They don’t have to prove that you knew it was classified, so knowledge is beside the point.”

[…]

Many groups have raised concerns, including media organisations, which say the proposed laws unfairly target journalists trying to do their job.

But really anyone could be prosecuted just for possessing classified information, regardless of whether they know about it.

That might include, for instance, if you stumbled across a folder of secret files in a regular skip bin while walking home and handed it over to a journalist.

This illustrates a fundamental misunderstanding of the threat. The Australian Broadcasting Corp gets its funding from the government, and was very restrained in what it published. It waited months before publishing as it coordinated with the Australian government. It allowed the government to secure the files, and then returned them. From the government’s perspective, it was the best possible media outlet to receive this information. If the government makes it illegal for the Australian press to publish this sort of material, the next time it will be sent to the BBC, the Guardian, the New York Times, or Wikileaks. And since people no longer read their news from newspapers sold in stores but on the Internet, the result will be just as many people reading the stories with far fewer redactions.

The proposed law is older than this leak, but the leak is giving it new life. The Australian opposition party is being cagey on whether they will support the law. They don’t want to appear weak on national security, so I’m not optimistic.

EDITED TO ADD (2/8): The Australian government backed down on that new security law.

EDITED TO ADD (2/13): Excellent political cartoon.