Tag Archives: mitm

EU Compliance Update: AWS’s 2017 C5 Assessment

Post Syndicated from Oliver Bell original https://aws.amazon.com/blogs/security/eu-compliance-update-awss-2017-c5-assessment/

C5 logo

AWS has completed its 2017 assessment against the Cloud Computing Compliance Controls Catalog (C5) information security and compliance program. Bundesamt für Sicherheit in der Informationstechnik (BSI)—Germany’s national cybersecurity authority—established C5 to define a reference standard for German cloud security requirements. With C5 (as well as with IT-Grundschutz), customers in Germany can use the work performed under this BSI audit to comply with stringent local requirements and operate secure workloads in the AWS Cloud.

Continuing our commitment to Germany and the AWS European Regions, AWS has added 16 services to this year’s scope.

The English version of the C5 report is available through AWS Artifact. The German version of the report will be available through AWS Artifact in the coming weeks.

– Oliver

Supporting Conservancy Makes a Difference

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2017/12/31/donate-conservancy.html

Earlier this year, in February, I wrote a blog post encouraging people to donate to where I work, Software Freedom Conservancy. I’ve not otherwise blogged too much this year. It’s been a rough year for many reasons, and while I personally and Conservancy in general have accomplished some very important work this year, I’m reminded as always that more resources do make things easier.

I understand the urge, given how bad the larger political crises have gotten, to want to give to charities other than those related to software freedom. There are important causes out there that have become more urgent this year. Here are three issues that have become shockingly more acute this year:

  • making sure the USA keeps its commitment to immigrants, allowing them to make a new life here just like my own ancestors did,
  • assuring that the great national nature reserves are maintained and left pristine for generations to come,
  • assuring that we have zero tolerance for abusive behavior, particularly by those in power against people who come to them for help and job opportunities.

These are just three of the many issues this year that I’ve seen get worse,
not better. I am glad that I know and support people who work on these
issues, and I urge everyone to work on these issues, too.

Nevertheless, as I plan my primary donations this year, I’m again, as I always do, giving to the FSF and my own employer, Software Freedom Conservancy. The reason is simple: software freedom is still an essential cause and it is frankly one that most people don’t understand (yet). I wrote almost two years ago about the phenomenon I dubbed Kuhn’s Paradox. Simply put: it keeps getting more and more difficult to avoid proprietary software in a normal day’s tasks, even while the number of lines of code licensed freely gets larger every day.

As long as that paradox remains true, I see software freedom as urgent. I know that we’re losing ground on so many other causes, too. But those of you who read my blog are some of the few people in the world who understand that software freedom is under threat and needs the urgent work that the very few software-freedom-related organizations, like the FSF and Software Freedom Conservancy, are doing. I hope you’ll donate now to both of them. For my part, I gave $120 myself to FSF as part of the monthly Associate Membership program, and in a few minutes, I’m going to give $400 to Conservancy. I’ll be frank: if you work in technology in an industrialized country, I’m quite sure you can afford that level of money, and I suspect those amounts are less than most of you spent on technology equipment and/or network connectivity charges this year. Make a difference for us and give to the cause of software freedom at least as much as you’re giving to large technology companies.

Finally, a good reason to give to smaller charities like FSF and Conservancy is that your donation makes a bigger difference. I do think bigger organizations, such as (to pick an example of an organization I used to give to) my local NPR station, do important work. However, I was listening this week to my local NPR station, and they said their goal for that day was to raise $50,000. For Conservancy, that’s closer to a goal we have for an entire fundraising season, which for this year was $75,000. The thing is: NPR is an important part of USA society, but it’s one that nearly everyone understands. So few people understand the threats looming from proprietary software, and they may not understand at all until it’s too late, when all their devices are locked down, DRM is fully ubiquitous, and no one is allowed to tinker with the software on their devices and learn the wonderful art of computer programming. We are at real risk of reaching that dystopia before 90% of the world’s population understands the threat!

Thus, giving to organizations in the area of software freedom is just
going to have a bigger and more immediate impact than more general causes
that more easily connect with people. You’re giving to prevent a future
that not everyone understands yet, and making an impact on our
work to help explain the dangers to the larger population.

Now Open AWS EU (Paris) Region

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/now-open-aws-eu-paris-region/

Today we are launching our 18th AWS Region, our fourth in Europe. Located in the Paris area, the new Region lets AWS customers better serve end users in and around France.

The Details
The new EU (Paris) Region provides a broad suite of AWS services including Amazon API Gateway, Amazon Aurora, Amazon CloudFront, Amazon CloudWatch, CloudWatch Events, Amazon CloudWatch Logs, Amazon DynamoDB, Amazon Elastic Compute Cloud (EC2), EC2 Container Registry, Amazon ECS, Amazon Elastic Block Store (EBS), Amazon EMR, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon Glacier, Amazon Kinesis Streams, Polly, Amazon Redshift, Amazon Relational Database Service (RDS), Amazon Route 53, Amazon Simple Notification Service (SNS), Amazon Simple Queue Service (SQS), Amazon Simple Storage Service (S3), Amazon Simple Workflow Service (SWF), Amazon Virtual Private Cloud, Auto Scaling, AWS Certificate Manager (ACM), AWS CloudFormation, AWS CloudTrail, AWS CodeDeploy, AWS Config, AWS Database Migration Service, AWS Direct Connect, AWS Elastic Beanstalk, AWS Identity and Access Management (IAM), AWS Key Management Service (KMS), AWS Lambda, AWS Marketplace, AWS OpsWorks Stacks, AWS Personal Health Dashboard, AWS Server Migration Service, AWS Service Catalog, AWS Shield Standard, AWS Snowball, AWS Snowball Edge, AWS Snowmobile, AWS Storage Gateway, AWS Support (including AWS Trusted Advisor), Elastic Load Balancing, and VM Import.

The Paris Region supports all sizes of C5, M5, R4, T2, D2, I3, and X1 instances.

There are also four edge locations for Amazon Route 53 and Amazon CloudFront: three in Paris and one in Marseille, all with AWS WAF and AWS Shield. Check out the AWS Global Infrastructure page to learn more about current and future AWS Regions.

The Paris Region will benefit from three AWS Direct Connect locations. Telehouse Voltaire is available today. AWS Direct Connect will also become available at Equinix Paris in early 2018, followed by Interxion Paris.

All AWS infrastructure regions around the world are designed, built, and regularly audited to meet the most rigorous compliance standards and to provide high levels of security for all AWS customers. These include ISO 27001, ISO 27017, ISO 27018, SOC 1 (Formerly SAS 70), SOC 2 and SOC 3 Security & Availability, PCI DSS Level 1, and many more. This means customers benefit from all the best practices of AWS policies, architecture, and operational processes built to satisfy the needs of even the most security sensitive customers.

AWS is certified under the EU-US Privacy Shield, and the AWS Data Processing Addendum (DPA) is GDPR-ready and available now to all AWS customers to help them prepare for May 25, 2018 when the GDPR becomes enforceable. The current AWS DPA, as well as the AWS GDPR DPA, allows customers to transfer personal data to countries outside the European Economic Area (EEA) in compliance with European Union (EU) data protection laws. AWS also adheres to the Cloud Infrastructure Service Providers in Europe (CISPE) Code of Conduct. The CISPE Code of Conduct helps customers ensure that AWS is using appropriate data protection standards to protect their data, consistent with the GDPR. In addition, AWS offers a wide range of services and features to help customers meet the requirements of the GDPR, including services for access controls, monitoring, logging, and encryption.

From Our Customers
Many AWS customers are preparing to use this new Region. Here’s a small sample:

Societe Generale, one of the largest banks in France and the world, has accelerated their digital transformation while working with AWS. They developed SG Research, an application that makes reports from Societe Generale’s analysts available to corporate customers in order to improve the decision-making process for investments. The new AWS Region will reduce latency between applications running in the cloud and in their French data centers.

SNCF is the national railway company of France. Their mobile app, powered by AWS, delivers real-time traffic information to 14 million riders. Extreme weather, traffic events, holidays, and engineering works can cause usage to peak at hundreds of thousands of users per second. They are planning to use machine learning and big data to add predictive features to the app.

Radio France, the French public radio broadcaster, offers seven national networks, and uses AWS to accelerate its innovation and stay competitive.

Les Restos du Coeur is a French charity that provides assistance to the needy, delivering food packages and supporting their social and economic integration back into French society. Les Restos du Coeur is using AWS for its CRM system to track the assistance given to each of their beneficiaries and the impact this is having on their lives.

AlloResto by JustEat (a leader in the French FoodTech industry) is using AWS to scale during traffic peaks and to accelerate their innovation process.

AWS Consulting and Technology Partners
We are already working with a wide variety of consulting, technology, managed service, and Direct Connect partners in France. Here’s a partial list:

AWS Premier Consulting Partners: Accenture, Capgemini, Claranet, CloudReach, DXC, and Edifixio.

AWS Consulting Partners: ABC Systemes, Atos International SAS, CoreExpert, Cycloid, Devoteam, LINKBYNET, Oxalide, Ozones, Scaleo Information Systems, and Sopra Steria.

AWS Technology Partners: Axway, Commerce Guys, MicroStrategy, Sage, Software AG, Splunk, Tibco, and Zerolight.

AWS in France
We have been investing in Europe, with a focus on France, for the last 11 years. We have also been developing documentation and training programs to help our customers to improve their skills and to accelerate their journey to the AWS Cloud.

As part of our commitment to AWS customers in France, we plan to train more than 25,000 people in the coming years, helping them develop highly sought-after cloud skills. They will have access to AWS training resources in France via AWS Academy, AWSome Days, AWS Educate, and webinars, all delivered in French by AWS Technical Trainers and AWS Certified Trainers.

Use it Today
The EU (Paris) Region is open for business now and you can start using it today!

Jeff;

 

Security Vulnerabilities in Certificate Pinning

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/12/security_vulner_10.html

New research found that many banks offer certificate pinning as a security feature, but fail to authenticate the hostname. This leaves the systems open to man-in-the-middle attacks.

From the paper:

Abstract: Certificate verification is a crucial stage in the establishment of a TLS connection. A common security flaw in TLS implementations is the lack of certificate hostname verification but, in general, this is easy to detect. In security-sensitive applications, the usage of certificate pinning is on the rise. This paper shows that certificate pinning can (and often does) hide the lack of proper hostname verification, enabling MITM attacks. Dynamic (black-box) detection of this vulnerability would typically require the tester to own a high security certificate from the same issuer (and often same intermediate CA) as the one used by the app. We present Spinner, a new tool for black-box testing for this vulnerability at scale that does not require purchasing any certificates. By redirecting traffic to websites which use the relevant certificates and then analysing the (encrypted) network traffic we are able to determine whether the hostname check is correctly done, even in the presence of certificate pinning. We use Spinner to analyse 400 security-sensitive Android and iPhone apps. We found that 9 apps had this flaw, including two of the largest banks in the world: Bank of America and HSBC. We also found that TunnelBear, one of the most popular VPN apps was also vulnerable. These apps have a joint user base of tens of millions of users.
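
The flaw the paper describes is pinning a certificate while skipping hostname verification. A safe implementation layers the pin on top of, not instead of, the standard checks. Here is a minimal Python sketch (not from the paper; the host name and pin value are hypothetical placeholders) that keeps the default CA and hostname verification enabled and adds a SHA-256 pin of the server’s leaf certificate:

    import hashlib
    import socket
    import ssl

    HOST = "api.example-bank.com"                  # hypothetical endpoint
    PINNED_SHA256 = "<expected-leaf-cert-sha256>"  # hypothetical pin value

    # create_default_context() keeps CA validation and hostname verification
    # enabled; the pin below is an extra check, not a replacement for them.
    ctx = ssl.create_default_context()

    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der_cert = tls.getpeercert(binary_form=True)
            fingerprint = hashlib.sha256(der_cert).hexdigest()
            if fingerprint != PINNED_SHA256:
                raise ssl.SSLError("certificate pin mismatch")
            # Safe to send application data at this point.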

News article.

Apple CEO is Optimistic VPN Apps Will Return to China App Store

Post Syndicated from Andy original https://torrentfreak.com/apple-ceo-is-optimistic-vpn-apps-will-return-to-china-app-store-171206/

As part of an emerging crackdown on tools and systems able to bypass China’s ‘Great Firewall’, Chinese government pressure began to affect Apple during the summer.

During the final days of July, Apple was forced to remove many of the most-used VPN applications from its Chinese App Store. In a short email from the company, VPN providers and software developers were told that VPN applications are considered illegal in China.

“We are writing to notify you that your application will be removed from the China App Store because it includes content that is illegal in China, which is not in compliance with the App Store Review Guidelines,” Apple informed the affected VPNs.

While the position on the ground doesn’t appear to have changed in the interim, Apple Chief Executive Tim Cook today expressed optimism that the VPN apps would eventually be restored to their former positions on China’s version of the App Store.

“My hope over time is that some of the things, the couple of things that’s been pulled, come back,” Cook said. “I have great hope on that and great optimism on that.”

According to Reuters, Cook said that he always tries to find ways to work together to settle differences and if he gets criticized for that “so be it.”

Speaking at the Fortune Forum in the Chinese city of Guangzhou, Cook said that he believes strongly in freedoms. But back home in the US, Apple has been strongly criticized for not doing enough to uphold freedom of speech and communication in China.

Back in October, two US senators wrote to Cook asking why the company had removed the VPN apps from the company’s store in China.

“VPNs allow users to access the uncensored Internet in China and other countries that restrict Internet freedom. If these reports are true, we are concerned that Apple may be enabling the Chinese government’s censorship and surveillance of the Internet,” senators Ted Cruz and Patrick Leahy wrote.

“While Apple’s many contributions to the global exchange of information are admirable, removing VPN apps that allow individuals in China to evade the Great Firewall and access the Internet privately does not enable people in China to ‘speak up’.”

They were comments Senator Leahy underlined again yesterday.

“American tech companies have become leading champions of free expression. But that commitment should not end at our borders,” Leahy told CNBC.

“Global leaders in innovation, like Apple, have both an opportunity and a moral obligation to promote free expression and other basic human rights in countries that routinely deny these rights.”

Whether the optimism expressed by Cook today is based on discussions with the Chinese government is unknown. However, it seems unlikely that authorities would be willing to significantly compromise on their dedication to maintaining the Great Firewall, which not only controls access to locally controversial content but also seeks to boost the success of Chinese companies.


M5 – The Next Generation of General-Purpose EC2 Instances

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/m5-the-next-generation-of-general-purpose-ec2-instances/

I always advise new EC2 users to start with our general-purpose instances, run some stress tests, and get a really good feel for the compute, memory, and networking profile of their application before taking a look at other instance types. With a broad selection of instances optimized for compute, memory, and storage, our customers have many options and the flexibility to choose the instance type that is the best fit for their needs.

As you can see from my EC2 Instance History post, the general-purpose (M) instances go all the way back to 2006 when we launched the m1.small. We continued to evolve along this branch of our family tree, launching the M2 (2009), M3 (2012), and M4 (2015) instances. Our customers use the general-purpose instances to run web & app servers, host enterprise applications, support online games, and build cache fleets.

New M5 Instances
Today we are taking the next step forward with the launch of our new M5 instances. These instances benefit from our commitment to continued innovation and offer even better price-performance than their predecessors. Based on Custom Intel® Xeon® Platinum 8175M series processors running at 2.5 GHz, the M5 instances are designed for highly demanding workloads and will deliver 14% better price/performance than the M4 instances on a per-core basis. Applications that use the AVX-512 instructions will crank out twice as many FLOPS per core. We’ve also added a new size at the high-end, giving you even more options.

Here are the M5 instances (all VPC-only, HVM-only, and EBS-Optimized):

Instance Name | vCPUs | RAM     | Network Bandwidth | EBS-Optimized Bandwidth
m5.large      | 2     | 8 GiB   | Up to 10 Gbps     | Up to 2120 Mbps
m5.xlarge     | 4     | 16 GiB  | Up to 10 Gbps     | Up to 2120 Mbps
m5.2xlarge    | 8     | 32 GiB  | Up to 10 Gbps     | Up to 2120 Mbps
m5.4xlarge    | 16    | 64 GiB  | Up to 10 Gbps     | 2120 Mbps
m5.12xlarge   | 48    | 192 GiB | 10 Gbps           | 5000 Mbps
m5.24xlarge   | 96    | 384 GiB | 25 Gbps           | 10000 Mbps

At the top end of the lineup, the m5.24xlarge is second only to the X instances when it comes to vCPU count, giving you more room to scale up and to consolidate workloads. The instances support Enhanced Networking, and can deliver up to 25 Gbps when used within a Placement Group.

In addition to dedicated, EBS-Optimized bandwidth, access to EBS storage is enhanced by the use of NVMe (you’ll need to install the proper drivers if you are using older AMIs). The combination of more bandwidth and NVMe will increase the amount of data that your M5 instances can chew through.

Launch One Today
You can launch M5 instances today in the US East (Northern Virginia), US West (Oregon), and EU (Ireland) Regions in On-Demand and Spot form (Reserved Instances are also available), with additional Regions in the works.
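
If you script your launches, a minimal boto3 sketch looks like this (the AMI ID, key pair, and subnet are hypothetical placeholders; M5 instances are VPC-only, so the instance lands in your default VPC if you omit the subnet):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # one of the launch Regions

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",      # hypothetical AMI with current NVMe drivers
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        KeyName="my-key-pair",                # hypothetical EC2 key pair
        SubnetId="subnet-0123456789abcdef0",  # hypothetical subnet in your VPC
    )
    print(response["Instances"][0]["InstanceId"])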

One quick note: the current NVMe driver is not optimized for high-performance sequential workloads and we don’t recommend the use of M5 instances in conjunction with sc1 or st1 volumes. We are aware of this issue and have been working to optimize the driver for this important use case.

Jeff;

 

 

Looming Net Neutrality Repeal Sparks BitTorrent Throttling Fears

Post Syndicated from Ernesto original https://torrentfreak.com/looming-net-neutrality-repeal-sparks-bittorrent-throttling-fears-171123/

Ten years ago we uncovered that Comcast was systematically slowing down BitTorrent traffic to ease the load on its network.

The Comcast case ignited a broad discussion about net neutrality and provided the setup for the FCC’s Open Internet Order, which came into effect three years later.

This Open Internet Order then became the foundation of the net neutrality regulation that was adopted in 2015 and still applies today. The big change compared to the earlier attempt was that ISPs can be regulated as carriers under Title II.

These rules provide a clear standard that prevents ISPs from blocking, throttling, and paid prioritization of “lawful” traffic. However, this may soon be over as the FCC is determined to repeal it.

FCC head Ajit Pai recently told Reuters that the current rules are too restrictive and hinder competition and innovation, which is ultimately not in the best interests of consumers.

“The FCC will no longer be in the business of micromanaging business models and preemptively prohibiting services and applications and products that could be pro-competitive,” Pai said. “We should simply set rules of the road that let companies of all kinds in every sector compete and let consumers decide who wins and loses.”

This week the FCC released its final repeal draft (pdf), which was met with fierce resistance from the public and various large tech companies. They fear that, if the current net neutrality rules disappear, throttling and ‘fast lanes’ for some services will become commonplace.

This could also mean that BitTorrent traffic could become a target once again, with it being blocked or throttled across many networks, as The Verge just pointed out.

Blocking BitTorrent traffic would indeed become much easier if current net neutrality safeguards were removed. However, the FCC believes that the current “no-throttling rules are unnecessary to prevent the harms that they were intended to thwart,” such as blocking entire file transfer protocols.

Instead, the FCC notes that antitrust law, FTC enforcement of ISP commitments, and consumer expectations will prevent any unwelcome blocking. This is also the reason why ISPs adopted no-blocking policies even when they were not required to, they point out.

Indeed, when the DC Circuit Court of Appeals decimated the Open Internet Order in 2014, Comcast was quick to assure subscribers that it had no plans to start throttling torrents again. Still, that offers no guarantees for the future.

The FCC goes on to mention that the current net neutrality rules don’t prevent selective blocking. They can already be bypassed by ISPs if they offer “curated services,” which allows them to filter content on viewpoint grounds. And Edge providers also block content because it violates their “viewpoints,” citing the Cloudflare termination of The Daily Stormer.

Net neutrality supporters see these explanations as weak excuses and have less trust in the self-regulating capacity of the ISP industry than the FCC does, calling for last-minute protests to stop the repeal.

For now it appears, however, that the FCC is unlikely to change its course, as Ars Technica reports.

While net neutrality concerns are legitimate, for BitTorrent users not that much will change.

As we’ve highlighted in the past, blocking pirate sites is already an option under the current rules. The massive copyright loophole made sure of that. Targeting all torrent traffic is even an option, in theory.

If net neutrality is indeed repealed next month, blocking or throttling BitTorrent traffic across the entire network will become easier, no doubt. For now, however, there are no signs that any ISPs plan to do so.

If any do, we will know soon enough. The FCC will require ISPs to be transparent under the new plan. They have to disclose network management practices, blocking efforts, commercial prioritization, and the like.


SNIFFlab – Create Your Own MITM Test Environment

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/11/snifflab-create-mitm-test-environment/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed


SNIFFlab is a set of scripts in Python that enable you to create your own MITM test environment for packet sniffing through a WiFi access point.

Essentially it’s a WiFi hotspot that is continually collecting all the packets transmitted across it. All connected clients’ HTTPS communications are subjected to a “Man-in-the-middle” attack, whereby they can later be decrypted for analysis.

What is the SNIFFlab MITM Test Environment?

In our environment, dubbed Snifflab, a researcher simply connects to the Snifflab WiFi network, is prompted to install a custom certificate authority on the device, and then can use their device as needed for the test.
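
SNIFFlab’s own scripts are linked from the original post. As a generic illustration of the interception side of such a lab (not SNIFFlab’s actual code), here is a minimal mitmproxy addon in Python that logs every decrypted request and response passing through the proxy; it assumes clients have installed and trusted the proxy’s CA certificate, as described above:

    """Minimal traffic-logging addon for mitmproxy.

    Run with:  mitmdump -s sniff_logger.py
    Clients must trust mitmproxy's CA certificate for HTTPS interception.
    """
    from mitmproxy import http


    class SniffLogger:
        def request(self, flow: http.HTTPFlow) -> None:
            # Called for every (decrypted) client request through the proxy.
            print(f"{flow.request.method} {flow.request.pretty_url}")

        def response(self, flow: http.HTTPFlow) -> None:
            # Called when the upstream server answers.
            size = len(flow.response.content or b"")
            print(f"  -> {flow.response.status_code} ({size} bytes)")


    addons = [SniffLogger()]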


EC2 Convertible Reserved Instance Update – New 1-Year CRI, Merges & Splits

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/ec2-convertible-reserved-instance-update-new-1-year-cri-merges-splits/

We launched Convertible Reserved Instances for EC2 just about a year ago. The Convertible RIs give you a significant discount (typically 54% when compared to On-Demand) and allow you to change the instance family and other parameters associated with the RI if your needs change.

Today we are introducing Convertible RIs with a 1-year term, complementing the existing 3-year term. We are also making the Convertible Reserved Instance model more flexible by allowing you to exchange portions of your RIs and to perform bulk exchanges.

New 1-Year Convertible RIs
Convertible Reserved Instances with a 1-year term are now available. This will give you more options and more flexibility; you can now purchase a mix of 1-year and 3-year Convertible Reserved Instances (CRIs) in accord with your needs. Startups with financial constraints will find this option attractive, as will other ventures that may not be in a position to make a commitment that runs for longer than one year.

Merging and Splitting Convertible RIs
Let’s say that you start running your web and application servers on M4 instances and use Convertible RIs to save money. Later, after a tuning exercise, you move your application servers to C4 instances. With today’s launch you can exchange a portion of your M4 Convertible RIs for C4 Convertible RIs. You can also merge two or more CRIs (perhaps for smaller instances) and obtain one for a larger instance.

The exchange model for Convertible Reserved Instances is based on splitting, exchanging, and merging. Let’s say I own a 3-year Partial Upfront CRI for four t2.micro instances:

My application has changed and now I want to use a pair of t2.micro instances and a single r4.xlarge. The first step is to split this CRI into the part that I want to keep and the part that I want to exchange. I select it and click on Modify Reserved Instances. Then I create my desired configuration and click on Continue:

I review the request and click on Submit Modifications:

The state of the CRI changes to indicate that it is being modified. After a moment or two it will be marked as retired, replaced by a pair that are active:

Now I can exchange one of the 2-instance CRIs. I select it, click on Exchange Reserved Instance, and enter the desired configuration for my new CRI:

I click on Find Offering to see my options, and choose the desired one, an r4.xlarge Partial Upfront. As you can see, the console “does the math” and takes the remaining upfront value ($139.995 in this case) of the unneeded CRIs into account when computing the upfront payment:

When I am ready to move forward I click on Exchange. This initiates the exchange process and lets me know that it may take a few minutes to complete.

I can also merge two or more Convertible Reserved Instances together and then use them as the starting point for an exchange. To do this I simply select the existing CRIs, click on Action, and choose Exchange Reserved Instances. I can see the total remaining upfront value of the selected CRIs and proceed accordingly:

You can merge CRIs that have different start dates and/or term lengths. The merged CRI will have the expiry date of the RI that is furthest from the date of exchange. Merging CRIs with different term lengths always produces a 3-year CRI.

You can also perform the split, exchange, and merge operations using the AWS Command Line Interface (CLI) and the EC2 APIs.
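
As a sketch of the API route, the exchange maps onto two EC2 calls: one to price the proposed exchange and one to accept it. The IDs below are hypothetical placeholders:

    import boto3

    ec2 = boto3.client("ec2")

    source_cri = "11111111-2222-3333-4444-555555555555"  # hypothetical Convertible RI ID
    target = [{
        "OfferingId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee",  # hypothetical target offering
        "InstanceCount": 1,
    }]

    # Ask EC2 to "do the math" on the proposed exchange.
    quote = ec2.get_reserved_instances_exchange_quote(
        ReservedInstanceIds=[source_cri],
        TargetConfigurations=target,
    )
    print("Payment due:", quote["PaymentDue"], quote["CurrencyCode"])

    # Accept the exchange if EC2 considers it valid.
    if quote["IsValidExchange"]:
        result = ec2.accept_reserved_instances_exchange_quote(
            ReservedInstanceIds=[source_cri],
            TargetConfigurations=target,
        )
        print("Exchange started:", result["ExchangeId"])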

Available Now
All of the functions and the 1-year CRIs described in this post are available now and you can start using them today.

Jeff;

Abandon Proactive Copyright Filters, Huge Coalition Tells EU Heavyweights

Post Syndicated from Andy original https://torrentfreak.com/abandon-proactive-copyright-filters-huge-coalition-tells-eu-heavyweights-171017/

Last September, EU Commission President Jean-Claude Juncker announced plans to modernize copyright law in Europe.

The proposals (pdf) are part of the Digital Single Market reforms, which have been under development for the past several years.

One of the proposals is causing significant concern. Article 13 would require some online service providers to become ‘Internet police’, proactively detecting and filtering allegedly infringing copyright works uploaded to their platforms by users.

Currently, users are generally able to share whatever they like but should a copyright holder take exception to their upload, mechanisms are available for that content to be taken down. It’s envisioned that proactive filtering, whereby user uploads are routinely scanned and compared to a database of existing protected content, will prevent content becoming available in the first place.

These proposals are of great concern to digital rights groups, who believe that such filters will not only undermine users’ rights but will also place unfair burdens on Internet platforms, many of which will struggle to fund such a program. Yesterday, in the latest wave of opposition to Article 13, a huge coalition of international rights groups came together to underline their concerns.

Headed up by Civil Liberties Union for Europe (Liberties) and European Digital Rights (EDRi), the coalition is formed of dozens of influential groups, including Electronic Frontier Foundation (EFF), Human Rights Watch, Reporters without Borders, and Open Rights Group (ORG), to name just a few.

In an open letter to European Commission President Jean-Claude Juncker, President of the European Parliament Antonio Tajani, President of the European Council Donald Tusk and a string of others, the groups warn that the proposals undermine the trust established between EU member states.

“Fundamental rights, justice and the rule of law are intrinsically linked and constitute core values on which the EU is founded,” the letter begins.

“Any attempt to disregard these values undermines the mutual trust between member states required for the EU to function. Any such attempt would also undermine the commitments made by the European Union and national governments to their citizens.”

Those citizens, the letter warns, would have their basic rights undermined, should the new proposals be written into EU law.

“Article 13 of the proposal on Copyright in the Digital Single Market include obligations on internet companies that would be impossible to respect without the imposition of excessive restrictions on citizens’ fundamental rights,” it notes.

A major concern is that by placing new obligations on Internet service providers that allow users to upload content – think YouTube, Facebook, Twitter and Instagram – they will be forced to err on the side of caution. Should there be any concern whatsoever that content might be infringing, fair use considerations and exceptions will be abandoned in favor of staying on the right side of the law.

“Article 13 appears to provoke such legal uncertainty that online services will have no other option than to monitor, filter and block EU citizens’ communications if they are to have any chance of staying in business,” the letter warns.

But while the potential problems for service providers and users are numerous, the groups warn that Article 13 could also be illegal since it contradicts case law of the Court of Justice.

According to the E-Commerce Directive, platforms are already required to remove infringing content, once they have been advised it exists. The new proposal, should it go ahead, would force the monitoring of uploads, something which goes against the ‘no general obligation to monitor‘ rules present in the Directive.

“The requirement to install a system for filtering electronic communications has twice been rejected by the Court of Justice, in the cases Scarlet Extended (C-70/10) and Netlog/Sabam (C-360/10),” the rights groups warn.

“Therefore, a legislative provision that requires internet companies to install a filtering system would almost certainly be rejected by the Court of Justice because it would contravene the requirement that a fair balance be struck between the right to intellectual property on the one hand, and the freedom to conduct business and the right to freedom of expression, such as to receive or impart information, on the other.”

Specifically, the groups note that the proactive filtering of content would violate freedom of expression set out in Article 11 of the Charter of Fundamental Rights. That being the case, the groups expect national courts to disapply it and the rule to be annulled by the Court of Justice.

The latest protests against Article 13 come in the wake of large-scale objections earlier in the year, voicing similar concerns. However, despite the groups’ fears, they have powerful adversaries, each determined to stop the flood of copyrighted content currently being uploaded to the Internet.

Front and center in support of Article 13 is the music industry and its current hot topic, the so-called Value Gap (1, 2, 3). The industry feels that platforms like YouTube are able to avoid paying expensive licensing fees (for music in particular) by exploiting the safe harbor protections of the DMCA and similar legislation.

They believe that proactively filtering uploads would significantly help to diminish this problem, which may very well be the case. But at what cost to the general public and the platforms they rely upon? Citizens and scholars feel that freedoms will be affected and it’s likely the outcry will continue.

The ball is now with the EU, whose members will soon have to make what could be the most important decision in recent copyright history. The rights groups, who are urging for Article 13 to be deleted, are clear where they stand.

The full letter is available here (pdf).


AWS Developer Tools Expands Integration to Include GitHub

Post Syndicated from Balaji Iyer original https://aws.amazon.com/blogs/devops/aws-developer-tools-expands-integration-to-include-github/

AWS Developer Tools is a set of services that include AWS CodeCommit, AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy. Together, these services help you securely store and maintain version control of your application’s source code and automatically build, test, and deploy your application to AWS or your on-premises environment. These services are designed to enable developers and IT professionals to rapidly and safely deliver software.

As part of our continued commitment to extend the AWS Developer Tools ecosystem to third-party tools and services, we’re pleased to announce AWS CodeStar and AWS CodeBuild now integrate with GitHub. This will make it easier for GitHub users to set up a continuous integration and continuous delivery toolchain as part of their release process using AWS Developer Tools.

In this post, I will walk through integrating GitHub with AWS CodeStar and with AWS CodeBuild.

Prerequisites:

You’ll need an AWS account, a GitHub account, an Amazon EC2 key pair, and administrator-level permissions for AWS Identity and Access Management (IAM), AWS CodeStar, AWS CodeBuild, AWS CodePipeline, Amazon EC2, and Amazon S3.

 

Integrating GitHub with AWS CodeStar

AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS. Its unified user interface helps you easily manage your software development activities in one place. With AWS CodeStar, you can set up your entire continuous delivery toolchain in minutes, so you can start releasing code faster.

When AWS CodeStar launched in April of this year, it used AWS CodeCommit as the hosted source repository. You can now choose between AWS CodeCommit or GitHub as the source control service for your CodeStar projects. In addition, your CodeStar project dashboard lets you centrally track GitHub activities, including commits, issues, and pull requests. This makes it easy to manage project activity across the components of your CI/CD toolchain. Adding the GitHub dashboard view will simplify development of your AWS applications.

In this section, I will show you how to use GitHub as the source provider for your CodeStar projects. I’ll also show you how to work with recent commits, issues, and pull requests in the CodeStar dashboard.

Sign in to the AWS Management Console and from the Services menu, choose CodeStar. In the CodeStar console, choose Create a new project. You should see the Choose a project template page.

CodeStar Project

Choose an option by programming language, application category, or AWS service. I am going to choose the Ruby on Rails web application that will be running on Amazon EC2.

On the Project details page, you’ll now see the GitHub option. Type a name for your project, and then choose Connect to GitHub.

Project details

You’ll see a message requesting authorization to connect to your GitHub repository. When prompted, choose Authorize, and then type your GitHub account password.

Authorize

This connects your GitHub identity to AWS CodeStar through OAuth. You can always review your settings by navigating to your GitHub application settings.

Installed GitHub Apps

You’ll see AWS CodeStar is now connected to GitHub:

Create project

You can choose a public or private repository. GitHub offers free accounts for users and organizations working on public and open source projects and paid accounts that offer unlimited private repositories and optional user management and security features.

In this example, I am going to choose the public repository option. Edit the repository description, if you like, and then choose Next.

Review your CodeStar project details, and then choose Create Project. On Choose an Amazon EC2 Key Pair, choose Create Project.

Key Pair

On the Review project details page, you’ll see Edit Amazon EC2 configuration. Choose this link to configure instance type, VPC, and subnet options. AWS CodeStar requires a service role to create and manage AWS resources and IAM permissions. This role will be created for you when you select the AWS CodeStar would like permission to administer AWS resources on your behalf check box.

Choose Create Project. It might take a few minutes to create your project and resources.

Review project details

When you create a CodeStar project, you’re added to the project team as an owner. If this is the first time you’ve used AWS CodeStar, you’ll be asked to provide the following information, which will be shown to others:

  • Your display name.
  • Your email address.

This information is used in your AWS CodeStar user profile. User profiles are not project-specific, but they are limited to a single AWS region. If you are a team member in projects in more than one region, you’ll have to create a user profile in each region.

User settings

Choose Next. AWS CodeStar will create a GitHub repository with your configuration settings (for example, https://github.com/biyer/ruby-on-rails-service).

When you integrate your integrated development environment (IDE) with AWS CodeStar, you can continue to write and develop code in your preferred environment. The changes you make will be included in the AWS CodeStar project each time you commit and push your code.

IDE

After setting up your IDE, choose Next to go to the CodeStar dashboard. Take a few minutes to familiarize yourself with the dashboard. You can easily track progress across your entire software development process, from your backlog of work items to recent code deployments.

Dashboard

After the application deployment is complete, choose the endpoint that will display the application.

Pipeline

This is what you’ll see when you open the application endpoint:

The Commit history section of the dashboard lists the commits made to the Git repository. If you choose the commit ID or the Open in GitHub option, you can use a hotlink to your GitHub repository.

Commit history

Your AWS CodeStar project dashboard is where you and your team view the status of your project resources, including the latest commits to your project, the state of your continuous delivery pipeline, and the performance of your instances. This information is displayed on tiles that are dedicated to a particular resource. To see more information about any of these resources, choose the details link on the tile. The console for that AWS service will open on the details page for that resource.

Issues

You can also filter issues based on their status and the assigned user.

Filter

AWS CodeBuild Now Supports Building GitHub Pull Requests

CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don’t need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so your builds are not left waiting in a queue. You can use prepackaged build environments to get started quickly or you can create custom build environments that use your own build tools.

We recently announced support for GitHub pull requests in AWS CodeBuild. This functionality makes it easier to collaborate across your team while editing and building your application code with CodeBuild. You can use the AWS CodeBuild or AWS CodePipeline consoles to run AWS CodeBuild. You can also automate the running of AWS CodeBuild by using the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the AWS CodeBuild Plugin for Jenkins.
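
For example, here is a minimal boto3 sketch that starts a build and, mirroring the console example later in this section, overrides the buildspec with a single command that echoes “hello” (the project name is a hypothetical placeholder):

    import boto3

    codebuild = boto3.client("codebuild")

    # Inline buildspec override; normally this lives in buildspec.yml in the repo.
    BUILDSPEC = """\
    version: 0.2
    phases:
      build:
        commands:
          - echo "hello"
    """

    response = codebuild.start_build(
        projectName="my-github-project",  # hypothetical CodeBuild project name
        buildspecOverride=BUILDSPEC,
    )
    print("Started build:", response["build"]["id"])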

AWS CodeBuild

In this section, I will show you how to trigger a build in AWS CodeBuild with a pull request from GitHub through webhooks.

Open the AWS CodeBuild console at https://console.aws.amazon.com/codebuild/. Choose Create project. If you already have a CodeBuild project, you can choose Edit project, and then follow along. CodeBuild can connect to AWS CodeCommit, S3, BitBucket, and GitHub to pull source code for builds. For Source provider, choose GitHub, and then choose Connect to GitHub.

Configure

After you’ve successfully linked GitHub and your CodeBuild project, you can choose a repository in your GitHub account. CodeBuild also supports connections to any public repository. You can review your settings by navigating to your GitHub application settings.

GitHub Apps

On Source: What to Build, for Webhook, select the Rebuild every time a code change is pushed to this repository check box.

Note: You can select this option only if, under Repository, you chose Use a repository in my account.

Source

In Environment: How to build, for Environment image, select Use an image managed by AWS CodeBuild. For Operating system, choose Ubuntu. For Runtime, choose Base. For Version, choose the latest available version. For Build specification, you can provide a collection of build commands and related settings, in YAML format (buildspec.yml) or you can override the build spec by inserting build commands directly in the console. AWS CodeBuild uses these commands to run a build. In this example, the output is the string “hello.”

Environment

On Artifacts: Where to put the artifacts from this build project, for Type, choose No artifacts. (This is also the type to choose if you are just running tests or pushing a Docker image to Amazon ECR.) You also need an AWS CodeBuild service role so that AWS CodeBuild can interact with dependent AWS services on your behalf. Unless you already have a role, choose Create a role, and for Role name, type a name for your role.

Artifacts

In this example, leave the advanced settings at their defaults.

If you expand Show advanced settings, you’ll see options for customizing your build, including:

  • A build timeout.
  • A KMS key to encrypt all the artifacts that the builds for this project will use.
  • Options for building a Docker image.
  • Elevated permissions during your build action (for example, accessing Docker inside your build container to build a Dockerfile).
  • Resource options for the build compute type.
  • Environment variables (built-in or custom). For more information, see Create a Build Project in the AWS CodeBuild User Guide.

Advanced settings

You can use the AWS CodeBuild console to create a parameter in Amazon EC2 Systems Manager. Choose Create a parameter, and then follow the instructions in the dialog box. (In that dialog box, for KMS key, you can optionally specify the ARN of an AWS KMS key in your account. Amazon EC2 Systems Manager uses this key to encrypt the parameter’s value during storage and decrypt during retrieval.)
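
The console dialog wraps a single Systems Manager API call. A minimal boto3 equivalent looks like this (the parameter name and KMS key alias are hypothetical placeholders):

    import boto3

    ssm = boto3.client("ssm")

    # Store a secret the build can read; SecureString values are encrypted with KMS.
    ssm.put_parameter(
        Name="/codebuild/db-password",   # hypothetical parameter name
        Value="example-secret-value",
        Type="SecureString",
        KeyId="alias/my-build-key",      # optional; omit to use the default SSM key
    )

    # Retrieval (for example, from within a build) decrypts transparently.
    param = ssm.get_parameter(Name="/codebuild/db-password", WithDecryption=True)
    print(param["Parameter"]["Value"])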

Create parameter

Choose Continue. On the Review page, either choose Save and build or choose Save to run the build later.

Choose Start build. When the build is complete, the Build logs section should display detailed information about the build.

Logs

To demonstrate a pull request, I will fork the repository as a different GitHub user, make commits to the forked repo, check in the changes to a newly created branch, and then open a pull request.

Pull request

As soon as the pull request is submitted, you’ll see CodeBuild start executing the build.

Build

GitHub sends an HTTP POST payload to the webhook’s configured URL (highlighted here), which CodeBuild uses to download the latest source code and execute the build phases.
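
CodeBuild manages that webhook endpoint for you, but the mechanics are standard GitHub webhooks. As an illustration of what such an endpoint does (not CodeBuild’s actual implementation), here is a minimal standard-library Python receiver that verifies GitHub’s HMAC signature before trusting the payload; the shared secret is a hypothetical placeholder:

    import hashlib
    import hmac
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    WEBHOOK_SECRET = b"my-shared-secret"  # hypothetical; configured on the GitHub webhook


    class WebhookHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers.get("Content-Length", 0)))

            # GitHub signs the raw request body with the shared secret.
            received = self.headers.get("X-Hub-Signature-256", "")
            expected = "sha256=" + hmac.new(WEBHOOK_SECRET, body, hashlib.sha256).hexdigest()
            if not hmac.compare_digest(received, expected):
                self.send_response(401)
                self.end_headers()
                return

            event = json.loads(body)
            print("event:", self.headers.get("X-GitHub-Event"), "ref:", event.get("ref"))
            self.send_response(200)
            self.end_headers()


    if __name__ == "__main__":
        HTTPServer(("", 8000), WebhookHandler).serve_forever()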

Build project

If you expand the Show all checks option for the GitHub pull request, you’ll see that CodeBuild has completed the build, all checks have passed, and a deep link is provided in Details, which opens the build history in the CodeBuild console.

Pull request

Summary:

In this post, I showed you how to use GitHub as the source provider for your CodeStar projects and how to work with recent commits, issues, and pull requests in the CodeStar dashboard. I also showed you how you can use GitHub pull requests to automatically trigger a build in AWS CodeBuild — specifically, how this functionality makes it easier to collaborate across your team while editing and building your application code with CodeBuild.


About the author:

Balaji Iyer is an Enterprise Consultant for the Professional Services Team at Amazon Web Services. In this role, he has helped several customers successfully navigate their journey to AWS. His specialties include architecting and implementing highly scalable distributed systems, serverless architectures, large scale migrations, operational security, and leading strategic AWS initiatives. Before he joined Amazon, Balaji spent more than a decade building operating systems, big data analytics solutions, mobile services, and web applications. In his spare time, he enjoys experiencing the great outdoors and spending time with his family.

 

The UK Law Enforcement Community Can Now Use the AWS Cloud

Post Syndicated from Oliver Bell original https://aws.amazon.com/blogs/security/the-uk-law-enforcement-community-can-now-use-the-aws-cloud/

AWS security image

The AWS EU (London) Region has been Police Assured Secure Facility (PASF) assessed, offering additional support for UK law enforcement customers. This assessment means the National Policing Information Risk Management Team (NPIRMT) has completed a comprehensive physical security assessment of the AWS UK infrastructure and has reviewed the integral practices and processes of how AWS manages data center operations. UK policing organizations can now leverage this assessment (available to those organizations from the NPIRMT) as part of their own risk management approach to systems development and design, with the confidence that their data is stored in highly secure and compliant facilities. Note that the NPIRMT does not offer any warranty of the physical security of AWS data centers.

The security, privacy, and protection of AWS customers are our first priority, and we are committed to supporting Public Sector and Blue Light organizations. This assessment further demonstrates AWS’s commitment to deliver secure and compliant services to the UK law enforcement community. We have built technology services suitable for use by Justice, Blue Light, and Public Safety organizations, and whether in law enforcement, emergency management, or criminal justice, AWS has the capability and resources to support this community’s unique IT needs. From Public Services Network–compliant solutions to architecting a UK OFFICIAL secure environment, AWS can help tackle public safety data needs. By combining the secure and flexible AWS infrastructure with the breadth of our specialized APN Partner solutions, we are confident we can help our customers across the industry succeed in their missions.

– Oliver

Stream Ripping Piracy Goes From Bad to Worse, Music Industry Reports

Post Syndicated from Ernesto original https://torrentfreak.com/stream-ripping-piracy-goes-from-bad-to-worse-music-industry-reports-170919/

Free music is easy to find nowadays. Just head over to YouTube and you can find millions of tracks including many of the most recent releases.

While the music industry profits from the advertisements on many of these videos, it’s not happy with the current state of affairs. Record labels complain about a “value gap” and go as far as accusing the video streaming platform of operating a DMCA protection racket.

YouTube doesn’t agree with this stance and points to the billions of dollars it pays copyright holders. Still, the music industry is far from impressed.

Today, IFPI has released a new music consumer insight report that highlights this issue once again, while pointing out that YouTube accounts for more than half of all music video streaming.

“User upload services, such as YouTube, are heavily used by music consumers and yet do not return fair value to those who are investing in and creating the music. The Value Gap remains the single biggest threat facing the music world today and we are campaigning for a legislative solution,” IFPI CEO Frances Moore writes.

The report also zooms in on piracy and “stream ripping” in particular, which is another YouTube- and Google-related issue. While this phenomenon is over a decade old, it’s now the main source of music piracy, the report states.

A survey conducted in the world’s leading music industry markets reveals that 35% of all Internet users are stream rippers, up from 30% last year. In total, 40% of all respondents admitted to obtaining unlicensed music.

35% stream ripping (source IFPI)

This means that the vast majority of all music pirates use stream ripping tools. This practice is particularly popular among those in the youngest age group, where more than half of all Internet users admit to ripping music, and it goes down as age increases.

Adding another stab at Google, the report further notes that more than half of all pirates use the popular search engine to find unlicensed music.

Stream rippers are young (source IFPI)

TorrentFreak spoke to former RIAA executive Neil Turkewitz, who has been very vocal about the stream ripping problem. He now heads his own consulting group that focuses on expanding economic and cultural prosperity, particularly online.

Stream ripping is a “double whammy,” Turkewitz says, as it’s undermining both streaming and distribution markets. This affects the bottom line of labels and artists, so YouTube should do more to block stream rippers and converters from exploiting the service.

“YouTube and Alphabet talk of their commitment to expanding opportunities for creators. This is an opportunity to prove it,” Turkewitz informs TF.

“Surely the company that, as Eric Schmidt likes to say, ‘knows what people want before they know it’ has the capacity to develop tools to address problems that inhibit the development of a robust online market that sustains creators.”

While stream ripping remains rampant, there is a positive development the music industry can cling to.

Two weeks ago the major record labels managed to take down YouTube-MP3, the largest ripping site of all. While this is a notable success, there are many sites and tools like it that continue business as usual.


Inside the MPAA, Netflix & Amazon Global Anti-Piracy Alliance

Post Syndicated from Andy original https://torrentfreak.com/inside-the-mpaa-netflix-amazon-global-anti-piracy-alliance-170918/

The idea of collaboration in the anti-piracy arena isn’t new but an announcement this summer heralded what is destined to become the largest project the entertainment industry has ever seen.

The Alliance for Creativity and Entertainment (ACE) is a coalition of 30 companies that reads like a who’s who of the global entertainment market. In alphabetical order its members are:

Amazon, AMC Networks, BBC Worldwide, Bell Canada and Bell Media, Canal+ Group, CBS Corporation, Constantin Film, Foxtel, Grupo Globo, HBO, Hulu, Lionsgate, Metro-Goldwyn-Mayer (MGM), Millennium Media, NBCUniversal, Netflix, Paramount Pictures, SF Studios, Sky, Sony Pictures Entertainment, Star India, Studio Babelsberg, STX Entertainment, Telemundo, Televisa, Twentieth Century Fox, Univision Communications Inc., Village Roadshow, The Walt Disney Company, and Warner Bros. Entertainment Inc.

The aim of the project is clear. Instead of each company considering its anti-piracy operations as a distinct island, ACE will bring them all together while presenting a united front to decision and lawmakers. At the core of the Alliance will be the MPAA.

“ACE, with its broad coalition of creators from around the world, is designed, specifically, to leverage the best possible resources to reduce piracy,” outgoing MPAA chief Chris Dodd said in June.

“For decades, the MPAA has been the gold standard for antipiracy enforcement. We are proud to provide the MPAA’s worldwide antipiracy resources and the deep expertise of our antipiracy unit to support ACE and all its initiatives.”

Since then, ACE and its members have been silent on the project. Today, however, TorrentFreak can pull back the curtain, revealing how the agreement between the companies will play out, who will be in control, and how much the scheme will cost.

Power structure: Founding Members & Executive Committee Members

Netflix, Inc., Amazon Studios LLC, Paramount Pictures Corporation, Sony Pictures Entertainment, Inc., Twentieth Century Fox Film Corporation, Universal City Studios LLC, Warner Bros. Entertainment Inc., and Walt Disney Studios Motion Pictures, are the ‘Founding Members’ (Governing Board) of ACE.

These companies are granted full voting rights on ACE business, including the approval of initiatives and public policy, anti-piracy strategy, budget-related matters, plus approval of legal action. Not least, they’ll have the power to admit or expel ACE members.

All actions taken by the Governing Board (never to exceed nine members) need to be approved by consensus, with each Founding Member able to vote for or against decisions. Members are also allowed to abstain but one persistent objection will be enough to stop any matter being approved.

The second tier – ‘Executive Committee Members’ – comprises all the other companies in the ACE project (as listed above, minus the Governing Board). These companies will not be allowed to vote on ACE initiatives but can present ideas and strategies. They’ll also be allowed to suggest targets for law enforcement action while utilizing the MPAA’s anti-piracy resources.

Rights of all members

While all members of ACE can utilize the alliance’s resources, none are barred from simultaneously ‘going it alone’ on separate anti-piracy initiatives. None of these strategies and actions need approval from the Founding Members, provided they’re carried out in a company’s own name and at its own expense.

Information obtained by TorrentFreak indicates that the MPAA also reserves the right to carry out anti-piracy actions in its own name or on behalf of its member studios. The arrangement here is different, since the MPAA’s global anti-piracy resources are the same resources that ACE members pay to share.

Expansion of ACE

While ACE membership is already broad, the alliance is prepared to take on additional members, provided certain criteria are met. Crucially, any prospective additions must be owners or producers of movies and/or TV shows. The Governing Board will then vet applicants to ensure that they meet the criteria for acceptance as new Executive Committee Members.

ACE Operations

The nine Governing Board members will meet at least four times a year, with each nominating a senior executive to serve as its representative. The MPAA’s General Counsel will take up the position of non-voting member of the Governing Board and will chair its meetings.

Matters to be discussed include formulating and developing the alliance’s ‘Global Anti-Piracy Action Plan’ and approving and developing the budget. ACE will also form an Anti-Piracy Working Group, which is scheduled to meet at least once a month.

On a daily basis, the MPAA and its staff will attend to the business of the ACE alliance. The MPAA will carry out its own work too but when presenting to outside third parties, it will clearly state which “hat” it is currently wearing.

Much deliberation has taken place over who should be the official spokesperson for ACE. Documents obtained by TF suggest that the MPAA planned to hire a consulting firm to find a person for the role, seeking a professional with international experience who had never previously been connected with the MPAA.

They appear to have settled on Zoe Thorogood, who previously worked for British Prime Minister David Cameron.

Money, money, money

Of course, the ACE program isn’t going to fund itself, so all members are required to contribute to the operation. The MPAA has opened a dedicated bank account under its control specifically for the purpose, with members contributing depending on status.

Founding/Governing Board Members will be required to commit $5m each annually. However, none of the studios that are MPAA members will have to hand over any cash, since they already fund the MPAA, on whose anti-piracy resources ACE is built.

“Each Governing Board Member will contribute annual dues in an amount equal to $5 million USD. Payment of dues shall be made bi-annually in equal shares, payable at the beginning of each six (6) month period,” the ACE agreement reads.

“The contribution of MPAA personnel, assets and resources…will constitute and be considered as full payment of each MPAA Member Studio’s Governing Board dues.”

That leaves just Netflix and Amazon paying the full amount of $5m in cash each.

From each company’s contribution, $1m will be paid into legal trust accounts allocated to each Governing Board member. If ACE-agreed litigation and legal expenses exceed that amount for the year, members will be required to top up their accounts to cover their share of the costs.

For the remaining 21 companies on the Executive Committee, annual dues are $200,000 each, to be paid in one installment at the start of the financial year – $4.2m all in. Of all dues paid by all members from both tiers, half will be used to boost anti-piracy resources, over and above what the MPAA will spend on the same during 2017.

“Fifty percent (50%) of all dues received from Global Alliance Members other than the MPAA Member Studios…shall, as agreed by the Governing Board, be used (a) to increase the resources spent on online antipiracy over and above….the amount of MPAA’s 2017 Content Protection Department budget for online antipiracy initiatives/operations,” an internal ACE document reads.

Intellectual property

As the project moves forward, the Alliance expects to gain certain knowledge and experience. On the back of that, the MPAA hopes to grow its intellectual property portfolio.

“Absent written agreement providing otherwise, any and all data, intellectual property, copyrights, trademarks, or know-how owned and/or contributed to the Global Alliance by MPAA, or developed or created by the MPAA or the Global Alliance during the Term of this Charter, shall remain and/or become the exclusive property of the MPAA,” the ACE agreement reads.

That being said, all Governing Board Members will also be granted “perpetual, irrevocable, non-exclusive licenses” to use the same under certain rules, even in the event they leave the ACE initiative.

Terms and extensions

Any member may withdraw from the Alliance at any point, but there will be no refunds. Additionally, any financial commitment previously made to litigation will have to be honored by the member.

The ACE agreement has an initial term of two years but Governing Board Members will meet not less than three months before it is due to expire to vote on any extension.

To be continued…

With the internal structure of ACE now revealed, all that remains is to discover the contents of the initiative’s ‘Global Anti-Piracy Action Plan’. To date, that document has proven elusive but with an operation of such magnitude, future leaks are a distinct possibility.


Delivering Graphics Apps with Amazon AppStream 2.0

Post Syndicated from Deepak Suryanarayanan original https://aws.amazon.com/blogs/compute/delivering-graphics-apps-with-amazon-appstream-2-0/

Sahil Bahri, Sr. Product Manager, Amazon AppStream 2.0

Do you need to provide a workstation class experience for users who run graphics apps? With Amazon AppStream 2.0, you can stream graphics apps from AWS to a web browser running on any supported device. AppStream 2.0 offers a choice of GPU instance types. The range includes the newly launched Graphics Design instance, which allows you to offer a fast, fluid user experience at a fraction of the cost of using a graphics workstation, without upfront investments or long-term commitments.

In this post, I discuss the Graphics Design instance type in detail, and how you can use it to deliver a graphics application such as Siemens NX, a popular CAD/CAM application that we have been testing on AppStream 2.0 with engineers from Siemens PLM.

Graphics Instance Types on AppStream 2.0

First, a quick recap on the GPU instance types available with AppStream 2.0. In July 2017, we launched graphics support for AppStream 2.0 with two new instance types that Jeff Barr discussed on the AWS Blog:

  • Graphics Desktop
  • Graphics Pro

Many customers in industries such as engineering, media, entertainment, and oil and gas are using these instances to deliver high-performance graphics applications to their users. These instance types are based on dedicated NVIDIA GPUs and can run the most demanding graphics applications, including those that rely on CUDA graphics API libraries.

Last week, we added a new lower-cost instance type: Graphics Design. This instance type is a great fit for engineers, 3D modelers, and designers who use graphics applications that rely on the hardware acceleration of DirectX, OpenGL, or OpenCL APIs, such as Siemens NX, Autodesk AutoCAD, or Adobe Photoshop. The Graphics Design instance is based on AMD’s FirePro S7150x2 Server GPUs and equipped with AMD Multiuser GPU technology. The instance type uses virtualized GPUs to achieve lower costs, and is available in four instance sizes to scale and match the requirements of your applications.

Instance                          vCPUs    Instance RAM (GiB)    GPU Memory (GiB)
stream.graphics-design.large        2            7.5                   1
stream.graphics-design.xlarge       4           15.3                   2
stream.graphics-design.2xlarge      8           30.5                   4
stream.graphics-design.4xlarge     16           61                     8

The following table compares all three graphics instance types on AppStream 2.0, along with example applications you could use with each.

                                      Graphics Design       Graphics Desktop     Graphics Pro
Number of instance sizes              4                     1                    3
GPU memory range                      1–8 GiB               4 GiB                8–32 GiB
vCPU range                            2–16                  8                    16–32
Memory range                          7.5–61 GiB            15 GiB               122–488 GiB
GPU                                   AMD FirePro S7150x2   NVIDIA GRID K520     NVIDIA Tesla M60
Price range (N. Virginia AWS Region)  $0.25–$2.00/hour      $0.50/hour           $2.05–$8.20/hour

Example applications:
  Graphics Design – Adobe Premiere Pro, Autodesk Revit, Siemens NX
  Graphics Desktop – AVEVA E3D, SOLIDWORKS
  Graphics Pro – Autodesk Maya, Landmark DecisionSpace, Schlumberger Petrel

Example graphics instance set up with Siemens NX

In this section, I walk through setting up Siemens NX on Graphics Design instances in AppStream 2.0. After setup is complete, users are able to access NX from within their browser and can also access their design files from a file share. You can also use these steps to set up and test your own graphics applications on AppStream 2.0. Here’s the workflow:

  1. Create a file share to load and save design files.
  2. Create an AppStream 2.0 image with Siemens NX installed.
  3. Create an AppStream 2.0 fleet and stack.
  4. Invite users to access Siemens NX through a browser.
  5. Validate the setup.

To learn more about AppStream 2.0 concepts and setup, see the previous post Scaling Your Desktop Application Streams with Amazon AppStream 2.0. For a deeper review of all the setup and maintenance steps, see the Amazon AppStream 2.0 Developer Guide.

Step 1: Create a file share to load and save design files

To launch and configure the file server

  1. Open the EC2 console and choose Launch Instance.
  2. Scroll to the Microsoft Windows Server 2016 Base Image and choose Select.
  3. Choose an instance type and size for your file server (I chose the general purpose m4.large instance). Choose Next: Configure Instance Details.
  4. Select a VPC and subnet. You launch AppStream 2.0 resources in the same VPC. Choose Next: Add Storage.
  5. If necessary, adjust the size of your EBS volume. Choose Review and Launch, Launch.
  6. On the Instances page, give your file server a name, such as My File Server.
  7. Ensure that the security group associated with the file server instance allows for incoming traffic from the security group that you select for your AppStream 2.0 fleets or image builders. You can use the default security group and select the same group while creating the image builder and fleet in later steps.

Log in to the file server using a remote access client such as Microsoft Remote Desktop. For more information about connecting to an EC2 Windows instance, see Connect to Your Windows Instance.
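If you prefer to script this step, here is a minimal boto3 sketch of the same launch. The AMI ID, subnet, and security group values below are placeholders you would replace with the Windows Server 2016 Base AMI and network details for your Region:

import boto3

ec2 = boto3.client('ec2')

# Launch a Windows Server 2016 file server in the VPC that AppStream 2.0 will use.
# All resource IDs below are placeholders.
response = ec2.run_instances(
    ImageId='ami-xxxxxxxx',            # Windows Server 2016 Base AMI for your Region
    InstanceType='m4.large',           # general purpose size used in this walkthrough
    MinCount=1,
    MaxCount=1,
    SubnetId='subnet-xxxxxxxx',        # same VPC/subnet as your AppStream 2.0 resources
    SecurityGroupIds=['sg-xxxxxxxx'],  # must allow traffic from the fleet's security group
    TagSpecifications=[{
        'ResourceType': 'instance',
        'Tags': [{'Key': 'Name', 'Value': 'My File Server'}]
    }]
)
print(response['Instances'][0]['InstanceId'])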

To enable file sharing

  1. Create a new folder (such as C:\My Graphics Files) and upload the files that you want to make available to your users.
  2. From the Windows control panel, enable network discovery.
  3. Choose Server Manager, File and Storage Services, Volumes.
  4. Scroll to Shares and choose Start the Add Roles and Features Wizard. Go through the wizard to install the File Server and Share role.
  5. From the left navigation menu, choose Shares.
  6. Choose Start the New Share Wizard to set up your folder as a file share.
  7. Open the context (right-click) menu on the share and choose Properties, Permissions, Customize Permissions.
  8. Choose Permissions, Add. Add Read and Execute permissions for everyone on the network.

Step 2:  Create an AppStream 2.0 image with Siemens NX installed

To connect to the image builder and install applications

  1. Open the AppStream 2.0 management console and choose Images, Image Builder, Launch Image Builder.
  2. Create a graphics design image builder in the same VPC as your file server.
  3. From the Image builder tab, select your image builder and choose Connect. This opens a new browser tab and displays a desktop to log in to.
  4. Log in to your image builder as ImageBuilderAdmin.
  5. Launch the Image Assistant.
  6. Download and install Siemens NX and other applications on the image builder. I added Blender and Firefox, but you could replace these with your own applications.
  7. To verify the user experience, you can test the application performance on the instance.
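The image builder can also be created programmatically. Here is a hedged boto3 sketch; the builder name is hypothetical, and you would substitute the current Graphics Design base image name from the console along with your own VPC details:

import boto3

appstream = boto3.client('appstream')

# Launch a Graphics Design image builder in the same VPC as the file server.
# The name, image name, and network IDs are placeholders.
appstream.create_image_builder(
    Name='Graphics-Demo-ImageBuilder',
    ImageName='<latest-graphics-design-base-image>',  # pick from the AppStream 2.0 console
    InstanceType='stream.graphics-design.xlarge',
    VpcConfig={
        'SubnetIds': ['subnet-xxxxxxxx'],
        'SecurityGroupIds': ['sg-xxxxxxxx']
    }
)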

Before you finish creating the image, you must mount the file share by enabling a few Microsoft Windows services.

To mount the file share

  1. Open services.msc and check the following services:
  • DNS Client
  • Function Discovery Resource Publication
  • SSDP Discovery
  • UPnP Device Host
  2. If any of the preceding services have Startup Type set to Manual, open the context (right-click) menu on the service and choose Start. Otherwise, open the context (right-click) menu on the service and choose Properties. For Startup Type, choose Manual, Apply. To start the service, choose Start.
  3. From the Windows control panel, enable network discovery.
  4. Create a batch script that mounts a file share from the storage server set up earlier. The file share is mounted automatically when a user connects to the AppStream 2.0 environment.

Logon Script Location: C:\Users\Public\logon.bat

Script Contents:

:loop
REM Map the file share created in Step 1 (replace the path with your file server's share)
net use H: \\path\to\network\share
REM Wait roughly 30 seconds, then check the mapping and retry until H: exists
PING localhost -n 30 >NUL
IF NOT EXIST H:\ GOTO loop

  5. Open gpedit.msc and choose User Configuration, Windows Settings, Scripts. Set logon.bat as the user logon script.
  6. Next, create a batch script that makes the mounted drive visible to the user.

Startup Script Location: C:\Users\Public\startup.bat

Script Contents:

REM Remove the policy value that hides mapped drives in Explorer
REG DELETE "HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer" /v "NoDrives" /f

  7. Open Task Scheduler and choose Create Task.
  8. Choose General, provide a task name, and then choose Change User or Group.
  9. For Enter the object name to select, enter SYSTEM and choose Check Names, OK.
  10. Choose Triggers, New. For Begin the task, choose At startup. Under Advanced Settings, change Delay task for to 5 minutes. Choose OK.
  11. Choose Actions, New. Under Settings, for Program/script, enter C:\Users\Public\startup.bat. Choose OK.
  12. Choose Conditions. Under Power, clear the Start the task only if the computer is on AC power check box. Choose OK.
  13. To view your scheduled task, choose Task Scheduler Library. Close Task Scheduler when you are done.

Step 3:  Create an AppStream 2.0 fleet and stack

To create a fleet and stack

  1. In the AppStream 2.0 management console, choose Fleets, Create Fleet.
  2. Give the fleet a name, such as Graphics-Demo-Fleet, select the newly created image, and use the same VPC as your file server.
  3. Choose Stacks, Create Stack. Give the stack a name, such as Graphics-Demo-Stack.
  4. After the stack is created, select it and choose Actions, Associate Fleet. Associate the stack with the fleet you created in step 1.
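If you would rather automate this step, a boto3 sketch along these lines should work. The image name is whatever you named your finished image in Step 2; the capacity and network values are illustrative:

import boto3

appstream = boto3.client('appstream')

# Create the fleet from the image built in Step 2, start it, then
# create a stack and associate the two. Names and IDs are placeholders.
appstream.create_fleet(
    Name='Graphics-Demo-Fleet',
    ImageName='Graphics-Demo-Image',
    InstanceType='stream.graphics-design.xlarge',
    ComputeCapacity={'DesiredInstances': 2},
    VpcConfig={
        'SubnetIds': ['subnet-xxxxxxxx'],
        'SecurityGroupIds': ['sg-xxxxxxxx']
    }
)
appstream.start_fleet(Name='Graphics-Demo-Fleet')

appstream.create_stack(Name='Graphics-Demo-Stack')
appstream.associate_fleet(FleetName='Graphics-Demo-Fleet', StackName='Graphics-Demo-Stack')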

Step 4:  Invite users to access Siemens NX through a browser

To invite users

  1. Choose User Pools, Create User to create users.
  2. Enter a name and email address for each user.
  3. Select the users just created, and choose Actions, Assign Stack to provide access to the stack created in step 2. You can also provide access using SAML 2.0 and connect to your Active Directory if necessary. For more information, see the Enabling Identity Federation with AD FS 3.0 and Amazon AppStream 2.0 post.

Your user receives an email invitation to set up an account and use a web portal to access the applications that you have included in your stack.
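For completeness, here is a hedged sketch of the same user pool steps in boto3, assuming the CreateUser and BatchAssociateUserStack operations of the user pool feature; the email address and names are placeholders:

import boto3

appstream = boto3.client('appstream')

# Create a user pool user and assign the stack created in Step 3.
# The email address and names are placeholders.
appstream.create_user(
    UserName='[email protected]',
    AuthenticationType='USERPOOL',
    FirstName='Jane',
    LastName='Doe'
)
appstream.batch_associate_user_stack(
    UserStackAssociations=[{
        'StackName': 'Graphics-Demo-Stack',
        'UserName': '[email protected]',
        'AuthenticationType': 'USERPOOL'
    }]
)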

Step 5:  Validate the setup

Time for a test drive with Siemens NX on AppStream 2.0!

  1. Open the link for the AppStream 2.0 web portal shared through the email invitation. The web portal opens in your default browser. You must sign in with the temporary password and set a new password. After that, you are taken to your app catalog.
  2. Launch Siemens NX and interact with it using the demo files available in the shared storage folder – My Graphics Files. 

After I launched NX, I captured the screenshot below. The Siemens PLM team also recorded a video with NX running on AppStream 2.0.

Summary

In this post, I discussed the GPU instances available for delivering rich graphics applications to users in a web browser. While I demonstrated a simple setup, you can scale this out to launch a production environment with users signing in using Active Directory credentials, accessing persistent storage with Amazon S3, and using other commonly requested features reviewed in the Amazon AppStream 2.0 Launch Recap – Domain Join, Simple Network Setup, and Lots More post.

To learn more about AppStream 2.0 and capabilities added this year, see Amazon AppStream 2.0 Resources.

Seth – RDP Man In The Middle Attack Tool

Post Syndicated from Darknet original https://www.darknet.org.uk/2017/09/seth-rdp-man-in-the-middle-attack-tool/?utm_source=rss&utm_medium=social&utm_campaign=darknetfeed


Seth is an RDP Man In The Middle attack tool written in Python. It MITMs RDP connections and attempts to downgrade the connection in order to extract clear-text credentials.

It was developed to raise awareness and educate about the importance of properly configured RDP connections in the context of pentests, workshops or talks.

Usage of Seth RDP Man In The Middle Attack Tool

Run it like this:

$ ./seth.sh <INTERFACE> <ATTACKER IP> <VICTIM IP> <GATEWAY IP|HOST IP>

Unless the RDP host is on the same subnet as the victim machine, the last IP address must be that of the gateway.
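For example, a run on a hypothetical lab network (all addresses below are illustrative) could look like the following, with the attacker at .50, the victim at .60, and the RDP host reached through the gateway at .1. Root privileges are typically required for the ARP spoofing and packet interception:

$ sudo ./seth.sh eth0 192.168.1.50 192.168.1.60 192.168.1.1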


ShareBeast & AlbumJams Operator Pleads Guilty to Criminal Copyright Infringement

Post Syndicated from Andy original https://torrentfreak.com/sharebeast-albumjams-operator-pleads-guilty-to-criminal-copyright-infringement-170911/

In September 2015, U.S. authorities announced action against a pair of sites involved in music piracy.

ShareBeast.com and AlbumJams.com were allegedly responsible for the distribution of “a massive library” of popular albums and tracks. Both were accused of offering thousands of tracks before their official release dates.

The U.S. Department of Justice (DOJ) placed their now familiar seizure notice on both domains, with the RIAA claiming ShareBeast was the largest illegal file-sharing site operating in the United States. Indeed, the site’s IP addresses at the time indicated at least some hosting taking place in Illinois.

“This is a huge win for the music community and legitimate music services. Sharebeast operated with flagrant disregard for the rights of artists and labels while undermining the legal marketplace,” RIAA Chairman & CEO Cary Sherman commented at the time.

“Millions of users accessed songs from Sharebeast each month without one penny of compensation going to countless artists, songwriters, labels and others who created the music.”

Now, a full two years later, former Sharebeast operator Artur Sargsyan has pleaded guilty to one felony count of criminal copyright infringement, admitting to the unauthorized distribution and reproduction of over 1 billion copies of copyrighted works.

“Through Sharebeast and other related sites, this defendant profited by illegally distributing copyrighted music and albums on a massive scale,” said U.S. Attorney John Horn.

“The collective work of the FBI and our international law enforcement partners have shut down the Sharebeast websites and prevented further economic losses by scores of musicians and artists.”

The Department of Justice says that from 2012 to 2015, 29-year-old Sargsyan used ShareBeast as a pirate music repository, infringing works produced by Ariana Grande, Katy Perry, Beyonce, Kanye West, and Justin Bieber, among others. He linked to that content from Newjams.net and Albumjams.com, two other sites under his control.

The DoJ says that Sargsyan was informed at least 100 times that there was infringing content on ShareBeast but despite the warnings, the content remained available. When those warnings produced no results, the FBI – assisted by law enforcement in the UK and the Netherlands – seized servers used by Sargsyan to distribute the material.

Brad Buckles, EVP, Anti-Piracy at the RIAA, welcomed the guilty plea.

“Sharebeast and its related sites represented the most popular network of infringing music sites operated out of the United States. The network was responsible for providing millions of downloads of popular music files including unauthorized pre-release albums and tracks. This illicit activity was a gut-punch to music creators who were paid nothing by the service,” Buckles said.

“We are incredibly grateful for the government’s commitment to protecting the rights of artists and labels. We especially thank the dedicated agents of the FBI who painstakingly unraveled this criminal enterprise, and U.S. Attorney John Horn and his team for their work and diligence in seeing this case to its successful conclusion.”

Sargsyan, of Glendale, California, will be sentenced December 4 before U.S. District Judge Timothy C. Batten.


Research on What Motivates ISIS — and Other — Fighters

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/09/research_on_wha.html

Interesting research from Nature Human Behaviour: “The devoted actor’s will to fight and the spiritual dimension of human conflict“:

Abstract: Frontline investigations with fighters against the Islamic State (ISIL or ISIS), combined with multiple online studies, address willingness to fight and die in intergroup conflict. The general focus is on non-utilitarian aspects of human conflict, which combatants themselves deem ‘sacred’ or ‘spiritual’, whether secular or religious. Here we investigate two key components of a theoretical framework we call ‘the devoted actor’ — sacred values and identity fusion with a group — to better understand people’s willingness to make costly sacrifices. We reveal three crucial factors: commitment to non-negotiable sacred values and the groups that the actors are wholly fused with; readiness to forsake kin for those values; and perceived spiritual strength of ingroup versus foes as more important than relative material strength. We directly relate expressed willingness for action to behaviour as a check on claims that decisions in extreme conflicts are driven by cost-benefit calculations, which may help to inform policy decisions for the common defense.

Announcement: IPS code

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/08/announcement-ips-code.html

So after 20 years, IBM is killing off my BlackICE code created in April 1998. So it’s time that I rewrite it.

BlackICE was the first “inline” intrusion-detection system, aka. an “intrusion prevention system” or IPS. ISS purchased my company in 2001 and replaced their RealSecure engine with it, and later renamed it Proventia. Then IBM purchased ISS in 2006. Now, they are formally canceling the project and moving customers onto Cisco’s products, which are based on Snort.

So now is a good time to write a replacement. The reason is that BlackICE worked fundamentally differently than Snort, using protocol analysis rather than pattern-matching. In this way, it worked more like Bro than Snort. The biggest benefit of protocol-analysis is speed, making it many times faster than Snort. The second benefit is better detection ability, as I describe in this post on Heartbleed.
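To make the protocol-analysis idea concrete, here is a toy sketch in Python, purely illustrative and unrelated to any actual BlackICE code: a per-connection state machine consumes the stream one byte at a time as packets arrive, extracting fields directly instead of buffering data and scanning it for patterns.

# Toy byte-at-a-time parser for an HTTP request line. A protocol-analysis
# engine keeps small per-connection state like this rather than running
# pattern matches over reassembled buffers.
def make_parser():
    state = {'field': 0, 'method': b'', 'uri': b''}
    def feed(byte):
        if state['field'] > 1:
            return                      # request line fully parsed
        if byte == 0x20:                # spaces separate METHOD, URI, VERSION
            state['field'] += 1
        elif state['field'] == 0:
            state['method'] += bytes([byte])
        else:
            state['uri'] += bytes([byte])
    return state, feed

state, feed = make_parser()
for b in b'GET /index.html HTTP/1.1\r\n':
    feed(b)                             # bytes can arrive one packet at a time
print(state['method'], state['uri'])    # b'GET' b'/index.html'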

So my plan is to create a new project. I’ll be checking in the starter bits into GitHub starting a couple weeks from now. I need to figure out a new name for the project, so I don’t have to rip off a name from William Gibson like I did last time :).

Some notes:

  • Yes, it’ll be GNU open source. I’m a capitalist, so I’ll earn money like snort/nmap dual-licensing it, charging companies who don’t want to open-source their addons. All capitalists GNU license their code.
  • C, not Rust. Sorry, I’m going for extreme scalability. We’ll re-visit this decision later when looking at building protocol parsers.
  • It’ll be 95% compatible with Snort signatures. Their language definition leaves so much ambiguous it’ll be hard to be 100% compatible.
  • It’ll support Snort output as well, though really, Snort’s events suck.
  • Protocol parsers in Lua, so you can use it as a replacement for Bro, writing parsers to extract data you are interested in.
  • Protocol state machine parsers in C, like you see in my Masscan project for X.509.
  • First version IDS only. These days, “inline” means also being able to MitM the SSL stack, so I’m going to have to think harder on that.
  • Multi-core worker threads off PF_RING/DPDK/netmap receive queues. Should handle 10 Gbps, tracking 10 million concurrent connections, with a quad-core CPU.
So if you want to contribute to the project, here’s what I need:
  • Requirements from people who work daily with IDS/IPS today. I need you to write up what your products do well that you really like. I need you to write up what they suck at that needs to be fixed. These need to be in some detail.
  • Testing environment to play with. This means having a small server plugged into a real-world link running at a minimum of several gigabits-per-second available for the next year. I’ll sign NDAs related to the data I might see on the network.
  • Coders. I’ll be doing the basic architecture, but protocol parsers, output plugins, etc. will need work. Code will be in C and Lua for the near term. Unfortunately, since I’m going to dual-license, I’ll need waivers before accepting pull requests.
Anyway, follow me on Twitter @erratarob if you want to contribute.

New – SES Dedicated IP Pools

Post Syndicated from Randall Hunt original https://aws.amazon.com/blogs/aws/new-ses-dedicated-ip-pools/

Today we released Dedicated IP Pools for Amazon Simple Email Service (SES). With dedicated IP pools, you can specify which dedicated IP addresses to use for sending different types of email. Dedicated IP pools let you reserve sets of SES IP addresses for different tasks. For instance, you can send transactional emails from one set of IPs and marketing emails from another.

If you’re not familiar with Amazon SES these concepts may not make much sense. We haven’t had the chance to cover SES on this blog since 2016, which is a shame, so I want to take a few steps back and talk about the service as a whole and some of the enhancements the team has made over the past year. If you just want the details on this new feature I strongly recommend reading the Amazon Simple Email Service Blog.

What is SES?

So, what is SES? If you’re a customer of Amazon.com you know that we send a lot of emails. Bought something? You get an email. Order shipped? You get an email. Over time, as both email volumes and types increased Amazon.com needed to build an email platform that was flexible, scalable, reliable, and cost-effective. SES is the result of years of Amazon’s own work in dealing with email and maximizing deliverability.

In short: SES gives you the ability to send and receive many types of email with the monitoring and tools to ensure high deliverability.

Sending an email is easy; one simple API call:

import boto3

# Create an SES client (use the Region where your identities are verified)
ses = boto3.client('ses')

# Send a simple text email; Source must be a verified identity
ses.send_email(
    Source='[email protected]',
    Destination={'ToAddresses': ['[email protected]']},
    Message={
        'Subject': {'Data': 'Hello, World!'},
        'Body': {'Text': {'Data': 'Hello, World!'}}
    }
)

Receiving and reacting to emails is easy too. You can set up rulesets that forward received emails to Amazon Simple Storage Service (S3), Amazon Simple Notification Service (SNS), or AWS Lambda – you could even trigger an Amazon Lex bot through Lambda to communicate with your customers over email. SES is a powerful tool for building applications.
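As a sketch of the receiving side: the rule set name, recipient, and bucket below are hypothetical, the rule set must already exist and be active, and the bucket needs a policy that allows SES to write to it:

import boto3

ses = boto3.client('ses')

# Store inbound mail for a recipient in an S3 bucket.
# 'my-rule-set' and 'my-inbound-mail' are placeholder names.
ses.create_receipt_rule(
    RuleSetName='my-rule-set',
    Rule={
        'Name': 'save-to-s3',
        'Enabled': True,
        'Recipients': ['[email protected]'],
        'Actions': [{'S3Action': {'BucketName': 'my-inbound-mail'}}]
    }
)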

Deliverability 101

Deliverability is the percentage of your emails that arrive in your recipients’ inboxes. Maintaining deliverability is a shared responsibility between AWS and the customer. AWS takes the fight against spam very seriously and works hard to make sure services aren’t abused. To learn more about deliverability I recommend the deliverability docs. For now, understand that deliverability is an important aspect of email campaigns and SES has many tools that enable a customer to manage their deliverability.

Dedicated IPs and Dedicated IP pools

When you’re starting out with SES, your emails are sent through a shared IP. That IP is responsible for sending mail on behalf of many customers, and AWS works to maintain appropriate volume and deliverability on each of those IPs. However, when you reach a sufficient volume, shared IPs may not be the right solution.

By creating a dedicated IP you’re able to fully control the reputations of those IPs. This makes it vastly easier to troubleshoot any deliverability or reputation issues. It’s also useful for many email certification programs which require a dedicated IP as a commitment to maintaining your email reputation. Using the shared IPs of the Amazon SES service is still the right move for many customers, but if you have sustained daily sending volume greater than hundreds of thousands of emails per day, you might want to consider a dedicated IP. One caveat to be aware of: if you’re not sending a sufficient volume of email with a consistent pattern, a dedicated IP can actually hurt your reputation. Dedicated IPs are $24.95 per address per month at the time of this writing – but you can find out more at the pricing page.

Before you can use a Dedicated IP you need to “warm” it. You do this by gradually increasing the volume of emails you send through a new address. Each IP needs time to build a positive reputation. In March of this year SES released the ability to automatically warm your IPs over the course of 45 days. This feature is on by default for all new dedicated IPs.

Customers who send high volumes of email will typically have multiple dedicated IPs. Today the SES team released dedicated IP pools to make managing those IPs easier. Now when you send email you can specify a configuration set which will route your email to an IP in a pool based on the pool’s association with that configuration set.
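In code, that just means naming the configuration set on each send. A minimal sketch, assuming a hypothetical configuration set called 'marketing-pool' that you have associated with a dedicated IP pool:

import boto3

ses = boto3.client('ses')

# SES picks a sending IP from the dedicated pool tied to this configuration set.
ses.send_email(
    Source='[email protected]',
    Destination={'ToAddresses': ['[email protected]']},
    Message={
        'Subject': {'Data': 'Hello, World!'},
        'Body': {'Text': {'Data': 'Hello, World!'}}
    },
    ConfigurationSetName='marketing-pool'   # hypothetical configuration set name
)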

One of the other major benefits of this feature is that it allows customers who previously split their email sending across several AWS accounts (to manage their reputation for different types of email) to consolidate into a single account.

You can read the documentation and blog for more info.