Tag Archives: express

Canada’s Supreme Court Orders Google to Remove Search Results Worldwide

Post Syndicated from Andy original https://torrentfreak.com/canadas-supreme-court-orders-google-remove-search-results-worldwide-170629/

Back in 2014, the case of Equustek Solutions Inc. v. Jack saw two Canadian entities battle over stolen intellectual property used to manufacture competing products.

Google had no direct links to the case, yet it became embroiled when Equustek Solutions claimed that Google’s search results helped to send visitors to websites operated by the defendants (former Equustek employees) who were selling unlawful products.

Google voluntarily removed links to the sites from its Google.ca (Canada) results, but Equustek demanded a more comprehensive response. It got one.

In a ruling handed down by a court in British Columbia, Google was ordered to remove the infringing websites’ listings from its central database in the United States, meaning that the ruling had worldwide implications.

Google filed an appeal hoping for a better result, arguing that it does not operate servers in British Columbia, nor does it operate any local offices. It also questioned whether the injunction could be enforced outside Canada’s borders.

Ultimately, the British Columbia Court of Appeal disappointed the search giant. In a June 2015 ruling, the Court decided that Google does indeed do business in the region. It also found that a decision to restrict infringement was unlikely to offend any overseas nation.

“The plaintiffs have established, in my view, that an order limited to the google.ca search site would not be effective. I am satisfied that there was a basis, here, for giving the injunction worldwide effect,” Justice Groberman wrote.

Undeterred, Google took its case all the way to the Supreme Court of Canada, hoping to limit the scope of the injunction by arguing that it violates freedom of expression. That effort has now failed.

In a 7-2 majority decision released Wednesday, Google was branded a “determinative player” in facilitating harm to Equustek.

“This is not an order to remove speech that, on its face, engages freedom of expression values, it is an order to de-index websites that are in violation of several court orders,” wrote Justice Rosalia Abella.

“We have not, to date, accepted that freedom of expression requires the facilitation of the unlawful sale of goods.”

With Google now required to delist the sites on a global basis, the big question is what happens when other players attempt to apply the ruling to their particular business sector. Unsurprisingly, that hasn’t taken long.

The International Federation of the Phonographic Industry (IFPI), which supported Equustek’s position in the long-running case, welcomed the decision and said that Google must “take on the responsibility” to ensure it does not direct users to illegal sites.

“Canada’s highest court has handed down a decision that is very good news for rights holders both in Canada and around the world. Whilst this was not a music piracy case, search engines play a prominent role in directing users to illegal content online including illegal music sites,” said IFPI CEO, Frances Moore.

“If the digital economy is to grow to its full potential, online intermediaries, including search engines, must play their part by ensuring that their services are not used to facilitate the infringement of intellectual property rights.”

Graham Henderson, President and CEO of Music Canada, which represents Sony, Universal, Warner and others, also welcomed the ruling.

“Today’s decision confirms that online service providers cannot turn a blind eye to illegal activity that they facilitate; on the contrary, they have an affirmative duty to take steps to prevent the Internet from becoming a black market,” Henderson said.

But for every voice of approval from groups like IFPI and Music Canada, others raised concerns over the scope of the decision and its potential to create a legal and political minefield. In particular, University of Ottawa professor Michael Geist raised a number of interesting scenarios.

“What happens if a Chinese court orders [Google] to remove Taiwanese sites from the index? Or if an Iranian court orders it to remove gay and lesbian sites from the index? Since local content laws differ from country to country, there is a great likelihood of conflicts,” Geist said.

But rather than painting Google as the loser in this battle, Geist believes the decision actually grants the search giant more power.

“When it comes to Internet jurisdiction, exercising restraint and limiting the scope of court orders is likely to increase global respect for the law and the effectiveness of judicial decisions. Yet this decision demonstrates what many have feared: the temptation for courts will be to assert jurisdiction over online activities and leave it to the parties to sort out potential conflicts,” Geist says.

“In doing so, the Supreme Court of Canada has lent its support to global takedowns and vested more power in Internet intermediaries, who may increasingly emerge as the arbiters of which laws to follow online.”

Only time will tell how Google will react, but it’s clear there will be plenty of entities ready to test the limits and scope of the company’s responses to the ruling.


From Idea to Launch: Getting Your First Customers

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/how-to-get-your-first-customers/


After deciding to build an unlimited backup service and developing our own storage platform, the next step was to get customers and feedback. Not all customers are created equal. Let’s talk about the types, and when and how to attract them.

How to Get Your First Customers

First Step – Don’t Launch Publicly
Launch when you’re ready for the judgments of people who don’t know you at all. Until then, don’t launch. Sign up users and customers you know, people you can trust to cut you some slack (while providing you feedback), or at minimum people for whom you can set expectations. For months the Backblaze website was a single page with no ability to get the product and minimal info on what it would be. This is not to counter the Lean Startup ‘iterate quickly with customer feedback’ advice. Rather, it is an acknowledgement that different types of feedback are required at different stages of development.

Sign Up Your Friends
We knew all of our first customers; they were friends, family, and previous co-workers. Many knew what we were up to and were excited to help us. No magic marketing or tech savviness was required to reach them – we just asked that they try the service. We asked them to provide us feedback on their experience and collected it through email and conversations. While the feedback wasn’t unbiased, it was nonetheless wide-ranging, real, and often insightful. These people were willing to spend time carefully thinking about their feedback and delving deeper into the conversations.

Broaden to Beta
Unless you’re famous or your service costs $1 million per customer, you’ll probably need to expand quickly beyond your friends to build a business – and to get broader feedback. Our next step was to broaden the customer base to beta users.

Opening up the service in beta provides three benefits:

  1. Air cover for the early warts. There are going to be issues, bugs, unnecessarily complicated user flows, and poorly worded text. Beta tells people, “We don’t consider the product ‘done’ and you should expect some of these issues. Please be patient with us.”
  2. A request for feedback. Some people always provide feedback, but beta communicates that you want it.
  3. An awareness opportunity. Opening up in beta provides an early (but not only) opportunity to have an announcement and build awareness.

Pitching Beta to Press
Not all press cares about, or is even willing to cover, beta products. Much of the mainstream press wants to write about services that are fully live, have scale, and are important in the marketplace. However, there are a number of sites that like to cover the leading edge – and that means covering betas. TechCrunch, Ars Technica, and SimpleHelp covered our initial private beta launch. I’ll go into the details of how to work with the press to cover your announcements in a post next month.

Private vs. Public Beta
Both private and public beta provide all three of the benefits above. The difference between the two is that private betas are much more controlled, whereas public ones bring in more users. But this isn’t an either/or – I recommend doing both.

Private Beta
For our original beta in 2008, we decided that we were comfortable with about 1,000 users subscribing to our service. That would provide us with a healthy amount of feedback and get some early adoption, while not overwhelming us or our server capacity, and equally important not causing cash flow issues from having to buy more equipment. So we limited sign-ups to the first 1,000 people; after that, we would shut off sign-ups for a while.

But how do you even get 1,000 people to sign up for your service? In our case, get some major publications to write about our beta. (Note: In a future post I’ll explain exactly how to find and reach out to writers. Sign up to receive all of the entrepreneurial posts in this series.)

Public Beta
For our original service (computer backup), we did not have a public beta; but when we launched Backblaze B2, we had a private and then a public beta. The private beta allowed us to work out early kinks, while the public beta brought us a more varied set of use cases. In public beta, there is no cap on the number of users that may try the service.

While this is a first-class problem to have, if your service is flooded and stops working, it’s still a problem. Think through what you will do if that happens. In our early days, when our system could get overwhelmed by volume, we had a static web page hosted with a different registrar that wouldn’t let customers sign up but would tell them when our service would be open again. When we reached a critical volume level we would redirect to it in order to at least provide status for when we could accept more customers.

Collect Feedback
Since one of the goals of betas is to get feedback, we made sure that we had our email addresses clearly presented on the site so users could send us thoughts. We were most interested in broad qualitative feedback on users’ experience, so all emails went to an internal mailing list that would be read by everyone at Backblaze.

For our B2 public and private betas, we also added an optional short survey to the sign-up process. In order to be considered for the private beta you had to fill the survey out, though we found that 80% of users continued to fill out the survey even when it was not required. This survey had both closed-ended questions (“how much data do you have?”) and open-ended ones (“what do you want to use cloud storage for?”).

BTW, despite us getting a lot of feedback now via our support team, Twitter, and marketing surveys, we are always open to more – you can email me directly at gleb.budman {at} backblaze.com.

Don’t Throw Away Users
Initially our backup service was available only on Windows, but we had an email sign-up list for people who wanted it for their Mac. This provided us with a sense of market demand and a ready list of folks who could be beta users and early adopters when we had a Mac version. Have a service targeted at doctors but lawyers are expressing interest? Capture that.

Product Launch

When
The first question is “when” to launch. Presuming your service is in ‘public beta’, what is the advantage of moving out of beta and into a “version 1.0”, “gold”, or “public availability”? That depends on your service and customer base. Some services fly through public beta. Gmail, on the other hand, was (in)famous for being in beta for 5 years, despite having over 100 million users.

The term beta says to users, “give us some leeway, but feel free to use the service”. That’s fine for many consumer apps and will have near zero impact on them. However, services aimed at businesses and government will often not be adopted with a beta label, as enterprise customers want to know the company feels the service is ‘ready’. While Backblaze started out as a purely consumer service, because it was a data backup service, it was important for customers to trust that the service was ready.

No product is bug-free. But from a product readiness perspective, the nomenclature should also reflect the quality of the product. A product with one feature that works well can launch out of beta, but a product with fifty features on which half the users will bump into problems should likely stay in beta. The customer feedback, surveys, and your own internal testing should guide you in determining this quality during the beta. Be careful about “we’ve only seen that one time” or “I haven’t been able to reproduce that on my machine”; those issues are likely to scale with customers when you launch.

How
Launching out of beta can be as simple as removing the beta label from the website/product. However, this can be a great time to reach out to press, write a blog post, and send an email announcement to your customers.

Consider thanking your beta testers somehow; can they get some feature turned on for free, an extension of their trial, or premium support? If nothing else, remember to thank them for their feedback. Users that signed up during your beta are likely the ones who will propel your service. They had the need and interest to both be early adopters and deal with bugs. They are likely the key to getting 1,000 true fans.

The Beginning
The title of this post was “Getting your first customers”, because getting to launch may feel like the peak of your journey when you’re pre-launch, but it really is just the beginning. It’s a step along the journey of building your business. If your launch is wildly successful, enjoy it, work to build on the momentum, but don’t lose track of building your business. If your launch is a dud, go out for a coffee with your team, say “well that sucks”, and then get back to building your business. You can learn a tremendous amount from your early customers, and they can become your biggest fans, but the success of your business will depend on what you continue to do the months and years after your launch.

The post From Idea to Launch: Getting Your First Customers appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

A Stack Clash disclosure post-mortem

Post Syndicated from corbet original https://lwn.net/Articles/726137/rss

For those who are curious about how the community deals with a serious
vulnerability, Solar Designer’s description of the embargo process around
the “Stack Clash” issue (and his unhappiness with it) is worth
a read. “Qualys first informed the distros list about this upcoming set of issues
on May 3. This initial notification didn’t say Stack Clash nor anything
like that, but merely expressed intent to disclose the issues and
concern that the list’s maximum embargo duration of 14 to 19 days might
not be sufficient in this case. In the resulting discussion, I agreed
to consider extending the embargo beyond list policy should there be
convincing reasons for that. In retrospect, I think I shouldn’t have
agreed to that.”

Protect Web Sites & Services Using Rate-Based Rules for AWS WAF

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/protect-web-sites-services-using-rate-based-rules-for-aws-waf/

AWS WAF (Web Application Firewall) helps to protect your application from many different types of application-layer attacks that involve requests that are malicious or malformed. As I showed you when I first wrote about this service (New – AWS WAF), you can define rules that match cross-site scripting, IP address, SQL injection, size, or content constraints:

When incoming requests match rules, actions are invoked. Actions can either allow, block, or simply count matches.

The existing rule model is powerful and gives you the ability to detect and respond to many different types of attacks. It does not, however, allow you to respond to attacks that simply consist of a large number of otherwise valid requests from a particular IP address. These requests might be a web-layer DDoS attack, a brute-force login attempt, or even a partner integration gone awry.

New Rate-Based Rules
Today we are adding Rate-based Rules to WAF, giving you control of when IP addresses are added to and removed from a blacklist, along with the flexibility to handle exceptions and special cases:

Blacklisting IP Addresses – You can blacklist IP addresses that make requests at a rate that exceeds a configured threshold rate.

IP Address Tracking – You can see which IP addresses are currently blacklisted.

IP Address Removal – IP addresses that have been blacklisted are automatically removed when they no longer make requests at a rate above the configured threshold.

IP Address Exemption – You can exempt certain IP addresses from blacklisting by using an IP address whitelist inside of a rate-based rule. For example, you might want to allow trusted partners to access your site at a higher rate.

Monitoring & Alarming – You can watch and alarm on CloudWatch metrics that are published for each rule.

You can combine new Rate-based Rules with WAF Conditions to implement sophisticated rate-limiting strategies. For example, you could use a Rate-based Rule and a WAF Condition that matches your login pages. This would let you impose a modest threshold on your login pages (to deter brute-force password attacks) while allowing a more generous one on your marketing or system status pages.

Thresholds are defined in terms of the number of incoming requests from a single IP address within a 5-minute period. Once this threshold is breached, additional requests from the IP address are blocked until the request rate falls below the threshold.
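For those who prefer to script their WAF configuration rather than click through the console, the same operations are available through the WAF API. Here is a minimal boto3 sketch of creating a rate-based rule and scoping it with a condition; the ProtectLogin name mirrors the walkthrough below, while the byte-match condition ID is a placeholder for a condition you would create separately:

import boto3

waf = boto3.client('waf')

def change_token():
    # Every mutating call to the classic WAF API needs a fresh change token.
    return waf.get_change_token()['ChangeToken']

# Create the rate-based rule; the limit counts requests from a single
# IP address within the 5-minute window described above.
rule = waf.create_rate_based_rule(
    Name='ProtectLogin',
    MetricName='ProtectLogin',
    RateKey='IP',
    RateLimit=2000,
    ChangeToken=change_token())['Rule']

# Scope the rule to the login pages by attaching a previously created
# byte-match condition (the DataId below is a placeholder).
waf.update_rate_based_rule(
    RuleId=rule['RuleId'],
    ChangeToken=change_token(),
    Updates=[{'Action': 'INSERT',
              'Predicate': {'Negated': False,
                            'Type': 'ByteMatch',
                            'DataId': 'EXAMPLE-BYTE-MATCH-SET-ID'}}],
    RateLimit=2000)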

Using Rate-Based Rules
Here’s how you would define a Rate-based Rule that protects the /login portion of your site. Start by defining a WAF condition that matches the desired string in the URI of the page:

Then use this condition to define a Rate-based Rule (the rate limit is expressed in terms of requests within a 5-minute interval, but the blacklisting goes into effect as soon as the limit is breached):

With the condition and the rule in place, create a Web ACL (ProtectLoginACL) to bring it all together and to attach it to the AWS resource (a CloudFront distribution in this case):

Then attach the rule (ProtectLogin) to the Web ACL:

The resource is now protected in accord with the rule and the web ACL. You can monitor the associated CloudWatch metrics (ProtectLogin and ProtectLoginACL in this case). You could even create CloudWatch Alarms and use them to fire Lambda functions when a protection threshold is breached. The code could examine the offending IP address and make a complex, business-driven decision, perhaps adding a whitelisting rule that gives an extra-generous allowance to a trusted partner or to a user with a special payment plan.
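As a rough sketch of that last idea, such a Lambda function could exempt a trusted address by inserting it into the whitelist IPSet referenced by the rate-based rule. The IPSet ID and the address below are placeholders:

import boto3

waf = boto3.client('waf')

def whitelist_ip(ip_set_id, cidr):
    # Insert a trusted address into the whitelist IPSet so that the
    # rate-based rule exempts it from blacklisting.
    token = waf.get_change_token()['ChangeToken']
    waf.update_ip_set(
        IPSetId=ip_set_id,
        ChangeToken=token,
        Updates=[{'Action': 'INSERT',
                  'IPSetDescriptor': {'Type': 'IPV4', 'Value': cidr}}])

whitelist_ip('EXAMPLE-IP-SET-ID', '203.0.113.7/32')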

Available Now
The new Rate-based Rules are available now and you can start using them today! Rate-based Rules are priced the same as Regular rules; see the WAF Pricing page for more info.

Jeff;

BPI Breaks Record After Sending 310 Million Google Takedowns

Post Syndicated from Andy original https://torrentfreak.com/bpi-breaks-record-after-sending-310-million-google-takedowns-170619/

A little over a year ago during March 2016, music industry group BPI reached an important milestone. After years of sending takedown notices to Google, the group burst through the 200 million URL barrier.

The fact that it took BPI several years to reach its 200 million milestone made the surpassing of the quarter billion milestone a few months later even more remarkable. In October 2016, the group sent its 250 millionth takedown to Google, a figure that nearly doubled when accounting for notices sent to Microsoft’s Bing.

But despite the volumes, the battle hadn’t been won, let alone the war. The BPI’s takedown machine continued to run at a remarkable rate, churning out millions more notices per week.

As a result, yet another new milestone was reached this month when the BPI smashed through the 300 million URL barrier. Then, days later, a further 10 million were added, with the latter couple of million added during the time it took to put this piece together.

BPI takedown notices, as reported by Google

While demanding that Google place greater emphasis on its de-ranking of ‘pirate’ sites, the BPI has called again and again for a “notice and stay down” regime, to ensure that content taken down by the search engine doesn’t simply reappear under a new URL. It’s a position BPI maintains today.

“The battle would be a whole lot easier if intermediaries played fair,” a BPI spokesperson informs TF.

“They need to take more proactive responsibility to reduce infringing content that appears on their platform, and, where we expressly notify infringing content to them, to ensure that they do not only take it down, but also keep it down.”

The long-standing suggestion is that the volume of takedown notices sent would reduce if a “take down, stay down” regime was implemented. The BPI says it’s difficult to present a precise figure but infringing content has a tendency to reappear, both in search engines and on hosting sites.

“Google rejects repeat notices for the same URL. But illegal content reappears as it is re-indexed by Google. As to the sites that actually host the content, the vast majority of notices sent to them could be avoided if they implemented take-down & stay-down,” BPI says.

The fact that the BPI has added 60 million more takedowns since the quarter billion milestone a few months ago is quite remarkable, particularly since there appears to be little slowdown from month to month. However, the numbers have grown so huge that 310 million now feels a lot like 250 million, with just a few added on top for good measure.

That an extra 60 million takedowns can almost be dismissed as a handful is an indication of just how massive the issue is online. While pirates always welcome an abundance of links to juicy content, it’s no surprise that groups like the BPI are seeking more comprehensive and sustainable solutions.

Previously, it was hoped that the Digital Economy Bill would provide some relief, hopefully via government intervention and the imposition of a search engine Code of Practice. In the event, however, all pressure on search engines was removed from the legislation after a separate voluntary agreement was reached.

All parties agreed that the voluntary code should come into effect two weeks ago, on June 1, so it seems likely that some effects should be noticeable in the near future. But the BPI says it’s still early days and there’s more work to be done.

“BPI has been working productively with search engines since the voluntary code was agreed to understand how search engines approach the problem, but also what changes can and have been made and how results can be improved,” the group explains.

“The first stage is to benchmark where we are and to assess the impact of the changes search engines have made so far. This will hopefully be completed soon, then we will have better information of the current picture and from that we hope to work together to continue to improve search for rights owners and consumers.”

With more takedown notices in the pipeline not yet publicly reported by Google, the BPI informs TF that it has now notified the search giant of 315 million links to illegal content.

“That’s an astonishing number. More than 1 in 10 of the entire world’s notices to Google come from BPI. This year alone, one in every three notices sent to Google from BPI is for independent record label repertoire,” BPI concludes.

While it’s clear that groups like BPI have developed systems to cope with the huge numbers of takedown notices required in today’s environment, few rightsholders are happy with the status quo. With that in mind, the fight will continue until search engines are forced into compromise. Considering the implications, that point may only appear on a very distant horizon.


New – Auto Scaling for Amazon DynamoDB

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/new-auto-scaling-for-amazon-dynamodb/

Amazon DynamoDB has more than one hundred thousand customers, spanning a wide range of industries and use cases. These customers depend on DynamoDB’s consistent performance at any scale and presence in 16 geographic regions around the world. A recent trend we’ve been observing is customers using DynamoDB to power their serverless applications. This is a good match: with DynamoDB, you don’t have to think about things like provisioning servers, performing OS and database software patching, or configuring replication across availability zones to ensure high availability – you can simply create tables and start adding data, and let DynamoDB handle the rest.

DynamoDB provides a provisioned capacity model that lets you set the amount of read and write capacity required by your applications. While this frees you from thinking about servers and enables you to change provisioning for your table with a simple API call or button click in the AWS Management Console, customers have asked us how we can make managing capacity for DynamoDB even easier.

Today we are introducing Auto Scaling for DynamoDB to help automate capacity management for your tables and global secondary indexes. You simply specify the desired target utilization and provide upper and lower bounds for read and write capacity. DynamoDB will then monitor throughput consumption using Amazon CloudWatch alarms and adjust provisioned capacity up or down as needed. Auto Scaling will be on by default for all new tables and indexes, and you can also configure it for existing ones.

Even if you’re not around, DynamoDB Auto Scaling will be monitoring your tables and indexes to automatically adjust throughput in response to changes in application traffic. This can make it easier to administer your DynamoDB data, help you maximize availability for your applications, and help you reduce your DynamoDB costs.

Let’s see how it works…

Using Auto Scaling
The DynamoDB Console now proposes a comfortable set of default parameters when you create a new table. You can accept them as-is or you can uncheck Use default settings and enter your own parameters:

Here’s how you enter your own parameters:

Target utilization is expressed in terms of the ratio of consumed capacity to provisioned capacity. The parameters above would provide sufficient headroom to allow consumed capacity to double due to a burst in read or write requests (read Capacity Unit Calculations to learn more about the relationship between DynamoDB read and write operations and provisioned capacity). Changes in provisioned capacity take place in the background.

Auto Scaling in Action
In order to see this important new feature in action, I followed the directions in the Getting Started Guide. I launched a fresh EC2 instance, installed (sudo pip install boto3) and configured (aws configure) the AWS SDK for Python. Then I used the code in the Python and DynamoDB section to create and populate a table with some data, and manually configured the table for 5 units each of read and write capacity.

I took a quick break in order to have clean, straight lines for the CloudWatch metrics so that I could show the effect of Auto Scaling. Here’s what the metrics look like before I started to apply a load:

I modified the code in Step 3 to continually issue queries for random years in the range of 1920 to 2007, ran a single copy of the code, and checked the read metrics a minute or two later:
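In case you would like to generate a similar load, the loop was along these lines (a minimal sketch; the Movies table and its numeric year partition key come from the Getting Started Guide):

import random

import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource('dynamodb').Table('Movies')

while True:
    # Each query consumes read capacity; a few copies of this loop are
    # enough to push consumption past 5 provisioned read capacity units.
    year = random.randint(1920, 2007)
    table.query(KeyConditionExpression=Key('year').eq(year))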

The consumed capacity is higher than the provisioned capacity, resulting in a large number of throttled reads. Time for Auto Scaling!

I returned to the console and clicked on the Capacity tab for my table. Then I clicked on Read capacity, accepted the default values, and clicked on Save:

DynamoDB created a new IAM role (DynamoDBAutoscaleRole) and a pair of CloudWatch alarms to manage the Auto Scaling of read capacity:

DynamoDB Auto Scaling will manage the thresholds for the alarms, moving them up and down as part of the scaling process. The first alarm was triggered and the table state changed to Updating while additional read capacity was provisioned:

The change was visible in the read metrics within minutes:

I started a couple of additional copies of my modified query script and watched as additional capacity was provisioned, as indicated by the red line:

I killed all of the scripts and turned my attention to other things while waiting for the scale-down alarm to trigger. Here’s what I saw when I came back:

The next morning I checked my Scaling activities and saw that the alarm had triggered several more times overnight:

This was also visible in the metrics:

Until now, you would prepare for this situation by setting your read capacity well above your expected usage, and pay for the excess capacity (the space between the blue line and the red line). Or, you might set it too low, forget to monitor it, and run out of capacity when traffic picks up. With Auto Scaling you can get the best of both worlds: an automatic response when an increase in demand suggests that more capacity is needed, and another automated response when the capacity is no longer needed.

Things to Know
DynamoDB Auto Scaling is designed to accommodate request rates that vary in a somewhat predictable, generally periodic fashion. If you need to accommodate unpredictable bursts of read activity, you should use Auto Scaling in combination with DAX (read Amazon DynamoDB Accelerator (DAX) – In-Memory Caching for Read-Intensive Workloads to learn more). Also, the AWS SDKs will detect throttled read and write requests and retry them after a suitable delay.

I mentioned the DynamoDBAutoscaleRole earlier. This role provides Auto Scaling with the privileges that it needs to have in order for it to be able to scale your tables and indexes up and down. To learn more about this role and the permissions that it uses, read Grant User Permissions for DynamoDB Auto Scaling.

Auto Scaling has complete CLI and API support, including the ability to enable and disable the Auto Scaling policies. If you have some predictable, time-bound spikes in traffic, you can programmatically disable an Auto Scaling policy, provision higher throughput for a set period of time, and then enable Auto Scaling again later.
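For example, here is a sketch of what enabling a read capacity policy looks like through the Application Auto Scaling API with boto3. The table name, bounds, and target value are examples, and depending on your setup you may also need to pass the DynamoDBAutoscaleRole ARN via the RoleARN parameter:

import boto3

autoscaling = boto3.client('application-autoscaling')

# Register the table's read capacity as a scalable target, with lower
# and upper bounds on provisioned capacity.
autoscaling.register_scalable_target(
    ServiceNamespace='dynamodb',
    ResourceId='table/Movies',
    ScalableDimension='dynamodb:table:ReadCapacityUnits',
    MinCapacity=5,
    MaxCapacity=100)

# Attach a target tracking policy that aims to keep consumed capacity
# at 70% of provisioned capacity.
autoscaling.put_scaling_policy(
    ServiceNamespace='dynamodb',
    ResourceId='table/Movies',
    ScalableDimension='dynamodb:table:ReadCapacityUnits',
    PolicyName='MoviesReadScaling',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 70.0,
        'PredefinedMetricSpecification': {
            'PredefinedMetricType': 'DynamoDBReadCapacityUtilization'}})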

As noted on the Limits in DynamoDB page, you can increase provisioned capacity as often as you would like and as high as you need (subject to per-account limits that we can increase on request). You can decrease capacity up to nine times per day for each table or global secondary index.

You pay for the capacity that you provision, at the regular DynamoDB prices. You can also purchase DynamoDB Reserved Capacity for further savings.

Available Now
This feature is available now in all regions and you can start using it today!

Jeff;

Pirate Bay Facilitates Piracy and Can be Blocked, Top EU Court Rules

Post Syndicated from Ernesto original https://torrentfreak.com/pirate-bay-facilitates-piracy-and-can-be-blocked-top-eu-court-rules-170614/

In 2014, the Court of The Hague handed down its decision in a long-running case which had previously forced two Dutch ISPs, Ziggo and XS4ALL, to block The Pirate Bay.

The Court ruled against local anti-piracy outfit BREIN, concluding that the blockade was ineffective and restricted the ISPs’ entrepreneurial freedoms.

The Pirate Bay was unblocked by all local ISPs while BREIN took the matter to the Supreme Court, which subsequently referred the case to the EU Court of Justice, seeking further clarification.

After a careful review of the case, the Court of Justice today ruled that The Pirate Bay can indeed be blocked.

While the operators don’t share anything themselves, they knowingly provide users with a platform to share copyright-infringing links. This can be seen as “an act of communication” under the EU Copyright Directive, the Court concludes.

“Whilst it accepts that the works in question are placed online by the users, the Court highlights the fact that the operators of the platform play an essential role in making those works available,” the Court explains in a press release (pdf).

According to the ruling, The Pirate Bay indexes torrents in a way that makes it easy for users to find infringing content while the site makes a profit. The Pirate Bay is aware of the infringements, and although moderators sometimes remove “faulty” torrents, infringing links remain online.

“In addition, the same operators expressly display, on blogs and forums accessible on that platform, their intention of making protected works available to users, and encourage the latter to make copies of those works,” the Court writes.

The ruling means that there are no major obstacles for the Dutch Supreme Court to issue an ISP blockade, but a final decision in the underlying case will likely take a few more months.

A decision at the European level is important, as it may also affect court orders in other countries where The Pirate Bay and other torrent sites are already blocked, including Austria, Belgium, Finland, Italy, and its home turf Sweden.

Despite the negative outcome, the Pirate Bay team is not overly worried.

“Copyright holders will remain stubborn and fight to hold onto a dying model. Clueless and corrupt law makers will put corporate interests before the public’s. Their combined jackassery is what keeps TPB alive,” TPB’s plc365 tells TorrentFreak.

“The reality is that regardless of the ruling, nothing substantial will change. Maybe more ISPs will block TPB. More people will use one of the hundreds of existing proxies, and even more new ones will be created as a result.”

Pirate Bay moderator “Xe” notes that while it’s an extra barrier to access the site, blockades will eventually help people to get around censorship efforts, which are not restricted to TPB.

“They’re an issue for everyone in the sense that they’re an obstacle which has to be overcome. But learning how to work around them isn’t hard and knowing how to work around them is becoming a core skill for everyone who uses the Internet.

“Blockades are not a major issue for the site in the sense that they’re nothing new: we’ve long since adapted to them. We serve the needs of millions of people every day in spite of them,” Xe adds.


Seven Tips for Using S3DistCp on Amazon EMR to Move Data Efficiently Between HDFS and Amazon S3

Post Syndicated from Illya Yalovyy original https://aws.amazon.com/blogs/big-data/seven-tips-for-using-s3distcp-on-amazon-emr-to-move-data-efficiently-between-hdfs-and-amazon-s3/

Have you ever needed to move a large amount of data between Amazon S3 and Hadoop Distributed File System (HDFS) but found that the data set was too large for a simple copy operation? EMR can help you with this. In addition to processing and analyzing petabytes of data, EMR can move large amounts of data.

In the Hadoop ecosystem, DistCp is often used to move data. DistCp provides a distributed copy capability built on top of a MapReduce framework. S3DistCp is an extension to DistCp that is optimized to work with S3 and that adds several useful features. In addition to moving data between HDFS and S3, S3DistCp is also a Swiss Army knife of file manipulations. In this post we’ll cover the following tips for using S3DistCp, starting with basic use cases and then moving to more advanced scenarios:

1. Copy or move files without transformation
2. Copy and change file compression on the fly
3. Copy files incrementally
4. Copy multiple folders in one job
5. Aggregate files based on a pattern
6. Upload files larger than 1 TB in size
7. Submit an S3DistCp step to an EMR cluster

1. Copy or move files without transformation

We’ve observed that customers often use S3DistCp to copy data from one storage location to another, whether S3 or HDFS. Syntax for this operation is simple and straightforward:

$ s3-dist-cp --src /data/incoming/hourly_table --dest s3://my-tables/incoming/hourly_table

The source location may contain extra files that we don’t necessarily want to copy. Here, we can use filters based on regular expressions to do things such as copying files with the .log extension only.

Each subfolder has the following files:

$ hadoop fs -ls /data/incoming/hourly_table/2017-02-01/03
Found 8 items
-rw-r--r--   1 hadoop hadoop     197850 2017-02-19 03:41 /data/incoming/hourly_table/2017-02-01/03/2017-02-01.03.25845.log
-rw-r--r--   1 hadoop hadoop     484006 2017-02-19 03:41 /data/incoming/hourly_table/2017-02-01/03/2017-02-01.03.32953.log
-rw-r--r--   1 hadoop hadoop     868522 2017-02-19 03:41 /data/incoming/hourly_table/2017-02-01/03/2017-02-01.03.62649.log
-rw-r--r--   1 hadoop hadoop     408072 2017-02-19 03:41 /data/incoming/hourly_table/2017-02-01/03/2017-02-01.03.64637.log
-rw-r--r--   1 hadoop hadoop    1031949 2017-02-19 03:41 /data/incoming/hourly_table/2017-02-01/03/2017-02-01.03.70767.log
-rw-r--r--   1 hadoop hadoop     368240 2017-02-19 03:41 /data/incoming/hourly_table/2017-02-01/03/2017-02-01.03.89910.log
-rw-r--r--   1 hadoop hadoop     437348 2017-02-19 03:41 /data/incoming/hourly_table/2017-02-01/03/2017-02-01.03.96053.log
-rw-r--r--   1 hadoop hadoop        800 2017-02-19 03:41 /data/incoming/hourly_table/2017-02-01/03/processing.meta

To copy only the required files, let’s use the --srcPattern option:

$ s3-dist-cp --src /data/incoming/hourly_table --dest s3://my-tables/incoming/hourly_table_filtered --srcPattern .*\.log

After the upload has finished successfully, let’s check the folder contents in the destination location to confirm only the files ending in .log were copied:

$ hadoop fs -ls s3://my-tables/incoming/hourly_table_filtered/2017-02-01/03
-rw-rw-rw-   1     197850 2017-02-19 22:56 s3://my-tables/incoming/hourly_table_filtered/2017-02-01/03/2017-02-01.03.25845.log
-rw-rw-rw-   1     484006 2017-02-19 22:56 s3://my-tables/incoming/hourly_table_filtered/2017-02-01/03/2017-02-01.03.32953.log
-rw-rw-rw-   1     868522 2017-02-19 22:56 s3://my-tables/incoming/hourly_table_filtered/2017-02-01/03/2017-02-01.03.62649.log
-rw-rw-rw-   1     408072 2017-02-19 22:56 s3://my-tables/incoming/hourly_table_filtered/2017-02-01/03/2017-02-01.03.64637.log
-rw-rw-rw-   1    1031949 2017-02-19 22:56 s3://my-tables/incoming/hourly_table_filtered/2017-02-01/03/2017-02-01.03.70767.log
-rw-rw-rw-   1     368240 2017-02-19 22:56 s3://my-tables/incoming/hourly_table_filtered/2017-02-01/03/2017-02-01.03.89910.log
-rw-rw-rw-   1     437348 2017-02-19 22:56 s3://my-tables/incoming/hourly_table_filtered/2017-02-01/03/2017-02-01.03.96053.log

Sometimes, data needs to be moved instead of copied. In this case, we can use the --deleteOnSuccess option. This option is similar to aws s3 mv, which you might have used previously with the AWS CLI. The files are first copied and then deleted from the source:

$ s3-dist-cp --src s3://my-tables/incoming/hourly_table --dest s3://my-tables/incoming/hourly_table_archive --deleteOnSuccess

After the preceding operation, the source location has only empty folders, and the target location contains all files.

$ hadoop fs -ls -R s3://my-tables/incoming/hourly_table/2017-02-01/
drwxrwxrwx   -          0 1970-01-01 00:00 s3://my-tables/incoming/hourly_table/2017-02-01/00
drwxrwxrwx   -          0 1970-01-01 00:00 s3://my-tables/incoming/hourly_table/2017-02-01/01
...
drwxrwxrwx   -          0 1970-01-01 00:00 s3://my-tables/incoming/hourly_table/2017-02-01/21
drwxrwxrwx   -          0 1970-01-01 00:00 s3://my-tables/incoming/hourly_table/2017-02-01/22


$ hadoop fs -ls s3://my-tables/incoming/hourly_table_archive/2017-02-01/01
-rw-rw-rw-   1     676756 2017-02-19 23:27 s3://my-tables/incoming/hourly_table_archive/2017-02-01/01/2017-02-01.01.27047.log
-rw-rw-rw-   1     780197 2017-02-19 23:27 s3://my-tables/incoming/hourly_table_archive/2017-02-01/01/2017-02-01.01.59789.log
-rw-rw-rw-   1    1041789 2017-02-19 23:27 s3://my-tables/incoming/hourly_table_archive/2017-02-01/01/2017-02-01.01.82293.log
-rw-rw-rw-   1        400 2017-02-19 23:27 s3://my-tables/incoming/hourly_table_archive/2017-02-01/01/processing.meta

The important things to remember here are that S3DistCp deletes only files with the --deleteOnSuccess flag and that it doesn’t delete parent folders, even when they are empty.

2. Copy and change file compression on the fly

Raw files often land in S3 or HDFS in an uncompressed text format. This format is suboptimal both for the cost of storage and for running analytics on that data. S3DistCp can help you efficiently store data and compress files on the fly with the --outputCodec option:

$ s3-dist-cp --src s3://my-tables/incoming/hourly_table_filtered --dest s3://my-tables/incoming/hourly_table_gz --outputCodec=gz

The current version of S3DistCp supports the codecs gzip, gz, lzo, lzop, and snappy, and the keywords none and keep (the default). These keywords have the following meaning:

  • “none” – Save files uncompressed. If the files are compressed, then S3DistCp decompresses them.
  • “keep” – Don’t change the compression of the files but copy them as-is.

Let’s check the files in the target folder, which have now been compressed with the gz codec:

$ hadoop fs -ls s3://my-tables/incoming/hourly_table_gz/2017-02-01/01/
Found 3 items
-rw-rw-rw-   1     78756 2017-02-20 00:07 s3://my-tables/incoming/hourly_table_gz/2017-02-01/01/2017-02-01.01.27047.log.gz
-rw-rw-rw-   1     80197 2017-02-20 00:07 s3://my-tables/incoming/hourly_table_gz/2017-02-01/01/2017-02-01.01.59789.log.gz
-rw-rw-rw-   1    121178 2017-02-20 00:07 s3://my-tables/incoming/hourly_table_gz/2017-02-01/01/2017-02-01.01.82293.log.gz

3. Copy files incrementally

In real life, the upstream process drops files in some cadence. For instance, new files might get created every hour, or every minute. The downstream process can be configured to pick them up on a different schedule.

Let’s say data lands on S3 and we want to process it on HDFS daily. Copying all files every time doesn’t scale very well. Fortunately, S3DistCp has a built-in solution for that.

For this solution, we use a manifest file. That file allows S3DistCp to keep track of copied files. Following is an example of the command:

$ s3-dist-cp --src s3://my-tables/incoming/hourly_table --dest s3://my-tables/processing/hourly_table --srcPattern .*\.log --outputManifest=manifest-2017-02-25.gz --previousManifest=s3://my-tables/processing/hourly_table/manifest-2017-02-24.gz

The command takes two manifest files as parameters, outputManifest and previousManifest. The first one contains a list of all copied files (old and new), and the second contains a list of files copied previously. This way, we can recreate the full history of operations and see what files were copied during each run:

$ hadoop fs -text s3://my-tables/processing/hourly_table/manifest-2017-02-24.gz > previous.lst
$ hadoop fs -text s3://my-tables/processing/hourly_table/manifest-2017-02-25.gz > current.lst
$ diff previous.lst current.lst
2548a2549,2550
> {"path":"s3://my-tables/processing/hourly_table/2017-02-25/00/2017-02-15.00.50958.log","baseName":"2017-02-25/00/2017-02-15.00.50958.log","srcDir":"s3://my-tables/processing/hourly_table","size":610308}
> {"path":"s3://my-tables/processing/hourly_table/2017-02-25/00/2017-02-25.00.93423.log","baseName":"2017-02-25/00/2017-02-25.00.93423.log","srcDir":"s3://my-tables/processing/hourly_table","size":178928}

If you provide a local path for the manifest, such as /tmp/mymanifest.gz, S3DistCp first creates the file in the local file system using that path. When the copy operation finishes, it moves the manifest to the destination location.

4. Copy multiple folders in one job

Imagine that we need to copy several folders. Usually, we run as many copy jobs as there are folders that need to be copied. With S3DistCp, the copy can be done in one go. All we need is to prepare a file with a list of prefixes and use it as a parameter for the tool:

$ s3-dist-cp --src s3://my-tables/incoming/hourly_table_filtered --dest s3://my-tables/processing/sample_table --srcPrefixesFile file://${PWD}/folders.lst

In this case, the folders.lst file contains the following prefixes:

$ cat folders.lst
s3://my-tables/incoming/hourly_table_filtered/2017-02-10/11
s3://my-tables/incoming/hourly_table_filtered/2017-02-19/02
s3://my-tables/incoming/hourly_table_filtered/2017-02-23

As a result, the target location has only the requested subfolders:

$ hadoop fs -ls -R s3://my-tables/processing/sample_table
drwxrwxrwx   -          0 1970-01-01 00:00 s3://my-tables/processing/sample_table/2017-02-10
drwxrwxrwx   -          0 1970-01-01 00:00 s3://my-tables/processing/sample_table/2017-02-10/11
-rw-rw-rw-   1     139200 2017-02-24 05:59 s3://my-tables/processing/sample_table/2017-02-10/11/2017-02-10.11.12980.log
...
drwxrwxrwx   -          0 1970-01-01 00:00 s3://my-tables/processing/sample_table/2017-02-19
drwxrwxrwx   -          0 1970-01-01 00:00 s3://my-tables/processing/sample_table/2017-02-19/02
-rw-rw-rw-   1     702058 2017-02-24 05:59 s3://my-tables/processing/sample_table/2017-02-19/02/2017-02-19.02.19497.log
-rw-rw-rw-   1     265404 2017-02-24 05:59 s3://my-tables/processing/sample_table/2017-02-19/02/2017-02-19.02.26671.log
...
drwxrwxrwx   -          0 1970-01-01 00:00 s3://my-tables/processing/sample_table/2017-02-23
drwxrwxrwx   -          0 1970-01-01 00:00 s3://my-tables/processing/sample_table/2017-02-23/00
-rw-rw-rw-   1     310425 2017-02-24 05:59 s3://my-tables/processing/sample_table/2017-02-23/00/2017-02-23.00.10061.log
-rw-rw-rw-   1    1030397 2017-02-24 05:59 s3://my-tables/processing/sample_table/2017-02-23/00/2017-02-23.00.22664.log
...

5. Aggregate files based on a pattern

Hadoop is optimized for reading a small number of large files rather than many small files, whether from S3 or HDFS. You can use S3DistCp to aggregate small files into fewer large files of a size that you choose, which can optimize your analysis for both performance and cost.

In the following example, we combine small files into bigger files. We do so by using a regular expression with the --groupBy option.

$ s3-dist-cp --src /data/incoming/hourly_table --dest s3://my-tables/processing/daily_table --targetSize=10 --groupBy='.*/hourly_table/.*/(\d\d)/.*\.log'

Let’s take a look into the target folders and compare them to the corresponding source folders:

$ hadoop fs -ls /data/incoming/hourly_table/2017-02-22/05/
Found 8 items
-rw-r--r--   1 hadoop hadoop     289949 2017-02-19 06:07 /data/incoming/hourly_table/2017-02-22/05/2017-02-22.05.11125.log
-rw-r--r--   1 hadoop hadoop     407290 2017-02-19 06:07 /data/incoming/hourly_table/2017-02-22/05/2017-02-22.05.19596.log
-rw-r--r--   1 hadoop hadoop     253434 2017-02-19 06:07 /data/incoming/hourly_table/2017-02-22/05/2017-02-22.05.30135.log
-rw-r--r--   1 hadoop hadoop     590655 2017-02-19 06:07 /data/incoming/hourly_table/2017-02-22/05/2017-02-22.05.36531.log
-rw-r--r--   1 hadoop hadoop     762076 2017-02-19 06:07 /data/incoming/hourly_table/2017-02-22/05/2017-02-22.05.47822.log
-rw-r--r--   1 hadoop hadoop     489783 2017-02-19 06:07 /data/incoming/hourly_table/2017-02-22/05/2017-02-22.05.80518.log
-rw-r--r--   1 hadoop hadoop     205976 2017-02-19 06:07 /data/incoming/hourly_table/2017-02-22/05/2017-02-22.05.99127.log
-rw-r--r--   1 hadoop hadoop        800 2017-02-19 06:07 /data/incoming/hourly_table/2017-02-22/05/processing.meta

 

$ hadoop fs -ls s3://my-tables/processing/daily_table/2017-02-22/05/
Found 2 items
-rw-rw-rw-   1   10541944 2017-02-28 05:16 s3://my-tables/processing/daily_table/2017-02-22/05/054
-rw-rw-rw-   1   10511516 2017-02-28 05:16 s3://my-tables/processing/daily_table/2017-02-22/05/055

As you can see, seven data files were combined into two with a size close to the requested 10 MB. The *.meta file was filtered out because the --groupBy pattern works in a similar way to --srcPattern. We recommend keeping files larger than the default block size, which is 128 MB on EMR.

The name of the final file is composed of groups in the regular expression used in --groupBy plus some number to make the name unique. The pattern must have at least one group defined.

Let’s consider one more example. This time, we want the file name to be formed from three parts: year, month, and file extension (.log in this case). Here is an updated command:

$ s3-dist-cp --src /data/incoming/hourly_table --dest s3://my-tables/processing/daily_table_2017 --targetSize=10 --groupBy='.*/hourly_table/.*(2017-).*/(\d\d)/.*\.(log)'

Now we have final files named in a different way:

$ hadoop fs -ls s3://my-tables/processing/daily_table_2017/2017-02-22/05/
Found 2 items
-rw-rw-rw-   1   10541944 2017-02-28 05:16 s3://my-tables/processing/daily_table_2017/2017-02-22/05/2017-05log4
-rw-rw-rw-   1   10511516 2017-02-28 05:16 s3://my-tables/processing/daily_table_2017/2017-02-22/05/2017-05log5

As you can see, the names of the final files are formed by concatenating the three groups from the regular expression: (2017-), (\d\d), and (log).

You might find that occasionally you get an error that looks like the following:

$ s3-dist-cp --src /data/incoming/hourly_table --dest s3://my-tables/processing/daily_table_2017 --targetSize=10 --groupBy='.*/hourly_table/.*(2018-).*/(\d\d)/.*\.(log)'
...
17/04/27 15:37:45 INFO S3DistCp.S3DistCp: Created 0 files to copy 0 files
... 
Exception in thread "main" java.lang.RuntimeException: Error running job
	at com.amazon.elasticmapreduce.S3DistCp.S3DistCp.run(S3DistCp.java:927)
	at com.amazon.elasticmapreduce.S3DistCp.S3DistCp.run(S3DistCp.java:705)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
	at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
	at com.amazon.elasticmapreduce.S3DistCp.Main.main(Main.java:22)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
…

In this case, the key information is contained in Created 0 files to copy 0 files. S3DistCp didn’t find any files to copy because the regular expression in the --groupBy option doesn’t match any files in the source location.

The reason for this issue varies. For example, it can be a mistake in the specified pattern. In the preceding example, we don’t have any files for the year 2018. Another common reason is incorrect escaping of the pattern when we submit the S3DistCp command as a step, which is addressed later in this post.

6. Upload files larger than 1 TB in size

The default upload chunk size when doing an S3 multipart upload is 128 MB. When files are larger than 1 TB, the total number of parts can reach over 10,000. Such a large number of parts can make the job run for a very long time or even fail.

In this case, you can improve job performance by increasing the size of each part. In S3DistCp, you can do this by using the --multipartUploadChunkSize option.

Let’s test how it works on several files about 200 GB in size. With the default part size, it takes about 84 minutes to copy them to S3 from HDFS.

We can increase the default part size to 1000 MB:

$ time s3-dist-cp --src /data/gb200 --dest s3://my-tables/data/S3DistCp/gb200_2 --multipartUploadChunkSize=1000
...
real    41m1.616s

The maximum part size is 5 GB. Keep in mind that larger parts have a higher chance to fail during upload and don’t necessarily speed up the process. Let’s run the same job with the maximum part size:

$ time s3-dist-cp --src /data/gb200 --dest s3://my-tables/data/S3DistCp/gb200_2 --multipartUploadChunkSize=5000
...
real    40m17.331s

7. Submit an S3DistCp step to an EMR cluster

You can run the S3DistCp tool in several ways. First, you can SSH to the master node and execute the command in a terminal window as we did in the preceding examples. This approach might be convenient for many use cases, but sometimes you might want to create a cluster that has some data on HDFS. You can do this by submitting a step directly in the AWS Management Console when creating a cluster.

In the console add step dialog box, we can fill the fields in the following way:

  • Step type: Custom JAR
  • Name*: S3DistCp Step
  • JAR location: command-runner.jar
  • Arguments: s3-dist-cp --src s3://my-tables/incoming/hourly_table --dest /data/input/hourly_table --targetSize 10 --groupBy .*/hourly_table/.*(2017-).*/(\d\d)/.*\.(log)
  • Action on failure: Continue

Notice that we didn’t add quotation marks around our pattern. We needed quotation marks when we were using bash in the terminal window, but not here. The console takes care of escaping and transferring our arguments to the command on the cluster.

Another common use case is to run S3DistCp recurrently or on some event. We can always submit a new step to an existing cluster. The syntax here is slightly different than in previous examples: we separate arguments with commas, and in the case of a complex pattern, we wrap the whole step definition in single quotation marks:

aws emr add-steps --cluster-id j-ABC123456789Z --steps 'Name=LoadData,Jar=command-runner.jar,ActionOnFailure=CONTINUE,Type=CUSTOM_JAR,Args=s3-dist-cp,--src,s3://my-tables/incoming/hourly_table,--dest,/data/input/hourly_table,--targetSize,10,--groupBy,.*/hourly_table/.*(2017-).*/(\d\d)/.*\.(log)'
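If you drive EMR from Python rather than the AWS CLI, the same step can be submitted with boto3. Because the arguments are passed as a plain list, no shell escaping of the --groupBy pattern is needed (the cluster ID below is the same placeholder used above):

import boto3

emr = boto3.client('emr')

emr.add_job_flow_steps(
    JobFlowId='j-ABC123456789Z',  # placeholder cluster ID
    Steps=[{
        'Name': 'LoadData',
        'ActionOnFailure': 'CONTINUE',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': ['s3-dist-cp',
                     '--src', 's3://my-tables/incoming/hourly_table',
                     '--dest', '/data/input/hourly_table',
                     '--targetSize', '10',
                     '--groupBy', r'.*/hourly_table/.*(2017-).*/(\d\d)/.*\.(log)']}}])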

Summary

This post showed you the basics of how S3DistCp works and highlighted some of its most useful features. It covered how you can use S3DistCp to optimize for raw files of different sizes and also selectively copy different files between locations. We also looked at several options for using the tool from SSH, the AWS Management Console, and the AWS CLI.

If you have questions or suggestions, leave a message in the comments.




About the Author

Illya Yalovyy is a Senior Software Development Engineer with Amazon Web Services. He works on cutting-edge features of EMR and is heavily involved in open source projects such as Apache Hive, Apache ZooKeeper, and Apache Sqoop. His spare time is completely dedicated to his children and family.

 

I want to talk for a moment about tolerance

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/05/i-want-to-talk-for-moment-about.html

This post is in response to this Twitter thread. I was going to do a series of tweets in response, but as the number grew, I thought it’d better be done in a blog.

She thinks we are fighting for the rights of Nazis. We aren’t — indeed, the fact that she thinks we are is exactly the problem. They aren’t Nazis.

The issue is not about a slippery slope where first Nazis lose free speech, then other groups start losing their speech as well. The issue is that it’s a slippery slope where more and more people get labeled a Nazi. And we are already far down that slope.

The “alt-right” is a diverse group, like any group. Vilifying the entire alt-right by calling them Nazis is like lumping all Muslims in with ISIS or Al Qaeda. We really don’t have Nazis in America. Even White Nationalists don’t fit the bill. Nazism was about totalitarianism, a real desire to exterminate Jews, Lebensraum, and Aryan superiority. Sure, some of these people exist, but they are a fringe, even among the alt-right.

It’s at this point we need to discuss words like “tolerance”. I don’t think it means what you think it means.

The idea of tolerance is that reasonable people can disagree. You still believe you are right, and the other person is wrong, but you accept that they are nonetheless a reasonable person with good intentions, and that they don’t need to be punished for holding the wrong opinion.

Gay rights is a good example. I agree with you that there is only one right answer to this. Having spent nights holding my crying gay college roommate, because his father hated gays, has filled me with enormous hatred and contempt for people like his father. I’ve done my fair share of shouting at people for anti-gay slurs.

Yet on the other hand, progressive icons like Barack Obama and Hillary Clinton have had evolving positions on gay rights issues, such as having opposed gay marriage at one time.

Tolerance means accepting that a person is reasonable, intelligent, and well-meaning — even if they oppose gay marriage. It means accepting that Hillary and Obama were reasonable people, even when they were vocally opposing gay marriage.

I’m libertarian. Like most libertarians, I support wide open borders, letting any immigrant across the border for any reason. To me, Hillary’s and Obama’s immigration policies are almost as racist as Trump’s. I have to either believe all you people supporting Hillary/Obama are irredeemably racist — or that well-meaning, good people can disagree about immigration.

I could go through a long list of issues that separate the progressive left and alt-right, and my point would always be the same. While people disagree on issues, and I have my own opinions about which side is right, there are reasonable people on both sides. If there are issues that divide our country down the middle, then by definition, both sides are equally reasonable. The problem with the progressive left is that they do not tolerate this. They see the world as being between one half who hold the correct opinions, and the other half who are unreasonable.

What defines the “alt-right” is not Nazism or White Nationalism, but the reaction of many on the right to intolerance of many on the left. Every time somebody is punished and vilified for uttering what is in fact a reasonable difference of opinion, they join the “alt-right”.

The issue at stake here, the issue that the ACLU is defending, is that after the violent attack on the Portland train by an extremist, the city is denying all “alt-right” protesters the right to march. It's blaming the entire “alt-right” for the actions of one of its members. It's similar to cities blocking Muslims from building a mosque because of extremists like ISIS and Al Qaeda, or disturbed individuals who carry out violent attacks in the name of Islam.

This is not just a violation of First Amendment rights; it's an obvious one. As the Volokh Conspiracy documents, the courts have ruled many times on this issue. There is no doubt that the “alt-right” has the right to march, and that the city's efforts to deny it are a blatant violation of the constitution.

What we are defending here is not the right of actual Nazis to march (which the courts famously ruled was still legitimate speech in Skokie, Illinois), but the right of non-Nazis to march, most of whom have legitimate, reasonable (albeit often wrong) grievances to express. This speech is clearly being suppressed by gun-wielding thugs in Portland, Oregon.

Those like Jillian York see this as dealing with unreasonable speech; we see it as a problem of tolerably wrong speech. People like her aren't defending the right to free speech because, in their minds, they've vilified the people they disagree with. But that's exactly when, and only when, free speech needs our protection: when those speaking out have been vilified, and their repression seems just. Look at how Russia suppresses supporters of gay rights, with exactly this sort of vilification, whereby the majority of the populace sees the violence and policing as a legitimate response to speech that should not be free.

We aren’t fighting a slippery slope here, by defending Nazis. We’ve already slid down that slope, where reasonable people’s rights are being violated. We are fighting to get back up top.


Introspection

Post Syndicated from Eevee original https://eev.ee/blog/2017/05/28/introspection/

This month, IndustrialRobot has generously donated in order to ask:

How do you go about learning about yourself? Has your view of yourself changed recently? How did you handle it?

Whoof. That’s incredibly abstract and open-ended — there’s a lot I could say, but most of it is hard to turn into words.


The first example to come to mind — and the most conspicuous, at least from where I’m sitting — has been the transition from technical to creative since quitting my tech job. I think I touched on this a year ago, but it’s become all the more pronounced since then.

I quit in part because I wanted more time to work on my own projects. Two years ago, those projects included such things as: giving the Python ecosystem a better imaging library, designing an alternative to regular expressions, building a Very Correct IRC bot framework, and a few more things along similar lines. The goals were all to solve problems — not hugely important ones, but mildly inconvenient ones that I thought I could bring something novel to. Problem-solving for its own sake.

Now that I had all the time in the world to work on these things, I… didn’t. It turned out they were almost as much of a slog as my job had been!

The problem, I think, was that there was no point.

This was really weird to realize and come to terms with. I do like solving problems for its own sake; it’s interesting and educational. And most of the programming folks I know and surround myself with have that same drive and use it to create interesting tools like Twisted. So besides taking for granted that this was the kind of stuff I wanted to do, it seemed like the kind of stuff I should want to do.

But even if I create a really interesting tool, what do I have? I don’t have a thing; I have a tool that can be used to build things. If I want a thing, I have to either now build it myself — starting from nearly zero despite all the work on the tool, because it can only do so much in isolation — or convince a bunch of other people to use my tool to build things. Then they’d be depending on my tool, which means I have to maintain and support it, which is even more time and effort poured into this non-thing.

Despite frequently being drawn to think about solving abstract tooling problems, it seems I truly want to make things. This is probably why I have a lot of abandoned projects boldly described as “let’s solve X problem forever!” — I go to scratch the itch, I do just enough work that it doesn’t itch any more, and then I lose interest.

I spent a few months quietly flailing over this minor existential crisis. I’d spent years daydreaming about making tools; what did I have if not that drive? I was having to force myself to work on what I thought were my passion projects.

Meanwhile, I’d vaguely intended to do some game development, but for some reason dragged my feet forever and then took my sweet time dipping my toes in the water. I did work on a text adventure, Runed Awakening, on and off… but it was a fractal of creative decisions and I had a hard time making all of them. It might’ve been too ambitious, despite feeling small, and that might’ve discouraged me from pursuing other kinds of games earlier.

A big part of it might have been the same reason I took so long to even give art a serious try. I thought of myself as a technical person, and art is a thing for creative people, so I’m simply disqualified, right? Maybe the same thing applies to games.

Lord knows I had enough trouble when I tried. I’d orbited the Doom community for years but never released a single finished level. I did finally give it a shot again, now that I had the time. Six months into my funemployment, I wrote a three-part guide on making Doom levels. Three months after that, I finally released one of my own.

I suppose that opened the floodgates; a couple weeks later, glip and I decided to try making something for the PICO-8, and then we did that (almost exactly a year ago!). Then kept doing it.

It’s been incredibly rewarding — far more so than any “pure” tooling problem I’ve ever approached. More so than even something like veekun, which is a useful thing. People have thoughts and opinions on games. Games give people feelings, which they then tell you about. Most of the commentary on a reference website is that something is missing or incorrect.

I like doing creative work. There was never a singular moment when this dawned on me; it was a slow process over the course of a year or more. I probably should’ve had an inkling when I started drawing, half a year before I quit; even my early (and very rough) daily comics made people laugh, and I liked that a lot. Even the most well-crafted software doesn’t tend to bring joy to people, but amateur art can.

I still like doing technical work, but I prefer when it’s a means to a creative end. And, just as important, I prefer when it has a clear and constrained scope. “Make a library/tool for X” is a nebulous problem that could go in a great many directions; “make a bot that tweets Perlin noise” has a pretty definitive finish line. It was interesting to write a little physics engine, but I would’ve hated doing it if it weren’t for a game I was making and the clear scope of “do what I need for this game”.


It feels like creative work is something I’ve been wanting to do for a long time. If this were a made-for-TV movie, I would’ve discovered this impulse one day and immediately revealed myself as a natural-born artistic genius of immense unrealized talent.

That didn’t happen. Instead I’ve found that even something as mundane as having ideas is a skill, and while it’s one I enjoy, I’ve barely ever exercised it at all. I have plenty of ideas with technical work, but I run into brick walls all the time with creative stuff.

How do I theme this area? Well, I don’t know. How do I think of something? I don’t know that either. It’s a strange paradox to have an urge to create things but not quite know what those things are.

It’s such a new and completely different kind of problem. There’s no right answer, or even an answer I can check for “correctness”. I can do anything. With no landmarks to start from, it’s easy to feel completely lost and just draw blanks.

I’ve essentially recalibrated the texture of stuff I work on, and I have to find some completely new ways to approach problems. I haven’t found them yet. I don’t think they’re anything that can be told or taught. But I’m starting to get there, and part of it is just accepting that I can’t treat these like problems with clear best solutions and clear algorithms to find those solutions.

A particularly glaring irony is that I’ve had a really tough problem designing abstract spaces, even though that’s exactly the kind of architecture I praise in Doom. It’s much trickier than it looks — a good abstract design is reminiscent of something without quite being that something.

I suppose it’s similar to a struggle I’ve had with art. I’m drawn to a cartoony style, and cartooning is also a mild form of abstraction, of whittling away details to leave only what’s most important. I’m reminded in particular of the forest background in fox flux — I was completely lost on how to make something reminiscent of a tree line. I knew enough to know that drawing trees would’ve made the background far too busy, but trees are naturally busy, so how do you represent that?

The answer glip gave me was to make big chunky leaf shapes around the edges and where light levels change. Merely overlapping those shapes implies depth well enough to convey the overall shape of the tree. The result works very well and looks very simple — yet it took a lot of effort just to get to the idea.

It reminds me of mathematical research, in a way? You know the general outcome you want, and you know the tools at your disposal, and it’s up to you to make some creative leaps. I don’t think there’s a way to directly learn how to approach that kind of problem; all you can do is look at what others have done and let it fuel your imagination.


I think I’m getting a little distracted here, but this is stuff that’s been rattling around lately.

If there’s a more personal meaning to the tree story, it’s that this is a thing I can do. I can learn it, and it makes sense to me, despite being a huge nerd.

Two and a half years ago, I never would’ve thought I’d ever make an entire game from scratch and do all the art for it. It was completely unfathomable. Maybe we can do a lot of things we don’t expect we’re capable of, if only we give them a serious shot.

And ask for help, of course. I have a hell of a time doing that. I did a painting recently that factored in mountains of glip’s advice, and on some level I feel like I didn’t quite do it myself, even though every stroke was made by my hand. Hell, I don’t even look at references nearly as much as I should. It feels like cheating, somehow? I know that’s ridiculous, but my natural impulse is to put my head down and figure it out myself. Maybe I’ve been doing that for too long with programming. Trust me, it doesn’t work quite so well in a brand new field.


I’m getting distracted again!

To answer your actual questions: how do I go about learning about myself? I don’t! It happens completely by accident. I’ll consciously examine my surface-level thoughts or behaviors or whatever, sure, but the serious fundamental revelations have all caught me completely by surprise — sometimes slowly, sometimes suddenly.

Most of them also came from listening to the people who observe me from the outside: I only started drawing in the first place because of some ridiculous deal I made with glip. At the time I thought they just wanted everyone to draw because art is their thing, but now I’m starting to suspect they’d caught on after eight years of watching me lament that I couldn’t draw.

I don’t know how I handle such discoveries, either. What is handling? I imagine someone discovering something and trying to come to grips with it, but I don’t know that I have quite that experience — my grappling usually comes earlier, when I’m still trying to figure the thing out despite not knowing that there’s a thing to find out. Once I know it, it’s on the table; I can’t un-know it or reject it meaningfully. All I can do is figure out what to do with it, and I approach that the same way I approach every other problem: by flailing at it and hoping for the best.

This isn’t quite 2000 words. Sorry. I’ve run out of things to say about me. This paragraph is very conspicuous filler. Banana. Atmosphere. Vocation.

MariaDB 10.2 GA released with several advanced features

Post Syndicated from Michael "Monty" Widenius original http://monty-says.blogspot.com/2017/05/mariadb-102-ga-released-with-several.html

MariaDB 10.2.6 GA is now released. It's a release where we have concentrated on adding new advanced features to MariaDB.

The most noteworthy ones are:

  • Window functions give you the ability to do advanced calculations over a sliding window.
  • Common table expressions allow you to write more complex SQL statements without having to create explicit temporary tables (a combined sketch of these first two features appears after this list).
  • We finally have a DEFAULT clause that can take expressions, and also CHECK constraints.
  • Multiple triggers for the same event. This is important for anyone using tools, like pt-online-schema-change, that require multiple triggers for the same table.
  • A new storage engine, MyRocks, that gives you high compression of your data without sacrificing speed. It has been developed in cooperation with Facebook and MariaDB to allow you to handle more data with less resources.
  • flashback, a feature that can roll back instances/databases/tables to an old snapshot. The version in MariaDB 10.2 handles DML only. In MariaDB 10.3 we will also allow rollback over DDL (like DROP TABLE).
  • Compression of events in the binary log.
  • JSON functions added. In 10.2.7 we will also add support for CREATE TABLE … (a JSON).
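
To make the first two features above concrete, here is a minimal sketch (the table and column names are hypothetical) of a common table expression feeding a window function, followed by a table definition that uses an expression DEFAULT and a CHECK constraint:

-- Running total per customer, using a CTE plus a window function:
WITH monthly_sales AS (
  SELECT customer_id, MONTH(order_date) AS month, SUM(amount) AS total
  FROM orders
  GROUP BY customer_id, MONTH(order_date)
)
SELECT customer_id, month, total,
       SUM(total) OVER (PARTITION BY customer_id ORDER BY month) AS running_total
FROM monthly_sales;

-- Expression DEFAULTs and a CHECK constraint in one definition:
CREATE TABLE payments (
  id         INT PRIMARY KEY,
  amount     DECIMAL(10,2) CHECK (amount > 0),
  created_at DATETIME DEFAULT CURRENT_TIMESTAMP,
  note       VARCHAR(100) DEFAULT (CONCAT('entered on ', CURDATE()))
);

Before 10.2, the running total above would have required a self-join or an explicit temporary table; the window function computes it in a single pass.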

A few smaller but still noteworthy new features:

  • Connection setup was made faster by moving creation of the THD to a new thread. This, combined with better thread caching, can speed up connections by up to 85% in some cases.
  • The table cache can automatically partition itself as needed to reduce contention.
  • NO PAD collations, which means that trailing spaces are significant in comparisons (see the example after this list).
  • InnoDB is now the default storage engine. Until MariaDB 10.1, MariaDB used the XtraDB storage engine as the default. XtraDB in 10.2 is not up to date with the latest features of InnoDB and cannot be used. The main reason for this change is that most of the important features of XtraDB are nowadays implemented in InnoDB. As the MariaDB team is doing a lot more InnoDB development than ever before, we can no longer manage updating two almost identical engines. The InnoDB version in MariaDB contains the best features of MySQL InnoDB and XtraDB, and a lot more. As InnoDB's on-disk format is identical to XtraDB's, this will not cause any problems when upgrading to MariaDB 10.2.
  • The old GPL client library is gone; now MariaDB Server comes with the LGPL Connector/C client library.
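
As a quick illustration of the NO PAD behavior mentioned above, a minimal sketch (assuming a utf8mb4 connection; the literal values are arbitrary):

SET NAMES utf8mb4;
-- Default PAD SPACE collations ignore trailing spaces in comparisons:
SELECT 'abc' = 'abc   ';                                    -- 1 (equal)
-- A NO PAD collation treats trailing spaces as significant:
SELECT 'abc' = 'abc   ' COLLATE utf8mb4_unicode_nopad_ci;   -- 0 (not equal)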

There are a lot of other new features, performance enhancements and variables in MariaDB 10.2 for you to explore!

I am happy to see that a lot of the new features have come from the MariaDB community! (Note to myself: this list doesn't include all contributors to MariaDB 10.2 and needs to be updated.)

Thanks a lot to everyone that has contributed to MariaDB!

Streaming Site Operator Jailed For Three Years After Landmark Trial

Post Syndicated from Andy original https://torrentfreak.com/streaming-site-operator-jailed-for-three-years-after-landmark-trial-170516/

Founded more than half a decade ago, Swefilmer grew to become Sweden’s most popular movie and TV show streaming site. It was credited alongside another streaming portal for serving up to 25% of all online video streaming in Sweden.

With this level of prominence, it was only a question of time before authorities stepped in to end the free streaming bonanza. In 2015, that happened when an operator of the site in his early twenties was raided by local police.

This was followed by the arrest of a now 26-year-old Turkish man in Germany, who was accused of receiving donations from users and setting up Swefilmer’s deals with advertisers.

The pair, who had never met in person, appeared at the Varberg District Court in January, together accused of making more than $1.5m from their activities between November 2013 and June 2015.

As the trial progressed, it was clear that the outcome was not likely to be a good one for the men.

Prosecutor Anna Ginner described the operation as being like “organized crime”, with lawyer Henrik Pontén of RightsAlliance claiming that the evidence only represented a small part of the money made by the pair.

From the beginning, it was always claimed that the 26-year-old was the main player behind the site, with the now 23-year-old playing a much smaller role. While the latter received an estimated $4,000 of the proceeds, the former was said to have enriched himself with more than $1.5m in advertising revenue.

The Varberg District Court has now handed down its ruling and it’s particularly bad news for the 26-year-old, who is reported to have led a luxury lifestyle with proceeds from the site.

In a short statement the court confirmed he had been convicted of 1,044 breaches of copyright law and serious money laundering offenses. He was sentenced to serve three years in prison and ordered to forfeit $1.59m. The Court was far more lenient with the younger man.

After being found guilty of four counts of copyright infringement but playing almost no role in the site's revenue operations, he received no sentence for money laundering. Instead, he was handed probation and ordered to complete 120 hours of community service, a sentence that was positively affected by his age at the time the offenses were committed.

It’s worth noting that the sentence received by the 26-year-old goes way beyond the sentences handed down even in the notorious Pirate Bay case, where defendants Fredrik Neij, Peter Sunde and Gottfrid Svartholm received 10 months, 12 months and 8 months respectively.

However, with Henrik Pontén describing the Swefilmer case as being primarily about money laundering, his group is clearly unhappy that copyright offenses aren’t considered serious enough to warrant lengthy sentences in their own right.

“We welcome the judgment, but it is clear that copyright law must be adapted to today’s serious piracy. The penalty for copyright infringement should in itself be enough to deter people from crime,” Pontén says.

“The low level of penalties allows foreign piracy organizations to locate their operations in Sweden. The trend is very worrying.”

An important factor in the case moving forward is that in determining whether infringement had taken place, the Court drew heavily on the GS Media ruling handed down by the European Court of Justice last September.

In that decision, the Court found that linking to copyrighted material is only allowed when there is no intent to profit and when the linker is unaware that the content is infringing.

When there is a profit motive, which there clearly was in the Swefilmer case, operators of a site are expected to carry out the “checks necessary” to ensure that linked works have not been illegally published.

The operators of Swefilmer failed on all counts, so the local court determined that the platform had communicated copyrighted works to the public, in breach of copyright law.

Speaking with TorrentFreak, the 23-year-old expressed relief at his relatively light sentence but noted it may not be over yet.

“I was really happy when the judgment came. The long wait is finally over,” he said.

“RightsAlliance will appeal because they did not receive any compensation for the trial. But the prosecutor is satisfied with the judgment so it is only RightsAlliance who are dissatisfied.”

According to IDG, the lawyer of the 26-year-old believes that his client’s sentence is far too severe, so there may be an appeal in that direction too.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

NO, Kodi Users Are Not Risking Ten Years in Prison

Post Syndicated from Andy original https://torrentfreak.com/no-kodi-users-are-not-risking-ten-years-in-prison-170507/

Piracy has always been a reasonably popular topic in the UK and there can barely be a person alive today who hasn’t either engaged in or been exposed to the phenomenon in some way. Just lately, however, things have really entered the mainstream.

The massive public interest is down to the set-top box craze, which is largely fueled by legal Kodi software augmented with infringing addons that provide free access to premium movies, TV channels and live sports.

While this is a topic one might expect technology sites to report on, just recently UK tabloids have flooded the market with largely sensational stories about Kodi and piracy in general, which often recycle the same story time and again with SHOCKING click-bait headlines YOU JUST WON'T BELIEVE.

We’ve had to put up with misleading headlines and stories for months, so a while ago we made an effort to discuss the issues with tabloid reporters. Needless to say, we didn’t get very far. Most ignored our emails, but even those who responded weren’t prepared to do much.

One told us that his publication had decided that articles featuring Kodi were good for traffic while another promised to escalate our comments further up the chain of command. Within days additional articles with similar problems were being published regardless and this week things really boiled over.

10 Years for Kodi users? Hardly

The above report published in the Daily Express is typical of many doing the rounds at the moment. Taking Kodi as the popular search term, it shoe-horns the topic into areas of copyright law that do not apply to it, and ones certainly not covered by the Digital Economy Act cited in the headline.

As reported this week, the Digital Economy Act raises penalties for online copyright infringement offenses from two to ten years, but only in specific circumstances. Users streaming content to their homes via Kodi is absolutely not one of them.

To fall foul of the new law a user would need to communicate a copyrighted work to the public. In piracy terms that means ‘uploading’ and people streaming content via Kodi do nothing of the sort. The Digital Economy Act offers no remedy to deal with users streaming content – period – but let’s not allow the facts to get in the way of a click-inducing headline.

The Mirror has it wrong too

The Mirror article weaves in comments from Kieron Sharp from the Federation Against Copyright Theft. He notes that the new legislation should be targeted at people making a business out of infringement, which will hopefully be the case.

However, the article incorrectly extrapolates Sharp’s comments to mean that the law also applies to people streaming content via Kodi. Only making things more confusing, it then states that people “who casually stream a couple of movies every once in a while are extremely unlikely to be prosecuted to such extremes.”

Again, the Digital Economy Act has nothing to do with people streaming movies via Kodi but if we go along with the charade and agree that people who casually stream movies aren’t going to be prosecuted, why claim “10 year jail sentences for Kodi users” in the headline?

The bottom line is that there is nothing in the article itself that supports the article’s headline claim that Kodi users could go to jail for ten years. In itself, this is problematic from a reporting standpoint.

Published by IPSO, the Editors’ Code of Practice clearly states that “the Press must take care not to publish inaccurate, misleading or distorted information or images, including headlines not supported by the text.”

But singling out the Daily Express and The Mirror on this would be unfair. Dozens of other publications jumped on the same bandwagon, parroting the same misinformation, often with similar click-bait headlines.

For people dealing with these issues every day, the ins-and-outs of piracy alongside developing copyright law can be easier to grasp, so it’s perhaps a little unfair to expect general reporters to understand every detail of what can be extremely complex issues. Mistakes get made by everyone, that’s human nature.

But really, is there any excuse for headlines like this one published by the Sunday Express this morning?

According to the piece, readers of TorrentFreak are also at risk of spending ten years in prison. You couldn’t make this damaging nonsense up. Actually, apparently you can.

In addition to a lack of research, the problem here is the prevalence of click-bait headlines driving traffic and the inability of the underlying articles to live up to the hype. If we can moderate the headlines and report within them, the rest should simply fall into place. Ditch the NEEDLESS capital letters and stick to the facts.

Society in 2017 needs those more than ever.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Friday Squid Blogging: Squid Communications

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/05/friday_squid_bl_576.html

In the oval squid Sepioteuthis lessoniana, males use body patterns to communicate with both females and other males:

To gain insight into the visual communication associated with each behavior in terms of the body patterning’s key components, the co-expression frequencies of two or more components at any moment in time were calculated in order to assess uniqueness when distinguishing one behavior from another. This approach identified the minimum set of key components that, when expressed together, represents an unequivocal visual communication signal. While the interpretation of the signal and the associated response of the receiver during visual communication are difficult to determine, the concept of the component assembly is similar to a typical language within which individual words often have multiple meanings, but when they appear together with other words, the message becomes unequivocal. The present study thus demonstrates that dynamic body patterning, by expressing unique sets of key components acutely, is an efficient way of communicating behavioral information between oval squids.

News article.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Hollywood Demands Net Neutrality Exceptions to Tackle Piracy

Post Syndicated from Andy original https://torrentfreak.com/hollywood-demands-net-neutrality-exceptions-to-tackle-piracy-170502/

Net neutrality is the notion that ISPs should treat all data traveling via the Internet in the same manner. Providers shouldn’t discriminate based on user, content or platform type, nor devices attached to the network.

While there are plenty of entities who support these principles, the free-flow of information is sometimes perceived as a threat. The concept of so-called fast and slow lanes with variable pricing, for example, has the potential to cause many anti-competitive headaches.

But for the content industries, particularly those involved in movies, TV shows, and other video entertainment, the concept of net neutrality has the potential to complicate plans to block and otherwise restrict access to copyright-infringing material.

As a result, Hollywood is making its feelings known both locally and overseas, including in India where it’s just contributed to the country’s net neutrality debate.

Early 2017, the Telecom Regulatory Authority of India (TRAI) asked for input on its “Consultation Paper on Net Neutrality”, the fifth in the past two years aimed at introducing a legal framework for net neutrality.

Published by MediaNama in January, the 14-point questionnaire received responses from many stakeholders, including the Motion Picture Distribution Association, the local division of the MPA/MPAA representing Paramount, Sony, Twentieth Century Fox, Universal, Disney and Warner.

Exceptions to net neutrality principles for pirate content

In response to a question which asked whether there should be exceptions to net neutrality in order for ISPs to implement traffic management practices (TMP), Hollywood is clear. Net neutrality should only ever apply when Internet traffic is lawful, and ISPs should be able to take measures to deal with infringing content.

“For the Motion Picture Association’s members, as representatives of an industry that creates and distributes copyrighted content, it is critical that the Internet does not serve as a haven for illegal activity and that [service providers] should be permitted to take reasonable action to prevent the transfer of stolen copyrighted content,” the Hollywood group writes.

“It is commonly accepted that the requirements of [net neutrality] apply only in respect of access to lawful content. This implies that a [service provider's] decision to, say, block content pursuant to a direction from authorities authorised by law to do so, and after following due process, will not be considered unreasonable.”

The studios say they’re in agreement that the Indian government should have the right to regulate content in “emergency situations” and also whenever content is deemed illegal, so in these instances, net neutrality rules would not apply.

Copyright-infringing content fits the latter category, but the MPA wants the government to include specific wording in any regulation that expressly denotes pirate material as exempt from the freedoms of net neutrality.

“We urge that a clear statement be included in any eventual net neutrality regulation that specifies that pirated and infringing content is unlawful and therefore not subject to the normal net neutrality policy of prohibiting content-based regulations,” the studios say.

Exemptions for blocking and throttling to counter piracy

The idea that infringing content should be blocked, throttled, or otherwise hindered is a cornerstone of Hollywood’s fight against infringing content worldwide, despite it being unable to achieve those things in its own backyard. In India, however, the studios see blocking as a fair response to the spread of infringing content and something that should be allowed under net neutrality rules.

“As a remedy to address the dissemination of, or unauthorized access to, unlawful content, blocking and throttling are necessary and appropriate measures,” the studios note.

“Blocking access to infringing sites is not inconsistent with net neutrality. In fact, blocking illegal sites, especially when they originate from outside the country, is often the only effective remedy to prevent access to illegal content in India.

“[Service providers] must be able to block sites that link, stream, make available, or otherwise communicate to the public unauthorized or illegal content.”

Rightsholders and ISPs should work together

In both the United States and Europe, Hollywood is an advocate of voluntary anti-piracy measures, with content owners and ISPs collaborating to hinder the spread of infringing content. According to its submission to the telecoms regulator, Hollywood would like to see something similar in India.

When forming its regulations, the studios would like to see service providers “encouraged” to work with rightsholders to “employ the best available tools and technologies” to fight piracy while affirming ISPs’ right to use traffic management practices (TMP) to deal with the spread of infringing content.

Furthermore, Hollywood would like a clear statement that the use of TMPs against infringing content “should not depend on an advance judicial or regulatory determination of ‘lawfulness’ prior to every use.” In other words, court oversight should not generally be required.

In conclusion, the MPA underlines that rightsholders and rightsholders alone should have the final say in respect of when, to whom, and under what circumstances they make content available. Should the Telecom Regulatory Authority of India interfere with that right, both domestic and international breaches of law could result.

The full submission can be found here (pdf)

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Why GPL Compliance Education Materials Should Be Free as in Freedom

Post Syndicated from Bradley M. Kuhn original http://ebb.org/bkuhn/blog/2017/04/25/liberate-compliance-tutorials.html

[ This blog was crossposted on Software Freedom Conservancy's website. ]

I am honored to be a co-author and editor-in-chief of the most comprehensive, detailed, and complete guide on matters related to compliance of copyleft software licenses such as the GPL. This book, Copyleft and the GNU General Public License: A Comprehensive Tutorial and Guide (which we often call the Copyleft Guide for short), is 155 pages filled with useful material to help everyone understand copyleft licenses for software, how they work, and how to comply with them properly. It is the only document to fully incorporate esoteric material such as the FSF's famous GPLv3 rationale documents directly alongside practical advice, such as the pristine example, which is the only freely published compliance analysis of a real product on the market. The document explains in great detail how that product manufacturer made good choices to comply with the GPL. The reader learns from both real-world example and abstract explanation.

However, the most important fact about the Copyleft Guide is not its useful and engaging content. More importantly, the license of this book gives freedom to its readers in the same way the license of the copylefted software does. Specifically, we chose the Creative Commons Attribution Share-Alike 4.0 license (CC BY-SA) for this work. We believe that not just software, but any generally useful technical information that teaches people, should be freely sharable and modifiable by the general public.

The reasons these freedoms are necessary seem so obvious that I'm surprised I need to state them. Companies who want to build internal training courses on copyleft compliance for their employees need to modify the materials for that purpose. They then need to be able to freely distribute them to employees and contractors for maximum effect. Furthermore, as with all documents and software, there are always “bugs”, which (in the case of written prose) usually means there are sections that fail to communicate to maximum effect. Those who find better ways to express the ideas need the ability to propose patches and write improvements. Perhaps most importantly, everyone who teaches should avoid NIH syndrome. Education and science work best when we borrow and share (with proper license-compliant attribution, of course!) the best material that others develop, and augment our works by incorporating them.

These reasons are akin to those that led Richard M. Stallman to write his seminal essay, Why Software Should Be Free. Indeed, if you reread that essay now — as I just did — you'll see that much of the damage and many of the same problems to the advancement of software that RMS documents in that essay also occur in the world of tutorial documentation about FLOSS licensing. As too often happens in the Open Source community, though, folks seek ways to proprietarize, for profit, any copyrighted work that doesn't already have a copyleft license attached. In the field of copyleft compliance education, we see the same behavior: organizations who wish to control the dialogue and profit from selling compliance education seek to proprietarize the meta-material of compliance education, rather than sharing freely like the software itself. This yields an ironic exploitation, since the copyleft license documented therein exists as a strategy to assure the freedom to share knowledge. These educators tell their audiences with a straight face: Sure, the software is free as in freedom, but if you want to learn how its license works, you have to license our proprietary materials! This behavior uses legal controls to curtail the sharing of knowledge, limits the advancement and improvement of those tutorials, and emboldens silos of know-how that only wealthy corporations have the resources to access and afford. The educational dystopia that these organizations create is precisely what I sought to prevent by advocating for software freedom for so long.

While Conservancy's primary job is providing non-profit infrastructure for Free Software projects, we also do a bit of license compliance work. But we practice what we preach: we release all the educational materials that we produce as part of the Copyleft Guide project under CC BY-SA. Other Open Source organizations are currently hypocrites on this point; they tout the values of openness and sharing of knowledge through software, but they take their tutorial materials and lock them up under proprietary licenses. I hereby publicly call on such organizations (including but not limited to the Linux Foundation) to license materials such as those under CC BY-SA.

I did not make this public call for liberation of such materials without trying friendly diplomacy first. Conservancy has been in talks with individuals and staff who produce these materials for some time. We urged them to join the Free Software community and share their materials under free licenses. We even offered volunteer time to help them improve those materials if they would simply license them freely. After two years of that effort, it's now abundantly clear that public pressure is the only force that might work [0]. Ultimately, like all proprietary businesses, the training divisions of the Linux Foundation and other entities in the compliance industrial complex (such as Black Duck) realize they can make much more revenue by making materials proprietary and choosing legal restrictions that forbid their students from sharing and improving the materials after they complete the course. While the reality of this impasse regarding freely licensing these materials is probably an obvious outcome, multiple sources inside these organizations have also confirmed for me that liberation of the materials for the good of the general public won't happen without a major paradigm shift — specifically because such educational freedom will reduce the revenue stream around those materials.

Of course, I can attest first-hand that freely liberating tutorial materials curtails revenue. Karen Sandler and I have regularly taught courses on copyleft licensing based on the freely available materials for a few years — most recently in January 2017 at LinuxConf Australia, and again at OSCON in a few weeks. These conferences do kindly cover our travel expenses to attend and teach the tutorial, but compliance education is not a revenue stream for Conservancy. While, in an ideal world, we'd get revenue from education to fund our other important activities, we believe that there is value in doing this education as currently funded by our individual Supporters; these education efforts fit with our charitable mission to promote the public good. We furthermore don't believe that locking up the materials and refusing to share them with others fits a mission of software freedom, so we never considered that a viable option. Finally, given the institutionally-backed FUD that we've continued to witness, we seek to draw specific attention to the fundamental difference in approach that Conservancy (as a charity) takes toward this compliance education work. (My recent talk on compliance, covered on LWN, includes some points on that matter, if you'd like further reading.)

[0] One notable exception to these efforts was the success of my colleague, Karen Sandler (and others), in convincing the OpenChain project to choose CC-0 licensing. However, OpenChain is not officially part of the LF training curriculum to my knowledge, and if it is, it can of course be proprietarized therein, since CC-0 is not a copyleft license.