Tag Archives: commercial

[$] Federation in social networks

Post Syndicated from jake original https://lwn.net/Articles/741218/rss

Social networking is often approached by the free-software community with a
certain amount of suspicion—rightly so, since commercial social networks
almost always generate revenue by exploiting user data in one way or
another. While
attempts at a free-software approach to social networking have so far not met
widespread success, the new ActivityPub federation protocol and its
implementation in the free-software microblogging system Mastodon are gaining
popularity and already show some of the advantages of a community-driven
approach.

Is blockchain a security topic? (Opensource.com)

Post Syndicated from jake original https://lwn.net/Articles/740929/rss

At Opensource.com, Mike Bursell looks at blockchain security from the angle of trust. Unlike cryptocurrencies, which are typically pseudonymous, other kinds of blockchains will require mapping users to real-life identities; that raises the trust issue.

What’s really interesting is that, if you’re thinking about moving to a permissioned blockchain or distributed ledger with permissioned actors, then you’re going to have to spend some time thinking about trust. You’re unlikely to be using a proof-of-work system for making blocks—there’s little point in a permissioned system—so who decides what comprises a “valid” block that the rest of the system should agree on? Well, you can rotate around some (or all) of the entities, or you can have a random choice, or you can elect a small number of über-trusted entities. Combinations of these schemes may also work.

If these entities all exist within one trust domain, which you control, then fine, but what if they’re distributors, or customers, or partners, or other banks, or manufacturers, or semi-autonomous drones, or vehicles in a commercial fleet? You really need to ensure that the trust relationships that you’re encoding into your implementation/deployment truly reflect the legal and IRL [in real life] trust relationships that you have with the entities that are being represented in your system.

And the problem is that, once you’ve deployed that system, it’s likely to be very difficult to backtrack, adjust, or reset the trust relationships that you’ve designed.
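
To make the block-validity question concrete, here is a minimal, purely illustrative Python sketch of one scheme Bursell mentions: rotating block-proposal duty around a fixed set of permissioned validators, with a quorum of endorsements standing in for proof-of-work. The validator names and the quorum size are hypothetical, not taken from any real ledger.

```python
import hashlib
import json

# Hypothetical permissioned validator set; in a real deployment each entry
# would map to a vetted real-world entity (a bank, partner, manufacturer...).
VALIDATORS = ["bank-a", "bank-b", "manufacturer-c", "fleet-d"]
QUORUM = 3  # endorsements required for a block; an illustrative choice

def proposer_for(height):
    """Round-robin rotation: block N is proposed by validator N mod set size."""
    return VALIDATORS[height % len(VALIDATORS)]

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def is_valid(block, endorsements):
    # "Valid" here means: proposed by the scheduled validator and endorsed
    # by a quorum of the permissioned set -- no proof-of-work involved.
    return (block["proposer"] == proposer_for(block["height"])
            and len(set(endorsements) & set(VALIDATORS)) >= QUORUM)

block = {"height": 5, "proposer": proposer_for(5), "txs": ["tx1", "tx2"]}
print(block_hash(block))
print(is_valid(block, {"bank-a", "bank-b", "fleet-d"}))  # True
```

Note how the trust decision lives entirely in the validator list: change who is on it, and you change what "valid" means, which is exactly why resetting those relationships after deployment is hard.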

Libertarians are against net neutrality

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/12/libertarians-are-against-net-neutrality.html

This post claims to be by a libertarian in support of net neutrality. As a libertarian, I need to debunk this. “Net neutrality” is a case of one hand clapping: you rarely hear the competing side, and thus that side may sound attractive. This post is about the other side, from a libertarian point of view.

That post just repeats the common, and wrong, left-wing talking points. I mean, there might be a libertarian case for some broadband regulation, but this isn’t it.

This thing they call “net neutrality” is just left-wing politics masquerading as some sort of principle. It’s no different than how people claim to be “pro-choice”, yet demand forced vaccinations. Or, it’s no different than how people claim to believe in “traditional marriage” even while they are on their third “traditional marriage”.

Properly defined, “net neutrality” means no discrimination of network traffic. But nobody wants that. A classic example is how most internet connections have faster download speeds than uploads. This discriminates against upload traffic, harming innovation in upload-centric applications like Dropbox’s cloud backup or BitTorrent’s peer-to-peer file transfer. Yet activists never mention this, or other types of network traffic discrimination, because they no more care about “net neutrality” than Trump or Gingrich care about “traditional marriage”.

Instead, when people say “net neutrality”, they mean “government regulation”. It’s the same old debate between who is the best steward of consumer interest: the free-market or government.

Specifically, in the current debate, they are referring to the Obama-era FCC “Open Internet” order and the reclassification of broadband under “Title II” so that the FCC could regulate it. Trump’s FCC is putting broadband back under “Title I”, which means the FCC can’t enforce most of its “Open Internet” order.

Don’t be tricked into thinking the “Open Internet” order is anything but intensely political. The premise behind the order is the Democrats’ firm belief that it’s government who created the Internet, and that all innovation, advances, and investment ultimately come from the government. It sees ISPs as inherently deceitful entities who will only serve their own interests, at the expense of consumers, unless the FCC protects consumers.

It says so right in the order itself. It starts with the premise that broadband ISPs are evil, using illegitimate “tactics” to hurt consumers, and continues with similar language throughout the order.

A good contrast to this can be seen in Tim Wu’s non-political original paper in 2003 that coined the term “net neutrality”. Whereas the FCC sees broadband ISPs as enemies of consumers, Wu saw them as allies. His concern was not that ISPs would do evil things, but that they would do stupid things, such as favoring short-term interests over long-term innovation (such as having faster downloads than uploads).

The political depravity of the FCC’s order can be seen in this comment from one of the commissioners who voted for those rules:

FCC Commissioner Jessica Rosenworcel wants to increase the minimum broadband standards far past the new 25Mbps download threshold, up to 100Mbps. “We invented the internet. We can do audacious things if we set big goals, and I think our new threshold, frankly, should be 100Mbps. I think anything short of that shortchanges our children, our future, and our new digital economy,” Commissioner Rosenworcel said.

This is indistinguishable from communist rhetoric that credits the Party for everything, as this booklet from North Korea will explain to you.

But what about monopolies? After all, while the free-market may work when there’s competition, it breaks down where there are fewer competitors, oligopolies, and monopolies.

There is some truth to this: in individual cities, there’s often only a single credible high-speed broadband provider. But this isn’t the issue at stake here. The FCC isn’t proposing light-handed regulation to keep monopolies in check, but heavy-handed regulation that regulates every last decision.

Advocates of FCC regulation keep pointing out how broadband monopolies can exploit their rent-seeking positions in order to screw the customer. They keep coming up with ever more bizarre and unlikely scenarios of what monopoly power grants the ISPs.

But they never mention the simplest: that broadband monopolies can just charge customers more money. They imagine instead that these companies will pursue a string of outrageous, evil, and less profitable behaviors to exploit their monopoly position.

The FCC’s reclassification of broadband under Title II gives it full power to regulate ISPs as utilities, including setting prices. The FCC has stepped back from this, promising it won’t go so far as to set prices, that it’s only regulating these evil conspiracy theories. This is kind of bizarre: either broadband ISPs are evilly exploiting their monopoly power or they aren’t. Why stop at regulating only half the evil?

The answer is that the claim of “monopoly” power is a deception. It starts with overstating how many monopolies there are to begin with. When it issued its 2015 “Open Internet” order, the FCC simultaneously redefined what it meant by “broadband”, upping the speed from 5Mbps to 25Mbps. That’s because while most consumers have multiple choices at 5Mbps, fewer consumers have multiple choices at 25Mbps. It’s a dirty political trick to convince you there is more of a problem than there is.

In any case, their rules still apply to the slower broadband providers, and equally apply to the mobile (cell phone) providers. The US has four mobile phone providers (AT&T, Verizon, T-Mobile, and Sprint) and plenty of competition between them. That it’s monopolistic power that the FCC cares about here is a lie. As their Open Internet order clearly shows, the fundamental principle that animates the document is that all corporations, monopolies or not, are treacherous and must be regulated.

“But corporations are indeed evil”, people argue, “see here’s a list of evil things they have done in the past!”

No, those things weren’t evil. They were done because they benefited the customers, not as some sort of secret rent seeking behavior.

For example, one of the more common “net neutrality abuses” that people mention is AT&T’s blocking of FaceTime. I’ve debunked this elsewhere on this blog, but the summary is this: there was no network blocking involved (not a “net neutrality” issue), and the FCC analyzed it and decided it was in the best interests of the consumer. It’s disingenuous to claim it’s an evil that justifies FCC actions when the FCC itself declared it not evil and took no action. It’s disingenuous to cite the “net neutrality” principle that all network traffic must be treated equally when, in fact, the network did treat all the traffic equally.

Another frequently cited abuse is Comcast’s throttling of BitTorrent. Comcast did this because Netflix users were complaining. Like all streaming video, Netflix backs off to slower speeds (and poorer quality) when it experiences congestion. BitTorrent, uniquely among applications, never backs off. As most applications become slower and slower, BitTorrent just speeds up, consuming all available bandwidth. This is especially problematic when there’s limited upload bandwidth available. Thus, Comcast throttled BitTorrent during prime-time TV viewing hours, when the network was already overloaded by Netflix and other streams. BitTorrent users wouldn’t mind this throttling, because it often took days to download a big file anyway.

When the FCC took action, Comcast stopped the throttling and imposed bandwidth caps instead. This was a worse solution for everyone. It penalized heavy Netflix viewers, and prevented BitTorrent users from large downloads. Even though BitTorrent users were seen as the victims of this throttling, they’d vastly prefer the throttling over the bandwidth caps.

In both the FaceTime and BitTorrent cases, the issue was “network management”. AT&T had no competing video calling service, Comcast had no competing download service. They were only reacting to the fact their networks were overloaded, and did appropriate things to solve the problem.

Mobile carriers still struggle with the “network management” issue. While their networks are fast, they are still of low capacity, and quickly degrade under heavy use. They are looking for tricks in order to reduce usage while giving consumers maximum utility.

The biggest concern is video. It’s problematic because it’s designed to consume as much bandwidth as it can, throttling itself only when it experiences congestion. This is what you probably want when watching Netflix at the highest possible quality, but it’s bad when confronted with mobile bandwidth caps.

With small mobile devices, you don’t want as much quality anyway. You want the video degraded to lower quality, and lower bandwidth, all the time.

That’s the reasoning behind T-Mobile’s offerings. They offer an unlimited video plan in conjunction with the biggest video providers (Netflix, YouTube, etc.). The catch is that when congestion occurs, they’ll throttle it to lower quality. In other words, they give their bandwidth to all the other phones in your area first, then give you as much of the leftover bandwidth as you want for video.

While it sounds like T-Mobile is doing something evil, “zero-rating” certain video providers and degrading video quality, the FCC allows this, because they recognize it’s in the customer interest.

Mobile providers especially have great interest in more innovation in this area, in order to conserve precious bandwidth, but they are finding it costly. They can’t just innovate, but must ask the FCC for permission first. And with the new heavy-handed FCC rules, they’ve become hostile to this innovation. This attitude is highlighted by this statement from the “Open Internet” order:

And consumers must be protected, for example from mobile commercial practices masquerading as “reasonable network management.”

This is a clear declaration that the free market doesn’t work and won’t correct abuses, and that mobile companies are treacherous and will do evil things without FCC oversight.

Conclusion

Ignoring the rhetoric for the moment, the debate comes down to simple left-wing authoritarianism versus libertarian principles. The Obama administration created a regulatory regime under clear Democrat principles, and the Trump administration is rolling it back to more free-market principles. There is no principle at stake here, certainly nothing to do with a technical definition of “net neutrality”.

The 2015 “Open Internet” order is not about “treating network traffic neutrally”, because it doesn’t do that. Instead, it’s purely a left-wing document that claims corporations cannot be trusted, must be regulated, and that innovation and prosperity come from the regulators and not the free market.

It’s not about monopolistic power. The primary targets of regulation are the mobile broadband providers, where there is plenty of competition, and who have the most “network management” issues. Even if it were just about wired broadband (like Comcast), it’s still ignoring the primary ways monopolies profit (raising prices) and instead focuses on bizarre and unlikely ways of rent seeking.

If you are a libertarian who nonetheless believes in this “net neutrality” slogan, you’ve got to do better than mindlessly repeating the arguments of the left-wing. The term itself, “net neutrality”, is just a slogan, varying from person to person, from moment to moment. You have to be more specific. If you truly believe in the “net neutrality” technical principle that all traffic should be treated equally, then you’ll want a rewrite of the “Open Internet” order.

In the end, while libertarians may still support some form of broadband regulation, it’s impossible to reconcile libertarianism with the 2015 “Open Internet” order, or the vague things people mean by the slogan “net neutrality”.

Google & Facebook Excluded From Aussie Safe Harbor Copyright Amendments

Post Syndicated from Andy original https://torrentfreak.com/google-facebook-excluded-from-aussie-safe-harbor-copyright-amendments-171205/

Due to a supposed drafting error in Australia’s implementation of the Australia – US Free Trade Agreement (AUSFTA), copyright safe harbor provisions currently only apply to commercial Internet service providers.

This means that while local ISPs such as Telstra receive protection from copyright infringement complaints, services such as Google, Facebook and YouTube face legal uncertainty.

Proposed amendments to the Copyright Act earlier this year would’ve seen enhanced safe harbor protections for such platforms but they were withdrawn at the eleventh hour so that the government could consider “further feedback” from interested parties.

Shortly after, the government embarked on a detailed consultation with entertainment industry groups. They accuse platforms like YouTube of exploiting safe harbor provisions in the US and Europe, which forces copyright holders into an expensive battle to have infringing content taken down. They do not want that in Australia and, at least for now, they appear to have achieved their aims.

According to a report from AFR (paywall), the Australian government is set to introduce new legislation Wednesday which will expand safe harbors for some organizations but will exclude companies such as Google, Facebook, and similar platforms.

Communications Minister Mitch Fifield confirmed the exclusions while noting that additional safeguards will be available to institutions, libraries, and organizations in the disability, archive and culture sectors.

“The measures in the bill will ensure these sectors are protected from legal liability where they can demonstrate that they have taken reasonable steps to deal with copyright infringement by users of their online platforms,” Senator Fifield told AFR.

“Extending the safe harbor scheme in this way will provide greater certainty to institutions in these sectors and enhance their ability to provide more innovative and creative services for all Australians.”

According to the Senator, the government will continue its work with stakeholders to further reform safe harbor provisions, before applying them to other service providers.

The news that Google, Facebook, and similar platforms are to be denied access to the new safe harbor rules will be seen as a victory for rightsholders. They’re desperately trying to tighten up legislation in other regions where such safeguards are already in place, arguing that platforms utilizing user-generated content for profit should obtain appropriate licensing first.

This so-called ‘Value Gap’ (1,2,3) and associated proactive filtering proposals are among the hottest copyright topics right now, generating intense debate across Europe and the United States.

Glenn’s Take on re:Invent Part 2

Post Syndicated from Glenn Gore original https://aws.amazon.com/blogs/architecture/glenns-take-on-reinvent-part-2/

Glenn Gore here, Chief Architect for AWS. I’m in Las Vegas this week — with 43K others — for re:Invent 2017. We’ve got a lot of exciting announcements this week. I’m going to check in to the Architecture blog with my take on what’s interesting about some of the announcements from a cloud architecture perspective. My first post can be found here.

The Media and Entertainment industry has been a rapid adopter of AWS due to the scale, reliability, and low costs of our services. This has enabled customers to create new, online, digital experiences for their viewers ranging from broadcast to streaming to Over-the-Top (OTT) services that can be a combination of live, scheduled, or ad-hoc viewing, while supporting devices ranging from high-def TVs to mobile devices. Creating an end-to-end video service requires many different components often sourced from different vendors with different licensing models, which creates a complex architecture and a complex environment to support operationally.

AWS Media Services
Based on customer feedback, we have developed AWS Media Services to help simplify distribution of video content. AWS Media Services comprises five individual services that can either be used together to provide an end-to-end service or individually to work within existing deployments: AWS Elemental MediaConvert, AWS Elemental MediaLive, AWS Elemental MediaPackage, AWS Elemental MediaStore, and AWS Elemental MediaTailor. These services can help you with everything from storing content safely and durably to setting up a live-streaming event in minutes without having to be concerned about the underlying infrastructure and scalability of the stream itself.
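
As a small taste of the “store content safely and durably” piece, here is a hedged boto3 sketch that creates an AWS Elemental MediaStore container and reads back its origin endpoint. The region and container name are placeholders, and a production script would handle ContainerInUseException and poll until the container is ACTIVE.

```python
import boto3

# Placeholder region and container name -- substitute your own.
mediastore = boto3.client("mediastore", region_name="us-east-1")

# Create a container to hold video assets for origination.
mediastore.create_container(ContainerName="my-video-origin")

# Creation is asynchronous; once Status is ACTIVE, Endpoint is the URL
# that players and CDNs can pull content from.
desc = mediastore.describe_container(ContainerName="my-video-origin")
container = desc["Container"]
print(container["Status"], container.get("Endpoint"))
```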

In my role, I participate in many AWS and industry events and often work with the production and event teams that put these shows together. With all the logistical tasks they have to deal with, the biggest question is often: “Will the live stream work?” Compounding this fear is the reality that, as users, we are also quick to jump on social media and make noise when a live stream drops while we are following along remotely. Worse is when I see event organizers actively selecting not to live stream content because of the risk of failure and exposure — leading them to decide to take the safe option and not stream at all.

With AWS Media Services addressing many of the issues around putting together a high-quality media service, live streaming, and providing access to a library of content through a variety of mechanisms, I can’t wait to see more event teams use live streaming without the concern and worry I’ve seen in the past. I am excited for what this also means for non-media companies, as video becomes an increasingly common way of sharing information and adding a more personalized touch to internally- and externally-facing content.

AWS Media Services will allow you to focus more on the content and not worry about the platform. Awesome!

Amazon Neptune
As a civilization, we have been developing new ways to record and store information, and to model the relationships between sets of information, for thousands of years. Government census data, tax records, births, deaths, and marriages were all recorded on media ranging from knotted cords in the Inca civilization and clay tablets in ancient Babylon to written texts in Western Europe during the late Middle Ages.

One of the first challenges of computing was figuring out how to store and work with vast amounts of information in a programmatic way, especially as the volume of information was increasing at a faster rate than ever before. We have seen different generations of how to organize this information in some form of database, ranging from flat files to the Information Management System (IMS) used in the 1960s for the Apollo space program, to the rise of the relational database management system (RDBMS) in the 1970s. These innovations drove a lot of subsequent innovations in information management and application development as we were able to move from thousands of records to millions and billions.

Today, as architects and developers, we have a vast variety of database technologies to select from, which have different characteristics that are optimized for different use cases:

  • Relational databases are well understood after decades of use in the majority of companies that required a database to store information. Amazon Relational Database Service (Amazon RDS) supports many popular relational database engines such as MySQL, Microsoft SQL Server, PostgreSQL, MariaDB, and Oracle. We have even brought the traditional RDBMS into the cloud world through Amazon Aurora, which provides MySQL and PostgreSQL support with the performance and reliability of commercial-grade databases at 1/10th the cost.
  • Non-relational databases (NoSQL) provided a simpler method of storing and retrieving information that was often faster and more scalable than traditional RDBMS technology. The concept of non-relational databases has existed since the 1960s but really took off in the early 2000s with the rise of web-based applications that required performance and scalability that relational databases struggled with at the time. AWS published this Dynamo whitepaper in 2007, with DynamoDB launching as a service in 2012. DynamoDB has quickly become one of the critical design elements for many of our customers who are building highly-scalable applications on AWS. We continue to innovate with DynamoDB, and this week launched global tables and on-demand backup at re:Invent 2017. DynamoDB excels in a variety of use cases, such as tracking session information for popular websites, shopping cart information on e-commerce sites, and keeping track of gamers’ high scores in mobile gaming applications.
  • Graph databases focus on the relationship between data items in the store. With a graph database, we work with nodes, edges, and properties to represent data, relationships, and information. Graph databases are designed to make it easy and fast to traverse and retrieve complex hierarchical data models. Graph databases share some concepts from the NoSQL family of databases such as key-value pairs (properties) and the use of a non-SQL query language such as Gremlin. Graph databases are commonly used for social networking, recommendation engines, fraud detection, and knowledge graphs. We released Amazon Neptune to help simplify the provisioning and management of graph databases as we believe that graph databases are going to enable the next generation of smart applications; a minimal traversal sketch follows this list.
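
For a feel of how traversals read, here is a hedged sketch using the open-source gremlinpython driver. It assumes a Gremlin-compatible server (a local Gremlin Server here, though a Neptune cluster endpoint would look the same) and an invented two-person social graph.

```python
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# Hypothetical endpoint: a local Gremlin Server; a Neptune endpoint would be
# a wss://...:8182/gremlin URL taken from your cluster.
conn = DriverRemoteConnection("ws://localhost:8182/gremlin", "g")
g = traversal().withRemote(conn)

# Vertices (nodes) carry properties; edges name the relationship.
(g.addV("person").property("name", "alice").as_("a")
  .addV("person").property("name", "bob").as_("b")
  .addE("follows").from_("a").to("b").iterate())

# "Whom does alice follow?" -- a single hop across the 'follows' edges,
# the kind of query that gets awkward as a multi-way SQL join.
names = g.V().has("person", "name", "alice").out("follows").values("name").toList()
print(names)  # ['bob']

conn.close()
```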

A common use case I am hearing every week as I talk to customers is how to incorporate chatbots within their organizations. Amazon Lex and Amazon Polly have made it easy for customers to experiment and build chatbots for a wide range of scenarios, but one of the missing pieces of the puzzle was how to model decision trees and knowledge graphs so the chatbot could guide the conversation in an intelligent manner.

Graph databases are ideal for this particular use case, and having Amazon Neptune simplifies the deployment of a graph database while providing high performance, scalability, availability, and durability as a managed service. Security of your graph database is critical. To help ensure this, you can store your data encrypted by running Amazon Neptune within your Amazon Virtual Private Cloud (Amazon VPC) and using encryption at rest integrated with AWS Key Management Service (AWS KMS). Neptune also supports Amazon VPC and AWS Identity and Access Management (AWS IAM) to help further protect and restrict access.

Our customers now have the choice of many different database technologies to ensure that they can optimize each application and service for their specific needs. Just as DynamoDB has unlocked and enabled many new workloads that weren’t possible in relational databases, I can’t wait to see what new innovations and capabilities are enabled from graph databases as they become easier to use through Amazon Neptune.

Look for more on DynamoDB and Amazon S3 from me on Monday.

Glenn at Tour de Mont Blanc

European Commission Steps Up Fight Against Online Piracy

Post Syndicated from Ernesto original https://torrentfreak.com/european-commission-steps-up-fight-against-online-piracy-171130/

The European Commission has had copyright issues at the top of its agenda for a while, resulting in several controversial proposals.

This week it presented a series of new measures to ensure that copyright holders are well protected, targeting both online piracy and counterfeit goods.

“Today we boost our collective ability to catch the ‘big fish’ behind fake goods and pirated content which harm our companies and our jobs – as well as our health and safety in areas such as medicines or toys,” Commissioner Elżbieta Bieńkowska announced.

The Commission notes that it’s stepping up the fight against counterfeiting and piracy. However, many of the proposals are not entirely new for those who follow anti-piracy issues around the globe.

One of the main goals is to focus on the people who facilitate copyright infringement, such as pirate site operators, and try to cut their revenue streams.

“The Commission seeks to deprive commercial-scale IP infringers of the revenue flows that make their criminal activity lucrative – this is the so-called ‘follow the money’ approach which focuses on the ‘big fish’ rather than individuals,” they write.

Instead of using legislation to reach this goal, the Commission prefers to continue its support for voluntary agreements between copyright holders and third-party services. This includes deals with advertising and payment services to cut their ties with pirate sites.

“Such agreements can lead to faster action against counterfeiting and piracy than court actions,” the Commission writes.

Another tool to fight piracy appears on the agenda for the first time. The European Commission notes that it will also support the quest for new anti-piracy initiatives, including the use of blockchain technology.

“Supporting industry-led initiatives to combat IP infringements, including work on Memoranda of Understanding and exploring the potential of new technologies such as blockchain to combat IP infringements in supply chains,” the suggestion reads.

No concrete examples were given but earlier this week, European Parliament member Brando Benifei wrote an article on the issue in Euractiv.

Benifei mentions that blockchain technology can help independent artists collect royalty payments without the need for middlemen. In a similar vein, blockchains can also be used to track the unauthorized distribution of works.

In addition to broadening the anti-piracy horizon, the European Commission also released new guidance on how the current IPR Enforcement Directive (IPRED) should be interpreted, taking into account various recent developments, including landmark EU Court of Justice rulings.

The guidance explains how and when it’s appropriate to issue website blocking orders, for example. In general, blocking injunctions are warranted when they are proportional and aimed at preventing concrete infringements.

The comprehensive guidance also covers the issue of filtering. Interestingly, the Commission clarifies that third-party services can’t be required to “install and operate excessively broad, unspecific and expensive filtering systems.”

This appears to run counter to the mandatory piracy filters that were suggested as part of the copyright reform proposal.

However, the Commission notes that in some specific cases, hosting providers (e.g. YouTube) can be ordered to monitor uploads. This is in line with a recent communication which recommended that online services should implement measures to automatically detect and remove suspected illegal content.

While the new plans continue down the path of stronger copyright protections, not all rightsholders are happy. IFPI is glad that the main problems are highlighted, but would have liked to have seen more concrete plans.

“We are disappointed that despite the European Commission recognizing the need to modernize IPRED and years of evidence gathering, today’s result is merely guidance to EU Member State governments. Soft law does not give right holders the tools they need to take effective action against pirate services,” IFPI writes.

On the other side of the divide, opposition to the previously announced EU copyright reform plans continues as well. Earlier today a group of over 80 organizations urged EU member states to speak out against several controversial copyright proposals, including the upload filter.

“The signatories warn the Member states that the discussion around the Copyright Directive are on the verge of causing irreparable damage to our fundamental rights and freedoms, our economy and competitiveness, our education and research, our innovation and competition, our creativity and our culture,” they say.

EU Court: Cloud-Based TV Recorder Requires Rightsholder Permission

Post Syndicated from Andy original https://torrentfreak.com/eu-court-cloud-based-tv-recorder-requires-rightsholder-permission-171130/

Over the years, many useful devices have come along which enable the public to make copies of copyright works, the VCR (video cassette recorder) being a prime example.

But while many such devices have been consumed by history, their modern equivalents still pose tricky questions for copyright law. One such service is VCAST, which markets itself as a Video Cloud Recorder. It functions in a notionally similar way to its older cousin but substitutes cloud storage for cassettes.

VCAST targets the Italian market, allowing users to sign up in order to gain access to more than 50 digital terrestrial TV channels. However, rather than simply watching live, the user can tell VCAST to receive TV shows (via its own antenna system) while recording them to private cloud storage (such as Google Drive) for subsequent viewing over the Internet.

VCAST attracted the negative interest of rightsholders, including Mediaset-owned RTI, who doubted the legality of the service. So, in response, VCAST sued RTI at the Turin Court of First Instance, seeking a judgment confirming the legality of its operations. The company believed that since the recordings are placed in users’ own cloud storage, the Italian private copying exception would apply and rightsholders would be compensated.

Perhaps unsurprisingly given the complexity of the case, the Turin Court decided to refer questions to the European Court of Justice. It essentially asked whether the private copying exception is applicable when the copying requires a service like VCAST and whether such a service is allowed to operate without permission from copyright holders.

In September, Advocate General Szpunar published his opinion, concluding that EU law prohibits this kind of service when copyright holders haven’t given their permission. Today, the ECJ handed down its decision, broadly agreeing with Szpunar’s conclusion.

“By today’s judgment, the Court finds that the service provided by VCAST has a dual functionality, consisting in ensuring both the reproduction and the making available of protected works. To the extent that the service offered by VCAST consists in the making available of protected works, it falls within communication to the public,” the ECJ announced.

“In that regard, the Court recalls that, according to the directive, any communication to the public, including the making available of a protected work or subject-matter, requires the rightholder’s consent, given that the right of communication of works to the public should be understood, in a broad sense, as covering any transmission or retransmission of a work to the public by wire or wireless means, including broadcasting.”

The ECJ notes that the original transmission made by RTI was intended for one audience. In turn, the transmission by VCAST was intended for another. In this respect, the subsequent VCAST transmission was made to a “new public”, which means that copyright holder permission is required under EU law.

“Accordingly, such a remote recording service cannot fall within the private copying exception,” the ECJ concludes.

The full text of the judgment can be found here.

The key ruling reads as follows:

Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society, in particular Article 5(2)(b) thereof, must be interpreted as precluding national legislation which permits a commercial undertaking to provide private individuals with a cloud service for the remote recording of private copies of works protected by copyright, by means of a computer system, by actively involving itself in the recording, without the rightholder’s consent.

Amazon MQ – Managed Message Broker Service for ActiveMQ

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-mq-managed-message-broker-service-for-activemq/

Messaging holds the parts of a distributed application together, while also adding resiliency and enabling the implementation of highly scalable architectures. For example, earlier this year, Amazon Simple Queue Service (SQS) and Amazon Simple Notification Service (SNS) supported the processing of customer orders on Prime Day, collectively processing 40 billion messages at a rate of 10 million per second, with no customer-visible issues.

SQS and SNS have been used extensively for applications that were born in the cloud. However, many of our larger customers are already making use of open-source or commercially-licensed message brokers. Their applications are mission-critical, and so is the messaging that powers them. Our customers describe the setup and ongoing maintenance of their messaging infrastructure as “painful” and report that they spend at least 10 staff-hours per week on this chore.

New Amazon MQ
Today we are launching Amazon MQ – a managed message broker service for Apache ActiveMQ that lets you get started in minutes with just three clicks! As you may know, ActiveMQ is a popular open-source message broker that is fast & feature-rich. It offers queues and topics, durable and non-durable subscriptions, push-based and poll-based messaging, and filtering.

As a managed service, Amazon MQ takes care of the administration and maintenance of ActiveMQ. This includes responsibility for broker provisioning, patching, failure detection & recovery for high availability, and message durability. With Amazon MQ, you get direct access to the ActiveMQ console and industry-standard APIs and protocols for messaging, including JMS, NMS, AMQP, STOMP, MQTT, and WebSocket. This allows you to move from any message broker that uses these standards to Amazon MQ–along with the supported applications–without rewriting code.
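
To illustrate the open-protocol point, here is a hedged sketch of a client using the open-source stomp.py library (listener signature per stomp.py 8.x). The broker endpoint, credentials, and queue name are placeholders you would take from your own Amazon MQ console; the stomp+ssl listener conventionally runs on port 61614 and requires TLS.

```python
import stomp

# Placeholder endpoint and credentials from the Amazon MQ console.
HOST, PORT = "b-1234abcd-example.mq.us-east-1.amazonaws.com", 61614

class Printer(stomp.ConnectionListener):
    def on_message(self, frame):
        print("received:", frame.body)

conn = stomp.Connection(host_and_ports=[(HOST, PORT)])
conn.set_ssl(for_hosts=[(HOST, PORT)])   # Amazon MQ brokers require TLS
conn.set_listener("printer", Printer())
conn.connect("mq-user", "mq-password", wait=True)

# Subscribe first, then send; the broker pushes the message back to us.
conn.subscribe(destination="/queue/orders", id="1", ack="auto")
conn.send(destination="/queue/orders", body="hello from stomp.py")
```

The same queue could just as well be drained by a JMS or MQTT client, which is the point of standard protocols: the client code does not care that the broker is managed.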

You can create a single-instance Amazon MQ broker for development and testing, or an active/standby pair that spans AZs, with quick, automatic failover. Either way, you get data replication across AZs and a pay-as-you-go model for the broker instance and message storage.

Amazon MQ is a full-fledged part of the AWS family, including the use of AWS Identity and Access Management (IAM) for authentication and authorization to use the service API. You can use Amazon CloudWatch metrics to keep a watchful eye on metrics such as queue depth and initiate Auto Scaling of your consumer fleet as needed.
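
For example, a queue-depth alarm that could feed an Auto Scaling action might look like the boto3 sketch below. The broker name, queue name, and threshold are invented; the AWS/AmazonMQ namespace and QueueSize metric reflect what Amazon MQ publishes to CloudWatch, though you should verify the exact metric names for your setup.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when the hypothetical "orders" queue on broker "my-broker" holds
# more than 1,000 messages for five consecutive minutes. AlarmActions could
# point at an SNS topic or an Application Auto Scaling policy ARN.
cloudwatch.put_metric_alarm(
    AlarmName="orders-queue-backlog",
    Namespace="AWS/AmazonMQ",
    MetricName="QueueSize",
    Dimensions=[
        {"Name": "Broker", "Value": "my-broker"},
        {"Name": "Queue", "Value": "orders"},
    ],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=5,
    Threshold=1000,
    ComparisonOperator="GreaterThanThreshold",
)
```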

Launching an Amazon MQ Broker
To get started, I open up the Amazon MQ Console, select the desired AWS Region, enter a name for my broker, and click on Next step:

Then I choose the instance type, indicate that I want to create a standby broker, and click on Create broker (I can select a VPC and fine-tune other settings in the Advanced settings section):

My broker will be created and ready to use in 5-10 minutes:

The URLs and endpoints that I use to access my broker are all available at a click:

I can access the ActiveMQ Web Console at the link provided:

The broker publishes instance, topic, and queue metrics to CloudWatch. Here are the instance metrics:

Available Now
Amazon MQ is available now and you can start using it today in the US East (Northern Virginia), US East (Ohio), US West (Oregon), EU (Ireland), EU (Frankfurt), and Asia Pacific (Sydney) Regions.

The AWS Free Tier lets you use a single-AZ micro instance for up to 750 hours and to store up to 1 gigabyte each month, for one year. After that, billing is based on instance-hours and message storage, plus charges for Internet data transfer if the broker is accessed from outside of AWS.

Jeff;

Torrent Site Blocking Endangers Freedom of Expression, ISP Warns

Post Syndicated from Ernesto original https://torrentfreak.com/torrent-site-blocking-endangers-freedom-expression-isp-warns-171128/

LinkoManija.net is the most visited BitTorrent site in Lithuania. The private tracker has been around for more than a decade and has made quite a name for itself.

While it’s a ‘closed’ community, that name hardly applies anymore considering that it’s the 32nd most-visited site in Lithuania, beating the likes of Twitter, eBay, and even Pornhub.

Over the past several years, Linkomanija has endured its fair share of copyright-related troubles. This includes a multi-million dollar lawsuit launched by Microsoft, which failed to put the site out of business.

Last week the Lithuanian Copyright Protection Association (LATGA) had more success. The anti-piracy group went to court demanding that local ISPs block access to the site. It won.

The Vilnius Regional Court subsequently issued an order which requires Internet providers including Telia, Bitė, LRTC, Cgates, Init, and Balticum TV to start blocking access to the popular torrent tracker.

“We are glad that our courts follow the precedents set in European Courts and are following their practices,” Jonas Liniauskas, head of LATGA told 15min.

“We really hope that internet providers will not fight the decision and that they have finally decided whether they are ready to fight against pirates who take away their customers, or want to continue to contribute to the illegal exploitation of works on the Internet by providing high-speed Internet access to pirated websites.”

LATGA’s lawyer, Andrius Iškauskas, pointed out that the torrent site was operating as a commercial venture. Between 2013 and 2016 it collected hundreds of thousands of euros through donations from its users.

Internet provider Telia is not happy with the verdict and says it endangers people’s freedom of expression and speech. While the company doesn’t condone piracy, sites such as Linkomanija are also used legitimately by copyright holders to share their work.

Telia pointed out in court that the anti-piracy group represented only 28 copyright holders and listed less than 100 works for which links were posted on Linkomanija.net. Despite these relatively small numbers, ISPs must block access to the entire site.

In response, LATGA’s lawyer pointed out that any rightsholders who legally distribute their content through Linkomanija can easily find other suitable alternatives, such as YouTube, Spotify, and many more.

While the verdict is a blow to millions of users, the fight may not be over yet. The ISPs have 30 days to appeal the decision of the Vilnius Regional Court. According to Telia, this is likely to happen.

“We are currently analyzing the decision. It is very likely that it will be submitted to the higher court because the dispute is complex. This case can become case-law and determine when content is blocked on the Internet. This includes the possible restriction of freedom of expression and speech,” the ISP notes.

Mashup Site Hit With Domain Suspension Following IFPI Copyright Complaint

Post Syndicated from Andy original https://torrentfreak.com/mashup-site-hit-with-domain-suspension-following-ifpi-copyright-complaint-171127/

Mashups are musical compositions, usually made up of two or more tracks seamlessly blended together, which bring something fresh and new to the listener.

There are hundreds of stunning examples online, many created in hobbyist circles, with dedicated communities sharing their often brilliant work.

However, the majority of mashups have something in common – they’re created without any permission from the copyright holders of the original tracks. As such they remain controversial, as mashup platform Sowndhaus has just discovered.

This Canada-based platform allows users to upload, share and network with other like-minded mashup enthusiasts. It has an inbuilt player, somewhat like Soundcloud, through which people can play a wide range of user-created mashups. However, sometime last Tuesday, Sowndhaus’ main domain, Sowndhaus.com, became unreachable.

Sowndhaus: High-quality mashups

The site’s operators say that they initially believed there was some kind of configuration issue. Later, however, they discovered that their domain had been “purposefully de-listed” from its DNS servers by its registrar.

“DomainBox had received a DMCA notification from the IFPI (International Federation of the Phonographic Industry) and immediately suspended our .com domain,” Sowndhaus’ operators report.

At this point it’s worth noting that while Sowndhaus is based and hosted in Canada, DomainBox is owned by UK-based Mesh Digital Limited, which is in turn owned by GoDaddy. IFPI, however, reportedly sent a US-focused DMCA notice to the registrar which noted that the music group had “a good faith belief” that activity on Sowndhaus “is not authorized by the copyright owner, its agent, or the law.”

While mashups have always proved controversial, Sowndhaus believe that they operate well within Canadian law.

“We have a good faith belief that the audio files allegedly ‘infringing copyright’ in the DMCA notification are clearly transformative works and meet all criteria for ‘Non-commercial User-generated Content’ under Section 29.21 of the Copyright Act (Canada), and as such are authorized by the law,” the site says.

“Our service, servers, and files are located in Canada which has a ‘Notice and Notice regime’ and where DMCA (a US law) has no jurisdiction. However, the jurisdiction for our .com domain is within the US/EU and thus subject to its laws.”

Despite a belief that the site operates lawfully, Sowndhaus took a decision to not only take down the files listed in IFPI’s complaint but also to ditch its .com domain completely. While this convinced DomainBox to give control of the domain back to the mashup platform, Sowndhaus has now moved to a completely new domain (sowndhaus.audio), to avoid further issues.

“We neither admit nor accept that any unlawful activity or copyright infringement with respect to the DMCA claim had taken place, or has ever been permitted on our servers, or that it was necessary to remove the files or service under Section 29.21 of the Copyright Act (Canada) with which we have always been, and continue to be, in full compliance,” the site notes.

“The use of copyright material as Non-commercial User-generated Content is authorized by law in Canada, where our service resides. We believe that the IFPI are well aware of this, are aware of the jurisdiction of our service, and therefore that their DMCA notification is a misrepresentation of copyright.”

Aside from what appears to have been a rapid suspension of Sowndhaus’ .com domain, the site says that it is being held to a higher standard of copyright protection than others operating under the DMCA.

Unlike YouTube, for example, Sowndhaus says it pro-actively removes files found to infringe copyright. It also bans users who use the site to commit piracy, as per its Terms of Service.

“This is a much stronger regime than would be required under the DMCA guidelines where users generally receive warnings and strikes before being banned, and where websites complying with the DMCA and seeking to avoid legal liability do not actively seek out cases of infringement, leading to some cases of genuine piracy remaining undetected on their services,” the site says.

However, the site remains defiant in respect of the content it hosts, noting that mashups are transformative works that use copyright content “in new and creative ways to form new works of art” and as such are legal for non-commercial purposes.

That hasn’t stopped it from being targeted by copyright holders in the past, however.

This year three music-based organizations (IFPI, RIAA, and France’s SCPP) have sent complaints to Google about the platform, targeting close to 200 URLs. However, at least for more recent complaints, Google hasn’t been removing the URLs from its indexes.

Complaints sent to Google about Sowndhaus in 2017

Noting that corporations are using their powers “to hinder, stifle, and silence protected new forms of artistic expression with no repercussions”, Sowndhaus says that it is still prepared to work with copyright holders but wishes they would “reconsider their current policies and accept non-commercial transformative works as legitimate art forms with legal protections and/or exemptions in all jurisdictions.”

While Sowndhaus is now operating from a new domain, the switch is not without its inconveniences. All URLs with links to files on sowndhaus.com are broken but can be fixed by changing the .com to .audio.

DomainBox did not respond to TorrentFreak’s request for comment.

Looming Net Neutrality Repeal Sparks BitTorrent Throttling Fears

Post Syndicated from Ernesto original https://torrentfreak.com/looming-net-neutrality-repeal-sparks-bittorrent-throttling-fears-171123/

Ten years ago we uncovered that Comcast was systematically slowing down BitTorrent traffic to ease the load on its network.

The Comcast case ignited a broad discussion about net neutrality and provided the setup for the FCC’s Open Internet Order, which came into effect three years later.

This Open Internet Order then became the foundation of the net neutrality regulation that was adopted in 2015 and still applies today. The big change compared to the earlier attempt was that ISPs can be regulated as carriers under Title II.

These rules provide a clear standard that prevents ISPs from blocking, throttling, and paid prioritization of “lawful” traffic. However, this may soon be over as the FCC is determined to repeal it.

FCC head Ajit Pai recently told Reuters that the current rules are too restrictive and hinder competition and innovation, which is ultimately not in the best interests of consumers.

“The FCC will no longer be in the business of micromanaging business models and preemptively prohibiting services and applications and products that could be pro-competitive,” Pai said. “We should simply set rules of the road that let companies of all kinds in every sector compete and let consumers decide who wins and loses.”

This week the FCC released its final repeal draft (pdf), which was met with fierce resistance from the public and various large tech companies. They fear that, if the current net neutrality rules disappear, throttling and ‘fast lanes’ for some services will become commonplace.

This could also mean that BitTorrent traffic could become a target once again, with it being blocked or throttled across many networks, as The Verge just pointed out.

Blocking BitTorrent traffic would indeed become much easier if current net neutrality safeguards were removed. However, the FCC believes that the current “no-throttling rules are unnecessary to prevent the harms that they were intended to thwart,” such as blocking entire file transfer protocols.

Instead, the FCC notes that antitrust law, FTC enforcement of ISP commitments, and consumer expectations will prevent any unwelcome blocking. This is also the reason why ISPs adopted no-blocking policies even when they were not required to, they point out.

Indeed, when the DC Circuit Court of Appeals decimated the Open Internet Order in 2014, Comcast was quick to assure subscribers that it had no plans to start throttling torrents again. That said, this offers no guarantees for the future.

The FCC goes on to mention that the current net neutrality rules don’t prevent selective blocking. They can already be bypassed by ISPs if they offer “curated services,” which allows them to filter content on viewpoint grounds. And Edge providers also block content because it violates their “viewpoints,” citing the Cloudflare termination of The Daily Stormer.

Net neutrality supporters see these explanations as weak excuses and have less trust in the self-regulating capacity of the ISP industry than the FCC does, calling for last-minute protests to stop the repeal.

For now it appears, however, that the FCC is unlikely to change its course, as Ars Technica reports.

While net neutrality concerns are legitimate, for BitTorrent users not that much will change.

As we’ve highlighted in the past, blocking pirate sites is already an option under the current rules. The massive copyright loophole made sure of that. Targeting all torrent traffic is even an option, in theory.

If net neutrality is indeed repealed next month, blocking or throttling BitTorrent traffic across the entire network will become easier, no doubt. For now, however, there are no signs that any ISPs plan to do so.

If they do, we will know soon enough. The FCC will require ISPs to be transparent under the new plan. They have to disclose network management practices, blocking efforts, commercial prioritization, and the like.

170 ‘Pirate’ IPTV Vendors Throw in the Towel Facing Legal Pressure

Post Syndicated from Ernesto original https://torrentfreak.com/170-pirate-iptv-vendors-throw-the-in-the-towel-facing-legal-pressure-171121/

Pirate streaming boxes are all the rage this year. They are popular not just among tens of millions of users; they are on top of the anti-piracy agenda as well.

Dubbed Piracy 3.0 by the MPAA, copyright holders are trying their best to curb this worrisome trend. In the Netherlands local anti-piracy group BREIN is leading the charge.

Backed by the major film studios, the organization booked a significant victory earlier this year against Filmspeler. In this case, the European Court of Justice ruled that selling or using devices pre-configured to obtain copyright-infringing content is illegal.

Paired with the earlier GS Media ruling, which held that companies with a for-profit motive can’t knowingly link to copyright-infringing material, this provides a powerful enforcement tool.

With these decisions in hand, BREIN previously pressured hundreds of streaming box vendors to halt sales of hardware with pirate addons, but it didn’t stop there. This week the group also highlighted its successes against vendors of unauthorized IPTV services.

“BREIN has already stopped 170 illegal providers of illegal media players and/or IPTV subscriptions. Even providers that only offer illegal IPTV subscriptions are being dealt with,” BREIN reports.

In addition to shutting down the trade in IPTV services, the anti-piracy group also removed 375 advertisements for such services from various marketplaces.

“This is illegal commerce. If you wait until you are warned, you are too late,” BREIN director Tim Kuik says.

“You can be held personally liable. You can also be charged and criminally prosecuted. Willingly committing commercial copyright infringement can lead to an 82,000 euro fine and 4 years imprisonment,” he adds.

While most pirate IPTV vendors threw in the towel voluntarily, some received an extra incentive. Twenty signed a settlement with BREIN for varying amounts, up to tens of thousands of euros. They all face further penalties if they continue to sell pirate subscriptions.

In some cases, the courts were involved. This includes the recent lawsuit against MovieStreamer, which was ordered to stop its IPTV hyperlinking activities immediately. Failure to do so will result in a 5,000 euro per day fine. In addition, the vendor was also ordered to pay legal costs of 17,527 euros.

While BREIN has booked plenty of successes already, as the examples above illustrate, the pirate streaming box problem is far from solved. The anti-piracy group currently has one case pending in court, but more are likely to follow in the near future.

“The Commercial Usenet Stinks on All Sides,” Anti-Piracy Boss Says

Post Syndicated from Ernesto original https://torrentfreak.com/the-commercial-usenet-stinks-on-all-sides-anti-piracy-boss-says-171118/

Dutch anti-piracy group BREIN has targeted pirates of all shapes and sizes over the past several years.

It’s also one of the few groups keeping a close eye on Usenet piracy. Although Usenet and associated piracy are a few decades old already and relatively old-fashioned, the area still has millions of frequent users. This hasn’t escaped the attention of law enforcement.

Last week police in Germany launched one of the largest anti-piracy operations in recent history. Houses of dozens of suspects connected to Usenet forums were searched, with at least 1,000 gigabytes of data and numerous computers seized for evidence.

In their efforts, German authorities received help from international colleagues in the Netherlands, Spain, San Marino, Switzerland and Canada. Rightfully so, according to BREIN boss Tim Kuik, who describes Usenet as a refuge for pirates.

“Usenet was originally for text only. People were able to ask questions and exchange information via newsgroups. After it became possible to store video and music as Usenet text messages, it became a refuge for illegal copies of everything. That’s where the revenue model is based on today,” Kuik says.

BREIN states that uploaders, Usenet forums, and Usenet resellers all work in tandem. Resellers provide free accounts to popular uploaders, for example, which generates more traffic and demand for subscriptions. That’s how resellers and providers earn their money.

The same resellers also advertise on popular Usenet forums where links to pirated files are shared, suggesting that they specifically target these users. For example, one of the resellers targeted by BREIN in the past was sponsoring one of the sites that were raided last week, BREIN notes.

Last year BREIN signed settlements with several Usenet uploaders. This was in part facilitated by a court order, directing Usenet provider Eweka to identify a former subscriber who supposedly shared infringing material.

Following this verdict, several Dutch Usenet servers were taken over by a San Marino company. But according to BREIN, this company can also be ordered to share customer information if needed.

“It is not unthinkable that this construction has been called into existence by Usenet companies who find themselves in hot water,” Kuik says.

According to BREIN, it’s clear: large parts of Usenet have turned into a playground for pirates and people who profit from copyright infringement, all while the legitimate rightsholders don’t see a penny.

“For a long time, there’s been a certain smell to the commercial Usenet,” Kuik says. “It’s stinking on all sides.”


New White House Announcement on the Vulnerability Equities Process

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/11/new_white_house_1.html

The White House has released a new version of the Vulnerabilities Equities Process (VEP). This is the inter-agency process by which the US government decides whether to inform the software vendor of a vulnerability it finds, or keep it secret and use it to eavesdrop on or attack other systems. You can read the new policy or the fact sheet, but the best place to start is Cybersecurity Coordinator Rob Joyce’s blog post.

In considering a way forward, there are some key tenets on which we can build a better process.

Improved transparency is critical. The American people should have confidence in the integrity of the process that underpins decision making about discovered vulnerabilities. Since I took my post as Cybersecurity Coordinator, improving the VEP and ensuring its transparency have been key priorities, and we have spent the last few months reviewing our existing policy in order to improve the process and make key details about the VEP available to the public. Through these efforts, we have validated much of the existing process and ensured a rigorous standard that considers many potential equities.

The interests of all stakeholders must be fairly represented. At a high level we consider four major groups of equities: defensive equities; intelligence / law enforcement / operational equities; commercial equities; and international partnership equities. Additionally, ordinary people want to know the systems they use are resilient, safe, and sound. These core considerations, which have been incorporated into the VEP Charter, help to standardize the process by which decision makers weigh the benefit to national security and the national interest when deciding whether to disclose or restrict knowledge of a vulnerability.

Accountability of the process and those who operate it is important to establish confidence in those served by it. Our public release of the unclassified portions of the Charter will shed light on aspects of the VEP that were previously shielded from public review, including who participates in the VEP’s governing body, known as the Equities Review Board. We make it clear that departments and agencies with protective missions participate in VEP discussions, as well as other departments and agencies that have broader equities, like the Department of State and the Department of Commerce. We also clarify what categories of vulnerabilities are submitted to the process and ensure that any decision not to disclose a vulnerability will be reevaluated regularly. There are still important reasons to keep many of the specific vulnerabilities evaluated in the process classified, but we will release an annual report that provides metrics about the process to further inform the public about the VEP and its outcomes.

Our system of government depends on informed and vigorous dialogue to discover and make available the best ideas that our diverse society can generate. This publication of the VEP Charter will likely spark discussion and debate. This discourse is important. I also predict that articles will make breathless claims of “massive stockpiles” of exploits while describing the issue. That simply isn’t true. The annual reports and transparency of this effort will reinforce that fact.

Mozilla is pleased with the new charter. I am less so; it looks to me like the same old policy with some new transparency measures — which I’m not sure I trust. The devil is in the details, and we don’t know the details — and it has giant loopholes that pretty much anything can fall through:

The United States Government’s decision to disclose or restrict vulnerability information could be subject to restrictions by partner agreements and sensitive operations. Vulnerabilities that fall within these categories will be cataloged by the originating Department/Agency internally and reported directly to the Chair of the ERB. The details of these categories are outlined in Annex C, which is classified. Quantities of excepted vulnerabilities from each department and agency will be provided in ERB meetings to all members.

This is me from last June:

There’s a lot we don’t know about the VEP. The Washington Post says that the NSA used EternalBlue “for more than five years,” which implies that it was discovered after the 2010 process was put in place. It’s not clear if all vulnerabilities are given such consideration, or if bugs are periodically reviewed to determine if they should be disclosed. That said, any VEP that allows something as dangerous as EternalBlue — or the Cisco vulnerabilities that the Shadow Brokers leaked last August — to remain unpatched for years isn’t serving national security very well. As a former NSA employee said, the quality of intelligence that could be gathered was “unreal.” But so was the potential damage. The NSA must avoid hoarding vulnerabilities.

I stand by that, and am not sure the new policy changes anything.

More commentary.

Here’s more about the Windows vulnerabilities hoarded by the NSA and released by the Shadow Brokers.

EDITED TO ADD (11/18): More news.

EDITED TO ADD (11/22): Adam Shostack points out that the process does not cover design flaws or trade-offs, and that those need to be covered:

…we need the VEP to expand to cover those issues. I’m not going to claim that will be easy, that the current approach will translate, or that they should have waited to handle those before publishing. One obvious place it gets harder is the sources and methods tradeoff. But we need the internet to be a resilient and trustworthy infrastructure.

The Decision on Transparency

Post Syndicated from Gleb Budman original https://www.backblaze.com/blog/transparency-in-business/


This post by Backblaze’s CEO and co-founder Gleb Budman is the seventh in a series about entrepreneurship. You can choose posts in the series from the list below:

  1. How Backblaze got Started: The Problem, The Solution, and the Stuff In-Between
  2. Building a Competitive Moat: Turning Challenges Into Advantages
  3. From Idea to Launch: Getting Your First Customers
  4. How to Get Your First 1,000 Customers
  5. Surviving Your First Year
  6. How to Compete with Giants
  7. The Decision on Transparency


“Are you crazy?” “Why would you do that?!” “You shouldn’t share that!”

These are just a few of the common questions and comments we heard after posting some of the information we have shared over the years. So was it crazy? Misguided? Should you do it?

With that background, I’d like to dig into the decision to become so transparent: from releasing stats on hard drive failures, to publishing Storage Pod specs, to sharing our cloud storage costs and open sourcing our Reed-Solomon code. What was the thought process behind becoming so transparent when most companies work so hard to hide their inner workings, especially information such as the Storage Pod specs that would normally be considered a proprietary advantage? Most importantly, I’d like to explore the positives and negatives of being so transparent.

Sharing Intellectual Property

The first “transparency” that garnered a flurry of “why would you share that?!” came as a result of us deciding to open source our Storage Pod design: publishing the specs, parts, prices, and how to build it yourself. The Storage Pod was a key component of our infrastructure, gave us a cost (and thus competitive) advantage, took significant effort to develop, and had a fair bit of intellectual property: the “IP.”

The negatives of sharing this are obvious: it allows our competitors to use the design to reduce our cost advantage, and it gives away the IP, which could be patentable or have value as a trade secret.

The positives were certainly less obvious, and at the time we couldn’t have guessed how massive they would be.

We wrestled with the decision: prospective users and others online didn’t believe we could offer our service for such a low price, thinking that we would burn through some cash hoard and then go out of business. We wanted to reassure them, but how?

This is how our response evolved:

We’ve built a lower cost storage platform.
But why would anyone believe us?
Because, we’ve designed our own servers and they’re less expensive.
But why would anyone believe they were so low cost and efficient?
Because here’s how much they cost versus others.
But why would anyone believe they cost that little and still enabled us to efficiently store data?
Because here are all the components they’re made of, this is how to build them, and this is how they work.
Ok, you can’t argue with that.

Great — so that would reassure people. But should we do this? Is it worth it?

This was 2009; we were a tiny company of seven people working from our co-founder’s one-bedroom apartment. We decided that the risk of not having potential customers trust us was more impactful than the risk of our competitors possibly deciding to use our server architecture. The former might kill the company in short order; the latter might make it harder for us to compete in the future. Moreover, we figured that most competitors were established on their own platforms and were unlikely to switch to ours, even if it were better.

Takeaway: Build your brand today. There are no assurances you will make it to tomorrow if you can’t make people believe in you today.

A Sharing Success Story — The Backblaze Storage Pod

So with that, we decided to publish everything about the Storage Pod. As for deciding to actually open source it? That was a ‘thank you’ to the open source community upon whose shoulders we stood as we used software such as Linux, Tomcat, etc.

With eight years of hindsight, here’s what happened:

  • As best I can tell, none of our direct competitors ever used our Storage Pod design, opting instead to continue paying more for commercial solutions.
  • Hundreds of press articles have been written about Backblaze as a direct result of sharing the Storage Pod design.
  • Millions of people have read press articles or our blog posts about the Storage Pods.
  • Backblaze was established as a storage tech thought leader, and a resource for those looking for information in the space.
  • Our blog became viewed as a resource, not a corporate mouthpiece.
  • Recruiting has been made easier through the awareness of Backblaze, the appreciation for us taking on challenging tech problems in interesting ways, and for our openness.
  • Sourcing for our Storage Pods has become easier because we can point potential vendors to our blog posts and say, “here’s what we need.”

And those are just the direct benefits for us. One of the things that warms my heart is that doing this has helped others:

  • Several companies have started selling servers based on our Storage Pod designs.
  • Netflix credits Backblaze with being the inspiration behind their CDN servers.
  • Many schools, labs, and others have shared that they’ve been able to do what they didn’t think was possible because using our Storage Pod designs provided lower-cost storage.
  • And I want to believe that in general we pushed forward the development of low-cost storage servers in the industry.

So overall, the decision on being transparent and sharing our Storage Pod designs was a clear win.

Takeaway: Never underestimate the value of goodwill. It can help build new markets that fuel your future growth and create new ecosystems.

Sharing An “Almost Acquisition”

Acquisition announcements are par for the course. No company, however, talks about the acquisition that fell through. If rumors appear in the press, the company’s response is always “no comment.” But in 2010, when Backblaze was almost, but ultimately not, acquired, we wrote about it in detail. Crazy?

The negatives of sharing this are slightly less obvious, but the two issues most people worried about were: 1) the fact that the company could be acquired would spook customers, and 2) the fact that it ultimately wasn’t acquired would signal to potential acquirers that something was wrong.

So, why share this at all? No one was asking “did you almost get acquired?”

First, we had established a culture of transparency, and this was a significant event for us, so we defaulted to assuming we would share. Second, we learned that acquisitions fall through all the time, not just during the early fishing stage, but even after term sheets are signed, diligence is done, and all the paperwork is complete. I felt we had learned some things about the process that would be valuable to others who were going through it.

As it turned out, we received emails from startup founders saying they had saved the post for the future, and from lawyers, VCs, and advisors saying they had shared it with their portfolio companies. Among the most touching emails I received was from a founder who said that after an acquisition fell through she felt so alone that she became incredibly depressed, and that reading our post helped her see that this happens and that things can be OK afterward. Being transparent about almost getting acquired was worth it just to help that one founder.

And what about the concerns? As for spooking customers, maybe some were — but our sign-ups went up, not down, afterward. Any company can be acquired, and many of the world’s largest have been. I believe that being both thoughtful about where to take the company and open about the process gave customers a sense that we would do the right thing if an acquisition did happen. And as for signaling to potential acquirers? The ones I’ve spoken with all knew this happens regularly enough that it’s not a factor.

Takeaway: Being open and transparent is also a form of giving back to others.

Sharing Strategic Data

For years people have been desperate to know how reliable hard drives are. They could go to Amazon for individual reviews, but someone saying “this drive died for me” doesn’t provide statistical insight. Google published a study that showed annualized drive failure rates, but didn’t break down the results by manufacturer or model. Since Backblaze has deployed about 100,000 hard drives to store customer data, we have been able to collect a wealth of data on the reliability of the drives by make, model, and size. Was Backblaze the only one with this data? Of course not — Google, Amazon, Microsoft, and any other cloud-scale storage provider tracked it. Yet none would publish. Should Backblaze?

Again, starting with the main negatives: 1) sharing which drives we liked could increase demand for them, thus reducing availability or increasing prices, and 2) publishing the data might make the drive vendors unhappy with us, thereby making it difficult for us to buy drives.

But we felt that the largest drive purchasers (Amazon, Google, etc.) already had their own stats and would buy the drives they chose, and if individuals or smaller companies used our stats, they wouldn’t sufficiently move the overall market demand. Also, we hoped that the drive companies would see that we were being fair in our analysis and, if anything, would leverage our data to make drives even better.

Again, publishing the data resulted in tremendous value for Backblaze, with millions of people having read the analysis we put out quarterly. Becoming known as the place to go for drive reliability information is also a natural fit with being a backup and storage provider. In addition, in a twist on many people’s expectations, some of the drive companies actually started working more closely with us, seeing that we could be a good source of feedback data. We’ve also seen many individuals and companies make more data-based decisions on which drives to buy, and researchers have used the data for a variety of analyses.
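
To make those statistics concrete: the headline figure in drive reports of this kind is the annualized failure rate, which normalizes failures by how long each drive was actually in service. Here is a hedged sketch of that calculation; the CSV layout and the column names 'model' and 'failure' (1 on the day a drive dies, else 0) are hypothetical stand-ins for whatever per-drive-day records a provider keeps.

    import csv
    from collections import defaultdict

    # Annualized failure rate (AFR) per drive model:
    #     AFR = failures / (drive_days / 365) * 100
    # Assumes one row per drive per day in the input file.

    def afr_by_model(path: str) -> dict:
        drive_days = defaultdict(int)   # total days in service, per model
        failures = defaultdict(int)     # total failures, per model
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                drive_days[row["model"]] += 1
                failures[row["model"]] += int(row["failure"])
        return {m: 100.0 * failures[m] / (drive_days[m] / 365.0)
                for m in drive_days}

    for model, rate in sorted(afr_by_model("drive_stats.csv").items()):
        print(f"{model}: {rate:.2f}% annualized failure rate")

Run against a per-drive-day log, this prints one annualized rate per model, which is the kind of result that makes per-model comparisons meaningful in a way individual Amazon reviews never can.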

Backblaze blog analytics showing a spike in readership after a hard drive stats post

Takeaway: Being open and transparent is rarely as risky as it seems.

Sharing Revenue (And Other Metrics)

Journalists always want to publish company revenue and other metrics, and private companies always shy away from sharing. For a long time we did, too. Then, we opened up about that, as well.

The negatives of sharing these numbers are: 1) external parties may otherwise perceive you’re doing better than you are, 2) if you share numbers often, you may show that growth has slowed or worse, and 3) it gives your competitors info against which to compare their own business.

We decided that, while some may have perceived we were bigger, our scale was plenty significant. Since we choose what we share and when, it’s up to us whether to disclose at any point. And if our competitors compare, what will they actually change that would affect us?

I did wait to share revenue until I felt I had the right person to write about it. At one point a journalist said she wouldn’t write about us unless I disclosed revenue. I suggested we had a lot to offer for the story, but didn’t want to share revenue yet. She refused to budge and I walked away from the article. Several years later, I reached out to a journalist who had covered Backblaze before and who, I felt, understood our business, and offered to share revenue with him. He wrote a deep-dive about the company, with revenue being one of the components of the story.

Sharing these metrics showed that we were at scale and running a real business, one with positive unit economics and margins, but not one where we were gouging customers.

Takeaway: Being open with the press about items typically not shared can be uncomfortable, but the press can amplify your story.

Should You Share?

For Backblaze, I believe the results of transparency have been staggering. However, it’s not for everyone. Apple has, clearly, been wildly successful taking secrecy to the extreme. In their case, early disclosure combined with the long cycle of hardware releases could significantly impact sales of current products.


I will argue, however, that for most startups transparency wins. Most startups need to establish credibility and trust, build awareness and a fan base, show that they understand what their customers need and be useful to them, and show the soul and passion behind the company. Some startup companies try to buy these virtues with investor money, and sometimes amplifying your brand via paid marketing helps. But, authentic transparency can build awareness and trust not only less expensively, but more deeply than money can buy.

Backblaze was open from the beginning. With no outside investors, as founders we were able to express ourselves and make our decisions. And it’s easier to be a company that shares if you do it from the start, but for any company, here are a few suggestions:

  1. Ask about sharing: If something significant happens — good or bad — ask “should we share this?” If you made a tough decision, ask “should we share the thinking behind the decision and why it was tough?”
  2. Default to yes: It’s often scary to share, but look for the reasons to say ‘yes,’ not the reasons to say ‘no.’ That doesn’t mean you won’t sometimes decide not to, but make that the high bar.
  3. Minimize reviews: Press releases tend to be sanitized and boring because they’ve been endlessly wordsmithed by committee. Establish the few things you don’t want shared, but minimize the number of people that have to see anything else before it can go out. Teach, then trust.
  4. Engage: Sharing will result in comments on your blog, social, articles, etc. Reply to people’s questions and engage. It’ll make the readers more engaged and give you a better understanding of what they’re looking for.
  5. Accept mistakes: Things will become public that aren’t perfectly sanitized. Accept that and don’t punish people for oversharing.

Building a culture of sharing takes time, but continuous practice will build it, and over time the company will find its voice and its approach to sharing.


The Pirate Bay & 1337x Must Be Blocked, Austrian Supreme Court Rules

Post Syndicated from Andy original https://torrentfreak.com/the-pirate-bay-1337x-must-be-blocked-austrian-supreme-court-rules-171014/

Following a long-running case, in 2015 Austrian ISPs were ordered by the Commercial Court to block The Pirate Bay and other “structurally-infringing” sites including 1337x.to, isohunt.to, and h33t.to.

The decision was welcomed by the music industry, which looked forward to having more sites blocked in due course.

Soon after, local music rights group LSG sent its lawyers after several other large ISPs urging them to follow suit, or else. However, the ISPs dug in and a year later, in May 2016, things began to unravel. The Vienna Higher Regional Court overruled the earlier decision of the Commercial Court, meaning that local ISPs were free to unblock the previously blocked sites.

The Court concluded that ISP blocks are only warranted if copyright holders have exhausted all their options to take action against those actually carrying out the infringement. This decision was welcomed by the Internet Service Providers Austria (ISPA), which described the decision as an important milestone.

The ISPs argued that only torrent files, not the content itself, were available on the portals. They also had a problem with the restriction of access to legitimate content.

“A problem in this context is that the offending pages also have legal content and it is no longer possible to access that if barriers are put in place,” said ISPA Secretary General Maximilian Schubert.

Taking the case to its ultimate conclusion, the music companies appealed to the Supreme Court. Another year on, its decision has just been published, and for the rightsholders it was worth the effort. They represent 3,000 artists, including The Beatles, Justin Bieber, Eric Clapton, Coldplay, David Guetta, Iggy Azalea, Michael Jackson, Lady Gaga, Metallica, George Michael, One Direction, Katy Perry, and Queen, to name a few.

The Court looked at whether “the provision and operation of a BitTorrent platform with the purpose of online file sharing [of non-public domain works]” represents a “communication to the public” under the EU Copyright Directive. Citing the now-familiar BREIN v Filmspeler and BREIN v Ziggo and XS4All cases that both received European Court of Justice rulings earlier this year, the Supreme Court concluded it was.

Citing another Dutch case, in which Playboy publisher Sanoma took on the blog GeenStijl.nl, the Court noted that linking to copyrighted content hosted elsewhere also amounted to a “communication to the public”, a situation mirrored on torrent sites like The Pirate Bay.

“The similarity of the technical procedure in this case when compared to BitTorrent platforms lies in the fact that in both cases the operators of the website did not provide any copyrighted works themselves, but merely provided further information on sites where the protected works were available,” the Court notes in its ruling.

In respect of the potential for blocking legitimate content as well as that infringing copyright, the Court turned the ISPs’ own arguments against them somewhat.

The ISPs had previously argued that blocking The Pirate Bay and other sites was pointless since the torrents they host would still be available elsewhere. The Court noted that point and also found that people can easily upload their torrents to sites that aren’t blocked, since there’s plenty of choice.

The ISPA criticized the Supreme Court’s ruling, noting that in future ISPs will still find themselves being held responsible for decisions concerning blocking.

“We do not support illegal content on the Internet in any way, but consider it extremely questionable that the decision on what is illegal and what is not falls to ISPs, instead of a court,” said ISPA Secretary General Maximilian Schubert.

“Although we find it positive that a court of last resort has taken the decision, the assessment of the website in the first instance continues to be left to the Internet provider. The Supreme Court’s expansion of the circle of sites that can potentially be blocked further complicates this task for the operator and furthers the privatization of law enforcement.

“It is extremely unpleasant that even after more than 10 years of fierce discussion, there is still no compelling legal basis for a court decision on Internet blocking, which puts providers in the role of both judge and hangman.”

Also of interest is ISPA’s stance on how blocking of content fails to solve the underlying issue. When content is blocked, rather than removed, it simply displaces the problem, leaving others to pick up the pieces, the Internet body argues.

“Illegal content is permanently removed from the network by deletion. Everything else is a placebo with extremely dangerous side effects, which can easily be bypassed by both providers and consumers. The only thing that remains is a blocking infrastructure that can be misused for many purposes and, unfortunately, will be used in many places,” Schubert says.

“The current situation, where providers have to implement blocks for rightsholders practically on the spot if they do not want to engage in time-consuming and cost-intensive litigation, is really not sustainable, so we issue a call to action to the legislature.”

The domains that were listed in the case, many of which are already defunct, are: thepiratebay.se, thepiratebay.gd, thepiratebay.la, thepiratebay.mn, thepiratebay.mu, thepiratebay.sh, thepiratebay.tw, thepiratebay.fm, thepiratebay.ms, thepiratebay.vg, isohunt.to, 1337x.to and h33t.to.

The only domain currently used by The Pirate Bay (thepiratebay.org) is not included in the list; whether it will be added later is unclear.


Dallas Buyers Club Loses Piracy Lawsuit, IP-Address is Not Enough

Post Syndicated from Ernesto original https://torrentfreak.com/dallas-buyers-club-loses-piracy-lawsuit-ip-address-is-not-enough-171110/

In recent years, BitTorrent users around the world have been targeted with legal threats: either pay a significant settlement fee, or face far worse in court.

The scheme started in Germany years ago, and copyright holders later went after alleged pirates in Australia, Denmark, Finland, the UK, US, and elsewhere.

This summer, the copyright holders behind the movie Dallas Buyers Club added Spain to the mix, going after dozens of alleged pirates in Bilbao and San Sebastian. The ‘filmmakers’ are part of a tight group of so-called copyright trolls which are constantly expanding their business to other countries.

While they have had some success, mainly by sending out settlement letters, in Spain the first court case brought bad news.

The Commercial Court of Donostia dismissed the claim against an alleged file-sharer due to a lack of evidence. Dallas Buyers Club identified the infringer through an IP-address, but according to Judge Pedro José Malagón Ruiz, this is not good enough.

“The ruling says that there is no way to know whether the defendant was the P2P user or not, because an IP address only identifies the person who subscribed to the Internet connection, not the user who made use of the connection at a certain moment,” copyright lawyer David Bravo tells TorrentFreak.

“A relative or a guest could have been using the network, or even someone accessing the wifi if it was open,” he adds.

In addition, the Judge agreed with the defense that there is no evidence that the defendant actively made the movie available. This generally requires a form of intent. However, BitTorrent clients automatically share files with others, whether it’s the intention of the user or not.

“The upload of the data from the P2P programs occurs automatically by the program configuration itself. […] This occurs by default without requiring the knowledge or intention of the user,” Judge Malagón Ruiz writes in his verdict, quoted by Genbeta.

In other words, these BitTorrent transfers are not necessarily an act of public communication, therefore, they are not infringing any copyrights.
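
The court’s technical premise is easy to demonstrate: in BitTorrent, serving pieces to other peers is a side effect of the protocol itself rather than a separate action a user opts into. The toy swarm simulation below (a sketch, not any real client’s code) shows leechers uploading simply because they hold pieces.

    import random

    # Toy model of a BitTorrent swarm: one seed holds all pieces, leechers
    # hold none. Each round, every peer fetches a missing piece from some
    # peer that has it. Leechers never 'decide' to upload; they serve
    # pieces merely because they hold them.

    PIECES = set(range(8))

    class Peer:
        def __init__(self, name, pieces=()):
            self.name, self.have, self.uploaded = name, set(pieces), 0

    def simulate(rounds=20):
        random.seed(1)
        swarm = [Peer("seed", PIECES)] + [Peer(f"leecher{i}") for i in range(3)]
        for _ in range(rounds):
            for peer in swarm:
                missing = PIECES - peer.have
                sources = [p for p in swarm if p is not peer and p.have & missing]
                if missing and sources:
                    src = random.choice(sources)
                    peer.have.add(random.choice(sorted(src.have & missing)))
                    src.uploaded += 1      # uploading happens as a side effect
        for p in swarm:
            print(f"{p.name}: {len(p.have)}/8 pieces, uploaded {p.uploaded}")

    simulate()

After a few rounds, every leecher has uploaded pieces it never explicitly chose to share, which is precisely the default behavior the verdict describes.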

The case provides hope for other accused file-sharers who are looking to have their cases dismissed as well, not least because the defense was coordinated online, without the active involvement of a lawyer.

Bravo, together with two fellow lawyers, offered self-help forms to accused file-sharers free of charge. Defendants could use these to mount a proper defense, which paid off in this case.

“This ruling sets a precedent,” Bravo tells TorrentFreak, noting that it’s a clear setback for the copyright holders who are involved in these mass file-sharing lawsuits.

While the lawyer cautions that other courts may come to a different conclusion, it appears that Dallas Buyers Club and other copyright trolls will meet some fierce ‘p2p coordinated’ resistance in Spain.


Multi-National Police Operation Shuts Down Pirate Forums

Post Syndicated from Andy original https://torrentfreak.com/multi-national-police-operation-shuts-down-pirate-forums-171110/

Once upon a time, large-scale raids on pirate operations were a regular occurrence, with news of such events making the headlines every few months. These days things have calmed down somewhat, but reports coming out of Germany suggest that the war isn’t over yet.

According to a statement from German authorities, the Attorney General in Dresden and various cybercrime agencies teamed up this week to take down sites dedicated to sharing copyright-protected material via the Usenet (newsgroups) system.

Huge numbers of infringing items were said to have been made available on a pair of indexing sites – 400,000 on Town.ag and 1,200,000 on Usenet-Town.com.

“Www.town.ag and www.usenet-town.com were two of the largest online portals that provided access to films, series, music, software, e-books, audiobooks, books, newspapers and magazines through systematic and unlawful copyright infringement,” the statement reads.

Visitors to these URLs are no longer greeted by the usual warez-fest, but by a seizure banner placed there by German authorities.

Seizure banner on Town.ag and Usenet-Town.com (translated)

Following an investigation carried out after complaints from rightsholders, 182 officers from various agencies raided homes and businesses on Wednesday, all connected to a reported 26 suspects. In addition to searches of data centers located in Germany, servers in Spain, the Netherlands, San Marino, Switzerland, and Canada were also targeted.

According to police, the sites generated income from ‘sponsors’, netting their operators millions of euros in revenue. One of those sponsors appears to be Usenet reseller SSL-News, which displays the same seizure banner. Rightsholders claim that the Usenet portals have cost them many millions of euros in lost sales.

Arrest warrants were issued in Spain and Saxony against two German nationals, aged 39 and 31 respectively. The man arrested in Spain is believed to be a ringleader, and authorities there have been asked to extradite him to Germany.

At least 1,000 gigabytes of data were seized, with police scooping up numerous computers and other hardware for evidence. The true scale of material indexed is likely to be much larger, however.

Online chatter suggests that several other Usenet-related sites have also disappeared during the past day but whether that’s a direct result of the raids or down to precautionary measures taken by their operators isn’t yet clear.
