Tag Archives: Bits

The AWS Cloud Goes Underground at re:Invent

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/the-aws-cloud-goes-underground-at-reinvent/

As you wander through the AWS re:Invent campus, take a minute to think about your expectations for all of the elements that need to come together…

Starting with the location, my colleagues have chosen the best venues, designed the sessions, picked the speakers, laid out the menu, selected the color schemes, programmed or printed all of the signs, and much more, all with the goal of creating an optimal learning environment for you and tens of thousands of other AWS customers.

However, as is often the case, the part that you can see is just a part of the picture. Behind the scenes, people, processes, plans, and systems come together to put all of this infrastructure into place and to make it run so smoothly that you don’t usually notice it.

Today I would like to tell you about a mission-critical aspect of the re:Invent infrastructure that is actually underground. In addition to providing great Wi-Fi for your phones, tablets, cameras, laptops, and other devices, we need to make sure that a myriad of events, from the live-streamed keynotes to the WorkSpaces-powered hands-on labs, are well-connected to each other and to the Internet. With events running at hotels up and down the Las Vegas Strip, reliable, low-latency connectivity is essential!

Thank You CenturyLink / Level3
Over the years we have been working with the great folks at Level3 to make this happen. They recently became part of CenturyLink and are now the Official Network Sponsor of re:Invent, responsible for the network fiber, circuits, and services that tie the re:Invent campus together.

To make this happen, they set up two miles of dark fiber beneath the Strip, routed to multiple Availability Zones in two separate AWS Regions. The Sands Expo Center is equipped with redundant 10 gigabit connections and the other venues (Aria, MGM, Mirage, and Wynn) are each provisioned for 2 to 10 gigabits, meaning that over half of the Strip is enabled for Direct Connect. According to the IT manager at one of the facilities, this may be the largest temporary hybrid network ever configured in Las Vegas.

On the Wi-Fi side, showNets is plugged in to the same network; your devices are talking directly to Direct Connect access points (how cool is that?).

Here’s a simplified illustration of how it all fits together:

The CenturyLink team will be onsite at re:Invent and will be tweeting live network stats throughout the week.

I hope you have enjoyed this quick look behind the scenes and beneath the street!

Jeff;

Center For Justice Wants Court to Unveil Copyright Trolling Secrets

Post Syndicated from Ernesto original https://torrentfreak.com/center-for-justice-wants-court-to-unveil-copyright-trolling-secrets-171116/

Mass-piracy lawsuits have been plaguing the U.S. for years, targeting hundreds of thousands of alleged downloaders.

While the numbers are massive, there are only a few so-called “copyright trolling” operations running the show.

These are copyright holders, working together with lawyers and piracy tracking firms, trying to extract cash settlements from alleged subscribers.

Getting a settlement is also what the makers of the “Elf-Man” movie tried when they targeted Ryan Lamberson of Spokane Valley, Washington. Unlike most defendants, however, Lamberson put up a fight, questioning the validity of the evidence. After the filmmaker pulled out, the accused pirate ended up winning $100,000 in attorney fees.

All this happened three years ago but it appears that there might be more trouble in store for Elf-Man and related companies.

The Washington non-profit organization Center for Justice (CFJ) recently filed a motion to intervene in the case. The group, which aims to protect “the wider community from abuse by the moneyed few,” has asked the court to unseal several documents that could reveal more about how these copyright trolls operate.

The non-profit asks the court to open up several filings to the public that may reveal how film companies, investigators, and lawyers coordinated an illegal settlement factory.

“The CFJ’s position is simple: if foreign data collectors and local lawyers are feeding on the subpoena power of federal courts to extract settlements from innocent people, then the public deserves to know.

“What makes this case so important is that, based on the unsealed exhibits and declarations, it appears that a German operation is providing the ‘investigators’ and ‘experts’ that claim to identify infringing activities, but its investigators apparently have a direct financial interest and the ‘software’ is questionable at best,” CFJ adds.

Another problem mentioned by the non-profit organization is that not all defense lawyers are familiar with these ‘trolling’ cases. They sometimes need dozens of hours to research them, which costs the defendant more than the cash settlement deal offered by the copyright holder.

As a result, paying off the trolls may seem like the most logical and safe option to the accused, even when they are innocent.

CFJ hopes that the sealed documents will help to expose the copyright trolls’ “mushrooming” enterprise, not just in this particular case, but also in many similar cases where people are pressured into settling.

“The entire lawsuit may have been a sham. Which is where CFJ comes in. Money and information remain the most significant hurdles for those being named as defendants in lawsuits like this one who receive threatening settlement letters like the one Mr. Lamberson received.

“CFJ’s goal is to level the playing field and reduce the plaintiffs’ informational advantage. The common-law right of access to judicial records is especially important where, as here, the copyright ‘trolling’ risks infecting the judicial system,” the non-profit adds.

The recent filings were spotted by SJD from Fight Copyright Trolls, who rightfully notes that we still have to see whether the documents will be made public, or not. If they are indeed unsealed, it may trigger a response from other accused pirates, perhaps even a class action suit.

—–

Center For Justice’s full motion to intervene is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Steal This Show S03E10: The Battle Of The Bots

Post Syndicated from J.J. King original https://torrentfreak.com/steal-show-s03e10-battle-bots/

If you enjoy this episode, consider becoming a patron and getting involved with the show. Check out Steal This Show’s Patreon campaign: support us and get all kinds of fantastic benefits!

It seems everyone’s getting in on the “fake news” game today, from far-right parties in Germany to critics of Catalan separatism — but none more concertedly than the Russian state itself.

In this episode we meet Ben Nimmo, Fellow at the Atlantic Council’s Digital Forensic Research Lab, to talk us through the latest patterns and trends in online disinformation and hybrid warfare. ‘People who really want to cause trouble can make up just about anything,’ explains Ben, ‘and the fakes are getting more and more complex. It’s really quite alarming.’

After cluing us in on the state of information warfare today, we discuss evidence that the Russians are deploying a fully-funded ‘Troll Factory’ across dominant social networks whose intent is to distort reality and sway the political conversation in favour of its masters.

We dig deep into the present history of the ‘Battle Of The Bots’, looking specifically at the activities of the fake Twitter account @TEN_GOP, whose misinformation has reached all the way to the highest tier of American government. Can we control or counter these rogue informational entities? What’s the best way to do so? Do we need ‘Asimov Laws’ for a new generation of purely online, but completely real, information entities?

Steal This Show aims to release bi-weekly episodes featuring insiders discussing copyright and file-sharing news. It complements our regular reporting by adding more room for opinion, commentary, and analysis.

The guests for our news discussions will vary, and we’ll aim to introduce voices from different backgrounds and persuasions. In addition to news, STS will also produce features interviewing some of the great innovators and minds.

Host: Jamie King

Guest: Ben Nimmo

Produced by Jamie King
Edited & Mixed by Riley Byrne
Original Music by David Triana
Web Production by Siraje Amarniss

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Steal This Show S03E09: Learning To Love Your Panopticon

Post Syndicated from Ernesto original https://torrentfreak.com/steal-show-s03e09-learning-love-panopticon/

If you enjoy this episode, consider becoming a patron and getting involved with the show. Check out Steal This Show’s Patreon campaign: support us and get all kinds of fantastic benefits!

In this episode we meet Diani Barreto from the Berlin Bureau of ExposeFacts. Launched in June 2014, ExposeFacts.org supports and encourages whistleblowers to disclose information that citizens need to make truly informed decisions in a democracy.

ExposeFacts aims to shed light on concealed activities that are relevant to human rights, corporate malfeasance, the environment, civil liberties and war.

Steal This Show aims to release bi-weekly episodes featuring insiders discussing copyright and file-sharing news. It complements our regular reporting by adding more room for opinion, commentary, and analysis.

The guests for our news discussions will vary, and we’ll aim to introduce voices from different backgrounds and persuasions. In addition to news, STS will also produce features interviewing some of the great innovators and minds.

Host: Jamie King

Guest: Diani Barreto

Produced by Jamie King
Edited & Mixed by Riley Byrne
Original Music by David Triana
Web Production by Siraje Amarniss

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

US Senators Ask Apple Why VPN Apps Were Removed in China

Post Syndicated from Andy original https://torrentfreak.com/us-senators-ask-apple-why-vpn-apps-were-removed-in-china-171020/

As part of what is now clearly a crackdown on Great Firewall-evading tools and services, during the summer Chinese government pressure reached technology giant Apple.

On or around July 29, Apple removed many of the most-used VPN applications from its Chinese app store. In a short email from the company, VPN providers were informed that VPN applications are considered illegal in China.

“We are writing to notify you that your application will be removed from the China App Store because it includes content that is illegal in China, which is not in compliance with the App Store Review Guidelines,” Apple informed the affected VPNs.

Apple’s email to VPN providers

Now, in a letter sent to Apple CEO Tim Cook, US senators Ted Cruz and Patrick Leahy express concern at the move by Apple, noting that if reports of the software removals are true, the company could be assisting China’s restrictive approach to the Internet.

“VPNs allow users to access the uncensored Internet in China and other countries that restrict Internet freedom. If these reports are true, we are concerned that Apple may be enabling the Chinese government’s censorship and surveillance of the Internet.”

Describing China as a country with “an abysmal human rights record, including with respect to the rights of free expression and free access to information, both online and offline”, the senators cite Reporters Without Borders who previously labeled the country as “the enemy of the Internet”.

While senators Cruz and Leahy go on to praise Apple for its contribution to the spread of information, they criticize the company for going along with the wishes of the Chinese government as it seeks to suppress knowledge and communication.

“While Apple’s many contributions to the global exchange of information are admirable, removing VPN apps that allow individuals in China to evade the Great Firewall and access the Internet privately does not enable people in China to ‘speak up’,” the senators write.

“To the contrary, if Apple complies with such demands from the Chinese government it inhibits free expression for users across China, particularly in light of the Cyberspace Administration of China’s new regulations targeting online anonymity.”

In January, a notice published by China’s Ministry of Industry and Information Technology said that the government had indeed launched a 14-month campaign to crack down on local ‘unauthorized’ Internet platforms.

This means that all VPN services have to be pre-approved by the Government if they want to operate in China. And the aggression against VPNs and their providers didn’t stop there.

In September, a Chinese man who sold Great Firewall-evading VPN software via a website was sentenced to nine months in prison by a Chinese court. Just weeks later, a software developer who set up a VPN for his own use but later sold access to the service was arrested and detained for three days.

This emerging pattern is clearly a concern for the senators who are now demanding that Tim Cook responds to ten questions (pdf), including whether Apple raised concerns about China’s VPN removal demands and details of how many apps were removed from its store. The senators also want to see copies of any pro-free speech statements Apple has made in China.

Whether the letter will make any difference on the ground in China remains to be seen, but the public involvement of the senators and technology giant Apple is certain to thrust censorship and privacy further into the public eye.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Tips to Secure Your Network in the Wake of KRACK (Linux.com)

Post Syndicated from corbet original https://lwn.net/Articles/736798/rss

Konstantin Ryabitsev argues on Linux.com that WiFi security is only a
part of the problem. “Wi-Fi is merely the first link in a long chain of
communication happening over channels that we should not trust. If I
were to guess, the Wi-Fi router you’re using has probably not received
a security update since the day it got put together. Worse, it probably
came with default or easily guessable administrative credentials that
were never changed. Unless you set up and configured that router
yourself and you can remember the last time you updated its firmware,
you should assume that it is now controlled by someone else and cannot
be trusted.”

IoT Cybersecurity: What’s Plan B?

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2017/10/iot_cybersecuri.html

In August, four US Senators introduced a bill designed to improve Internet of Things (IoT) security. The IoT Cybersecurity Improvement Act of 2017 is a modest piece of legislation. It doesn’t regulate the IoT market. It doesn’t single out any industries for particular attention, or force any companies to do anything. It doesn’t even modify the liability laws for embedded software. Companies can continue to sell IoT devices with whatever lousy security they want.

What the bill does do is leverage the government’s buying power to nudge the market: any IoT product that the government buys must meet minimum security standards. It requires vendors to ensure that devices can not only be patched, but are patched in an authenticated and timely manner; don’t have unchangeable default passwords; and are free from known vulnerabilities. It’s about as low a security bar as you can set, and that it will considerably improve security speaks volumes about the current state of IoT security. (Full disclosure: I helped draft some of the bill’s security requirements.)

The bill would also modify the Computer Fraud and Abuse and the Digital Millennium Copyright Acts to allow security researchers to study the security of IoT devices purchased by the government. It’s a far narrower exemption than our industry needs. But it’s a good first step, which is probably the best thing you can say about this legislation.

However, it’s unlikely this first step will even be taken. I am writing this column in August, and have no doubt that the bill will have gone nowhere by the time you read it in October or later. If hearings are held, they won’t matter. The bill won’t have been voted on by any committee, and it won’t be on any legislative calendar. The odds of this bill becoming law are zero. And that’s not just because of current politics — I’d be equally pessimistic under the Obama administration.

But the situation is critical. The Internet is dangerous — and the IoT gives it not just eyes and ears, but also hands and feet. Security vulnerabilities, exploits, and attacks that once affected only bits and bytes now affect flesh and blood.

Markets, as we’ve repeatedly learned over the past century, are terrible mechanisms for improving the safety of products and services. It was true for automobile, food, restaurant, airplane, fire, and financial-instrument safety. The reasons are complicated, but basically, sellers don’t compete on safety features because buyers can’t efficiently differentiate products based on safety considerations. The race-to-the-bottom mechanism that markets use to minimize prices also minimizes quality. Without government intervention, the IoT remains dangerously insecure.

The US government has no appetite for intervention, so we won’t see serious safety and security regulations, a new federal agency, or better liability laws. We might have a better chance in the EU. Depending on how the General Data Protection Regulation on data privacy pans out, the EU might pass a similar security law in 5 years. No other country has a large enough market share to make a difference.

Sometimes we can opt out of the IoT, but that option is becoming increasingly rare. Last year, I tried and failed to purchase a new car without an Internet connection. In a few years, it’s going to be nearly impossible to not be multiply connected to the IoT. And our biggest IoT security risks will stem not from devices we have a market relationship with, but from everyone else’s cars, cameras, routers, drones, and so on.

We can try to shop our ideals and demand more security, but companies don’t compete on IoT safety — and we security experts aren’t a large enough market force to make a difference.

We need a Plan B, although I’m not sure what that is. E-mail me if you have any ideas.

This essay previously appeared in the September/October issue of IEEE Security & Privacy.

PureVPN Explains How it Helped the FBI Catch a Cyberstalker

Post Syndicated from Andy original https://torrentfreak.com/purevpn-explains-how-it-helped-the-fbi-catch-a-cyberstalker-171016/

Early October, Ryan S. Lin, 24, of Newton, Massachusetts, was arrested on suspicion of conducting “an extensive cyberstalking campaign” against a 24-year-old Massachusetts woman, as well as her family members and friends.

The Department of Justice described Lin’s offenses as a “multi-faceted” computer hacking and cyberstalking campaign. Launched in April 2016 when he began hacking into the victim’s online accounts, Lin allegedly obtained personal photographs and sensitive information about her medical and sexual histories and distributed that information to hundreds of other people.

Details of what information the FBI compiled on Lin can be found in our earlier report but aside from his alleged crimes (which are both significant and repugnant), it was PureVPN’s involvement in the case that caused the most controversy.

In a report compiled by an FBI special agent, it was revealed that the Hong Kong-based company’s logs helped the authorities net the alleged criminal.

“Significantly, PureVPN was able to determine that their service was accessed by the same customer from two originating IP addresses: the RCN IP address from the home Lin was living in at the time, and the software company where Lin was employed at the time,” the agent’s affidavit reads.

Among many in the privacy community, this revelation was met with disappointment. On the PureVPN website the company claims to carry no logs and on a general basis, it’s expected that so-called “no-logging” VPN providers should provide people with some anonymity, at least as far as their service goes. Now, several days after the furor, the company has responded to its critics.

In a fairly lengthy statement, the company begins by confirming that it definitely doesn’t log what websites a user views or what content he or she downloads.

“PureVPN did not breach its Privacy Policy and certainly did not breach your trust. NO browsing logs, browsing habits or anything else was, or ever will be shared,” the company writes.

However, that’s only half the problem. While it doesn’t log user activity (what sites people visit or content they download), it does log the IP addresses that customers use to access the PureVPN service. These, given the right circumstances, can be matched to external activities thanks to logs carried by other web companies.

PureVPN talks about logs held by Google’s Gmail service to illustrate its point.

“A network log is automatically generated every time a user visits a website. For the sake of this example, let’s say a user logged into their Gmail account. Every time they accessed Gmail, the email provider created a network log,” the company explains.

“If you are using a VPN, Gmail’s network log would contain the IP provided by PureVPN. This is one half of the picture. Now, if someone asks Google who accessed the user’s account, Google would state that whoever was using this IP, accessed the account.

“If the user was connected to PureVPN, it would be a PureVPN IP. The inquirer [in the Lin case, the FBI] would then share timestamps and network logs acquired from Google and ask them to be compared with the network logs maintained by the VPN provider.”

Now, if PureVPN carried no logs – literally no logs – it would not be able to help with this kind of inquiry. That was the case last year when the FBI approached Private Internet Access for information and the company was unable to assist.

However, as is made pretty clear by PureVPN’s explanation, the company does log user IP addresses and timestamps which reveal when a user was logged on to the service. It doesn’t matter that PureVPN doesn’t log what the user allegedly did online, since the third-party service already knows that information to the precise second.

Following the example, Gmail knows that a user sent an email at 10:22am on Monday October 16 from a PureVPN IP address. So, if PureVPN is approached by the FBI, the company can confirm that User X was using the same IP address at exactly the same time, and his home IP address was XXX.XX.XXX.XX. Effectively, the combined logs link one IP address to the other and the user is revealed. It’s that simple.
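
To make the mechanics concrete, here is a minimal, purely hypothetical sketch of such a correlation. The file names, log formats and IP addresses are invented for illustration and do not reflect Google’s or PureVPN’s actual systems; the point is simply that two independent logs sharing an IP address and a timestamp are enough to tie an account to a home connection.

# Hypothetical logs, invented for illustration only.
# webservice.log lines:   <timestamp> <event> <account> <client-ip>
# vpn-sessions.log lines: <timestamp> <exit-ip> <customer-source-ip>

# Step 1: the web service's own log shows which VPN exit IP was used
# at the time in question.
$ grep '2017-10-16T10:22' webservice.log
2017-10-16T10:22:04Z login example-account 198.51.100.7

# Step 2: the VPN provider's connection log ties that exit IP, at that
# same moment, back to the customer's originating (home or office) IP.
$ grep '198.51.100.7' vpn-sessions.log
2017-10-16T10:21:57Z 198.51.100.7 203.0.113.45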

It is for this reason that in TorrentFreak’s annual summary of no-logging VPN providers, the very first question we ask every single company reads as follows:

Do you keep ANY logs which would allow you to match an IP-address and a time stamp to a user/users of your service? If so, what information do you hold and for how long?

Clearly, if a company says “yes we log incoming IP addresses and associated timestamps”, any claim to total user anonymity is ended right there and then.

While not completely useless (a VPN service that keeps logs will still stop the prying eyes of ISPs and similar surveillance, while also defeating throttling and site-blocking), if you’re a whistle-blower with a job or even your life to protect, this level of protection is entirely inadequate.

The take-home points from this controversy are numerous, but perhaps the most important is for people to read and understand VPN provider logging policies.

Secondly, and just as importantly, VPN providers need to be extremely clear about the information they log. Not tracking browsing or downloading activities is all well and good, but if home IP addresses and timestamps are stored, this needs to be made clear to the customer.

Finally, VPN users should not be evil. There are plenty of good reasons to stay anonymous online but cyberstalking, death threats and ruining people’s lives are not included. Fortunately, the FBI have offline methods for catching this type of offender, and long may that continue.

PureVPN’s blog post is available here.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Tech Giants Protest Looming US Pirate Site Blocking Order

Post Syndicated from Ernesto original https://torrentfreak.com/tech-giants-protest-looming-us-pirate-site-blocking-order-171013/

While domain seizures against pirate sites are relatively common in the United States, ISP and search engine blocking is not. This could change soon though.

In an ongoing case against Sci-Hub, regularly referred to as the “Pirate Bay of Science,” a magistrate judge in Virginia recently recommended a broad order which would require search engines and Internet providers to block the site.

The recommendation followed a request from the academic publisher American Chemical Society (ACS) that wants these third-party services to make the site in question inaccessible. While Sci-Hub has chosen not to defend itself, a group of tech giants has now stepped in to prevent the broad injunction from being issued.

This week the Computer & Communications Industry Association (CCIA), which includes members such as Cloudflare, Facebook, and Google, asked the court to limit the proposed measures. In an amicus curiae brief submitted to the Virginia District Court, they share their concerns.

“Here, Plaintiff is seeking—and the Magistrate Judge has recommended—a permanent injunction that would sweep in various Neutral Service Providers, despite their having violated no laws and having no connection to this case,” CCIA writes.

According to the tech companies, neutral service providers are not “in active concert or participation” with the defendant, and should, therefore, be excluded from the proposed order.

While search engines may index Sci-Hub and ISPs pass on packets from this site, they can’t be seen as “confederates” that are working together with them to violate the law, CCIA stresses.

“Plaintiff has failed to make a showing that any such provider had a contract with these Defendants or any direct contact with their activities—much less that all of the providers who would be swept up by the proposed injunction had such a connection.”

Even if one of the third party services could be found liable the matter should be resolved under the DMCA, which expressly prohibits such broad injunctions, the CCIA claims.

“The DMCA thus puts bedrock limits on the injunctions that can be imposed on qualifying providers if they are named as defendants and are held liable as infringers. Plaintiff here ignores that.

“What ACS seeks, in the posture of a permanent injunction against nonparties, goes beyond what Congress was willing to permit, even against service providers against whom an actual judgment of infringement has been entered. That request must be rejected.”

The tech companies hope the court will realize that the injunction recommended by the magistrate judge would set a dangerous precedent that goes beyond what the law is intended for, and will impose limits in response to their concerns.

It will be interesting to see whether any copyright holder groups will also chime in, to argue the opposite.

CCIA’s full amicus curiae brief is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

DevOps Cafe Episode 76 – Randy Shoup

Post Syndicated from DevOpsCafeAdmin original http://devopscafe.org/show/2017/10/11/devops-cafe-episode-76-randy-shoup.html

Technical talent is obviously in his jeans (pun intended) 

John and Damon chat with Randy Shoup (Stitch Fix) about what he’s learned building high-scale systems and teams through multiple generations of technology and practices… and how he is doing it again today.

  

Direct download

Follow John Willis on Twitter: @botchagalupe
Follow Damon Edwards on Twitter: @damonedwards 
Follow Randy Shoup on Twitter: @randyshoup

Please tweet or leave comments or questions below and we’ll read them on the show!

Dynamic Users with systemd

Post Syndicated from Lennart Poettering original http://0pointer.net/blog/dynamic-users-with-systemd.html

TL;DR: you may now configure systemd to dynamically allocate a UNIX
user ID for service processes when it starts them and release it when
it stops them. It’s pretty secure, mixes well with transient services,
socket activated services and service templating.

Today we released systemd 235. Among other improvements this greatly
extends the dynamic user logic of systemd. Dynamic users are a
powerful but little known concept, supported in its basic form since
systemd 232. With this blog story I hope to make it a bit better
known.

The UNIX user concept is the most basic and well-understood security
concept in POSIX operating systems. It is UNIX/POSIX’ primary security
concept, the one everybody can agree on, and most security concepts
that came after it (such as process capabilities, SELinux and other
MACs, user name-spaces, …) in some form or another build on it, extend
it or at least interface with it. If you build a Linux kernel with all
security features turned off, the user concept is pretty much the one
you’ll still retain.

Originally, the user concept was introduced to make multi-user systems
a reality, i.e. systems enabling multiple human users to share the
same system at the same time, cleanly separating their resources and
protecting them from each other. The majority of today’s UNIX systems
don’t really use the user concept like that anymore though. Most of
today’s systems probably have only one actual human user (or even
less!), but their user databases (/etc/passwd) list a good number
more entries than that. Today, the majority of UNIX users in most
environments are system users, i.e. users that are not the technical
representation of a human sitting in front of a PC anymore, but the
security identity a system service — an executable program — runs
as. Even though traditional, simultaneous multi-user systems slowly
became less relevant, their ground-breaking basic concept became the
cornerstone of UNIX security. The OS is nowadays partitioned into
isolated services — and each service runs as its own system user, and
thus within its own, minimal security context.

The people behind the Android OS realized the relevance of the UNIX
user concept as the primary security concept on UNIX, and took its use
even further: on Android not only system services take benefit of the
UNIX user concept, but each UI app gets its own, individual user
identity too — thus neatly separating app resources from each other,
and protecting app processes from each other, too.

Back in the more traditional Linux world things are a bit less
advanced in this area. Even though users are the quintessential UNIX
security concept, allocation and management of system users is still a
pretty limited, raw and static affair. In most cases, RPM or DEB
package installation scripts allocate a fixed number of (usually one)
system users when you install the package of a service that wants to
take benefit of the user concept, and from that point on the system
user remains allocated on the system and is never deallocated again,
even if the package is later removed again. Most Linux distributions
limit the number of system users to 1000 (which isn’t particularly a
lot). Allocating a system user is hence expensive: the number of
available users is limited, and there’s no defined way to dispose of
them after use. If you make use of system users too liberally, you are
very likely to run out of them sooner rather than later.

You may wonder why system users are generally not deallocated when the
package that registered them is uninstalled from a system (at least on
most distributions). The reason for that is one relevant property of
the user concept (you might even want to call this a design flaw):
user IDs are sticky to files (and other objects such as IPC
objects). If a service running as a specific system user creates a
file at some location, and is then terminated and its package and user
removed, then the created file still belongs to the numeric ID (“UID”)
the system user originally got assigned. When the next system user is
allocated and — due to ID recycling — happens to get assigned the same
numeric ID, then it will also gain access to the file, and that’s
generally considered a problem, given that the file belonged to a
potentially very different service once upon a time, and likely should
not be readable or changeable by anything coming after
it. Distributions hence tend to avoid UID recycling which means system
users remain registered forever on a system after they have been
allocated once.

The above is a description of the status quo ante. Let’s now focus on
what systemd’s dynamic user concept brings to the table, to improve
the situation.

Introducing Dynamic Users

With systemd dynamic users we hope to make it easier and cheaper
to allocate system users on-the-fly, thus substantially increasing the
possible uses of this core UNIX security concept.

If you write a systemd service unit file, you may enable the dynamic
user logic for it by setting the DynamicUser= option in its [Service]
section to yes. If you do, a system user is dynamically allocated the
instant the service binary is invoked, and released again when the
service terminates. The user is automatically allocated from the UID
range 61184–65519, by looking for a so far unused UID.

Now you may wonder, how does this concept deal with the sticky user
issue discussed above? In order to counter the problem, two strategies
easily come to mind:

  1. Prohibit the service from creating any files/directories or IPC objects

  2. Automatically remove the files/directories or IPC objects the
    service created when it shuts down.

In systemd we implemented both strategies, but for different parts of
the execution environment. Specifically:

  1. Setting DynamicUser=yes implies ProtectSystem=strict and
    ProtectHome=read-only. These sand-boxing options turn off write
    access to pretty much the whole OS directory tree, with a few
    relevant exceptions, such as the API file systems /proc, /sys and
    so on, as well as /tmp and /var/tmp. (BTW: setting these two
    options on your regular services that do not use DynamicUser= is a
    good idea too, as it drastically reduces the exposure of the system
    to exploited services.)

  2. Setting DynamicUser=yes implies
    PrivateTmp=yes. This
    option sets up /tmp and /var/tmp for the service in a way that it
    gets its own, disconnected version of these directories, that are not
    shared by other services, and whose life-cycle is bound to the
    service’s own life-cycle. Thus if the service goes down, the user is
    removed and all its temporary files and directories with it. (BTW: as
    above, consider setting this option for your regular services that do
    not use DynamicUser= too, it’s a great way to lock things down
    security-wise.)

  3. Setting DynamicUser=yes implies
    RemoveIPC=yes. This
    option ensures that when the service goes down all SysV and POSIX IPC
    objects (shared memory, message queues, semaphores) owned by the
    service’s user are removed. Thus, the life-cycle of the IPC objects is
    bound to the life-cycle of the dynamic user and service, too. (BTW:
    yes, here too, consider using this in your regular services, too!)

With these four settings in effect, services with dynamic users are
nicely sand-boxed. They cannot create files or directories, except in
/tmp and /var/tmp, where they will be removed automatically when
the service shuts down, as will any IPC objects created. Sticky
ownership of files/directories and IPC objects is hence dealt with
effectively.
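
As the parenthetical remarks above suggest, the same sand-boxing
options can also be set explicitly on regular services that do not use
DynamicUser=. Here is a minimal sketch of such a unit; the binary path
is a placeholder, not something from the original post:

[Service]
ExecStart=/usr/bin/mydaemond
# The options below are what DynamicUser=yes would imply, spelled out
# explicitly for a service with a static user:
ProtectSystem=strict
ProtectHome=read-only
PrivateTmp=yes
RemoveIPC=yes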

The RuntimeDirectory= option may be used to open up the sandbox a bit
to external programs. If you set it to a directory name of your
choice, it will be
created below /run when the service is started, and removed in its
entirety when it is terminated. The ownership of the directory is
assigned to the service’s dynamic user. This way, a dynamic user
service can expose API interfaces (AF_UNIX sockets, …) to other
services at a well-defined place and again bind the life-cycle of it to
the service’s own run-time. Example: set RuntimeDirectory=foobar in
your service, and watch how a directory /run/foobar appears at the
moment you start the service, and disappears the moment you stop
it again. (BTW: Much like the other settings discussed above,
RuntimeDirectory= may be used outside of the DynamicUser= context
too, and is a nice way to run any service with a properly owned,
life-cycle-managed run-time directory.)
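
Here is a minimal sketch of the RuntimeDirectory= example just
described; again, the binary path is a placeholder:

[Service]
ExecStart=/usr/bin/mydaemond
DynamicUser=yes
# /run/foobar is created (owned by the service's dynamic user) when the
# service starts, and removed in its entirety when it stops.
RuntimeDirectory=foobar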

Persistent Data

Of course, a service running in such an environment (although already
very useful for many cases!), has a major limitation: it cannot leave
persistent data around it can reuse on a later run. As pretty much the
whole OS directory tree is read-only to it, there’s simply no place it
could put the data that survives from one service invocation to the
next.

With systemd 235 this limitation is removed: there are now three new
settings: StateDirectory=, LogsDirectory= and CacheDirectory=. In
many ways they operate like RuntimeDirectory=, but create
sub-directories below /var/lib, /var/log and /var/cache,
respectively. There’s one major
difference beyond that however: directories created that way are
persistent, they will survive the run-time cycle of a service, and
thus may be used to store data that is supposed to stay around between
invocations of the service.
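
Here is a sketch of a unit combining these settings; as before, the
binary path is a placeholder:

[Service]
ExecStart=/usr/bin/mydaemond
DynamicUser=yes
# StateDirectory= creates /var/lib/foobar (via the /var/lib/private
# boundary directory discussed below), CacheDirectory= creates
# /var/cache/foobar and LogsDirectory= creates /var/log/foobar. All
# three are owned by the dynamic user and survive service restarts.
StateDirectory=foobar
CacheDirectory=foobar
LogsDirectory=foobar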

Of course, the obvious question to ask now is: how do these three
settings deal with the sticky file ownership problem?

For that we lifted a concept from container managers. Container
managers have a very similar problem: each container and the host
typically end up using a very similar set of numeric UIDs, and unless
user name-spacing is deployed this means that host users might be able
to access the data of specific containers that also have a user by the
same numeric UID assigned, even though it actually refers to a very
different identity in a different context. (Actually, it’s even worse
than just getting access, due to the existence of setuid file bits,
access might translate to privilege elevation.) The way container
managers protect the container images from the host (and from each
other to some level) is by placing the container trees below a
boundary directory, with very restrictive access modes and ownership
(0700 and root:root or so). A host user hence cannot take advantage
of the files/directories of a container user of the same UID inside of
a local container tree, simply because the boundary directory makes it
impossible to even reference files in it. After all on UNIX, in order
to get access to a specific path you need access to every single
component of it.

How is that applied to dynamic user services? Let’s say
StateDirectory=foobar is set for a service that has DynamicUser=
turned off. The instant the service is started, /var/lib/foobar is
created as state directory, owned by the service’s user and remains in
existence when the service is stopped. If the same service now is run
with DynamicUser= turned on, the implementation is slightly
altered. Instead of a directory /var/lib/foobar a symbolic link by
the same path is created (owned by root), pointing to
/var/lib/private/foobar (the latter being owned by the service’s
dynamic user). The /var/lib/private directory is created as boundary
directory: it’s owned by root:root, and has a restrictive access
mode of 0700. Both the symlink and the service’s state directory will
survive the service’s life-cycle; the state directory continues to be
owned by the now disposed dynamic UID, but it is protected from other
host users (and other services which might get the same dynamic UID
assigned due to UID recycling) by the boundary directory.

The obvious question to ask now is: but if the boundary directory
prohibits access to the directory from unprivileged processes, how can
the service itself which runs under its own dynamic UID access it
anyway? This is achieved by invoking the service process in a slightly
modified mount name-space: it will see most of the file hierarchy the
same way as everything else on the system (modulo /tmp and
/var/tmp as mentioned above), except for /var/lib/private, which
is over-mounted with a read-only tmpfs file system instance, with a
slightly more liberal access mode permitting the service read
access. Inside of this tmpfs file system instance another mount is
placed: a bind mount to the host’s real /var/lib/private/foobar
directory, onto the same name. Putting this together, this means that
superficially everything looks the same and is available at the same
place on the host and from inside the service, but two important
changes have been made: the /var/lib/private boundary directory lost
its restrictive character inside the service, and has been emptied of
the state directories of any other service, thus making the protection
complete. Note that the symlink /var/lib/foobar hides the fact that
the boundary directory is used (making it little more than an
implementation detail), as the directory is available this way under
the same name as it would be if DynamicUser= was not used. Long
story short: for the daemon and from the view from the host the
indirection through /var/lib/private is mostly transparent.

This logic of course raises another question: what happens to the
state directory if a dynamic user service is started with a state
directory configured, gets UID X assigned on this first invocation,
then terminates and is restarted and now gets UID Y assigned on the
second invocation, with X ≠ Y? On the second invocation the directory
— and all the files and directories below it — will still be owned by
the original UID X so how could the second instance running as Y
access it? Our way out is simple: systemd will recursively change the
ownership of the directory and everything contained within it to UID Y
before invoking the service’s executable.

Of course, such recursive ownership changing (chown()ing) of whole
directory trees can become expensive (though according to my
experiences, IRL and for most services it’s much cheaper than you
might think), hence in order to optimize behavior in this regard, the
allocation of dynamic UIDs has been tweaked in two ways to avoid the
necessity to do this expensive operation in most cases: firstly, when
a dynamic UID is allocated for a service an allocation loop is
employed that starts out with a UID hashed from the service’s
name. This means a service by the same name is likely to always use
the same numeric UID. That means that a stable service name translates
into a stable dynamic UID, and that means recursive file ownership
adjustments can be skipped (of course, after validation). Secondly, if
the configured state directory already exists, and is owned by a
suitable currently unused dynamic UID, it’s preferably used above
everything else, thus maximizing the chance we can avoid the
chown()ing. (That all said, ultimately we have to face it, the
currently available UID space of 4K+ is very small still, and
conflicts are pretty likely sooner or later, thus a chown()ing has to
be expected every now and then when this feature is used extensively).

Note that CacheDirectory= and LogsDirectory= work very similarly to
StateDirectory=. The only difference is that they manage directories
below the /var/cache and /var/log directories, and their boundary
directories hence are /var/cache/private and /var/log/private,
respectively.

Examples

So, after all this introduction, let’s have a look how this all can be
put together. Here’s a trivial example:

# cat > /etc/systemd/system/dynamic-user-test.service <<EOF
[Service]
ExecStart=/usr/bin/sleep 4711
DynamicUser=yes
EOF
# systemctl daemon-reload
# systemctl start dynamic-user-test
# systemctl status dynamic-user-test
● dynamic-user-test.service
   Loaded: loaded (/etc/systemd/system/dynamic-user-test.service; static; vendor preset: disabled)
   Active: active (running) since Fri 2017-10-06 13:12:25 CEST; 3s ago
 Main PID: 2967 (sleep)
    Tasks: 1 (limit: 4915)
   CGroup: /system.slice/dynamic-user-test.service
           └─2967 /usr/bin/sleep 4711

Okt 06 13:12:25 sigma systemd[1]: Started dynamic-user-test.service.
# ps -e -o pid,comm,user | grep 2967
 2967 sleep           dynamic-user-test
# id dynamic-user-test
uid=64642(dynamic-user-test) gid=64642(dynamic-user-test) groups=64642(dynamic-user-test)
# systemctl stop dynamic-user-test
# id dynamic-user-test
id: ‘dynamic-user-test’: no such user

In this example, we create a unit file with DynamicUser= turned on,
start it, check if it’s running correctly, have a look at the service
process’ user (which is named like the service; systemd does this
automatically if the service name is suitable as user name, and you
didn’t configure any user name to use explicitly), stop the service
and verify that the user ceased to exist too.

That’s already pretty cool. Let’s step it up a notch, by doing the
same in an interactive transient service (for those who don’t know
systemd well: a transient service is a service that is defined and
started dynamically at run-time, for example via the systemd-run
command from the shell. Think: run a service without having to write a
unit file first):

# systemd-run --pty --property=DynamicUser=yes --property=StateDirectory=wuff /bin/sh
Running as unit: run-u15750.service
Press ^] three times within 1s to disconnect TTY.
sh-4.4$ id
uid=63122(run-u15750) gid=63122(run-u15750) groups=63122(run-u15750) context=system_u:system_r:initrc_t:s0
sh-4.4$ ls -al /var/lib/private/
total 0
drwxr-xr-x. 3 root       root        60  6. Okt 13:21 .
drwxr-xr-x. 1 root       root       852  6. Okt 13:21 ..
drwxr-xr-x. 1 run-u15750 run-u15750   8  6. Okt 13:22 wuff
sh-4.4$ ls -ld /var/lib/wuff
lrwxrwxrwx. 1 root root 12  6. Okt 13:21 /var/lib/wuff -> private/wuff
sh-4.4$ ls -ld /var/lib/wuff/
drwxr-xr-x. 1 run-u15750 run-u15750 0  6. Okt 13:21 /var/lib/wuff/
sh-4.4$ echo hello > /var/lib/wuff/test
sh-4.4$ exit
exit
# id run-u15750
id: ‘run-u15750’: no such user
# ls -al /var/lib/private
total 0
drwx------. 1 root  root   66  6. Okt 13:21 .
drwxr-xr-x. 1 root  root  852  6. Okt 13:21 ..
drwxr-xr-x. 1 63122 63122   8  6. Okt 13:22 wuff
# ls -ld /var/lib/wuff
lrwxrwxrwx. 1 root root 12  6. Okt 13:21 /var/lib/wuff -> private/wuff
# ls -ld /var/lib/wuff/
drwxr-xr-x. 1 63122 63122 8  6. Okt 13:22 /var/lib/wuff/
# cat /var/lib/wuff/test
hello

The above invokes an interactive shell as transient service
run-u15750.service (systemd-run picked that name automatically,
since we didn’t specify anything explicitly) with a dynamic user whose
name is derived automatically from the service name. Because
StateDirectory=wuff is used, a persistent state directory for the
service is made available as /var/lib/wuff. In the interactive shell
running inside the service, the ls commands show the
/var/lib/private boundary directory and its contents, as well as the
symlink that is placed for the service. Finally, before exiting the
shell, a file is created in the state directory. Back in the original
command shell we check if the user is still allocated: it is not, of
course, since the service ceased to exist when we exited the shell and
with it the dynamic user associated with it. From the host we check
the state directory of the service, with similar commands as we did
from inside of it. We see that things are set up pretty much the same
way in both cases, except for two things: first of all the user/group
of the files is now shown as raw numeric UIDs instead of the
user/group names derived from the unit name. That’s because the user
ceased to exist at this point, and “ls” shows the raw UID for files
owned by users that don’t exist. Secondly, the access mode of the
boundary directory is different: when we look at it from outside of
the service it is not readable by anyone but root, when we looked from
inside we saw it being world-readable.

Now, let’s see how things look if we start another transient service,
reusing the state directory from the first invocation:

# systemd-run --pty --property=DynamicUser=yes --property=StateDirectory=wuff /bin/sh
Running as unit: run-u16087.service
Press ^] three times within 1s to disconnect TTY.
sh-4.4$ cat /var/lib/wuff/test
hello
sh-4.4$ ls -al /var/lib/wuff/
total 4
drwxr-xr-x. 1 run-u16087 run-u16087  8  6. Okt 13:22 .
drwxr-xr-x. 3 root       root       60  6. Okt 15:42 ..
-rw-r--r--. 1 run-u16087 run-u16087  6  6. Okt 13:22 test
sh-4.4$ id
uid=63122(run-u16087) gid=63122(run-u16087) groups=63122(run-u16087) context=system_u:system_r:initrc_t:s0
sh-4.4$ exit
exit

Here, systemd-run picked a different auto-generated unit name, but
the used dynamic UID is still the same, as it was read from the
pre-existing state directory, and was otherwise unused. As we can see
the test file we generated earlier is accessible and still contains
the data we left in there. Do note that the user name is different
this time (as it is derived from the unit name, which is different),
but the UID it is assigned to is the same one as on the first
invocation. We can thus see that the mentioned optimization of the UID
allocation logic (i.e. that we start the allocation loop from the UID
owner of any existing state directory) took effect, so that no
recursive chown()ing was required.

And that’s the end of our example, which hopefully illustrated a bit
how this concept and implementation works.

Use-cases

Now that we had a look at how to enable this logic for a unit and how
it is implemented, let’s discuss where this actually could be useful
in real life.

  • One major benefit of dynamic user IDs is that running a
    privilege-separated service leaves no artifacts in the system. A
    system user is allocated and made use of, but it is discarded
    automatically in a safe and secure way after use, in a fashion that is
    safe for later recycling. Thus, quickly invoking a short-lived service
    for processing some job can be protected properly through a user ID
    without having to pre-allocate it and without this draining the
    available UID pool any longer than necessary.

  • In many cases, starting a service no longer requires
    package-specific preparation. Or in other words, quite often
    useradd/mkdir/chown/chmod invocations in “post-inst” package
    scripts, as well as sysusers.d and tmpfiles.d drop-ins become
    unnecessary, as the DynamicUser= and
    StateDirectory=/CacheDirectory=/LogsDirectory= logic can do the
    necessary work automatically, on-demand and with a well-defined
    life-cycle.

  • By combining dynamic user IDs with the transient unit concept, new
    creative ways of sand-boxing are made available. For example, let’s say
    you don’t trust the correct implementation of the sort command. You
    can now lock it into a simple, robust, dynamic UID sandbox with a
    simple systemd-run and still integrate it into a shell pipeline like
    any other command. Here’s an example, showcasing a shell pipeline
    whose middle element runs as a dynamically on-the-fly allocated UID,
    that is released when the pipeline ends.

    # cat some-file.txt | systemd-run --pipe --property=DynamicUser=1 sort -u | grep -i foobar > some-other-file.txt
    
  • By combining dynamic user IDs with the systemd templating logic it
    is now possible to do much more fine-grained and fully automatic UID
    management. For example, let’s say you have a template unit file
    /etc/systemd/system/foobard@.service:

    [Service]
    ExecStart=/usr/bin/myfoobarserviced
    DynamicUser=1
    StateDirectory=foobar/%i
    

    Now, let’s say you want to start one instance of this service for
    each of your customers. All you need to do now for that is:

    # systemctl enable foobard@customerxyz.service --now
    

    And you are done. (Invoke this as many times as you like, each time
    replacing customerxyz by some customer identifier, you get the
    idea.)

  • By combining dynamic user IDs with socket activation you may easily
    implement a system where each incoming connection is served by a
    process instance running as a different, fresh, newly allocated UID
    within its own sandbox. Here’s an example waldo.socket:

    [Socket]
    ListenStream=2048
    Accept=yes
    

    With a matching waldo@.service:

    [Service]
    ExecStart=-/usr/bin/myservicebinary
    DynamicUser=yes
    

    With the two unit files above, systemd will listen on TCP/IP port
    2048, and for each incoming connection invoke a fresh instance of
    waldo@.service, each time utilizing a different, new,
    dynamically allocated UID, neatly isolated from any other
    instance.

  • Dynamic user IDs combine very well with state-less systems,
    i.e. systems that come up with an unpopulated /etc and /var. A
    service using dynamic user IDs and the StateDirectory=,
    CacheDirectory=, LogsDirectory= and RuntimeDirectory= concepts
    will implicitly allocate the users and directories it needs for
    running, right at the moment where it needs it.

Dynamic users are a very generic concept, hence a multitude of other
uses are thinkable; the list above is just supposed to trigger your
imagination.

What does this mean for you as a packager?

I am pretty sure that a large number of services shipped with today’s
distributions could benefit from using DynamicUser= and
StateDirectory= (and related settings). It often allows removal of
post-inst packaging scripts altogether, as well as any sysusers.d
and tmpfiles.d drop-ins by unifying the needed declarations in the
unit file itself. Hence, as a packager please consider switching your
unit files over. That said, there are a number of conditions where
DynamicUser= and StateDirectory= (and friends) cannot or should
not be used. To name a few:

  1. Services that need to write to files outside of /run/<package>,
    /var/lib/<package>, /var/cache/<package>, /var/log/<package>,
    /var/tmp, /tmp, /dev/shm are generally incompatible with this
    scheme. This rules out daemons that upgrade the system as one example,
    as that involves writing to /usr.

  2. Services that maintain a herd of processes with different user
    IDs. Some SMTP services are like this. If your service has such a
    super-server design, UID management needs to be done by the
    super-server itself, which rules out systemd doing its dynamic UID
    magic for it.

  3. Services which run as root (obviously…) or are otherwise
    privileged.

  4. Services that need to live in the same mount name-space as the host
    system (for example, because they want to establish mount points
    visible system-wide). As mentioned DynamicUser= implies
    ProtectSystem=, PrivateTmp= and related options, which all require
    the service to run in its own mount name-space.

  5. Your focus is older distributions, i.e. distributions that do not
    have systemd 232 (for DynamicUser=) or systemd 235 (for
    StateDirectory= and friends) yet.

  6. If your distribution’s packaging guides don’t allow it. Consult
    your packaging guides, and possibly start a discussion on your
    distribution’s mailing list about this.

Notes

A couple of additional, random notes about the implementation and use
of these features:

  1. Do note that allocating or deallocating a dynamic user leaves
    /etc/passwd untouched. A dynamic user is added into the user
    database through the glibc NSS module
    nss-systemd,
    and this information never hits the disk.

  2. On traditional UNIX systems it was the job of the daemon process
    itself to drop privileges, while the DynamicUser= concept is
    designed around the service manager (i.e. systemd) being responsible
    for that. That said, since v235 there’s a way to marry DynamicUser=
    and such services which want to drop privileges on their own. For
    that, turn on DynamicUser= and set
    User=
    to the user name the service wants to setuid() to. This has the
    effect that systemd will allocate the dynamic user under the specified
    name when the service is started. Then, prefix the command line you
    specify in
    ExecStart=
    with a single ! character. If you do, the user is allocated for the
    service, but the daemon binary is invoked as root instead of the
    allocated user, under the assumption that the daemon changes its UID
    on its own in the right way. Note that after registration the user will
    show up instantly in the user database, and is hence resolvable like
    any other by the daemon process. Example:
    ExecStart=!/usr/bin/mydaemond
    A fuller unit sketch for this pattern follows these notes.

  3. You may wonder why systemd uses the UID range 61184–65519 for its
    dynamic user allocations (side note: in hexadecimal this reads as
    0xEF00–0xFFEF). That’s because distributions (specifically Fedora)
    tend to allocate regular users from below the 60000 range, and we
    don’t want to step into that. We also want to stay away from 65535 and
    a bit around it, as some of these UIDs have special meanings (65535 is
    often used as special value for “invalid” or “no” UID, as it is
    identical to the 16bit value -1; 65534 is generally mapped to the
    “nobody” user, and is where some kernel subsystems map unmappable
    UIDs). Finally, we want to stay within the 16bit range. In a user
    name-spacing world each container tends to have much less than the full
    32bit UID range available that Linux kernels theoretically
    provide. Everybody apparently can agree that a container should at
    least cover the 16bit range though — if only to include a nobody
    user. (And quite frankly, I am pretty sure assigning 64K UIDs per
    container is nicely systematic, as the higher 16bit of the 32bit
    UID values this way become a container ID, while the lower 16bit
    become the logical UID within each container, if you still follow what
    I am babbling here…). And before you ask: no this range cannot be
    changed right now, it’s compiled in. We might change that eventually
    however.

  4. You might wonder what happens if you already used UIDs from the
    61184–65519 range on your system for other purposes. systemd should
    handle that mostly fine, as long as that usage is properly registered
    in the user database: when allocating a dynamic user we pick a UID,
    see if it is currently used somehow, and if yes pick a different one,
    until we find a free one. Whether a UID is used right now or not is
    checked through NSS calls. Moreover the IPC object lists are checked to
    see if there are any objects owned by the UID we are about to
    pick. This means systemd will avoid using UIDs you have assigned
    otherwise. Note however that this of course makes the pool of
    available UIDs smaller, and in the worst cases this means that
    allocating a dynamic user might fail because there simply are no
    unused UIDs in the range.

  5. If not specified otherwise the name for a dynamically allocated
    user is derived from the service name. Not everything that’s valid in
    a service name is valid in a user-name however, and in some cases a
    randomized name is used instead to deal with this. Often it makes
    sense to pick the user names to register explicitly. For that use
    User= and choose whatever you like.

  6. If you pick a user name with User= and combine it with
    DynamicUser= and the user already exists statically it will be used
    for the service and the dynamic user logic is automatically
    disabled. This permits automatic up- and downgrades between static and
    dynamic UIDs. For example, it provides a nice way to move a system
    from static to dynamic UIDs in a compatible way: as long as you select
    the same User= value before and after switching DynamicUser= on,
    the service will continue to use the statically allocated user if it
    exists, and only operate in dynamic mode if it does not. This is
    useful for other cases as well, for example to adapt a service that
    normally would use a dynamic user to concepts that require statically
    assigned UIDs, for example to marry classic UID-based file system
    quota with such services.

  7. systemd always allocates a pair of dynamic UID and GID at the same
    time, with the same numeric ID.

  8. If the Linux kernel had a “shiftfs” or similar functionality,
    i.e. a way to mount an existing directory to a second place, but map
    the exposed UIDs/GIDs in some way configurable at mount time, this
    would be excellent for the implementation of StateDirectory= in
    conjunction with DynamicUser=. It would make the recursive
    chown()ing step unnecessary, as the host version of the state
    directory could simply be mounted into the service’s mount
    name-space, with a shift applied that maps the directory’s owner to the
    service’s UID/GID. But I don’t have high hopes in this regard, as all
    work being done in this area appears to be bound to user name-spacing
    — which is a concept not used here (and I guess one could say user
    name-spacing is probably more a source of problems than a solution to
    one, but you are welcome to disagree on that).
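
Picking up the second note above, a minimal sketch of the privilege-dropping pattern could look like this (the daemon name and path are hypothetical, matching the inline example):

[Service]
DynamicUser=yes
# Register the dynamic user under a fixed name so the daemon can resolve it
User=mydaemon
# The leading ! tells systemd to invoke the binary as root; the daemon is then
# expected to setuid() to "mydaemon" on its own
ExecStart=!/usr/bin/mydaemond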

And that’s all for now. Enjoy your dynamic users!

Yes, Backblaze Just Ordered 100 Petabytes of Hard Drives

Post Syndicated from Andy Klein original https://www.backblaze.com/blog/400-petabytes-cloud-storage/

10 Petabyte vault, 100 Petabytes ordered, 400 Petabytes stored

Backblaze just ordered 100 petabytes’ worth of hard drives, and yes, we’ll use nearly all of them in Q4. In fact, we’ll begin the process of sourcing the Q1 hard drive order in the next few weeks.

What are we doing with all those hard drives? Let’s take a look.

Our First 10 Petabyte Backblaze Vault

Ken clicked the submit button and 10 Petabytes of Backblaze Cloud Storage came online, ready to accept customer data. Ken (aka the Pod Whisperer) is one of our Datacenter Operations Managers at Backblaze, and with that one click he activated Backblaze Vault 1093, which was built with 1,200 Seagate 10 TB drives (model: ST10000NM0086). After formatting and configuration of the disks, 10.12 Petabytes of free space remain for customer data. Back in 2011, when Ken started at Backblaze, he was amazed that we had amassed as much as 10 Petabytes of data storage.

The Seagate 10 TB drives we deployed in vault 1093 are helium-filled drives. We had previously deployed 45 HGST 8 TB helium-filled drives where we learned one of the benefits of using helium drives — they consume less power than traditional air-filled drives. Here’s a quick comparison of the power consumption of several high-density drive models we deploy:

MFR Model Fill Size Idle (1) Operating (2)
Seagate ST8000DM002 Air 8 TB 7.2 watts 9.0 watts
Seagate ST8000NM0055 Air 8 TB 7.6 watts 8.6 watts
HGST HUH728080ALE600 Helium 8 TB 5.1 watts 7.4 watts
Seagate ST10000NM0086 Helium 10 TB 4.8 watts 8.6 watts
(1) Idle: Average Idle in watts as reported by the manufacturer.
(2) Operating: The maximum operational consumption in watts as reported by the manufacturer — typically for read operations.

I’d like 100 Petabytes of Hard Drives To Go, Please

“100 Petabytes should get us through Q4.” — Tim Nufire, Chief Cloud Officer, Backblaze

The 1,200 Seagate 10 TB drives are just the beginning. The next Backblaze Vault will be configured with 12 TB drives which will give us 12.2 petabytes of storage in one vault. We are currently building and adding two to three Backblaze Vaults a month to our cloud storage system, so we are going to need more drives. When we did all of our “drive math,” we decided to place an order for 100 petabytes of hard drives comprised of 10 and 12 TB models. Gleb, our CEO and occasional blogger, exhaled mightily as he signed the biggest purchase order in company history. Wait until he sees the one for Q1.

Enough drives for a 10 petabyte vault

400 Petabytes of Cloud Storage

When we added Backblaze Vault 1093, we crossed over 400 Petabytes of total available storage. For those of you keeping score at home, we reached 350 Petabytes about 3 months ago as you can see in the chart below.

Petabytes of data stored by Backblaze

Backblaze Vault Primer

All of the storage capacity we’ve added in the last two years has been on our Backblaze Vault architecture, with vault 1093 being the 60th one we have placed into service. Each Backblaze Vault is comprised of 20 Backblaze Storage Pods logically grouped together into one storage system. Today, each Storage Pod contains sixty 3 ½” hard drives, giving each vault 1,200 drives. Early vaults were built on Storage Pods with 45 hard drives, for a total of 900 drives in a vault.

A Backblaze Vault accepts data directly from an authenticated user. Each data blob (object, file, group of files) is divided into 20 shards (17 data shards and 3 parity shards) using our erasure coding library. Each of the 20 shards is stored on a different Storage Pod in the vault. At any given time, several vaults stand ready to receive data storage requests.
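
As a rough sketch of the “drive math” mentioned earlier (back-of-the-envelope arithmetic based only on the figures in this post, not Backblaze’s internal numbers), the usable capacity of a vault follows from the pod count, the drive size, and the 17-of-20 erasure coding:

# Back-of-the-envelope vault capacity, illustrative only
pods_per_vault = 20
drives_per_pod = 60
data_shards, total_shards = 17, 20

def usable_petabytes(drive_tb):
    raw_tb = pods_per_vault * drives_per_pod * drive_tb
    return raw_tb * data_shards / total_shards / 1000  # TB -> PB

print(usable_petabytes(10))  # ~10.2 PB; formatting/configuration overhead presumably explains the 10.12 PB quoted above
print(usable_petabytes(12))  # ~12.24 PB, in line with the 12.2 petabytes cited for the next vault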

Drive Stats for the New Drives

In our Q3 2017 Drive Stats report, due out in late October, we’ll start reporting on the 10 TB drives we are adding. It looks like the 12 TB drives will come online in Q4. We’ll also get a better look at the 8 TB consumer and enterprise drives we’ve been following. Stay tuned.

Other Big Data Clouds

We have always been transparent here at Backblaze, including about how much data we store, how we store it, even how much it costs to do so. Very few others do the same. But, if you have information on how much data a company or organization stores in the cloud, let us know in the comments. Please include the source and make sure the data is not considered proprietary. If we get enough tidbits we’ll publish a “big cloud” list.

The post Yes, Backblaze Just Ordered 100 Petabytes of Hard Drives appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Browser hacking for 280 character tweets

Post Syndicated from Robert Graham original http://blog.erratasec.com/2017/09/browser-hacking-for-280-character-tweets.html

Twitter has raised the limit to 280 characters for a select number of people. However, they left open a hole, allowing anybody to make large tweets with a little bit of hacking. The skills needed are basic, and I thought I’d write them up in a blog post.


Specifically, the skills you will exercise are:

  • basic command-line shell
  • basic HTTP requests
  • basic browser DOM editing

The short instructions

The basic instructions were found in tweets like the following:
These instructions are clear to the average hacker, but of course, a bit difficult for those learning hacking, hence this post.

The command-line

The basics of most hacking start with knowledge of the command-line. This is the “Terminal” app under macOS or cmd.exe under Windows. Almost always when you see hacking dramatized in the movies, they are using the command-line.
In the beginning, the command-line is all computers had. To do anything on a computer, you had to type a “command” telling it what to do. What we see as the modern graphical screen is a layer on top of the command-line, one that translates clicks of the mouse into the raw commands.
On most systems, the command-line is known as “bash”. This is what you’ll find on Linux and macOS. Windows historically has had a different command-line that uses slightly different syntax, though in the last couple years, they’ve also supported “bash”. You’ll have to install it first, such as by following these instructions.
You’ll see me use commands that may not yet be installed on your “bash” command-line, like nc and curl. You’ll need to run a command to install them, such as:
sudo apt-get install netcat curl
The thing to remember about the command-line is that the mouse doesn’t work. You can’t click to move the cursor as you normally do in applications. That’s because the command-line predates the mouse by decades. Instead, you have to use arrow keys.
I’m not going to spend much effort discussing the command-line, as a complete explanation is beyond the scope of this document. Instead, I’m assuming the reader either already knows it, or will learn-from-example as we go along.

Web requests

The basics of how the web works are really simple. A request to a web server is just a small packet of text, such as the following, which does a search on Google for the search-term “penguin” (presumably, you are interested in knowing more about penguins):
GET /search?q=penguin HTTP/1.0
Host: www.google.com
User-Agent: human
The command we are sending to the server is GET, meaning get a page. We are accessing the URL /search, which on Google’s website, is how you do a search. We are then sending the parameter q with the value penguin. We also declare that we are using version 1.0 of the HTTP (hyper-text transfer protocol).
Following the first line there are a number of additional headers. In one header, we declare the Host name that we are accessing. Web servers can contain many different websites, with different names, so this header is usually important.
We also add the User-Agent header. The “user-agent” means the “browser” that you use, like Edge, Chrome, Firefox, or Safari. It allows servers to send content optimized for different browsers. Since we are sending web requests without a browser here, we are joking around saying human.
Here’s what happens when we use the nc program to send this to a google web server:
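(The original post shows this exchange as a screenshot; roughly, the session looks like the following, with our typed request first and Google’s reply after the blank line.)
$ nc www.google.com 80
GET /search?q=penguin HTTP/1.0
Host: www.google.com
User-Agent: human

HTTP/1.0 200 OK
...more response headers, then the HTML of the results page...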
The first part is us typing, until we hit the [enter] key to create a blank line. After that point is the response from the Google server. We get back a result code (OK), followed by more headers from the server, and finally the contents of the webpage, which goes on for many screens. (We’ll talk about what web pages look like below).
Note that a lot of HTTP headers are optional and really have little influence on what’s going on. They are just junk added to web requests. For example, we see Google report a P3P header, a relic of 2002 that nobody uses anymore, as far as I can tell. Indeed, if you follow the URL in the P3P header, Google pretty much says exactly that.
I point this out because the request I show above is a simplified one. In practice, most requests contain a lot more headers, especially Cookie headers. We’ll see that later when making requests.

Using cURL instead

Sending the raw HTTP request to the server, and getting raw HTTP/HTML back, is annoying. The better way of doing this is with the tool known as cURL, or plainly, just curl. You may be familiar with the older command-line tool wget. cURL is similar, but more flexible.
To use curl for the experiment above, we’d do something like the following. We are saving the web page to “penguin.html” instead of just spewing it on the screen.
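(Again, the original shows a screenshot; the command is something along these lines, with -A setting the User-Agent and -o naming the output file.)
$ curl -A human -o penguin.html "http://www.google.com/search?q=penguin"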
Underneath, cURL builds an HTTP header just like the one we showed above, and sends it to the server, getting the response back.

Web-pages

Now let’s talk about web pages. When you look at the web page we got back from Google while searching for “penguin”, you’ll see that it’s intimidatingly complex. I mean, it intimidates me. But it all starts from some basic principles, so we’ll look at some simpler examples.
The following is text of a simple web page:
<html>
<body>
<h1>Test</h1>
<p>This is a simple web page</p>
</body>
</html>
This is HTML, “hyper-text markup language”. As its name implies, we “markup” text, such as declaring the first text as a level-1 header (H1), and the following text as a paragraph (P).
In a web browser, this gets rendered as something that looks like the following. Notice how a header is formatted differently from a paragraph. Also notice that web browsers can use local files as well as make remote requests to web servers:
You can right-mouse click on the page and do a “View Source”. This will show the raw source behind the web page:
Web pages don’t just contain marked-up text. They contain two other important features, style information that dictates how things appear, and script that does all the live things that web pages do, from which we build web apps.
So let’s add a little bit of style and scripting to our web page. First, let’s view the source we’ll be adding:
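(The original shows this source as a screenshot; reconstructed from the description that follows, it looks roughly like the listing below, though the exact markup in the original may differ.)
<html>
<style>
#mytitle {
  color: blue;
  text-align: center;
}
</style>
<script>
function clicked() {
  alert("hello");
}
</script>
<body>
<h1 id="mytitle" onclick="clicked()">Test</h1>
<p>This is a simple web page</p>
</body>
</html>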
In our header (H1) field, we’ve added the attribute to the markup giving this an id of mytitle. In the style section above, we give that element a color of blue, and tell it to align to the center.
Then, in our script section, we’ve told it that when somebody clicks on the element “mytitle”, it should send an “alert” message of “hello”.
This is what our web page now looks like, with the center blue title:
When we click on the title, we get a popup alert:
Thus, we see an example of the three components of a webpage: markup, style, and scripting.

Chrome developer tools

Now we go off the deep end. Right-mouse click on “Test” (not normal click, but right-button click, to pull up a menu). Select “Inspect”.
You should now get a window that looks something like the following. Chrome splits the screen in half, showing the web page on the left, and its debug tools on the right.
This looks similar to what “View Source” shows, but it isn’t. Instead, it’s showing how Chrome interpreted the source HTML. For example, our style/script tags should’ve been marked up with a head (header) tag. We forgot it, but Chrome adds it in anyway.
What Chrome is showing us is called the DOM, or document object model. It shows us all the objects that make up a web page, and how they fit together.
For example, it shows us how the style information for #mytitle is created. It first starts with the default style information for an h1 tag, and then how we’ve changed it with our style specifications.
We can edit the DOM manually. Just double click on things you want to change. For example, in this screen shot, I’ve changed the style spec from blue to red, and I’ve changed the header and paragraph text. The original file on disk hasn’t changed, but I’ve changed the DOM in memory.
This is a classic hacking technique. If you don’t like things like paywalls, for example, just right-click on the element blocking your view of the text, “Inspect” it, then delete it. (This works for some paywalls).
This edits the markup and style info, but changing the scripting stuff is a bit more complicated. To do that, click on the [Console] tab. This is the scripting console, and allows you to run code directly as part of the webpage. We are going to run code that resets what happens when we click on the title. In this case, we are simply going to change the message to “goodbye”.
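(The exact snippet isn’t preserved in this copy of the post, but something along these lines does the job.)
document.getElementById("mytitle").onclick = function () { alert("goodbye"); };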
Now when we click on the title, we indeed get the message:
Again, a common way to get around paywalls is to run some code like that, changing which functions will be called.

Putting it all together

Now let’s put this all together in order to hack Twitter to allow us (the non-chosen) to tweet 280 characters. Review Dildog’s instructions above.
The first step is to get to Chrome Developer Tools. Dildog suggests F12. I suggest right-clicking on the Tweet button (or Reply button, as I use in my example) and doing “Inspect”, as I describe above.
You’ll now see your screen split in half, with the DOM toward the right, similar to how I describe above. However, Twitter’s app is really complex. Well, not really complex, it’s all basic stuff when you come right down to it. It’s just so much stuff — it’s a large web app with lots of parts. So we have to dive in without understanding everything that’s going on.
The Tweet/Reply button we are inspecting is going to look like this in the DOM:
The Tweet/Reply button is currently greyed out because it has the "disabled" attribute. You need to double click on it and remove that attribute. Also, in the class attribute, there is a "disabled" part. Double-click the class value, then remove just that "disabled" as well, without impacting the stuff around it. This should change the button from disabled to enabled. It won't be greyed out, and it'll respond when you click on it.
Now click on it. You’ll get an error message, as shown below:
What we’ve done here is bypass what’s known as client-side validation. The script in the web page prevented sending Tweets longer than 140 characters. Our editing of the DOM changed that, allowing us to send a bad request to the server. Bypassing client-side validation this way is the source of a lot of hacking.
But Twitter still does server-side validation as well. They know any client-side validation can be bypassed, and are in on the joke. They tell us hackers “You’ll have to be more clever”. So let’s be more clever.
In order to make longer 280-character tweets work for select customers, they had to change something on the server-side. What they added was a "weighted_character_count=true" parameter to the HTTP request. We just need to repeat the request we generated above, adding this parameter.
In theory, we can do this by fiddling with the scripting. Dildog describes a different way: he copies the request out of the browser, edits it, then sends it via the command-line using curl.
We’ve used the [Elements] and [Console] tabs in Chrome’s DevTools. Now we are going to use the [Network] tab. This lists all the requests the web page has made to the server. The twitter app is constantly making requests to refresh the content of the web page. The request we made trying to do a long tweet is called “create”, and is red, because it failed.
Google Chrome gives us a number of ways to duplicate the request. The most useful is that it copies it as a full cURL command we can just paste onto the command-line. We don’t even need to know cURL, it takes care of everything for us. On Windows, since you have two command-lines, it gives you a choice to use the older Windows cmd.exe, or the newer bash.exe. I use the bash version, since I don’t know where to get the Windows command-line version of cURL.exe.
There’s a lot going on here. The first thing to notice is the long xxxxxx strings. That’s actually not in the original screenshot. I edited the picture. That’s because these are session-cookies. If you inserted them into your browser, you’d hijack my Twitter session, and be able to tweet as me (such as making Carlos Danger style tweets). Therefore, I have to remove them from the example.
At the top of the screen is the URL that we are accessing, which is https://twitter.com/i/tweet/create. Much of the rest of the screen uses the cURL -H option to add a header. These are all the HTTP headers that I describe above. Finally, at the bottom, is the --data section, which contains the data bits related to the tweet, especially the tweet itself.
We need to edit either the URL above to read https://twitter.com/i/tweet/create?weighted_character_count=true, or we need to add &weighted_character_count=true to the --data section at the bottom (either works). Remember: mouse doesn’t work on command-line, so you have to use the cursor-keys to navigate backwards in the line. Also, since the line is larger than the screen, it’s on several visual lines, even though it’s all a single line as far as the command-line is concerned.
Now just hit [return] on your keyboard, and the tweet will be sent to the server, which at the moment, works. Presto!
Twitter will either enable or disable the feature for everyone in a few weeks, at which point, this post won’t work. But the reason I’m writing this is to demonstrate the basic hacking skills. We manipulate the web pages we receive from servers, and we manipulate what’s sent back from our browser back to the server.

Easier: hack the scripting

Instead of messing with the DOM and editing the HTTP request, the better solution would be to change the scripting that does both DOM client-side validation and HTTP request generation. The only reason Dildog above didn’t do that is that it’s a lot more work trying to find where all this happens.
Others have, though. @Zemnmez did just that, though his technique works for the alternate TweetDeck client (https://tweetdeck.twitter.com) instead of the default client. Go copy his code from here, then paste it into the DevTools scripting [Console]. It’ll go in and replace some scripting functions, much like my simpler example above.
The console will show a stream of error messages because TweetDeck has bugs; ignore those.
Now you can effortlessly do long tweets as normal, without all the messing around I’ve spent so much text in this blog post describing.
Now, as I’ve mentioned before, you are only editing what’s going on in the current web page. If you refresh this page, or close it, everything will be lost. You’ll have to re-open the DevTools scripting console and repaste the code. The easier way of doing this is to use the [Sources] tab instead of [Console] and use the “Snippets” feature to save this bit of code in your browser, to make it easier next time.
The even easier way is to use Chrome extensions like TamperMonkey and GreaseMonkey that’ll take care of this for you. They’ll save the script, and automatically run it when they see you open the TweetDeck webpage again.
An even easier way is to use one of the several Chrome extensions written in the past day specifically designed to bypass the 140 character limit. Since the purpose of this blog post is to show you how to tamper with your browser yourself, rather than help you with Twitter, I won’t list them.

Conclusion

Tampering with the web-page the server gives you, and the data you send back, is a basic hacker skill. In truth, there is a lot to this. You have to get comfortable with the command-line, using tools like cURL. You have to learn how HTTP requests work. You have to understand how web pages are built from markup, style, and scripting. You have to be comfortable using Chrome’s DevTools for messing around with web page elements, network requests, scripting console, and scripting sources.
So it’s rather a lot, actually.
My hope with this page is to show you a practical application of all this, without getting too bogged down in fully explaining how every bit works.

Announcing the 2017-18 European Astro Pi challenge!

Post Syndicated from David Honess original https://www.raspberrypi.org/blog/announcing-2017-18-astro-pi/

Astro Pi is back! Today we’re excited to announce the 2017-18 European Astro Pi challenge in partnership with the European Space Agency (ESA). We are searching for the next generation of space scientists.

Astro Pi is an annual science and coding competition where student-written code is run on the International Space Station under the oversight of an ESA astronaut. The challenge is open to students from all 22 ESA member countries, including — for the first time — associate members Canada and Slovenia.

The format of the competition is changing slightly this year, and we also have a brand-new non-competitive mission in which participants are guaranteed to have their code run on the ISS for 30 seconds!

Mission Zero

Until now, students have worked on Astro Pi projects in an extra-curricular context and over multiple sessions. For teachers and students who don’t have much spare capacity, we wanted to provide an accessible activity that teams can complete in just one session.

So we came up with Mission Zero for young people no older than 14. To complete it, form a team of two to four people and use our step-by-step guide to help you write a simple Python program that shows your personal message and the ambient temperature on the Astro Pi. If you adhere to a few rules, your code is guaranteed to run in space for 30 seconds, and you’ll receive a certificate showing the exact time period during which your code has run in space. No special hardware is needed for this mission, since everything is done in a web browser.
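
To give you a flavour, and purely as an unofficial sketch rather than the step-by-step guide itself, a Mission Zero style program using the Sense HAT library can be as short as this:

from sense_hat import SenseHat

sense = SenseHat()

# Read the ambient temperature from the Sense HAT sensor
temperature = round(sense.get_temperature(), 1)

# Scroll a personal message and the temperature across the LED matrix
sense.show_message("Hello, Astro Pi! Temp: {} C".format(temperature))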

Mission Zero is open until 26 November 2017! Find out more.

Mission Space Lab

Students aged up to 19 can take part in Mission Space Lab. Form a team of two to six people, and work like real space scientists to design your own experiment. Receive free kit to work with, and write the Python code to carry out your experiment.

There are two themes for Mission Space Lab teams to choose from for their projects:

  • Life in space
    You will make use of Astro Pi Vis (“Ed”) in the European Columbus module. You can use all of its sensors, but you cannot record images or videos.
  • Life on Earth
    You will make use of Astro Pi IR (“Izzy”), which will be aimed towards the Earth through a window. You can use all of its sensors and its camera.

The Astro Pi kit, delivered to Space Lab teams by ESA

If you achieve flight status, your code will be uploaded to the ISS and run for three hours (two orbits). All the data that your code records in space will be downloaded and returned to you for analysis. Then submit a short report on your findings to be in with a chance to win exclusive, money-can’t-buy prizes! You can also submit your project for a Bronze CREST Award.
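
Purely as an illustrative sketch (not an official template), a “Life in space” experiment might log readings from Ed’s sensors to a CSV file for later analysis, along these lines:

import csv
import time
from datetime import datetime

from sense_hat import SenseHat

sense = SenseHat()

# Sample the environment once a minute for roughly three hours (two orbits)
with open("data.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp_utc", "temp_c", "humidity_pct", "pressure_mbar"])
    for _ in range(180):
        writer.writerow([
            datetime.utcnow().isoformat(),
            round(sense.get_temperature(), 2),
            round(sense.get_humidity(), 2),
            round(sense.get_pressure(), 2),
        ])
        f.flush()
        time.sleep(60)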

Mission Space Lab registration is open until 29 October 2017, and accepted teams will continue working on their projects into spring 2018. Find out more.

How do I get started?

There are loads of materials available that will help you begin your Astro Pi journey — check out the Getting started with the Sense HAT resource and this video explaining how to build the flight case.

Questions?

If you have any questions, please post them in the comments below. We’re standing by to answer them!

The post Announcing the 2017-18 European Astro Pi challenge! appeared first on Raspberry Pi.

Steal This Show S03E08: P2P Money: Trouble For Governments?

Post Syndicated from J.J. King original https://torrentfreak.com/steal-show-s03e08-p2p-money-trouble-governments/

stslogo180If you enjoy this episode, consider becoming a patron and getting involved with the show. Check out Steal This Show’s Patreon campaign: support us and get all kinds of fantastic benefits!

In this episode, we look at how the first P2P revolution in filesharing is segueing into a new P2P money revolution – even bringing along some of the same developers like Zooko and Bram Cohen.

The big question is, given the devastating effect filesharing had on the entertainment industries, how will decentralizing money affect banks and, even more critically, governments?

Steal This Show aims to release bi-weekly episodes featuring insiders discussing copyright and file-sharing news. It complements our regular reporting by adding more room for opinion, commentary, and analysis.

The guests for our news discussions will vary, and we’ll aim to introduce voices from different backgrounds and persuasions. In addition to news, STS will also produce features interviewing some of the great innovators and minds.

Host: Jamie King

Guest: Paige Peterson

Produced by Jamie King
Edited & Mixed by Riley Byrne
Original Music by David Triana
Web Production by Siraje Amarniss

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

KinoX / Movie4K Admin Detained in Kosovo After Three-Year Manhunt

Post Syndicated from Andy original https://torrentfreak.com/kinox-movie4k-admin-detained-in-kosovo-after-three-year-manhunt-170912/

In June 2011, police across Europe carried out the largest anti-piracy operation the region had ever seen. Their target was massive streaming portal Kino.to and several affiliates with links to Spain, France and the Netherlands.

With many sites demonstrating phoenix-like abilities these days, it didn’t take long for a replacement to appear.

Replacement platform KinoX soon attracted a large fanbase and with that almost immediate attention from the authorities. In October 2014, Germany-based investigators acting on behalf of the Attorney General carried out raids in several regions of the country looking for four main suspects.

One raid, focused on a village near the northern city of Lübeck, targeted two brothers, then aged 21 and 25. The pair, who were said to have lived with their parents, were claimed to be the main operators of Kinox.to and another large streaming site, Movie4K.to. Although two other men were arrested elsewhere in Germany, the brothers couldn’t be found.

This was to be no ordinary manhunt by the police. In addition to accusing the brothers of copyright infringement and tax evasion, authorities indicated they were wanted for fraud, extortion, and arson too. The suggestion was that they’d targeted a vehicle owned by a pirate competitor, causing it to “burst into flames”.

The brothers were later named as Kastriot and Kreshnik Selimi. Kreshnik, then 21, was born in Sweden in 1992; Kastriot, then 25, was born in Kosovo in 1989. Both brothers later became German citizens.

With authorities piling on the charges, the pair were accused of being behind not only KinoX and Movie4K, but also other hosting and sharing platforms including BitShare, Stream4k.to, Shared.sx, Mygully.com and Boerse.sx.

Now, almost three years later, German police are one step closer to getting their men. According to a Handelsblatt report via Tarnkappe, Kreshnik Selimi has been detained by authorities.

The now 24-year-old suspect reportedly handed himself in to the German embassy located in the Kosovan capital, Pristina. The location of the arrest isn’t really a surprise. Older brother Kastriot previously published a picture on Instagram which appeared to show a ticket in his name destined for Kosovo from Zurich in Switzerland.

But while Kreshnik’s arrest reportedly took place in July, there’s still no news of Kastriot. The older brother is still on the run, maybe in Kosovo, or by now, potentially anywhere else in the world.

While his whereabouts remain a mystery, the other puzzle faced by German authorities is the status of the two main sites the brothers were said to maintain.

Despite all the drama and unprecedented allegations of violence and other serious offenses, both Movie4K and KinoX remain stubbornly online, apparently oblivious to the action.

There have been consequences for people connected to the latter, however.

In December 2015, Arvit O (aka “Pedro”), who handled technical issues on KinoX, was sentenced to 40 months in prison for his involvement in the site.

Arvit O, who made a partial confession, was found guilty of copyright infringement by the District Court of Leipzig. The then 29-year-old admitted to infringing 2,889 works. The Court also found that he hacked the computers of two competitors in order to improve Kinox’s market share.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

MPAA: Net Neutrality Rules Should Not Hinder Anti-Piracy Efforts

Post Syndicated from Ernesto original https://torrentfreak.com/mpaa-net-neutrality-rules-should-not-hinder-anti-piracy-efforts-170907/

This summer, millions of people protested the FCC’s plan to repeal the net neutrality rules that were put in place by the former Obama administration.

Well over 22 million comments are listed on the FCC site already and among those we spotted a response from the main movie industry lobby group, the MPAA.

Acting on behalf of six major Hollywood studios, the MPAA is not getting involved in the repeal debate. It instead highlights that, if the FCC maintains any type of network neutrality rules, these shouldn’t get in the way of its anti-piracy efforts.

The Hollywood group stresses that despite an increase in legal services, online piracy remains a problem. Through various anti-piracy measures, rightsholders are working hard to combat this threat, which is their right by law.

“Copyright owners and content providers have a right under the Copyright and Communications acts to combat theft of their content, and the law encourages internet intermediaries to collaborate with content creators to do so,” the MPAA writes.

Now that the net neutrality rules are facing a possible revision or repeal, the MPAA wants to make it very clear that any future regulation should not get in the way of these anti-piracy efforts.

“The MPAA therefore asks that any network neutrality rules the FCC maintains or adopts make explicit that such rules do not limit the ability of copyright owners and their licensees to combat copyright infringement,” the group writes to the FCC.

This means that measures such as website blocking, which could be considered to violate net neutrality as it discriminates against specific traffic, should be allowed. The same is true for other filtering and blocking efforts.

The MPAA’s position doesn’t come as a surprise and given the FCC’s actions in the past, Hollywood has little to worry about. The current net neutrality rules, which were put in place by the Obama administration, specifically exclude pirate traffic.

“Nothing in this part prohibits reasonable efforts by a provider of broadband Internet access service to address copyright infringement or other unlawful activity,” the current net neutrality order reads.

“We reiterate that our rules do not alter the copyright laws and are not intended to prohibit or discourage voluntary practices undertaken to address or mitigate the occurrence of copyright infringement,” the FCC previously clarified.

Still, the MPAA is better safe than sorry.

This is not the first time that the MPAA has got involved in net neutrality debates. Behind the scenes the group has been lobbying US lawmakers on this issue for several years, previously arguing for similar net neutrality exceptions in Brazil and India.

The MPAA’s full comments can be found here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Sci-Hub Faces $4.8 Million Piracy Damages and ISP Blocking

Post Syndicated from Ernesto original https://torrentfreak.com/sci-hub-faces-48-million-piracy-damages-and-isp-blocking-170905/

In June, a New York District Court handed down a default judgment against Sci-Hub.

The pirate site, operated by Alexandra Elbakyan, was ordered to pay $15 million in piracy damages to academic publisher Elsevier.

With the ink on this order barely dry, another publisher soon tagged on with a fresh complaint. The American Chemical Society (ACS), a leading source of academic publications in the field of chemistry, also accused Sci-Hub of mass copyright infringement.

Founded more than 140 years ago, the non-profit organization has around 157,000 members and researchers who publish tens of thousands of articles a year in its peer-reviewed journals. Because many of its works are available for free on Sci-Hub, ACS wants to be compensated.

Sci-Hub was made aware of the legal proceedings but did not appear in court. As a result, a default was entered against the site, and a few days ago ACS specified its demands, which include $4.8 million in piracy damages.

“Here, ACS seeks a judgment against Sci-Hub in the amount of $4,800,000—which is based on infringement of a representative sample of publications containing the ACS Copyrighted Works multiplied by the maximum statutory damages of $150,000 for each publication,” they write.

The publisher notes that the maximum statutory damages are only requested for 32 of its 9,000 registered works. This still adds up to a significant sum of money, of course, but that is needed as a deterrent, ACS claims.

“Sci-Hub’s unabashed flouting of U.S. Copyright laws merits a strong deterrent. This Court has awarded a copyright holder maximum statutory damages where the defendant’s actions were ‘clearly willful’ and maximum damages were necessary to ‘deter similar actors in the future’,” they write.

Although the deterrent effect may sound plausible in most cases, another $4.8 million in debt is unlikely to worry Sci-Hub’s owner, as she can’t pay it off anyway. However, there’s also a broad injunction on the table that may be more of a concern.

The requested injunction prohibits Sci-Hub’s owner to continue her work on the site. In addition, it also bars a wide range of other service providers from assisting others to access it.

Specifically, it restrains “any Internet search engines, web hosting and Internet service providers, domain name registrars, and domain name registries, to cease facilitating access to any or all domain names and websites through which Defendant Sci-Hub engages in unlawful access to [ACS’s works].”

The above suggests that search engines may have to remove the site from their indexes while ISPs could be required to block their users’ access to the site as well, which goes quite far.

Since Sci-Hub is in default, ACS is likely to get what it wants. However, if the organization intends to enforce the order in full, it’s likely that some of these third-party services, including Internet providers, will have to spring into action.

While domain name registries are regularly ordered to suspend domains, search engine removals and ISP blocking are not common in the United States. It would, therefore, be no surprise if this case lingers a little while longer.

A copy of ACS’s proposed default judgment, obtained by TorrentFreak, is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.