Tag Archives: perl

Sci-Hub Loses Domain Names, But Remains Resilient

Post Syndicated from Ernesto original https://torrentfreak.com/sci-hub-loses-domain-names-but-remains-resilient-171122/

While Sci-Hub is praised by thousands of researchers and academics around the world, copyright holders are doing everything in their power to wipe the site from the web.

Following a $15 million defeat against Elsevier in June, the American Chemical Society won a default judgment of $4.8 million in copyright damages earlier this month.

The publisher was further granted a broad injunction, requiring various third-party services to stop providing access to the site. This includes domain registries, which have the power to suspend domains worldwide if needed.

Yesterday, several of Sci-Hub’s domain names became unreachable. While the site had experienced some issues in recent weeks, several people noticed that the current problems appear to be more permanent.

Sci-hub.io, sci-hub.cc, and sci-hub.ac now have the infamous “serverhold” status which suggests that the responsible registries intervened. The status, which has been used previously when domain names are flagged for copyright issues, strips domains of their DNS entries.

Serverhold

This effectively means that the domain names in question have been rendered useless. However, history has also shown that Sci-Hub’s operator Alexandra Elbakyan doesn’t easily back down. Quite the contrary.

In a message posted on the site’s VK page and Twitter, the operator points out that users can update their DNS servers to the IP-addresses 80.82.77.83 and 80.82.77.84, to access it freely again. This drastic measure will direct all domain name lookups through Sci-Hub’s servers.

Sci-Hub’s tweet

In addition, the Sci-Hub.bz domain and the .onion address on the Tor network still appear to work just fine for most people.

It’s clear that Ukraine-born Elbakyan has no intention of throwing in the towel. By providing free access to published research, she sees it as simply helping millions of less privileged academics to do their work properly.

Authorized or not, among researchers there is still plenty of demand and support for Sci-Hub’s service. The site hosts tens of millions of academic papers and receives millions of visitors per month.

Many visits come from countries where access to academic journals is limited, such as Iran, Russia and China. But even in countries where access is more common, a lot of researchers visit the site.

While the domain problems may temporarily make the site harder to find for some, it’s not likely to be the end for Sci-Hub.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Security updates for Wednesday

Post Syndicated from jake original https://lwn.net/Articles/739858/rss

Security updates have been issued by Arch Linux (roundcubemail), Debian (optipng, samba, and vlc), Fedora (compat-openssl10, fedpkg, git, jbig2dec, ldns, memcached, openssl, perl-Net-Ping-External, python-copr, python-XStatic-jquery-ui, rpkg, thunderbird, and xen), SUSE (tomcat), and Ubuntu (db, db4.8, db5.3, linux, linux-raspi2, linux-aws, linux-azure, linux-gcp, and samba).

170 ‘Pirate’ IPTV Vendors Throw in the Towel Facing Legal Pressure

Post Syndicated from Ernesto original https://torrentfreak.com/170-pirate-iptv-vendors-throw-the-in-the-towel-facing-legal-pressure-171121/

Pirate streaming boxes are all the rage this year. They are popular not only among tens of millions of users; they also sit at the top of the anti-piracy agenda.

Dubbed Piracy 3.0 by the MPAA, copyright holders are trying their best to curb this worrisome trend. In the Netherlands, local anti-piracy group BREIN is leading the charge.

Backed by the major film studios, the organization booked a significant victory earlier this year against Filmspeler. In this case, the European Court of Justice ruled that selling or using devices pre-configured to obtain copyright-infringing content is illegal.

Paired with the earlier GS Media ruling, which held that companies with a for-profit motive can’t knowingly link to copyright-infringing material, this provides a powerful enforcement tool.

With these decisions in hand, BREIN previously pressured hundreds of streaming box vendors to halt sales of hardware with pirate addons, but it didn’t stop there. This week the group also highlighted its successes against vendors of unauthorized IPTV services.

“BREIN has already stopped 170 illegal providers of illegal media players and/or IPTV subscriptions. Even providers that only offer illegal IPTV subscriptions are being dealt with,” BREIN reports.

In addition to shutting down the trade in IPTV services, the anti-piracy group also removed 375 advertisements for such services from various marketplaces.

“This is illegal commerce. If you wait until you are warned, you are too late,” BREIN director Tim Kuik says.

“You can be held personally liable. You can also be charged and criminally prosecuted. Willingly committing commercial copyright infringement can lead to an 82,000 euro fine and 4 years imprisonment,” he adds.

While most pirate IPTV vendors threw in the towel voluntarily, some received an extra incentive. Twenty signed a settlement with BREIN for varying amounts, up to tens of thousands of euros. They all face further penalties if they continue to sell pirate subscriptions.

In some cases, the courts were involved. This includes the recent lawsuit against MovieStreamer, which was ordered to stop its IPTV hyperlinking activities immediately. Failure to do so will result in a 5,000 euro per day fine. In addition, the vendor was also ordered to pay legal costs of 17,527 euros.

While BREIN has booked plenty of successes already, as illustrated here, the pirate streaming box problem is far from solved. The anti-piracy group currently has one case pending in court, but more are likely to follow in the near future.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/739648/rss

Security updates have been issued by Arch Linux (icu and lib32-icu), CentOS (firefox), Debian (imagemagick, konversation, libspring-ldap-java, libxml-libxml-perl, lynx-cur, ming, opensaml2, poppler, procmail, shibboleth-sp2, and xen), Fedora (firefox, java-9-openjdk, jbig2dec, kernel, knot, knot-resolver, qt5-qtwebengine, and roundcubemail), Gentoo (adobe-flash, couchdb, icedtea-bin, and phpunit), Mageia (apr, bluez, firefox, jq, konversation, libextractor, and quagga), Oracle (firefox), Red Hat (firefox), and Scientific Linux (firefox).

Kodi-Addon Developer Launches Fundraiser to Fight “Copyright Bullies”

Post Syndicated from Ernesto original https://torrentfreak.com/kodi-addon-developer-launches-fundraiser-to-fight-copyright-bullies-171120/

Earlier this year, American satellite and broadcast provider Dish Network targeted two well-known players in the third-party Kodi add-on ecosystem.

In a complaint filed in a federal court in Texas, add-on ZemTV and the TVAddons library were accused of copyright infringement. As a result, both are facing up to $150,000 for each offense.

While the case was filed in Texas, neither of the defendants live there, or even in the United States. The owner and operator of TVAddons is Adam Lackman, who resides in Montreal, Canada. ZemTV’s developer Shahjahan Durrani is even further away in London, UK.

Over the past few months, Lackman has spoken out in public on several occasions, but little was known about the man behind ZemTV. Today, however, the ZemTV developer also decided to open up, asking for support in his legal battle against Dish Network.

Shahjahan Durrani, Shani for short, doesn’t hide the fact that he was the driving force behind the Kodi-addons ZemTV, LiveStreamsPro, and F4MProxy. While the developer has never set foot in Texas, he is willing to defend himself. Problem is, he lacks the funds to do so.

“I’ve never been to Texas in my life, I’m from London, England,” Shani explains. “Somehow a normal chap like me is expected to defend himself against a billion dollar media giant. I don’t have the money to fight this on my own, and hope my friends will help support my fight against the expansion of copyright liability.”

Shani’s fundraiser went live a few hours ago and the first donations are now starting to come in. He has set a target of $8,500 for his defense fund, so there is still a long way to go.

Speaking with TorrentFreak, Shani explains that he got into Kodi addon development to broaden his coding skills and learn Python. ZemTV started as a tool to watch recorded shows from zemtv.com, which he always assumed were perfectly legal, on his Apple TV. He then decided to help others do the same.

“The reason why I published the addon was that I saw it as a community helping each other out, and this was my way to give back. I never received any money from anybody and I wanted to keep it pure and free,” Shani tells us.

ZemTV was a passive service, simply scraping content from a third party source, he explains. The addon provided an interface but did not host or control any allegedly infringing content directly.

“I had no involvement nor control over any of the websites or content sources that were allegedly accessible through ZemTV. I did not host nor take part in the sharing of any form of streaming media. As an open source developer, I should not be held liable for the potential abuse of my code,” the developer stresses.

Dish Network sees things differently, of course. In its complaint, the company accused Shani of illegally retransmitting its copyright-protected channels while asking for donations to maintain the project.

The case is perhaps not as straightforward as either side presents it. However, it is in the best interests of the general public that both sides are properly heard. This is the first case against a Kodi-addon developer and the outcome will set an important precedent.

“This lawsuit is part of a targeted effort to destroy the Kodi addon community. The fight is rigged against the little guy, they are trying to make something illegal that shouldn’t be illegal. They tried to do it with the VCR, and now years and years later they are trying to do it with Kodi.

“Since I am the only addon developer to date who is actually fighting the wrath of big media bullies, it is crucial that I win my case,” Shani adds.

Going forward, the ZemTV developer believes that copyright holders are better off going after the content providers directly. If the sources are down, any problematic addons will also stop working. Rightholders can even work with addon developers and use addons to find infringing content providers.

“I think the copyright holders should target the sources, it’s as simple as that,” Shani tells us.

The fundraiser campaign is now public on Generosity.com. At the time of writing the ticker sits at $50, so there is still a long way to go before the developer can organize a proper defense.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Capturing Custom, High-Resolution Metrics from Containers Using AWS Step Functions and AWS Lambda

Post Syndicated from Nathan Taber original https://aws.amazon.com/blogs/compute/capturing-custom-high-resolution-metrics-from-containers-using-aws-step-functions-and-aws-lambda/

Contributed by Trevor Sullivan, AWS Solutions Architect

When you deploy containers with Amazon ECS, are you gathering all of the key metrics so that you can correctly monitor the overall health of your ECS cluster?

By default, ECS writes metrics to Amazon CloudWatch in 5-minute increments. For complex or large services, this may not be sufficient to make scaling decisions quickly. You may want to respond immediately to changes in workload or to identify application performance problems. Last July, CloudWatch announced support for high-resolution metrics, up to a per-second basis.

These high-resolution metrics can be used to give you a clearer picture of the load and performance for your applications, containers, clusters, and hosts. In this post, I discuss how you can use AWS Step Functions, along with AWS Lambda, to cost-effectively record high-resolution metrics into CloudWatch. You implement this solution using a serverless architecture, which keeps your costs low and makes it easier to troubleshoot the solution.

To show how this works, you retrieve some useful metric data from an ECS cluster running in the same AWS account and region (Oregon, us-west-2) as the Step Functions state machine and Lambda function. However, you can use this architecture to retrieve any custom application metrics from any resource in any AWS account and region.

Why Step Functions?

Step Functions enables you to orchestrate multi-step tasks in the AWS Cloud that run for any period of time, up to a year. Effectively, you’re building a blueprint for an end-to-end process. After it’s built, you can execute the process as many times as you want.

For this architecture, you gather metrics from an ECS cluster every five seconds, and then write the metric data to CloudWatch. After your ECS cluster metrics are stored in CloudWatch, you can create CloudWatch alarms to notify you when a metric exceeds a threshold that you define. An alarm can also trigger an automated remediation activity, such as scaling ECS services.

When you build a Step Functions state machine, you define the different states inside it as JSON objects. The bulk of the work in Step Functions is handled by the common task state, which invokes Lambda functions or Step Functions activities. There is also a built-in library of other useful states that allow you to control the execution flow of your program.

One of the most useful state types in Step Functions is the parallel state. Each parallel state in your state machine can have one or more branches, each of which is executed in parallel. Another useful state type is the wait state, which waits for a period of time before moving to the next state.

In this walkthrough, you combine these three states (parallel, wait, and task) to create a state machine that triggers a Lambda function, which then gathers metrics from your ECS cluster.

Step Functions pricing

This state machine is executed every minute, resulting in 60 executions per hour, and 1,440 executions per day. Step Functions is billed per state transition, including the Start and End state transitions, giving you approximately 37,440 state transitions per day. To reach this number, I’m using this estimated math:

26 state transitions per-execution x 60 minutes x 24 hours

Based on current pricing, at $0.000025 per state transition, the daily cost of this metric gathering state machine would be $0.936.
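
As a quick sanity check, here is a small Python sketch that reproduces this arithmetic using the figures quoted above:

# Daily Step Functions cost estimate, using the figures quoted in this post.
TRANSITIONS_PER_EXECUTION = 26          # includes the Start and End transitions
EXECUTIONS_PER_DAY = 60 * 24            # one execution per minute
PRICE_PER_TRANSITION = 0.000025         # USD per state transition

transitions_per_day = TRANSITIONS_PER_EXECUTION * EXECUTIONS_PER_DAY
daily_cost = transitions_per_day * PRICE_PER_TRANSITION

print(transitions_per_day)              # 37440
print(round(daily_cost, 3))             # 0.936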

Step Functions offers an indefinite free tier of 4,000 state transitions every month. This benefit is available to all customers, not just customers who are still under the 12-month AWS Free Tier. For more information and cost example scenarios, see Step Functions pricing.

Why Lambda?

The goal is to capture metrics from an ECS cluster, and write the metric data to CloudWatch. This is a straightforward, short-running process that makes Lambda the perfect place to run your code. Lambda is one of the key services that makes up “Serverless” application architectures. It enables you to consume compute capacity only when your code is actually executing.

The process of gathering metric data from ECS and writing it to CloudWatch takes a short period of time. In fact, while developing this post, my average Lambda function execution time was only about 250 milliseconds. For every five-second interval that occurs, I’m only using 1/20th of the compute time that I’d otherwise be paying for.

Lambda pricing

For billing purposes, Lambda execution time is rounded up to the nearest 100-ms interval. In general, based on the metrics that I observed during development, a 250-ms runtime would be billed at 300 ms. Here, I calculate the cost of this Lambda function executing over a 31-day month.

Assuming 31 days in each month, there would be 535,680 five-second intervals (31 days x 24 hours x 60 minutes x 12 five-second intervals = 535,680). The Lambda function is invoked every five-second interval, by the Step Functions state machine, and runs for a 300-ms period. At current Lambda pricing, for a 128-MB function, you would be paying approximately the following:

Total compute

Total executions = 535,680
Total compute = total executions x (3 x $0.000000208 per 100 ms) = $0.334 per month

Total requests

Total requests = (535,680 / 1,000,000) x $0.20 per million requests = $0.11 per month

Total Lambda Cost

$0.11 requests + $0.334 compute time = $0.444 per month
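
As with the Step Functions estimate, a small Python sketch reproduces these Lambda figures, using the prices quoted above for a 128-MB function:

# Lambda cost estimate for one 31-day month of five-second invocations.
INVOCATIONS_PER_MONTH = 31 * 24 * 60 * 12     # 535,680 five-second intervals
BILLED_100MS_UNITS = 3                        # ~250 ms rounded up to 300 ms
PRICE_PER_100MS = 0.000000208                 # USD, 128-MB function
PRICE_PER_REQUEST = 0.20 / 1000000            # USD, $0.20 per million requests

compute_cost = INVOCATIONS_PER_MONTH * BILLED_100MS_UNITS * PRICE_PER_100MS
request_cost = INVOCATIONS_PER_MONTH * PRICE_PER_REQUEST

print(round(compute_cost, 3))                 # ~0.334
print(round(request_cost, 3))                 # ~0.107, rounded to $0.11 above
print(round(compute_cost + request_cost, 2))  # ~0.44 per month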

Similar to Step Functions, Lambda offers an indefinite free tier. For more information, see Lambda Pricing.

Walkthrough

In the following sections, I step through the process of configuring the solution just discussed. If you follow along, at a high level, you will:

  • Configure an IAM role and policy
  • Create a Step Functions state machine to control metric gathering execution
  • Create a metric-gathering Lambda function
  • Configure a CloudWatch Events rule to trigger the state machine
  • Validate the solution

Prerequisites

You should already have an AWS account with a running ECS cluster. If you don’t have one running, you can easily deploy a Docker container on an ECS cluster using the AWS Management Console. In the example produced for this post, I use an ECS cluster running Windows Server (currently in beta), but either a Linux or Windows Server cluster works.

Create an IAM role and policy

First, create an IAM role and policy that enables Step Functions, Lambda, and CloudWatch to communicate with each other.

  • The CloudWatch Events rule needs permissions to trigger the Step Functions state machine.
  • The Step Functions state machine needs permissions to trigger the Lambda function.
  • The Lambda function needs permissions to query ECS and then write to CloudWatch Logs and metrics.

When you create the state machine, Lambda function, and CloudWatch Events rule, you assign this role to each of those resources. Upon execution, each of these resources assumes the specified role and executes using the role’s permissions.

  1. Open the IAM console.
  2. Choose Roles, Create New Role.
  3. For Role Name, enter WriteMetricFromStepFunction.
  4. Choose Save.

Create the IAM role trust relationship

The trust relationship (also known as the assume role policy document) for your IAM role looks like the following JSON document. As you can see from the document, your IAM role needs to trust the Lambda, CloudWatch Events, and Step Functions services. By configuring your role to trust these services, they can assume this role and inherit the role permissions.

  1. Open the IAM console.
  2. Choose Roles and select the IAM role previously created.
  3. Choose Trust Relationships, Edit Trust Relationships.
  4. Enter the following trust policy text and choose Save.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "events.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "states.us-west-2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

Create an IAM policy

After you’ve finished configuring your role’s trust relationship, grant the role access to the other AWS resources that make up the solution.

The IAM policy is what gives your IAM role permissions to access various resources. You must explicitly whitelist the specific resources to which your role has access, because the default IAM behavior is to deny access to any AWS resources.

I’ve tried to keep this policy document as generic as possible, without allowing permissions to be too open. If the name of your ECS cluster is different than the one in the example policy below, make sure that you update the policy document before attaching it to your IAM role. You can attach this policy as an inline policy, instead of creating the policy separately first. However, either approach is valid.

  1. Open the IAM console.
  2. Select the IAM role, and choose Permissions.
  3. Choose Add in-line policy.
  4. Choose Custom Policy and then enter the following policy. The inline policy name does not matter.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [ "logs:*" ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [ "cloudwatch:PutMetricData" ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [ "states:StartExecution" ],
            "Resource": [
                "arn:aws:states:*:*:stateMachine:WriteMetricFromStepFunction"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [ "lambda:InvokeFunction" ],
            "Resource": "arn:aws:lambda:*:*:function:WriteMetricFromStepFunction"
        },
        {
            "Effect": "Allow",
            "Action": [ "ecs:Describe*" ],
            "Resource": "arn:aws:ecs:*:*:cluster/ECSEsgaroth"
        }
    ]
}
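
If you prefer to script the IAM setup rather than click through the console, a boto3 sketch like the following creates the role with the trust policy and attaches the inline policy shown above. The role and policy names match this walkthrough; the local file names are an assumption, standing in for wherever you saved the two JSON documents.

import boto3

iam = boto3.client('iam')

# Load the two JSON documents shown above (hypothetical local file names).
with open('trust_policy.json') as f:
    trust_policy = f.read()
with open('inline_policy.json') as f:
    inline_policy = f.read()

# Create the role with its trust relationship, then attach the inline policy.
iam.create_role(
    RoleName='WriteMetricFromStepFunction',
    AssumeRolePolicyDocument=trust_policy,
)
iam.put_role_policy(
    RoleName='WriteMetricFromStepFunction',
    PolicyName='WriteMetricFromStepFunctionPolicy',
    PolicyDocument=inline_policy,
)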

Create a Step Functions state machine

In this section, you create a Step Functions state machine that invokes the metric-gathering Lambda function every five (5) seconds, for a one-minute period. If you divide a minute (60 seconds) into five-second intervals, you get 12. Based on this math, you create 12 branches, in a single parallel state, in the state machine. Each branch triggers the metric-gathering Lambda function at a different five-second marker throughout the one-minute period. After all of the parallel branches finish executing, the Step Functions execution completes and another begins.

Follow these steps to create your Step Functions state machine:

  1. Open the Step Functions console.
  2. Choose Dashboard, Create State Machine.
  3. For State Machine Name, enter WriteMetricFromStepFunction.
  4. Enter the state machine code below into the editor. Make sure that you insert your own AWS account ID for every instance of “676655494xxx”.
  5. Choose Create State Machine.
  6. Select the WriteMetricFromStepFunction IAM role that you previously created.
{
    "Comment": "Writes ECS metrics to CloudWatch every five seconds, for a one-minute period.",
    "StartAt": "ParallelMetric",
    "States": {
      "ParallelMetric": {
        "Type": "Parallel",
        "Branches": [
          {
            "StartAt": "WriteMetricLambda",
            "States": {
             	"WriteMetricLambda": {
                  "Type": "Task",
				  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
    	  {
            "StartAt": "WaitFive",
            "States": {
            	"WaitFive": {
            		"Type": "Wait",
            		"Seconds": 5,
            		"Next": "WriteMetricLambdaFive"
          		},
             	"WriteMetricLambdaFive": {
                  "Type": "Task",
				  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
    	  {
            "StartAt": "WaitTen",
            "States": {
            	"WaitTen": {
            		"Type": "Wait",
            		"Seconds": 10,
            		"Next": "WriteMetricLambda10"
          		},
             	"WriteMetricLambda10": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
    	  {
            "StartAt": "WaitFifteen",
            "States": {
            	"WaitFifteen": {
            		"Type": "Wait",
            		"Seconds": 15,
            		"Next": "WriteMetricLambda15"
          		},
             	"WriteMetricLambda15": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait20",
            "States": {
            	"Wait20": {
            		"Type": "Wait",
            		"Seconds": 20,
            		"Next": "WriteMetricLambda20"
          		},
             	"WriteMetricLambda20": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait25",
            "States": {
            	"Wait25": {
            		"Type": "Wait",
            		"Seconds": 25,
            		"Next": "WriteMetricLambda25"
          		},
             	"WriteMetricLambda25": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait30",
            "States": {
            	"Wait30": {
            		"Type": "Wait",
            		"Seconds": 30,
            		"Next": "WriteMetricLambda30"
          		},
             	"WriteMetricLambda30": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait35",
            "States": {
            	"Wait35": {
            		"Type": "Wait",
            		"Seconds": 35,
            		"Next": "WriteMetricLambda35"
          		},
             	"WriteMetricLambda35": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait40",
            "States": {
            	"Wait40": {
            		"Type": "Wait",
            		"Seconds": 40,
            		"Next": "WriteMetricLambda40"
          		},
             	"WriteMetricLambda40": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait45",
            "States": {
            	"Wait45": {
            		"Type": "Wait",
            		"Seconds": 45,
            		"Next": "WriteMetricLambda45"
          		},
             	"WriteMetricLambda45": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait50",
            "States": {
            	"Wait50": {
            		"Type": "Wait",
            		"Seconds": 50,
            		"Next": "WriteMetricLambda50"
          		},
             	"WriteMetricLambda50": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          },
          {
            "StartAt": "Wait55",
            "States": {
            	"Wait55": {
            		"Type": "Wait",
            		"Seconds": 55,
            		"Next": "WriteMetricLambda55"
          		},
             	"WriteMetricLambda55": {
                  "Type": "Task",
                  "Resource": "arn:aws:lambda:us-west-2:676655494xxx:function:WriteMetricFromStepFunction",
                  "End": true
                } 
            }
          }
        ],
        "End": true
      }
  }
}
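
Because the twelve branches differ only in their wait time, you could also generate this definition with a short Python sketch instead of writing it by hand. The state names in the generated version are normalized (Wait5, WriteMetricLambda5, and so on) rather than matching the hand-written names exactly, and the Lambda ARN reuses the placeholder account ID:

import json

# Build the same parallel state machine definition as above:
# one branch per five-second offset within a one-minute window.
LAMBDA_ARN = ("arn:aws:lambda:us-west-2:676655494xxx:"
              "function:WriteMetricFromStepFunction")  # placeholder account ID

def build_definition():
    branches = []
    for offset in range(0, 60, 5):
        task_name = "WriteMetricLambda{}".format(offset) if offset else "WriteMetricLambda"
        task_state = {"Type": "Task", "Resource": LAMBDA_ARN, "End": True}
        if offset == 0:
            # The first branch fires immediately, with no Wait state.
            branches.append({"StartAt": task_name, "States": {task_name: task_state}})
        else:
            wait_name = "Wait{}".format(offset)
            branches.append({
                "StartAt": wait_name,
                "States": {
                    wait_name: {"Type": "Wait", "Seconds": offset, "Next": task_name},
                    task_name: task_state,
                },
            })
    return {
        "Comment": "Writes ECS metrics to CloudWatch every five seconds, for a one-minute period.",
        "StartAt": "ParallelMetric",
        "States": {
            "ParallelMetric": {"Type": "Parallel", "Branches": branches, "End": True}
        },
    }

print(json.dumps(build_definition(), indent=2))

The output of json.dumps can be pasted into the console editor, or passed to the Step Functions CreateStateMachine API.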

Now you’ve got a shiny new Step Functions state machine! However, you might ask yourself, “After the state machine has been created, how does it get executed?” Before I answer that question, create the Lambda function that writes the custom metric, and then you get the end-to-end process moving.

Create a Lambda function

The meaty part of the solution is a Lambda function, written for the Python 3.6 runtime, that retrieves metric values from ECS and then writes them to CloudWatch. This Lambda function is what the Step Functions state machine triggers every five seconds, via the Task states. Key points to remember:

The Lambda function needs permission to:

  • Write CloudWatch metrics (PutMetricData API).
  • Retrieve metrics from ECS clusters (DescribeClusters API).
  • Write StdOut to CloudWatch Logs.

Boto3, the AWS SDK for Python, is included in the Lambda execution environment for Python 2.x and 3.x.

Because Lambda includes the AWS SDK, you don’t have to worry about packaging it up and uploading it to Lambda. You can focus on writing code and automatically take a dependency on boto3.

As for permissions, you’ve already created the IAM role and attached a policy to it that enables your Lambda function to access the necessary API actions. When you create your Lambda function, make sure that you select the correct IAM role, to ensure it is invoked with the correct permissions.

The following Lambda function code is generic. So how does the Lambda function know which ECS cluster to gather metrics for? Your Step Functions state machine automatically passes in its state to the Lambda function. When you create your CloudWatch Events rule, you specify a simple JSON object that passes the desired ECS cluster name into your Step Functions state machine, which then passes it to the Lambda function.

Use the following property values as you create your Lambda function:

Function Name: WriteMetricFromStepFunction
Description: This Lambda function retrieves metric values from an ECS cluster and writes them to Amazon CloudWatch.
Runtime: Python3.6
Memory: 128 MB
IAM Role: WriteMetricFromStepFunction

import boto3

def handler(event, context):
    cw = boto3.client('cloudwatch')
    ecs = boto3.client('ecs')
    print('Got boto3 client objects')
    
    Dimension = {
        'Name': 'ClusterName',
        'Value': event['ECSClusterName']
    }

    cluster = get_ecs_cluster(ecs, Dimension['Value'])
    
    cw_args = {
       'Namespace': 'ECS',
       'MetricData': [
           {
               'MetricName': 'RunningTask',
               'Dimensions': [ Dimension ],
               'Value': cluster['runningTasksCount'],
               'Unit': 'Count',
               'StorageResolution': 1
           },
           {
               'MetricName': 'PendingTask',
               'Dimensions': [ Dimension ],
               'Value': cluster['pendingTasksCount'],
               'Unit': 'Count',
               'StorageResolution': 1
           },
           {
               'MetricName': 'ActiveServices',
               'Dimensions': [ Dimension ],
               'Value': cluster['activeServicesCount'],
               'Unit': 'Count',
               'StorageResolution': 1
           },
           {
               'MetricName': 'RegisteredContainerInstances',
               'Dimensions': [ Dimension ],
               'Value': cluster['registeredContainerInstancesCount'],
               'Unit': 'Count',
               'StorageResolution': 1
           }
        ]
    }
    cw.put_metric_data(**cw_args)
    print('Finished writing metric data')
    
def get_ecs_cluster(client, cluster_name):
    cluster = client.describe_clusters(clusters = [ cluster_name ])
    print('Retrieved cluster details from ECS')
    return cluster['clusters'][0]
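
To sanity-check the function before wiring it into Step Functions, you can append a small test harness to the bottom of the same module and run it locally. This is a minimal sketch, assuming your local AWS credentials allow ecs:DescribeClusters and cloudwatch:PutMetricData and that your cluster is named ECSEsgaroth (the example cluster used in this post):

# Quick local smoke test for the handler above; pass the same event shape
# that the CloudWatch Events rule will send through Step Functions.
if __name__ == '__main__':
    handler({'ECSClusterName': 'ECSEsgaroth'}, None)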

Create the CloudWatch Events rule

Now you’ve created an IAM role and policy, Step Functions state machine, and Lambda function. How do these components actually start communicating with each other? The final step in this process is to set up a CloudWatch Events rule that triggers your metric-gathering Step Functions state machine every minute. You have two choices for your CloudWatch Events rule expression: rate or cron. In this example, use the cron expression.

A couple of key learning points from creating the CloudWatch Events rule:

  • You can specify one or more targets, of different types (for example, Lambda function, Step Functions state machine, SNS topic, and so on).
  • You’re required to specify an IAM role with permissions to trigger your target.
    NOTE: This applies only to certain types of targets, including Step Functions state machines.
  • Each target that supports IAM roles can be triggered using a different IAM role, in the same CloudWatch Events rule.
  • Optional: You can provide custom JSON that is passed to your target Step Functions state machine as input.

Follow these steps to create the CloudWatch Events rule:

  1. Open the CloudWatch console.
  2. Choose Events, Rules, Create Rule.
  3. Select Schedule, Cron Expression, and then enter the following rule:
    0/1 * * * ? *
  4. Choose Add Target, Step Functions State Machine, WriteMetricFromStepFunction.
  5. For Configure Input, select Constant (JSON Text).
  6. Enter the following JSON input, which is passed to Step Functions, while changing the cluster name accordingly:
    { "ECSClusterName": "ECSEsgaroth" }
  7. Choose Use Existing Role, WriteMetricFromStepFunction (the IAM role that you previously created).

After you’ve completed these steps, the rule is active and begins triggering your state machine every minute.
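
If you’d rather script this step, the equivalent boto3 calls look roughly like the following sketch. The state machine and role ARNs are placeholders reusing the example account ID from earlier; substitute your own:

import json
import boto3

events = boto3.client('events')

STATE_MACHINE_ARN = ('arn:aws:states:us-west-2:676655494xxx:'
                     'stateMachine:WriteMetricFromStepFunction')
ROLE_ARN = 'arn:aws:iam::676655494xxx:role/WriteMetricFromStepFunction'

# Rule that fires every minute, matching the cron expression used above.
events.put_rule(
    Name='WriteMetricFromStepFunction',
    ScheduleExpression='cron(0/1 * * * ? *)',
    State='ENABLED',
)

# Target the state machine and pass the cluster name as constant JSON input.
events.put_targets(
    Rule='WriteMetricFromStepFunction',
    Targets=[{
        'Id': 'WriteMetricFromStepFunction',
        'Arn': STATE_MACHINE_ARN,
        'RoleArn': ROLE_ARN,
        'Input': json.dumps({'ECSClusterName': 'ECSEsgaroth'}),
    }],
)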

Validate the solution

Now that you have finished implementing the solution to gather high-resolution metrics from ECS, validate that it’s working properly.

  1. Open the CloudWatch console.
  2. Choose Metrics.
  3. Choose custom and select the ECS namespace.
  4. Choose the ClusterName metric dimension.

You should see the four custom metrics written by the Lambda function (RunningTask, PendingTask, ActiveServices, and RegisteredContainerInstances) listed for your cluster.
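
You can also confirm that data is arriving with a quick boto3 query. The following sketch pulls the last ten minutes of the RunningTask metric for the example cluster name used in this post:

from datetime import datetime, timedelta

import boto3

cw = boto3.client('cloudwatch')

# Fetch the last 10 minutes of the custom RunningTask metric for the cluster.
resp = cw.get_metric_statistics(
    Namespace='ECS',
    MetricName='RunningTask',
    Dimensions=[{'Name': 'ClusterName', 'Value': 'ECSEsgaroth'}],
    StartTime=datetime.utcnow() - timedelta(minutes=10),
    EndTime=datetime.utcnow(),
    Period=60,    # high-resolution data can also be queried at shorter periods
    Statistics=['Average'],
)

for point in sorted(resp['Datapoints'], key=lambda p: p['Timestamp']):
    print(point['Timestamp'], point['Average'])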

Troubleshoot configuration issues

If you aren’t receiving the expected ECS cluster metrics in CloudWatch, check for the following common configuration issues. Review the earlier procedures to make sure that the resources were properly configured.

  • The IAM role’s trust relationship is incorrectly configured.
    Make sure that the IAM role trusts Lambda, CloudWatch Events, and Step Functions in the correct region.
  • The IAM role does not have the correct policies attached to it.
    Make sure that you have copied the IAM policy correctly as an inline policy on the IAM role.
  • The CloudWatch Events rule is not triggering new Step Functions executions.
    Make sure that the target configuration on the rule has the correct Step Functions state machine and IAM role selected.
  • The Step Functions state machine is being executed, but failing part way through.
    Examine the detailed error message on the failed state within the failed Step Functions execution. It’s possible that the IAM role does not have permissions to trigger the target Lambda function, that the target Lambda function may not exist, or that the Lambda function failed to complete successfully due to invalid permissions.

Although the above list covers several different potential configuration issues, it is not comprehensive. Make sure that you understand how each service is connected to the others, how permissions are granted through IAM policies, and how IAM trust relationships work.

Conclusion

In this post, you implemented a Serverless solution to gather and record high-resolution application metrics from containers running on Amazon ECS into CloudWatch. The solution consists of a Step Functions state machine, Lambda function, CloudWatch Events rule, and an IAM role and policy. The data that you gather from this solution helps you rapidly identify issues with an ECS cluster.

To gather high-resolution metrics from any service, modify your Lambda function to gather the correct metrics from your target. If you prefer not to use Python, you can implement a Lambda function using one of the other supported runtimes, including Node.js, Java, or .NET Core. However, this post should give you the fundamentals of capturing high-resolution metrics in CloudWatch.

If you found this post useful, or have questions, please comment below.

Security updates for Thursday

Post Syndicated from jake original https://lwn.net/Articles/739318/rss

Security updates have been issued by Arch Linux (firefox, flashplugin, lib32-flashplugin, and mediawiki), CentOS (kernel and php), Debian (firefox-esr, jackson-databind, and mediawiki), Fedora (apr, apr-util, chromium, compat-openssl10, firefox, ghostscript, hostapd, icu, ImageMagick, jackson-databind, krb5, lame, liblouis, nagios, nodejs, perl-Catalyst-Plugin-Static-Simple, php, php-PHPMailer, poppler, poppler-data, rubygem-ox, systemd, webkitgtk4, wget, wordpress, and xen), Mageia (flash-player-plugin, icu, jackson-databind, php, and roundcubemail), Oracle (kernel and php), Red Hat (openstack-aodh), SUSE (wget and xen), and Ubuntu (apport and webkit2gtk).

Judge Puts Brakes on Piracy Cases, Doubts Evidence Against Deceased Man

Post Syndicated from Ernesto original https://torrentfreak.com/judge-puts-brakes-on-piracy-cases-doubts-evidence-against-deceased-man-171114/

In recent years, file-sharers around the world have been pressured to pay significant settlement fees, or face legal repercussions.

These so-called “copyright trolling” efforts have been a common occurrence in the United States for more than half a decade, and still are.

While copyright holders should be able to take legitimate piracy claims to court, there are some who resort to dodgy tactics to extract money from alleged pirates. The evidence isn’t exactly rock-solid either, which results in plenty of innocent targets.

A prime candidate for the latter category is a man who was sued by Venice PI, a copyright holder of the film “Once Upon a Time in Venice.” He was sued not once, but twice. That’s not the problem though. What stood out is that the defendant is no longer alive.

The man’s wife informed a federal court in Seattle that he passed away recently, at the respectable age of 91. While age doesn’t prove innocence, the widow also mentioned that her husband suffered from dementia and was both mentally and physically incapable of operating a computer at the time of the alleged offense.

These circumstances raised doubt with US District Court Judge Thomas Zilly, who brought them up in a recent order (citations omitted).

“In two different cases, plaintiff sued the same, now deceased, defendant, namely Wilbur Miller. Mr. Miller’s widow submitted a declaration indicating that, for about five years prior to his death at the age of 91, Mr. Miller suffered from dementia and was both mentally and physically incapable of operating a computer,” the Judge writes.

The Judge notes that the IP-address tracking tools used by the copyright holder might not be as accurate as is required. In addition, he adds that the company can’t simply launch a “fishing expedition” based on the IP-address alone.

“The fact that Mr. Miller’s Internet Protocol (‘IP’) address was nevertheless identified as part of two different BitTorrent ‘swarms’ raises significant doubts about the accuracy of whatever IP-address tracking method plaintiff is using.

“Moreover, plaintiff may not, based solely on IP addresses, launch a fishing expedition aimed at coercing individuals into either admitting to copyright infringement or pointing a finger at family members, friends, tenants, or neighbors. Plaintiff must demonstrate the plausibility of their claims before discovery will be permitted,” Judge Zilly adds.

From the order

Since the copyright holder has only provided an IP-address as evidence, the plausibility of the copyright infringement claims is not properly demonstrated. This means that the holder was not allowed to conduct discovery, which includes discussions with defendants.

The court, therefore, ordered Venice PI to cease all communication with defendants effective immediately, until further notice. This order applies to a dozen cases which are now effectively on hold.

The copyright holder has been given 28 days to provide more information on several issues related to the evidence gathering. This offer of proof should be supported by a declaration of an expert in the field.

The Judge wants to know if an IP-address can be spoofed or faked by a BitTorrent tracker, and if so, how likely this is. In addition, he questions whether the material that was tracked (possibly only part of a download) is actually playable. And finally, the Judge asks what other evidence Venice PI has against each defendant, aside from the IP-address.

“In the absence of a timely filed offer of proof, plaintiff’s claims will be dismissed with prejudice and without costs, and these cases will be closed,” Judge Zilly warns.

The harsh order was noticed by copyright troll skeptic FCT, who notes that Venice PI will have a hard time providing the requested proof.

Venice and other “copyright trolls” use the German company Maverickeye to track BitTorrent pirates on a broad scale. They are also active with their settlement demands in various other countries, most recently in Sweden.

If the provided proof is not sufficient in the court’s opinion, it will be hard for them and other rightsholders to continue their practices in the Washington district.

The full order is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Security updates for Tuesday

Post Syndicated from ris original https://lwn.net/Articles/738995/rss

Security updates have been issued by Arch Linux (konversation), Debian (graphicsmagick and konversation), Fedora (git-annex, ImageMagick, kernel, and libgcrypt), Oracle (kernel), Red Hat (httpd), SUSE (firefox, nss), and Ubuntu (perl and postgresql-9.3, postgresql-9.5, postgresql-9.6).

Microsoft Sued Over ‘Baseless’ Piracy Threats

Post Syndicated from Ernesto original https://torrentfreak.com/microsoft-sued-over-baseless-piracy-threats-171113/

For many years, Microsoft and the Business Software Alliance (BSA) have carried out piracy investigations into organizations large and small.

Companies accused of using Microsoft software without permission usually get a letter asking them to pay up, or face legal consequences.

Rhode Island-based company Hanna Instruments is one of the most recent targets. The company stands accused of using Microsoft Office products without a proper license.

However, instead of Microsoft going after Hanna in court for copyright infringement, Hanna has filed a lawsuit against BSA and Microsoft asking for a declaratory judgment that it did nothing wrong.

The lawsuit is the result of a long back-and-forth that started in June. At the time, BSA’s lawyers sent Hanna a letter accusing it of using Microsoft products without a proper license, while requesting an audit.

Hanna’s management wasn’t aware of any pirated products but after repeated requests, the company decided to go ahead and conduct a thorough investigation. The results, compiled in a detailed spreadsheet, showed that it purchased 126 copies of Microsoft Office software, while only 120 were in use.

Perfectly fine, they assumed, but the BSA was not convinced.

Since Hanna only had Microsoft generated key cards for the most recent purchases, the company used purchase orders, requisitions, and price quotes to prove that it properly licensed earlier copies of Microsoft Office. Not good enough, according to the BSA, which wanted to see money instead.

The BSA’s lawyers informed Hanna that the company would face up to $4,950,000 in damages if the case went to court. Instead, however, they offered to settle the matter for $72,074.

From the complaint

Hanna wasn’t planning to pay and pointed out that they sent in as much proof as they could find, documenting legal purchases of Microsoft Office licenses for a period covering more than ten years. While the BSA appreciated the effort, it didn’t accept this as hard evidence.

“…the provision of purchase orders, price quotes, purchase requisitions are not acceptable as valid proof of purchase to our client. Reason being, the aforesaid documents do not demonstrate that a purchase has taken place, they merely establish intent to make a purchase of software,” the BSA wrote in yet another email.

Interestingly, the BSA itself still failed to provide any solid proof that Hanna was using unlicensed software. The Rhode Island company repeatedly requested this, but the BSA simply replied that it’s neither appropriate nor efficient to request evidence from their clients in every case.

The BSA then went a step further and suggested that Microsoft did the company a favor by approaching it directly. The alternative would have been to call in the U.S. Marshals and raid the company’s headquarters.

“The rights holders had the alternative option of simply commencing litigation and seeking a court order permitting a raid by U.S. Marshals,” the BSA’s lawyers wrote in one of their letters.

This ‘threat’ wasn’t completely in vain. In the past, the BSA and Microsoft’s accusations have developed into fully-fledged raids, with armed law enforcement officials assisting the software vendor, taking away computers for further inspection.

Still, Hanna maintained that it didn’t do anything wrong. At this point, they’d spent $25,000 on disproving the BSA’s “baseless” claims, and saw no other option than to take the matter to court.

Late last week the company submitted a complaint against Microsoft and the BSA in a Rhode Island federal court, asking for a declaratory judgment and monetary compensation.

“To date, the Defendants have not provided any documentation supporting the baseless allegation that Hanna illegally copied Microsoft Office, in spite of repeated requests by Plaintiff’s counsel that BSA produce such information,” the complaint reads.

“By this Complaint, Hanna seeks a declaration by the Court that it has not infringed any Microsoft copyrights, that Hanna has been harmed by BSA’s relentless and unsupported charges, and that Defendants pay Hanna’s costs and expenses for this action, together with reasonable attorney fees, and any additional monetary award this Court deems appropriate.”

It’s now up to the court to decide who’s right and who’s wrong, but the case already provides a rare and intriguing insight into the anti-piracy practices of Microsoft and the BSA.

This isn’t the first time that one of these cases has gone to court. In Belgium, the BSA and Microsoft lost a similar case. Here, a local company was ordered to pay a settlement on the spot or lose its computers. With law enforcement at the ready, the owner decided to pay, despite owning valid licenses.

The full complaint is available here (pdf).

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN discounts, offers and coupons

Security updates for Monday

Post Syndicated from ris original https://lwn.net/Articles/738890/rss

Security updates have been issued by Debian (graphicsmagick, imagemagick, mupdf, postgresql-common, ruby2.3, and wordpress), Fedora (tomcat), Gentoo (cacti, chromium, eGroupWare, hostapd, imagemagick, libXfont2, lxc, mariadb, vde, wget, and xorg-server), Mageia (flash-player-plugin and libjpeg), openSUSE (ansible, ImageMagick, java-1_8_0-openjdk, krb5, redis, shadow, virtualbox, and webkit2gtk3), Red Hat (rh-eclipse46-jackson-databind and rh-eclipse47-jackson-databind), SUSE (java-1_8_0-openjdk, mysql, openssl, and storm, storm-kit), and Ubuntu (perl).

Weekly roundup: Pedal to the medal

Post Syndicated from Eevee original https://eev.ee/dev/2017/11/09/weekly-roundup-pedal-to-the-medal/

Hi! Sorry. I’m a bit late. I’ve actually been up to my eyeballs in doing stuff for a few days, which has been pretty cool.

  • fox flux: Definitely been ramping up how much I’m working on this game. Finished another landing animation blah blah player sprites. Some more work on visual effects, this time a cool silhouette stencil effect thing.

  • art: Drew a pic celebrating 1000 followers on my nsfw art Twitter, wow!

  • blog: Wrote half of another cross-cutting programming languages post, for October. Then forgot about it for, uhhh, ten days. Whoops! Will definitely get back to that, um, soon.

  • writing: Actually made some “good ass legit progress” (according to my notes) on the little Flora twine I’m writing, now including some actual prose instead of just JavaScript wankery.

  • bots: I added a bunch more patterns to my Perlin noise Twitter bot and finally implemented a little “masking” thing that will let me make more complex patterns while still making it obvious what they’re supposed to be.

    Alas, while Twitter recently bumped the character limit to 280, that doesn’t mean the bot’s output can now be twice as big — emoji now count as two characters. (No, not because of UTF-16; Twitter is deliberately restricting CJK to 140. It’s super weird.)

  • cc: I got undo working with this accursèd sprite animation UI, and I fixed just a whole mess of bugs.

This week has been even more busy, which I think bodes well. I’m up to a lot of stuff, hope you’re looking forward to it!

Backing Up the Modern Enterprise with Backblaze for Business

Post Syndicated from Roderick Bauer original https://www.backblaze.com/blog/endpoint-backup-solutions/

Endpoint backup diagram

Organizations of all types and sizes need reliable and secure backup. Whether they have as few as 3 or as many as 300,000 computer users, an organization’s computer data is a valuable business asset that needs to be protected.

Modern organizations are changing how they work and where they work, which brings new challenges to making sure that the company’s data assets are not only available, but secure. Larger organizations have IT departments that are prepared to address these needs, but oftentimes in smaller and newer organizations the challenge falls upon office management, who might not be as prepared or knowledgeable to face a work environment undergoing dramatic changes.

Whether small or large, local or world-wide, for-profit or non-profit, organizations need a backup strategy and solution that matches the new ways of working in the enterprise.

The Enterprise Has Changed, and So Has Data Use

More and more, organizations are working in the cloud. These days organizations can operate just fine without their own file servers, database servers, mail servers, or other IT infrastructure that used to be standard for all but the smallest organization.

The reality is that for most organizations, though, it’s a hybrid work environment, with a combination of cloud-based and PC and Macintosh-based applications. Legacy apps aren’t going away any time soon. They will be with us for a while, with their accompanying data scattered amongst all the desktops, laptops and other endpoints in corporate headquarters, home offices, hotel rooms, and airport waiting areas.

In addition, the modern workforce likely combines regular full-time employees, remote workers, contractors, and sometimes interns, volunteers, and other temporary workers who also use company IT assets.

The Modern Enterprise Brings New Challenges for IT

These changes in how enterprises work present a problem for anyone tasked with making sure that data — no matter who uses it or where it lives — is adequately backed-up. Cloud-based applications, when properly used and managed, can be adequately backed up, provided that users are connected to the internet and data transfers occur regularly — which is not always the case. But what about the data on the laptops, desktops, and devices used by remote employees, contractors, or just employees whose work keeps them on the road?

The organization’s backup solution must address all the needs of the modern organization or enterprise using both cloud and PC and Mac-based applications, and not be constrained by employee or computer location.

A Ten-Point Checklist for the Modern Enterprise for Backing Up

What should the modern enterprise look for when evaluating a backup solution?

1) Easy to deploy to workers’ computers

Whether installed by the computer user or an IT person locally or remotely, the backup solution must be easy to implement quickly with minimal demands on the user or administrator.

2) Fast and unobtrusive client software

Backups should happen in the background by efficient (native) PC and Macintosh software clients that don’t consume valuable processing power or take memory away from applications the user needs.

3) Easy to configure

The backup solutions must be easy to configure for both the user and the IT professional. Ease-of-use means less time to deploy, configure, and manage.

4) Defaults to backing up all valuable data

By default, the solution backs up commonly used files and folders or directories, including desktops. Some backup solutions are difficult and intimidating because they require that the user choose what needs to be backed up, often missing files and folders/directories that contain valuable data.

5) Works automatically in the background

Backups should happen automatically, no matter where the computer is located. The computer user, especially the remote or mobile one, shouldn’t be required to attach cables or drives, or remember to initiate backups. A working solution backs up automatically without requiring action by the user or IT administrator.

6) Data restores are fast and easy

Whether it’s a single file, directory, or an entire system that must be restored, a user or IT sysadmin needs to be able to restore backed up data as quickly as possible. In cases of large restores to remote locations, the ability to send a restore via physical media is a must.

7) No limitations on data

Throttling, caps, and data limits complicate backups and require guesses about how much storage space will be needed.

8) Safe & Secure

Organizations require that their data is secure during all phases of initial upload, storage, and restore.

9) Easy-to-manage

The backup solution needs to provide a clear and simple web management interface for all functions. Designing for ease-of-use leads to efficiency in management and operation.

10) Affordable and transparent pricing

Backup costs should be predictable, understandable, and without surprises.

Two Scenarios for the Modern Enterprise

Enterprises exist in many forms and types, but wanting to meet the above requirements is common across all of them. Below, we take a look at two common scenarios showing how enterprises face these challenges. Three case studies are available that provide more information about how Backblaze customers have succeeded in these environments.

Enterprise Profile 1

The needs of a smaller enterprise differ from those of larger, established organizations. This organization likely doesn’t have anyone who is devoted full-time to IT. The job of on-boarding new employees and getting them set up with a computer likely falls upon an executive assistant or office manager. This person might give new employees a checklist with the software and account information and let users handle setting up the computer themselves.

Organizations in this profile need solutions that are easy to install and require little to no configuration. Backblaze, by default, backs up all user data, which lets the organization be secure in knowing all the data will be backed up to the cloud — including files left on the desktop. Combined with Backblaze’s unlimited data policy, organizations have a truly “set it and forget it” platform.

Customizing Groups To Meet Teams’ Needs

The Groups feature of Backblaze for Business allows an organization to decide whether an individual client’s computer will be Unmanaged (backups and restores under the control of the worker), or Managed, in which an administrator can monitor the status and frequency of backups and handle restores should they become necessary. One group for the entire organization might be adequate at this stage, but the organization has the option to add additional groups as it grows and needs more flexibility and control.

The organization, of course, has the choice of managing and monitoring users using Groups. With Backblaze’s Groups, organizations can set user-based access rules, which allows the administrator to create restores for lost files or entire computers on an employee’s behalf, to centralize billing for all client computers in the organization, and to redeploy a recovered computer or new computer with the backed up data.

Restores

In this scenario, the decision has been made to let each user manage her own backups, including restores, if necessary, of individual files or entire systems. If a restore of a file or system is needed, the restore process is easy enough for the user to handle it by herself.

Case Study 1

Read about how PagerDuty uses Backblaze for Business in a mixed enterprise of cloud and desktop/laptop applications.

PagerDuty Case Study

In a common approach, the employee can retrieve an accidentally deleted file or an earlier version of a document on her own. The Backblaze for Business interface is easy to navigate and was designed with feedback from thousands of customers over the course of a decade.

In the event of a lost, damaged, or stolen laptop, administrators of Managed Groups can initiate the restore, which could be in the form of a download of a restore ZIP file from the web management console, or the overnight shipment of a USB drive directly to the organization or user.

Enterprise Profile 2

This profile is for an organization with a full-time IT staff. When a new worker joins the team, the IT staff is tasked with configuring the computer and delivering it to the new employee.

Backblaze for Business Groups

Case Study 2

Global charitable organization charity: water uses Backblaze for Business to back up workers’ and volunteers’ laptops as they travel to developing countries in their efforts to provide clean and safe drinking water.

charity: water Case Study

This organization can take advantage of additional capabilities in Groups. A Managed Group makes sense for an organization with a geographically dispersed workforce, as it lets IT ensure that workers’ data is being regularly backed up no matter where they are. Billing can be company-wide or assigned to individual departments or geographical locations. The organization can choose how to divide itself into Groups (by location, function, subsidiary, etc.) and whether each Group should be Managed or Unmanaged. Managed Groups might be suitable for most of the organization, but there are exceptions in which sensitive data might dictate using an Unmanaged Group, as could be the case with HR, the executive team, or finance.

Deployment

By Invitation Email, Link, or Domain

Backblaze for Business allows a number of options for deploying the client software to workers’ computers. Client installation is fast and easy on both Windows and Macintosh, so sending email invitations to users or automatically enrolling users by domain or invitation link is a common approach.

By Remote Deployment

IT might choose to remotely and silently deploy Backblaze for Business across specific Groups or the entire organization. An administrator can silently deploy the Backblaze backup client via the command-line, or use common RMM (Remote Monitoring and Management) tools such as Jamf and Munki.
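As a rough illustration of what a scripted push deployment can look like, here is a minimal, generic sketch. It is hypothetical: the host list, installer path, and silent-install flag are placeholders rather than Backblaze’s actual installer options, which should be taken from Backblaze’s deployment documentation or from your RMM tool.

```python
# Hypothetical push-deploy sketch; the installer name and flags are
# placeholders, not Backblaze's documented CLI. Assumes SSH access
# to the target machines.
import subprocess

HOSTS = ["laptop-01.example.com", "laptop-02.example.com"]  # assumed inventory
INSTALLER = "/tmp/backup_client_installer"                  # placeholder path
SILENT_FLAGS = ["--silent"]                                 # placeholder flag

for host in HOSTS:
    # Copy the installer to the remote machine, then run it non-interactively.
    subprocess.run(["scp", INSTALLER, f"{host}:/tmp/"], check=True)
    remote_path = "/tmp/" + INSTALLER.rsplit("/", 1)[-1]
    subprocess.run(["ssh", host, remote_path, *SILENT_FLAGS], check=True)
```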

Restores

Case Study 3

Read about how Bright Bear Technology Solutions, an IT Managed Service Provider (MSP), uses the Groups feature of Backblaze for Business to manage customer backups and restores, deploy Backblaze licenses to their customers, and centralize billing for all their client-based backup services.

Bright Bear Case Study

Some organizations are better equipped to manage or assist workers when restores become necessary. Individual users will be pleased to discover they can roll back files to an earlier version if they wish, but IT will likely manage any complete system restore that involves reconfiguring a computer after a repair or requisitioning an entirely new system when needed.

This organization might choose to retain a client’s entire computer backup for archival purposes, using Backblaze B2 as the cloud storage solution. This is another advantage of having a cloud storage provider that combines both endpoint backup and cloud object storage among its services.

The Next Step: Server Backup & Data Archiving with B2 Cloud Storage

As organizations grow, they have increased needs for cloud storage beyond Macintosh and PC data backup. Backblaze’s object cloud storage, Backblaze B2, provides low-cost storage and archiving of records, media, and server data that can grow with the organization’s size and needs.

B2 Cloud Storage is available through the same Backblaze management console as Backblaze Computer Backup. This means that Admins have one console for billing, monitoring, deployment, and role provisioning. B2 is priced at 1/4 the cost of Amazon S3, or $0.005 per month per gigabyte (which equals $5/month per terabyte).
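To make that pricing concrete, here is a quick back-of-the-envelope calculation using the rates quoted above: B2 at $0.005 per gigabyte per month, with S3 assumed at roughly four times that rate for comparison.

```python
# Storage cost sanity check based on the figures quoted in the post:
# B2 at $0.005/GB-month; S3 assumed at ~4x that rate for comparison.
B2_PER_GB_MONTH = 0.005

def monthly_cost(terabytes: float, per_gb_month: float = B2_PER_GB_MONTH) -> float:
    """Monthly storage bill in dollars (1 TB treated as 1,000 GB, as in storage pricing)."""
    return terabytes * 1000 * per_gb_month

for tb in (1, 10, 100):
    b2 = monthly_cost(tb)
    s3 = monthly_cost(tb, B2_PER_GB_MONTH * 4)
    print(f"{tb:>4} TB: B2 ${b2:,.2f}/month vs. ~${s3:,.2f}/month at 4x the rate")
```

At these rates, one terabyte comes to $5 per month on B2, which matches the figure above.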

Why Modern Enterprises Choose Backblaze

Backblaze for Business

Businesses and organizations select Backblaze for Business for backup because Backblaze is designed to meet the needs of the modern enterprise. Backblaze customers are part of a platform that has a 10+ year track record of innovation and over 400 petabytes of customer data already under management.

Backblaze’s backup model has been proven in head-to-head comparisons to back up data that other backup solutions overlook in their default configurations, including valuable files that are needed after an accidental deletion, theft, or computer failure.

Backblaze is the only enterprise-level backup company that provides TOTP (Time-based One-time Password) codes via both SMS and an authentication app to all accounts at no incremental charge. At just $50/year per computer, Backblaze is affordable for any size of enterprise.
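For readers unfamiliar with TOTP: a time-based one-time password is derived from a shared secret and the current time, as standardized in RFC 6238. The sketch below shows the general algorithm only; it is not Backblaze’s implementation, and the Base32 secret is a made-up example.

```python
# Generic RFC 6238 TOTP sketch (illustrative only, not Backblaze's code).
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Return the current time-based one-time password for a Base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # made-up demo secret; prints a 6-digit code
```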

Modern Enterprises Can Meet the Challenge of the Changing Data Environment

With the right backup solution and strategy, the modern enterprise will be prepared to ensure that its data is protected from accident, disaster, or theft, whether that data sits in a single office or is dispersed across many locations and among remote and mobile employees.

Backblaze for Business is an affordable solution that enables organizations to meet the evolving data demands facing the modern enterprise.

The post Backing Up the Modern Enterprise with Backblaze for Business appeared first on Backblaze Blog | Cloud Storage & Cloud Backup.

Security updates for Wednesday

Post Syndicated from jake original https://lwn.net/Articles/737882/rss

Security updates have been issued by Debian (graphicsmagick, libdatetime-timezone-perl, openjpeg2, thunderbird, and tzdata), Fedora (curl, glusterfs, java-1.8.0-openjdk, lame, lucene, SDL2, systemd, and xen), Red Hat (python-django), and Ubuntu (linux-lts-trusty and quagga).

A Raspberry Pi Halloween projects spectacular

Post Syndicated from Janina Ander original https://www.raspberrypi.org/blog/halloween-projects-2017/

Come with us on a journey to discover the 2017 Raspberry Pi Halloween projects that caught our eye, raised our hair, or sent us screaming into the night.

A clip of someone being pulled towards a trap door by hands reaching up from it - Raspberry Pi Halloween projects

Happy Halloween

Whether you’re easily scared or practically unshakeable, you can celebrate Halloween with Pi projects of any level of creepiness.

Even makers of a delicate constitution will enjoy making this Code Club Ghostbusters game, or building an interactive board game using Halloween lights with this MagPi tutorial by Mike Cook. And how about a wearable, cheerily LED-enhanced pumpkin created with the help of this CoderDojo resource? Cute, no?

Felt pumpkin with blinking LED smiley face - Raspberry Pi Halloween projects

Speaking of wearables, Derek Woodroffe’s be-tentacled hat may writhe disconcertingly, but at least it won’t reach out for you. Although, you could make it do that, if you were a terrible person.

Slightly queasy Halloween

Your decorations don’t have to be terrifying: this carved Pumpkin Pi and the Poplawskis’ Halloween decorations are controlled remotely via the web, but they’re more likely to give you happy goosebumps than cold sweats.

A clip of blinking Halloween decorations covering a house - Raspberry Pi Halloween projects

The Snake Eyes Bonnet pumpkin and the monster-face projection controlled by Pis that we showed you in our Halloween Twitter round-up look fairly friendly. Even the 3D-printed jack-o’-lantern by wermy, creator of mintyPi, is kind of adorable, if you ignore the teeth. And who knows, that AlexaPi-powered talking skull that’s staring at you could be an affable fellow who just fancies a chat, right? Right?

Horror-struck Halloween

OK, fine. You’re after something properly frightening. How about the haunted magic mirror by Kapitein Haak, or this one, with added Philips Hue effects, by Ben Eagan? As if your face first thing in the morning wasn’t shocking enough.

Haunted magic mirror demonstration - Raspberry Pi Halloween projects

If you find those rigid-faced, bow-lipped, plastic dolls more sinister than sweet – and you’re right to do so: they’re horrible – you won’t like this evil toy. Possessed by an unquiet shade, it’s straight out of my nightmares.

Earlier this month we covered Adafruit’s haunted portrait how-to. This build by Dominick Marino takes that concept to new, terrifying, heights.

Haunted portrait project demo - Raspberry Pi Halloween projects

Why not add some motion-triggered ghost projections to your Halloween setup? They’ll go nicely with the face-tracking, self-winding, hair-raising jack-in-the-box you can make thanks to Sean Hodgins’ YouTube tutorial.

And then, last of all, there’s this.

The Saw franchise's Billy the puppet on a tricycle - Raspberry Pi Halloween projects

NO.

This recreation of Billy the Puppet from the Saw franchise is Pi-powered, it’s mobile, and it talks. You can remotely control it, and I am not even remotely OK with it. That being said, if you’re keen to have one of your own, be my guest. Just follow the guide on Instructables. It’s your funeral.

Make your Halloween

It’s been a great year for scary Raspberry Pi makes, and we hope you have a blast using your Pi to get into the Halloween spirit.

And speaking of spirits, Matt Reed of RedPepper has created a Pi-based ghost detector! It uses Google’s Speech Neural Network AI to listen for voices in the ether, and it’s live-streaming tonight. Perfect for watching while you’re waiting for the trick-or-treaters to show up.

The post A Raspberry Pi Halloween projects spectacular appeared first on Raspberry Pi.

‘Pirate’ IPTV Provider Loses Case, Despite Not Offering Content Itself

Post Syndicated from Andy original https://torrentfreak.com/pirate-iptv-provider-loses-case-despite-not-offering-content-itself-171031/

In 2017, there can be little doubt that streaming is the big piracy engine of the moment. Dubbed Piracy 3.0 by the MPAA, the movement is causing tremendous headaches for rightsholders on a global scale.

One of the interesting things about this phenomenon is the distributed nature of the content on offer. Sourced from thousands of online locations, from traditional file-hosters to Google Drive, the big challenge is to aggregate it all into one place, to make it easy to find. This is often achieved via third-party addons for the legal Kodi software.

One company offering such a service was MovieStreamer.nl in the Netherlands. Via its MovieStreamer website, the company offered its Easy Use Interface 2.0, a piece of software that made Kodi easy to use and streams easy to find, for 79 euros. It also sold ‘VIP’ access to thousands of otherwise premium channels for around 20 euros per month.

MovieStreamer Easy Interface 2.0

“Thanks to the unique Easy Use Interface, we have the unique 3-step process,” the company’s marketing read.

“Click tile of choice, activate subtitles, and play! Fully automated and instantly the most optimal settings. Our youngest user is 4 years old and the ‘oldest’ 86 years. Ideal for young and old, beginner and expert.”

Of course, being based in the Netherlands it wasn’t long before MovieStreamer caught the attention of BREIN. The anti-piracy outfit says it tried to get the company to stop offering the illegal product but after getting no joy, took the case to court.

From BREIN’s perspective, the case was cut and dried. MovieStreamer had no right to provide access to the infringing content so it was in breach of copyright law (unauthorized communication to the public) and should stop its activities immediately. MovieStreamer, however, saw things somewhat differently.

At the core of its defense was the claim that it did not provide content itself and was merely a kind of middleman. MovieStreamer said it provided only a referral service in the form of a hyperlink formatted as a shortened URL, which in turn brought together supply and demand.

In effect, MovieStreamer claimed that it was several steps away from any infringement and that only the users themselves could activate the shortener hyperlink and subsequent process (including a corresponding M3U playlist file, which linked to other hyperlinks) to access any pirated content. Due to this disconnect, MovieStreamer said that there was no infringement, for-profit or otherwise.
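For context on the chain being described: an M3U playlist is a plain-text file that pairs human-readable channel entries with the URLs of the underlying streams. The sketch below uses entirely made-up entries to illustrate how one playlist file fans out into many stream links.

```python
# Illustrative only: a made-up extended M3U playlist and a minimal parser.
SAMPLE_M3U = """#EXTM3U
#EXTINF:-1,Example Channel One
http://streams.example.com/channel1.ts
#EXTINF:-1,Example Channel Two
http://streams.example.com/channel2.ts
"""

def parse_m3u(text: str):
    """Yield (channel_name, stream_url) pairs from an extended M3U playlist."""
    name = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF"):
            name = line.split(",", 1)[-1]   # the title follows the first comma
        elif line and not line.startswith("#"):
            yield name or "unknown", line
            name = None

for channel, url in parse_m3u(SAMPLE_M3U):
    print(channel, "->", url)
```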

A judge at the District Court in Utrecht disagreed, ruling that providing customers with a unique hyperlink which in turn led to protected works was indeed a “communication to the public,” based on the earlier Filmspeler case.

The Court also noted that MovieStreamer knew, or ought to have known, about the illegal nature of the content being linked to, not least because BREIN had already informed the company of that fact. Since the company was aware, the for-profit element of the GS Media decision handed down by the European Court of Justice came into play.

In an order handed down October 27, the Court ordered MovieStreamer to stop its IPTV hyperlinking activities immediately, whether via its Kodi Easy Use Interface or other means. Failure to do so will result in a 5,000 euro per day fine, payable to BREIN, up to a maximum of 500,000 euros. MovieStreamer was also ordered to pay legal costs of 17,527 euros.

“Moviestreamer sold a link to illegal content. Then you are required to check if that content is legally on the internet,” BREIN Director Tim Kuik said in a statement.

“You cannot claim that you have nothing to do with the content if you sell a link to that content.”

Speaking with Tweakers, MovieStreamer owner Bernhard Ohler said that the packages in question were removed from his website on Saturday night. He also warned that other similar companies could experience the same issues with BREIN.

“With this judgment in hand, BREIN has, of course, a powerful weapon to force them offline,” he said.

Ohler said that the margins on hardware were so small that the IPTV subscriptions were the heart of his company. Contacted by TorrentFreak on what this means for his business, he had just two words.

“The end,” he said.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.

Assassins Creed Origin DRM Hammers Gamers’ CPUs

Post Syndicated from Andy original https://torrentfreak.com/assassins-creed-origin-drm-hammers-gamers-cpus-171030/

There’s a war taking place on the Internet. On one side: gaming companies, publishers, and anti-piracy outfits. On the other: people who, for varying reasons, want to play and/or test games for free.

While these groups are free to battle it out in a manner of their choosing, innocent victims are getting caught up in the crossfire. People who pay for their games without question should be considered part of the solution, not the problem, but whether they like it or not, they’re becoming collateral damage in an increasingly desperate conflict.

For the past several days, some players of the recently released Assassin’s Creed Origins have emerged as apparent examples of this phenomenon.

“What is the normal CPU usage for this game?” a user asked on Steam forums. “I randomly get between 60% to 90% and I’m wondering if this is too high or not.”

The individual reported running an i7 processor, which is no slouch. However, for those running a CPU with less oomph, matters are even worse. Another gamer, running an i5, reported a 100% load on all four cores of his processor, even when lower graphics settings were selected in an effort to free up resources.

“It really doesn’t seem to matter what kind of GPU you are using,” another complained. “The performance issues most people here are complaining about are tied to CPU getting maxed out 100 percent at all times. This results in FPS [frames per second] drops and stutter. As far as I know there is no workaround.”

So what could be causing these problems? Badly configured machines? Terrible coding on the part of the game maker?

According to Voksi, whose ‘Revolt’ team cracked Wolfenstein II: The New Colossus before its commercial release last week, it’s none of these. The entire problem is directly connected to desperate anti-piracy measures.

As widely reported (1,2), the infamous Denuvo anti-piracy technology has been taking a beating lately. Cracking groups are dismantling it in a matter of days, sometimes just hours, making the protection almost pointless. For Assassin’s Creed Origins, however, Ubisoft decided to double up, Voksi says.

“Basically, Ubisoft have implemented VMProtect on top of Denuvo, tanking the game’s performance by 30-40%, demanding that people have a more expensive CPU to play the game properly, only because of the DRM. It’s anti-consumer and a disgusting move,” he told TorrentFreak.

Voksi says he knows all of this because he got an opportunity to review the code after obtaining the binaries for the game. Here’s how it works.

While Denuvo sits underneath doing its thing, it’s clearly vulnerable to piracy, given recent advances in anti-anti-piracy technology. So, in a belt-and-braces approach, Ubisoft opted to deploy another technology – VMProtect – on top.

VMProtect is software that protects other software against reverse engineering and cracking. Although the technicalities are different, its aims appear to be somewhat similar to Denuvo, in that both seek to protect underlying systems from being subverted.

“VMProtect protects code by executing it on a virtual machine with non-standard architecture that makes it extremely difficult to analyze and crack the software. Besides that, VMProtect generates and verifies serial numbers, limits free upgrades and much more,” the company’s marketing reads.
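To illustrate the general idea of code virtualization (a toy example, not VMProtect’s actual design): the protected logic is rewritten as bytecode for a custom interpreter, so an analyst sees a dispatch loop rather than the original instructions, and every original operation now costs several interpreted steps. That per-instruction overhead is the kind of cost legitimate players end up paying.

```python
# Toy illustration of code virtualization (not VMProtect's real architecture).
# The "protected" computation y = x * 3 + 7 is expressed as bytecode for a
# tiny stack-based VM, so each native operation becomes several dispatches.
def run_vm(bytecode, x):
    stack = [x]
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

program = [("PUSH", 3), ("MUL", None), ("PUSH", 7), ("ADD", None)]
assert run_vm(program, 5) == 5 * 3 + 7  # same result, far more work per step
```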

VMProtect and Denuvo didn’t appear to be getting on all that well earlier this year but they later settled their differences. Now their systems are working together, to try and solve the anti-piracy puzzle.

“It seems that Ubisoft decided that Denuvo is not enough to stop pirates in the crucial first days [after release] anymore, so they have implemented an iteration of VMProtect over it,” Voksi explains.

“This is great if you are looking to save your game from those pirates, because this layer of VMProtect will make Denuvo a lot more harder to trace and keygen than without it. But if you are a legit customer, well, it’s not that great for you since this combo could tank your performance by a lot, especially if you are using a low-mid range CPU. That’s why we are seeing 100% CPU usage on 4 core CPUs right now for example.”

The situation is reportedly so bad that some users are getting the dreaded BSOD (blue screen of death) due to their machines overheating after just an hour or two’s play. It remains unclear whether these crashes are indeed due to the VMProtect/Denuvo combination but the perception is that these anti-piracy measures are at the root of users’ CPU utilization problems.

While gaming companies can’t be blamed for wanting to protect their products, there’s no sense in punishing legitimate consumers with an inferior experience. The great irony, of course, is that when Assassin’s Creed gets cracked (if that indeed happens anytime soon), pirates will be the only ones playing it without the hindrance of two lots of anti-piracy tech battling over resources.

The big question now, however, is whether the anti-piracy wall will stand firm. If it does, it raises the bizarre proposition that future gamers might need to buy better hardware in order to accommodate anti-piracy technology.

And people worry about bitcoin mining……?

Source: TF, for the latest info on copyright, file-sharing, torrent sites and ANONYMOUS VPN services.