Anti-Piracy Efforts Are Unlikely to Beat Sci-Hub

Post Syndicated from Ernesto original https://torrentfreak.com/anti-piracy-efforts-are-unlikely-to-beat-sci-hub/

Sci-Hub has often been referred to as “The Pirate Bay of Science,” but that description really sells the site short.

While both sites are helping the public to access copyrighted content without permission, Sci-Hub has also become a crucial tool that arguably helps the progress of science.

The site allows researchers to bypass expensive paywalls so they can read articles written by their colleagues. The information in these ‘pirated’ articles then provides the foundation for future research.

What the site does is not permitted under the law, but in the academic world Sci-Hub is praised by many, particularly by those who don’t have direct access to expensive journals but aspire to excel in their academic field.

This leads to a rather intriguing situation where many of the ‘creators,’ the people who write academic articles, are openly supporting the site. By doing so, they go directly against the major publishers, including the billion-dollar company Elsevier, which are the rightsholders.

Elsevier previously convinced the courts that Sci-Hub is a force of evil. Many scientists, however, see it as an extremely useful tool. This was illustrated once again by a ‘letter to the editor’ Dr. Prasanna R Deshpande sent to the Journal of Health & Allied Sciences recently.

While Deshpande works at the Department of Clinical Pharmacy at Poona College of Pharmacy, his latest writing is entirely dedicated to copyright and Sci-Hub. In his published letter (no paywall), the researcher explains why a site such as Sci-Hub is important for the scientific community as a whole.

The Indian researcher points out that Sci-Hub’s main advantage is that it’s free of charge. This is particularly important for academics in developing countries, who otherwise don’t have the means to access crucial articles. Sci-Hub actually allows these people to carry out better research.

“A researcher generally has to pay some money ($30 or more per article on an average) for accessing the scholarly articles. However, the amount may not be ‘small’ for a researcher/research scholar, especially from a developing country,” Deshpande notes.

Aside from the cost issue, Sci-Hub is often seen as more convenient as well. Many professors use the site, and a recent survey found that 62.5% of medical students across six Latin American countries use it to conduct research.

According to Deshpande, these and other arguments lead to the conclusion that Sci-Hub should be supported, at least until there is a good alternative.

“Reading updated knowledge is one of the essential parts of lifelong learning. Currently, Sci‑Hub is the only answer for this. Therefore, Sci‑Hub has various advantages because of which it should be supported,” Deshpande concludes.

This is of course just the opinion of one researcher, but the web is full of similar examples. A simple Twitter search shows that many academics share Sci-Hub links with each other, and some have even created dedicated websites to list the latest working Sci-Hub mirrors.

The major publishers are obviously not happy with this. Aside from lawsuits against Sci-Hub, they regularly send takedown notices to sites that link to infringing articles, including Google.

Recently Elsevier took it a step further by going after Citationsy, a tool that allows academics and researchers to manage citations and reference lists. The service previously published a blog post summing up some options for people to download free research articles.

This blog post also linked to Sci-Hub. Elsevier clearly didn’t like this, and sent its lawyer after Citationsy, requesting it to remove the link.

Citationsy founder Cenk Özbakır initially wasn’t sure how to respond. Linking to a website isn’t necessarily copyright infringement. However, challenging a multi-billion dollar company on an issue like this is a battle that’s hard to win.

Eventually, Özbakır decided to remove it, pointing to a Google search instead. However, not without being rather critical of the move by Elsevier and its law firm Bird & Bird.

“I have of course taken down any links to Sci-Hub on Citationsy.com. @ElsevierLabs obviously thinks making money is more important than furthering science. Congratulations, @twobirds! We all now that the only thing this will achieve is less people reading papers,” Özbakır wrote on Twitter.

The ‘linking’ issue was later picked up by BoingBoing which also pointed out that many of Elsevier’s own publications include links to Sci-Hub, as we also highlighted in the past.

While researchers are far from unanimous in backing Sci-Hub, it appears that this type of enforcement may not be the best way forward.

Pressuring people with cease and desist notices, filing lawsuits, and sending takedown notices certainly isn’t sustainable in the long term, especially if they target people in the academic community.

Perhaps Elsevier and other publishers should use the massive popularity of Sci-Hub as a signal that something is clearly wrong with what they are offering. Instead of trying to hide piracy by sweeping it under the rug, Elsevier could learn from it and adapt.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

How the D candidates would introduce themselves at the next debate if they were honest

Post Syndicated from esr original http://esr.ibiblio.org/?p=8434

Hi, I’m Joe Biden. I’m the perfect apparatchik – no principles, no convictions, and no plan. I’m senile, and I have a problem with groping children. But vote for me anyway because orange man bad.

Hi, I’m Kamala Harris. My white ancestors owned slaves, but I use the melanin I got from my Indian ancestors to pretend to be black. My own father has publicly rebuked me for the pandering lies I tell. I fellated my way into politics; put me into the White house so I can suck even more!

Hi, I’m Elizabeth Warren. Even though I’m as white as library paste, I pretended to be an American Indian to get preferment. My research on medical bankruptcies was as fraudulent as the way I gamed the racial spoils system. So you should totally trust me when I say I’m “capitalist to my bones”!

Hi, I’m Bernie Sanders. I honeymooned in the Soviet Union. I’m an unreconstructed, hammer-and-sickle-worshiping Communist.

Hi, I’m Kirsten Gillibrand. I used to be what passes for a moderate among Democrats – I even supported gun rights. Now I’ve swung hard left, and will let you just guess whether I ever had any issue convictions or it was just pandering all the way down. Tee-hee!

Hi, I’m Amy Klobuchar, and I’ve demonstrated my grasp of the leadership skills necessary for the leader of the Free World by being notoriously abusive towards my staff.

Hi, I’m Robert Francis O’Rourke. It’s not actually true that my friends call me Beto, that was fiction invented by a campaign consultant as a play for the Hispanic vote. I’m occupying the “imitate the Kennedy” lane in this race, and my credentials for it include DUI and fleeing an accident scene. The rumors that I’m a furry are false; the rumors that I’m a dimwitted child of privilege are true. But vote for me anyway, crucial white-suburban-female demographic, because I have such a nice smile!

Hi, I’m Pete Buttigieg. I was such a failure as the mayor of South Bend that my own constituents criticize me for having entered this race, but the Acela Corridor press loves me because I’m fashionably gay. And how right they are; any candidate you choose is going to bugger you up the ass eventually, but I’ll do it like an expert!

Hi, I’m Bill de Blasio. I’m as Communist as Bernie, but I hide it better. And if Pete thinks his constituents don’t want him in this race? Hold…my…beer!

Hi, I’m Cory Booker, and I’m totally not gay. OK, maybe I’m just a little gay. My city was a shithole when I was elected and I’ve done nothing to change that; I’m really just an empty suit with a plausible line of patter, especially the “I am Spartacus” part. But you should totally vote for me because I’m…what was the phrase? Oh, yeah. “Clean and articulate.”

Hi, I’m Marianne Williamson. If elected, I will redecorate the White House so it has proper feng shui. I am the sanest and least pretentious person on this stage.

Man Tried to Burn Down Telecoms Watchdog to Avenge Pirate Site-Blocking

Post Syndicated from Andy original https://torrentfreak.com/man-tried-to-burn-down-telecoms-watchdog-to-avenge-pirate-site-blocking-190817/

While copyright holders and many governments see site-blocking as a reasoned and measured response to copyright infringement, some people view it as overkill.

People should be able to access whatever content they want without rich corporations deciding what should and should not appear on computer screens, the argument goes.

For former student Pavel Kopylov, blocking of pirate sites in Russia has gone too far. So, to make his displeasure obvious to Roscomnadzor, the government entity responsible for carrying it out, last year he attempted to burn one of its offices down – three times.

On April 2, 2018, reportedly dissatisfied that his favorite torrent tracker had been blocked, Kopylov went to the local offices of Roscomnadzor, smashed a window, and threw a bottle of flammable liquid inside together with a burning match. The attempt was a failure – the fire didn’t ignite and a guard was alerted by the noise.

Almost two weeks later, Kopylov returned for a second try. This time a fire did ensue but it was put out, without causing catastrophic damage. A third attempt, on May 9, 2018, ended in complete failure, with a guard catching the would-be arsonist before he could carry out his plan.

Nevertheless, the prosecutor’s office saw the attacks as an attempt to destroy Roscomnadzor’s property by arson, an offense carrying a penalty of up to five years in prison. The prosecution sought two years but in the end, had to settle for considerably less.

Interfax reports that a court in the Ulyanovsk region has now sentenced the man for repeatedly trying to burn down Roscomnadzor’s regional office. He received 18 months probation but the prosecution intends to appeal, describing the sentence as excessively lenient.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Disney and Charter Team Up on Piracy Mitigation

Post Syndicated from Ernesto original https://torrentfreak.com/disney-and-charter-team-up-on-piracy-mitigation/

With roughly 22 million subscribers, Charter Communications is one of the largest Internet providers in the US.

The company operates under the Spectrum brand and offers a wide variety of services including TV and Internet access.

In an effort to provide more engaging content to its customers, this week Charter signed a major new distribution agreement with The Walt Disney Company.

The new partnership will provide the telco’s customers with access to popular titles in Disney’s services, including Hulu, ESPN+ and the yet-to-be-launched streaming service Disney+.

The fact that these giant companies have teamed up is a big deal, business-wise and for consumers. Most Spectrum subscribers will likely be pleased to have more options, but there may also be a subgroup that has concerns.

Away from the major headline, both companies also state that they have agreed to partner up on piracy mitigation.

“This agreement will allow Spectrum to continue delivering to its customers popular Disney content […] and will begin an important collaborative effort to address the significant issue of piracy mitigation,” says Tom Montemagno, EVP, Programming Acquisition for Charter.

The public press releases give no concrete details of what this “piracy mitigation” will entail. It does mention that the two companies will work together to “implement business rules” and address issues such as “unauthorized access and password sharing.”

TorrentFreak reached out to Charter for further details, but the company said that it’s not elaborating beyond the press release at this time.

The term “mitigation” suggests that both companies will actively work together to reduce piracy. This is interesting because Charter is currently caught up in a major piracy liability lawsuit in a US federal court in Colorado.

Earlier this year the Internet provider was sued by several music companies which argued that the company turned a blind eye to piracy by failing to terminate accounts of repeat infringers. In addition, Charter stands accused of willingly profiting from these alleged copyright infringements.

Charter’s new agreement with Disney suggests that there could be a more proactive anti-piracy stance going forward. One possibility might be a more strict repeat infringer policy but, without further details, it remains unclear what the “piracy mitigation” entails precisely.

In any case, it will be interesting to see how the two companies plan to put a dent in current piracy levels, and what that means for Charter customers.

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.

Friday Squid Blogging: Robot Squid Propulsion

Post Syndicated from Bruce Schneier original https://www.schneier.com/blog/archives/2019/08/friday_squid_bl_690.html

Interesting research:

The squid robot is powered primarily by compressed air, which it stores in a cylinder in its nose (do squids have noses?). The fins and arms are controlled by pneumatic actuators. When the robot wants to move through the water, it opens a valve to release a modest amount of compressed air; releasing the air all at once generates enough thrust to fire the robot squid completely out of the water.

The jumping that you see at the end of the video is preliminary work; we’re told that the robot squid can travel between 10 and 20 meters by jumping, whereas using its jet underwater will take it just 10 meters. At the moment, the squid can only fire its jet once, but the researchers plan to replace the compressed air with something a bit denser, like liquid CO2, which will allow for extended operation and multiple jumps. There’s also plenty of work to do with using the fins for dynamic control, which the researchers say will “reveal the superiority of the natural flying squid movement.”

I can’t find the paper online.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Discover metadata with AWS Lake Formation: Part 2

Post Syndicated from Julia Soscia original https://aws.amazon.com/blogs/big-data/discover-metadata-with-aws-lake-formation-part-2/

Data lakes are an increasingly popular way to aggregate, store, and analyze both structured and unstructured data. AWS Lake Formation makes it easy for you to set up, secure, and manage your data lakes.

In Part 1 of this post series, you learned how to create and explore a data lake using Lake Formation. This post walks you through data discovery using the metadata search capabilities of Lake Formation in the console, and metadata search results restricted by column permissions.

Prerequisites

For this post, you need the following:

  • The data lake and Data Catalog tables that you created in Part 1 of this series

Metadata search in the console

In this post, we demonstrate the catalog search capabilities offered by the Lake Formation console:

  • Search by classification
  • Search by keyword
  • Search by tag: attribute
  • Multiple filter searches

Search by classification

Using the metadata catalog search capabilities, you can search across all tables within your data lake. The data lake that you built in Part 1 contains three tables: two share the name amazon_reviews but belong to your simulated “prod” and “test” databases, and the third is trip_data.

  1. In the Lake Formation console, under Data catalog, choose Tables.
  2. In the search bar, under Resource Attributes, choose Classification, type CSV, and press Enter. You should see only the trip_data table, which you formatted as CSV in your data lake. The amazon_reviews tables do not appear because they are in Parquet format.
  3. In the Name column, choose trip_data. Under Table details, you can see that the classification CSV is correctly identified by the metadata search filter.
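If you prefer to script this search instead of using the console, the Data Catalog can be queried with the AWS Glue SearchTables API. The following is a minimal sketch using boto3; the Region is a placeholder, and it assumes the crawler stored the format in each table’s classification property, as in this walkthrough.

import boto3

# Minimal sketch: find catalog tables whose classification property is "csv".
glue = boto3.client("glue", region_name="us-east-1")  # placeholder Region

response = glue.search_tables(
    Filters=[{"Key": "classification", "Value": "csv", "Comparator": "EQUALS"}]
)

for table in response["TableList"]:
    print(f"{table['DatabaseName']}.{table['Name']}")

As in the console, the results returned are filtered by the caller’s Lake Formation permissions.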

Search by keyword

Next, search across your entire data lake filtering metadata by keyword.

  1. To refresh the list of tables, under Data catalog, choose Tables again.
  2. From the search bar, type star_rating, and press Enter. Now that you have applied the filter, you should see only the amazon_reviews tables because they both contain a column named star_rating.
  3. By choosing either of the two tables, you can scroll down to the Schema section, and confirm that they contain a star_rating column.
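The keyword filter maps to the SearchText parameter of the same API. A short sketch along the same lines, again with a placeholder Region:

import boto3

# Minimal sketch: free-text search across table metadata, including column names.
glue = boto3.client("glue", region_name="us-east-1")  # placeholder Region

response = glue.search_tables(SearchText="star_rating")

# Both amazon_reviews tables should appear, since each contains a star_rating column.
print([table["Name"] for table in response["TableList"]])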

Search by tag: attribute

Next, search across your data lake and filter results by metadata tags and their attribute value.

  1. To refresh the list of tables, under Data catalog, choose Tables.
  2. From the search bar, type department: research, and press Enter. Now that you have applied the filter, you should see only the trip_data table because this is the only table containing the value of ‘research’ in the table property of ‘department’.
  3. Select the trip_data table. Under Table details, you can see the tag: attribute of department | research listed under Table properties.

Multiple filter searches

Finally, try searching across your entire data lake using multiple filters at one time.

  1. To refresh the list of tables, under Data catalog, choose Tables.
  2. In the search bar, choose Location, type S3, and press Enter. For this post, all of your catalog tables are in S3, so all three tables display.
  3. In the search bar, choose Classification, type parquet, and press Enter. You should see only the amazon_reviews tables because they are the only tables stored in S3 in Parquet format.
  4. Choose either of the displayed amazon_reviews tables. Under Table details, you can see that the following is true.
  • Location: S3
  • Classification: parquet
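The console’s multiple-filter search doesn’t map one-to-one onto the API (the Location filter has no obvious counterpart, for example), but you can approximate it by combining SearchText with property filters. A rough sketch:

import boto3

# Rough sketch: combine a free-text term with a property filter and print the
# S3 location of each match. The "classification" key is a table property set
# by the crawler; the console's "Location" filter is approximated by reading
# the storage location from each result.
glue = boto3.client("glue", region_name="us-east-1")  # placeholder Region

response = glue.search_tables(
    SearchText="amazon_reviews",
    Filters=[{"Key": "classification", "Value": "parquet", "Comparator": "EQUALS"}],
)

for table in response["TableList"]:
    location = table.get("StorageDescriptor", {}).get("Location", "unknown")
    print(table["Name"], location)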

Metadata search results restricted by column permissions

The metadata search capabilities return results based on the permissions specified within Lake Formation. If a user or a role does not have permission to a particular database, table, or column, that element doesn’t appear in that user’s search results.

To demonstrate this, first create an IAM user, dataResearcher, with AWS Management Console access. Make sure to store the password somewhere safe.

To simplify this post, attach the AdministratorAccess policy to the user. This policy grants full access to your AWS account, which is overly permissive. I recommend that you either remove this user after completing the post, or remove this policy, and enable multi-factor authentication (MFA). For more information, see Creating an IAM user in the console.

In Part 1 of this series, you allowed Everyone to view the tables that the AWS Glue crawlers created. Now, revoke those permissions for the ny-taxi database.

  1. In the Lake Formation console, under Permissions, choose Data permissions.
  2. Scroll down or search until you see the Everyone record for the trip_data table.
  3. Select the record and choose Revoke, Revoke.

Now, your dataResearcher IAM user cannot see the ny-taxi database or the trip_data table. Resolve this issue by setting up Lake Formation permissions.

  1. Under Permissions, choose Data Permission, Grant.
  2. Select the dataResearcher user, the ny-taxi database, and the trip_data table.
  3. Under Table permissions, check Select and choose Grant.
  4. Log out of the console and sign back in using the dataResearcher IAM user that you created earlier.
  5. In the Lake Formation console, choose Tables, select the trip_data table, and look at its properties:
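If you manage permissions as code rather than in the console, the grant in the preceding steps can also be expressed with the Lake Formation API. A minimal sketch using boto3; the account ID in the principal ARN is a placeholder:

import boto3

lakeformation = boto3.client("lakeformation", region_name="us-east-1")  # placeholder Region

# Minimal sketch: grant SELECT on ny-taxi.trip_data to the dataResearcher user.
lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:user/dataResearcher"
    },
    Resource={"Table": {"DatabaseName": "ny-taxi", "Name": "trip_data"}},
    Permissions=["SELECT"],
)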

The dataResearcher user currently has visibility across all of these columns. However, you don’t want to allow this user to see the pickup or drop off locations, as those are potential privacy risks. Remove these columns from the dataResearcher user’s permissions.

  1. Log out of the dataResearcher user and log back in with your administrative account.
  2. In the Lake Formation console, under Permissions, choose Data Permissions.
  3. Select the dataResearcher record and choose Revoke.
  4. On the Revoke page, under Column, choose All columns except the exclude columns and then choose the vendor_id, passenger_count, trip_distance, and total_amount columns.
  5. Under Table permissions, check Select. These settings revoke all of the dataResearcher user’s permissions on the trip_data table except those selected in the window. In other words, the dataResearcher user can only Select (view) the four selected columns.
  6. Choose Revoke.
  7. Log back in as the dataResearcher user.
  8. In the Lake Formation console, choose Data catalog, Tables. Search for vendor_id and press Enter. The trip_data table appears in the search, as shown in the following screenshot.
  9. Search for pu_location_id. This returns no results because you revoked permissions to this column, as shown in the following screenshot.
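The column-level restriction you just configured can be scripted in the same way. A minimal sketch of the equivalent revoke call, again with a placeholder account ID; the column wildcard with excluded columns mirrors the “all columns except” choice in the console:

import boto3

lakeformation = boto3.client("lakeformation", region_name="us-east-1")  # placeholder Region

# Minimal sketch: revoke SELECT on every column of trip_data except the four
# non-sensitive columns, so the user can only view those four.
lakeformation.revoke_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:user/dataResearcher"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "ny-taxi",
            "Name": "trip_data",
            "ColumnWildcard": {
                "ExcludedColumnNames": [
                    "vendor_id",
                    "passenger_count",
                    "trip_distance",
                    "total_amount",
                ]
            },
        }
    },
    Permissions=["SELECT"],
)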

Conclusion

Congratulations: You have learned how to use the metadata search capabilities of Lake Formation. By defining specific user permissions, Lake Formation allowed you to grant and revoke access to metadata in the Data Catalog as well as the underlying data stored in S3. Therefore, you can discover your data sources across your entire AWS environment using a single pane of glass. To learn more, see AWS Lake Formation.

 


About the Authors

Julia Soscia is a solutions architect at Amazon Web Services based out of New York City. Her main focus is to help customers create well-architected environments on the AWS cloud platform. She is an experienced data analyst with a focus in Big Data and Analytics.


Eric Weinberg is a systems development engineer on the AWS Envision Engineering team. He has 15 years of experience building and designing software applications.


Francesco Marelli is a senior solutions architect at Amazon Web Services. He has more than twenty years experience in Analytics and Data Management.


Mat Werber is a solutions architect on the AWS Community SA Team. He is responsible for providing architectural guidance across the full AWS stack with a focus on Serverless, Redshift, DynamoDB, and RDS. He also has an audit background in IT governance, risk, and controls.


Creating custom Pinpoint dashboards using Amazon QuickSight, part 1

Post Syndicated from Brent Meyer original https://aws.amazon.com/blogs/messaging-and-targeting/creating-custom-pinpoint-dashboards-using-amazon-quicksight-part-1/

Note: This post was written by Manan Nayar and Aprajita Arora, Software Development Engineers on the AWS Digital User Engagement team.


Amazon Pinpoint helps you create customer-centric engagement experiences across the mobile, web, and other messaging channels. It also provides a variety of Key Performance Indicators (KPIs) that you can use to track the performance of your messaging programs.

You can access these KPIs through the console, or by using the Amazon Pinpoint API. In some cases, you might want to create custom dashboards that aren’t included by default, or even combine these metrics with other data. Over the next few days, we’ll discuss several different methods that you can use to create your own custom dashboards.

In this post, you’ll learn how to use the Amazon Pinpoint API to retrieve metrics, and then display them in visualizations that you create in Amazon QuickSight. This option is ideal for creating custom dashboards that highlight a specific set of metrics, or for embedding these metrics in your existing application or website.

In the next post (which we’ll post on Monday, August 19), you’ll learn how to export raw event data to an S3 bucket, and use that data to create dashboards by using QuickSight’s Super-fast, Parallel, In-memory Calculation Engine (SPICE). This option enables you to perform in-depth analyses and quickly update visualizations. It’s also cost-effective, because all of the event data is stored in an S3 bucket.

The final post (which we’ll post on Wednesday, August 21) will also discuss the process of creating visualizations from event stream data. However, in this solution, the data will be sent from Amazon Kinesis to a Redshift cluster. This option is ideal if you need to process very large volumes of event data.

Creating a QuickSight dashboard that uses specific metrics

You can use the Amazon Pinpoint API to programmatically access many of the metrics that are shown on the Analytics pages of the Amazon Pinpoint console. You can learn more about using the API to obtain specific KPIs in our recent blog post, Tracking Campaign Performance Using the Metrics APIs.

The following sections show you how to parse and store those results in Amazon S3, and then create custom dashboards by using Amazon QuickSight. The steps below are meant to provide general guidance, rather than specific procedures. If you’ve used other AWS services in the past, most of the concepts here will be familiar. If not, don’t worry—we’ve included links to the documentation to make things easier.

Step 1: Package the Dependencies

Lambda currently uses a version of the AWS SDK that is a few versions behind the current version. However, the ability to retrieve Pinpoint metrics programmatically is a relatively new feature. For this reason, you have to download the latest version of the SDK libraries to your computer, create a .zip archive, and then upload that archive to Lambda.

To package the dependencies

    1. Paste the following code into a text editor:
      from datetime import datetime
      import boto3
      import json
      
      AWS_REGION = "<us-east-1>"
      PROJECT_ID = "<projectId>"
      BUCKET_NAME = "<bucketName>"
      BUCKET_PREFIX = "quicksight-data"
      DATE = datetime.now()
      
      # Get today's values for the specified KPI.
      def get_kpi(kpi_name):
      
          client = boto3.client('pinpoint',region_name=AWS_REGION)
      
          response = client.get_application_date_range_kpi(
              ApplicationId=PROJECT_ID,
              EndTime=DATE.strftime("%Y-%m-%d"),
              KpiName=kpi_name,
              StartTime=DATE.strftime("%Y-%m-%d")
          )
          rows = response['ApplicationDateRangeKpiResponse']['KpiResult']['Rows'][0]['Values']
      
          # Create a JSON object that contains the values we'll use to build QuickSight visualizations.
          data = construct_json_object(rows[0]['Key'], rows[0]['Value'])
      
          # Send the data to the S3 bucket.
          write_results_to_s3(kpi_name, json.dumps(data).encode('UTF-8'))
      
      # Create the JSON object that we'll send to S3.
      def construct_json_object(kpi_name, value):
          data = {
              "applicationId": PROJECT_ID,
              "kpiName": kpi_name,
              "date": str(DATE),
              "value": value
          }
      
          return data
      
      # Send the data to the designated S3 bucket.
      def write_results_to_s3(kpi_name, data):
          # Create a file path with folders for year, month, date, and hour.
          path = (
              BUCKET_PREFIX + "/"
              + DATE.strftime("%Y") + "/"
              + DATE.strftime("%m") + "/"
              + DATE.strftime("%d") + "/"
              + DATE.strftime("%H") + "/"
              + kpi_name
          )
      
          client = boto3.client('s3')
      
          # Send the data to the S3 bucket.
          response = client.put_object(
              Bucket=BUCKET_NAME,
              Key=path,
              Body=bytes(data)
          )
      
      def lambda_handler(event, context):
          get_kpi('email-open-rate')
          get_kpi('successful-delivery-rate')
          get_kpi('unique-deliveries')

      In the preceding code, make the following changes:

      • Replace <us-east-1> with the name of the AWS Region that you use Amazon Pinpoint in.
      • Replace <projectId> with the ID of the Amazon Pinpoint project that the metrics are associated with.
      • Replace <bucketName> with the name of the Amazon S3 bucket that you want to use to store the data. For more information about creating S3 buckets, see Create a Bucket in the Amazon S3 Getting Started Guide.
      • Optionally, modify the lambda_handler function so that it calls the get_kpi function for the specific metrics that you want to retrieve.

      When you finish, save the file as retrieve_pinpoint_kpis.py.

  2. Use pip to download the latest versions of the boto3 and botocore libraries. Add these libraries to a .zip file. Also add retrieve_pinpoint_kpis.py to the .zip file. You can learn more about all of these tasks in Updating a Function with Additional Dependencies With a Virtual Environment in the AWS Lambda Developer Guide.

Step 2: Set up the Lambda function

In this section, you upload the package that you created in the previous section to Lambda.

To set up the Lambda function

  1. In the Lambda console, create a new function from scratch. Choose the Python 3.7 runtime.
  2. Choose a Lambda execution role that contains the following permissions:
    • Allows the action mobiletargeting:GetApplicationDateRangeKpi for the resource arn:aws:mobiletargeting:<awsRegion>:<yourAwsAccountId>:apps/*/kpis/*/*, where <awsRegion> is the Region where you use Amazon Pinpoint, and <yourAwsAccountId> is your AWS account number.
    • Allows the action s3:PutObject for the resource arn:aws:s3:::<my_bucket>/*, where <my_bucket> is the name of the S3 bucket where you want to store the metrics.
  3. Upload the .zip file that you created in the previous section.
  4. Change the Handler value to retrieve_pinpoint_kpis.lambda_handler.
  5. Save your changes.
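If you prefer to deploy the function with a script instead of the console, the same setup can be done through the Lambda API. A minimal sketch; the function name, role ARN, and archive name are placeholders:

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")  # placeholder Region

# Minimal sketch: create the function from the .zip archive built in Step 1.
with open("retrieve_pinpoint_kpis.zip", "rb") as archive:
    lambda_client.create_function(
        FunctionName="retrieve-pinpoint-kpis",  # placeholder name
        Runtime="python3.7",
        Role="arn:aws:iam::111122223333:role/pinpoint-kpi-role",  # placeholder role ARN
        Handler="retrieve_pinpoint_kpis.lambda_handler",
        Code={"ZipFile": archive.read()},
        Timeout=60,
    )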

Step 3: Schedule the execution of the function

At this point, the Lambda function is ready to run. The next step is to set up the trigger that will cause it to run. In this case, since we’re retrieving an entire day’s worth of data, we’ll set up a scheduled trigger that runs every day at 11:59 PM.

To set up the trigger

  1. In the Lambda console, in the Designer section, choose Add trigger.
  2. Create a new CloudWatch Events rule that uses the Schedule expression rule type.
  3. For the schedule expression, enter cron(59 23 ? * * *).
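You can also create this trigger programmatically. A minimal sketch using boto3; the rule name, function name, and ARNs are placeholders, and note that CloudWatch Events evaluates cron expressions in UTC:

import boto3

events = boto3.client("events", region_name="us-east-1")        # placeholder Region
lambda_client = boto3.client("lambda", region_name="us-east-1")

# Minimal sketch: run the function every day at 11:59 PM (UTC).
rule = events.put_rule(
    Name="daily-pinpoint-kpi-export",  # placeholder rule name
    ScheduleExpression="cron(59 23 ? * * *)",
)

events.put_targets(
    Rule="daily-pinpoint-kpi-export",
    Targets=[
        {
            "Id": "retrieve-pinpoint-kpis",
            "Arn": "arn:aws:lambda:us-east-1:111122223333:function:retrieve-pinpoint-kpis",
        }
    ],
)

# Allow CloudWatch Events to invoke the function.
lambda_client.add_permission(
    FunctionName="retrieve-pinpoint-kpis",
    StatementId="allow-daily-kpi-rule",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule["RuleArn"],
)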

Step 4: Create QuickSight Analyses

Once the data is populated in S3, you can start creating analyses in Amazon QuickSight. The process of creating new analyses involves a couple of tasks: creating a new data set, and creating your visualizations.

To create analyses in QuickSight
1.    In a text editor, create a new file. Paste the following code:

{
    "fileLocations": [
        {
            "URIPrefixes": [ 
                "s3://<bucketName>/quicksight-data/"          
            ]
        }
    ],
    "globalUploadSettings": {
        "format": "JSON"
    }
}

In the preceding code, replace <bucketName> with the name of the S3 bucket that you’re using to store the metrics data. Save the file as manifest.json.
2.    Sign in to the QuickSight console at https://quicksight.aws.amazon.com.
3.    Create a new S3 data set. When prompted, choose the manifest file that you created in step 1. For more information about creating S3 data sets, see Creating a Data Set Using Amazon S3 Files in the Amazon QuickSight User Guide.
4.    Create a new analysis. From here, you can start creating visualizations of your data. To learn more, see Creating an Analysis in the Amazon QuickSight User Guide.

Amazon Prime Day 2019 – Powered by AWS

Post Syndicated from Jeff Barr original https://aws.amazon.com/blogs/aws/amazon-prime-day-2019-powered-by-aws/

What did you buy for Prime Day? I bought a 34″ Alienware Gaming Monitor and used it to replace a pair of 25″ monitors that had served me well for the past six years:

 

As I have done in years past, I would like to share a few of the many ways that AWS helped to make Prime Day a reality for our customers. You can read How AWS Powered Amazon’s Biggest Day Ever and Prime Day 2017 – Powered by AWS to learn more about how we evaluate the results of each Prime Day and use what we learn to drive improvements to our systems and processes.

This year I would like to focus on three ways that AWS helped to support record-breaking amounts of traffic and sales on Prime Day: Amazon Prime Video Infrastructure, AWS Database Infrastructure, and Amazon Compute Infrastructure. Let’s take a closer look at each one…

Amazon Prime Video Infrastructure
Amazon Prime members were able to enjoy the second Prime Day Concert (presented by Amazon Music) on July 10, 2019. Headlined by 10-time Grammy winner Taylor Swift, this live-streamed event also included performances from Dua Lipa, SZA, and Becky G.

Live-streaming an event of this magnitude and complexity to an audience in over 200 countries required a considerable amount of planning and infrastructure. Our colleagues at Amazon Prime Video used multiple AWS Media Services including AWS Elemental MediaPackage and AWS Elemental live encoders to encode and package the video stream.

The streaming setup made use of two AWS Regions, with a redundant pair of processing pipelines in each region. The pipelines delivered 1080p video at 30 fps to multiple content distribution networks (including Amazon CloudFront), and worked smoothly.

AWS Database Infrastructure
A combination of NoSQL and relational databases were used to deliver high availability and consistent performance at extreme scale during Prime Day:

Amazon DynamoDB supports multiple high-traffic sites and systems including Alexa, the Amazon.com sites, and all 442 Amazon fulfillment centers. Across the 48 hours of Prime Day, these sources made 7.11 trillion calls to the DynamoDB API, peaking at 45.4 million requests per second.

Amazon Aurora also supports the network of Amazon fulfillment centers. On Prime Day, 1,900 database instances processed 148 billion transactions, stored 609 terabytes of data, and transferred 306 terabytes of data.

Amazon Compute Infrastructure
Prime Day 2019 also relied on a massive, diverse collection of EC2 instances. The internal scaling metric for these instances is known as a server equivalent; Prime Day started off with 372K server equivalents and scaled up to 426K at peak.

Those EC2 instances made great use of a massive fleet of Elastic Block Store (EBS) volumes. The team added an additional 63 petabytes of storage ahead of Prime Day; the resulting fleet handled 2.1 trillion requests per day and transferred 185 petabytes of data per day.

And That’s a Wrap
These are some impressive numbers, and they show you the kind of scale that you can achieve with AWS. As you can see, scaling up for one-time (or periodic) events and then scaling back down afterward is easy and straightforward, even at world scale!

If you want to run your own world-scale event, I’d advise you to check out the blog posts that I linked above, and also be sure to read about AWS Infrastructure Event Management. My colleagues are ready (and eager) to help you to plan for your large-scale product or application launch, infrastructure migration, or marketing event. Here’s an overview of their process:

 

Jeff;

Cloudflare Flags Copyright Lawsuits as Potential Liabilities Ahead of IPO

Post Syndicated from Andy original https://torrentfreak.com/cloudflare-flags-copyright-lawsuits-as-potential-liabilities-ahead-of-ipo-190816/

As a CDN and security company, Cloudflare currently serves around 20 million “Internet properties”, ranging from domains and websites through to application programming interfaces (APIs) and mobile applications.

At least hundreds of those properties, potentially more, are considered ‘pirate’ platforms by copyright groups, which has resulted in Cloudflare being sucked into copyright infringement lawsuits due to the activities of its customers.

On Thursday, Cloudflare filed to go public by submitting the required S-1 registration statement. It contains numerous warnings that copyright infringement lawsuits, both current and those that may appear in the future, could present significant issues of liability for the company.

Noting that some of Cloudflare’s customers may use its services in violation of the law, the company states that existing laws relating to the liability of service providers are “highly unsettled and in flux”, both in the United States and further afield.

“For example, we have been named as a defendant in a number of lawsuits, both in the United States and abroad, alleging copyright infringement based on content that is made available through our customers’ websites,” the filing reads.

“There can be no assurance that we will not face similar litigation in the future or that we will prevail in any litigation we may face. An adverse decision in one or more of these lawsuits could materially and adversely affect our business, results of operations, and financial condition.”

Cloudflare goes on to reference the safe harbor provisions of the DMCA, noting that they may not offer “complete protection” for the company or could even be amended in the future to its detriment.

“If we are found not to be protected by the safe harbor provisions of the DMCA, CDA [Communications Decency Act] or other similar laws, or if we are deemed subject to laws in other countries that may not have the same protections or that may impose more onerous obligations on us, we may face claims for substantial damages and our brand, reputation, and financial results may be harmed. Such claims may result in liability that exceeds our ability to pay or our insurance coverage,” Cloudflare warns.

As a global company, it’s not only US law the company has to consider. Cloudflare references the recently-approved Copyright Directive in the EU, noting that it also has the potential to expose Cloudflare and other online platforms to liability.

As recently as last month and in advance of any claims under that particular legislation, Cloudflare experienced an adverse ruling in an Italian court. Local broadcaster RTI successfully argued that Cloudflare can be held liable if it willingly fails to act in response to copyright infringement notices. In addition, Cloudflare was ordered to terminate the accounts of several pirate sites.

Of course, it’s not uncommon for S-1 filings to contain statements that can be interpreted as impending doom, since companies are required to be frank about their business’s prospects. However, with single copyright cases often dealing with millions of dollars worth of alleged infringement, Cloudflare’s appraisal of the risks seems entirely warranted.

Cloudflare’s S-1 filing can be viewed here

Source: TF, for the latest info on copyright, file-sharing, torrent sites and more. We also have VPN reviews, discounts, offers and coupons.
